

Cordula Brand (Ed.)

Dual-Process Theories
in Moral Psychology
Interdisciplinary Approaches to
Theoretical, Empirical and Practical
Considerations

Editor
Cordula Brand
Universität Tübingen
Tübingen, Germany
With the assistance of Margarita Berg
Financially supported by the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF)

ISBN 978-3-658-12052-8
ISBN 978-3-658-12053-5 (eBook)
DOI 10.1007/978-3-658-12053-5
Library of Congress Control Number: 2016934209
Springer VS
© Springer Fachmedien Wiesbaden 2016
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting,
reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval,
electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does
not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective
laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication.
Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.
Lektorat: Frank Schindler, Monika Mülhausen
Printed on acid-free paper
Springer VS is a brand of Springer Fachmedien Wiesbaden
Springer Fachmedien Wiesbaden GmbH is part of Springer Science+Business Media
(www.springer.com)

Contents

Prologue: Morality
Robert Hepach

Preface
Cordula Brand

Part I
On the Relationship Between Ethics and Empirical Sciences

Dimensions of Moral Intuitions – Metaethics, Epistemology and Moral Psychology
Cordula Brand

Where and When Ethics Needs Empirical Facts
Dieter Birnbacher

Normativity of Moral Intuitions in the Social Intuitionist Model
Maciej Juzaszek

Psychology Instead of Ethics? Why Psychological Research Is Important but Cannot Replace Ethics
Janett Triskiel

Part II
Empirical Approaches in Recent Moral Psychology Research

Young Children's Concern for Others' Well-Being as a Core Motive for Developing Prosocial Behavior
Robert Hepach

Moral Argumentation Skills and Aggressive Behavior. Implications for Philosophical Ethics
Michael von Grundherr

Psychology's Contribution to Ethics: Two Case Studies
Liz Gulliford

Moral Judgments and Moral Integrity – Three Empirical Studies
Mariola Paruzel-Czachura

Moral Intuitionism and Empirical Data
Jonas Nagel / Alex Wiegmann

Can Biological Approaches Explain (Im)Moral Behavior? Problems and Potentials of Studies Focused on a Genetic Predisposition of Human Behavior
Stefan Walter

Part III
Reassessment of Established Terminology in Modern Debates

Aristotle's Moral Philosophy and Moral Psychology. A Basic Terminology
Friedo Ricken

The "New Synthesis in Moral Psychology" versus Aristotelianism. Content and Consequences
Kristján Kristjánsson

Ethos, Eidos, Habitus. A Social Theoretical Contribution to Morality and Ethics
Nathan Emmerich

Pragmatism, Religion, and Ethics. A Reminder from Rorty
Alissa MacMillan

Making Trouble: Mindfulness as a Care Ethic
Alexander I. Stingl / Sabrina M. Weiss

Part IV
Societal Implications of Dual-Process Theories in Moral Psychology

Rationalist vs. Intuitionist Views on Morality. A Sociological Perspective
Jan E. Stets

Can Antimoralism Avoid Moralizing? Reflections on the Relation of Science to Ethics
Ronald de Sousa

Uncovering the Political in Political Psychology
David John Hall

The Crisis of Pedagogy and the Potentials of Professional Ethics
Alexandra Retkowski

Epilogue: The Word Thief
Liz Gulliford

Morality
Robert Hepach

Concerns not one, not two,
but at least three.
Where several Is to form a We
in lasting gusty unity.
Food I eat is gone for you.
The more one wants, the less for two.
Once you take what I maintain,
rest assured, we meet again.
Every cause has an effect.
What one does, one might regret.
The judge in one's community,
vengeful for all time to be.
If only feelings would suffice
to guide my actions to the Right.
But design of passions bore to end
nature's biggest treason: Reason.

Preface
Cordula Brand

This anthology is the result of a very special form of conference, initiated and financed
by the German Federal Ministry of Education and Research (BMBF), the so-called
"Klausurwoche". Since we were not able to find a decent English translation, we named the event "symposium" as this term bears the connotation of a banquet and, as such, of Plato's Symposium. The Klausurwoche aims at bringing together young researchers as
well as already established experts from several disciplines to discuss for one week a
topic which deals with the ethical, legal and social aspects of modern life sciences. Our
topic – recent developments in empirical moral psychology and their impact on the ethical self-image of human beings – differed from that discussed in Plato's Symposium,
but what was similar is the point that all participants brought in their special points of
view from different disciplines as well as different theories and furthermore, we did
not only discuss and argue but we also dined, drank and celebrated together.
The symposium "Can Psychology Replace Ethics?" took place from 8 to 15 March 2014 at the International Centre for Ethics in the Sciences and Humanities (Internationales Zentrum für Ethik in den Wissenschaften, IZEW) of the University of
Tübingen. For the keynote lecture of the public opening ceremony, we were able to get Neil
Roughley on board. His talk was an excellent start for our discussions as he put the
most interesting key issues in a nutshell. The symposium itself comprised closed sessions with the presentations of the invited junior researchers, several talks by
international experts and special workshops, focusing on methodological and terminological questions as well as on societal implications. A distinct focus of the symposium
was the aim to communicate the results of our discussions in a way that is as
readily understandable as possible. On the one hand, this is a requirement of the BMBF
proposal and on the other hand, it is also the general position of the IZEW to deal with
ethical questions not only from within the scientific disciplines in which those questions emerge but also in a way that researchers from various disciplines and everybody
interested in the topics can understand the points made. Therefore, we produced diverse materials to demonstrate the discussed problems in the form of visualizations,
and we chose a particular format for the public closing ceremony. The participants
created informative as well as entertaining short versions of some questions and answers that accompanied us through the whole week. Supported by members of the
Harlekin-Theatre Tübingen, they presented these short sketches, stories and poems in
the form of a science theatre.1 Two examples of these contributions are included in this
volume as prologue and epilogue.
In the spirit of this sort of transdisciplinarity, the present anthology addresses scientists from the diverse disciplines that are represented by the authors – philosophy, psychology, sociology, theology, educational science, law and politics – as well as students,
teachers and everybody who is curious about developments in moral psychology and
their possible social effects.
Focusing on recent developments in moral psychology, the anthology mainly addresses the so-called dual-process theories. Dual-process theories are a common model of explanation in various psychological disciplines and are most prominent in social
psychology. All dual-process theories have in common that they suppose two different
ways in which decision making is performed. The first process is quick, implicit and
unconscious. The second process is slow, explicit and conscious. It was Daniel Kahneman who linked the quick and unconscious process to intuitions and the slow and conscious process to reasoning. From here, dual-process theories entered moral psychology and are now used very commonly to describe human moral behavior.
The book is organized into four sections, all of which have a special focus on one aspect of the discussion about moral psychology: the level of investigation, methodology,
terminology and application. This structure mirrors a classical philosophical methodology, namely the reflective equilibrium. The sections alternate between theoretical, empirical and practical considerations, making each of them fruitful for the others. This methodology makes sure that the diverse levels of investigation – theoretical and empirical, descriptive and normative, as well as theory and praxis – are kept clearly apart and at the same time are related to each other in a reasoned way.
Therefore, within the first part, some foundational theoretical considerations are
presented that have to be taken into account when discussing empirical insights into
morality. The second part switches to some examples of those empirical insights that
examine the complex processes under discussion in more detail and with a critical eye
on the established dual-process theories. The third part brings us back onto the theoretical level by investigating the merits and problems of the terminology that is used in
discussions on moral psychology. Finally, within the fourth part, the focus of the discussion is expanded to different fields of application, examining how and in what sense
the psychological insights and philosophical considerations can be made fruitful on the
societal level.

1 These materials as well as a video of the science theatre are available on the homepage of the symposium:
http://www.uni-tuebingen.de/en/43311


Within the first part, considerations on the relationship between ethics and empirical sciences in general and within dual-process theories in particular are collected. This
section starts with a reflection on how dual process theories are located in the jungle of
metaethical positions and a proposal of conditions of adequacy those theories have to
fulfill, presented by Cordula Brand, the editor of this anthology. Afterwards, Dieter
Birnbacher asks when and where ethics in general and applied ethics in particular is in
need of empirical facts. In doing so, he presents some further criteria of a proper usage
of empirical studies within ethical considerations. The following two papers examine
the intermixture of descriptive and normative levels of investigation in more detail
within two very prominent theoretical approaches in moral psychology. Maciej Juzaszek analyzes the normative character of moral intuitions in the so-called social intuitionist model, presented by Jonathan Haidt. He addresses one of the main problems
that intuitionists have to deal with, namely the problem of lacking objectivity. Janett
Triskiel deals with the heuristic approach of Gerd Gigerenzer. She points out how such
a heuristic approach can be reconciled with a rationalist position by differentiating
between adaptive and erroneous heuristics.
The second section of the anthology presents some up-to-date empirical studies that
use different methodologies, quantitative as well as qualitative ones. They all have in
common that they broaden the focus of investigation beyond that so far assumed by the prominent
advocates of dual-process theories. First, Robert Hepach presents his work within developmental psychology on the concern of very young children for the well-being of
their fellow human beings. He proposes that this early form of social behavior might be
a core motive for developing more complex prosocial behavior. Afterwards, Michael
von Grundherr investigates the behavior of school children. He establishes a connection
between moral argumentation skills and aggressive behavior and shows what kind of
implications these studies entail for philosophical ethics. Focusing on adults, Liz Gulliford presents two case studies that show different ways in which psychological investigations contribute to ethical considerations – one that focusses on the way lay people
really use ethical concepts and one that helps to understand how we can achieve ethically desirable behavior. Mariola Paruzel-Czachura investigates ratings of lay people on
moral concepts as well. She deals with the question of to what extent people take into account the degree of moral integrity of persons when they have to judge the behavior of
these persons. Finally, this section includes two contributions that remind us of several
difficulties and pitfalls we have to be aware of when interpreting empirical results. Jonas
Nagel and Alex Wiegmann analyze the procedure of inducing an abstract principle
from a set of case-based moral intuitions. They remind us of the importance of being
aware of the requirements for objective facts and argue for the criterion of intersubjectively shared concepts. Stefan Walter also focusses on the requirements, delivered by
philosophy of science, which empirical studies have to fulfill. He criticizes a famous


study within genetics which states a causal link between a certain genetic predisposition
and moral behavior by pointing out diverse methodological failures.
The third section of the anthology compiles contributions that reassess traditional
philosophical terminologies in the modern debate on moral psychology. Friedo Ricken
starts these considerations by bringing back to our mind the reflections of Aristotle and
highlights the usefulness of this traditional approach. Kristján Kristjánsson takes up
this line of argumentation and discusses the merits of an Aristotelian understanding in
contrast to the modern understanding of a "new synthesis in moral psychology". In
doing so, he especially emphasizes some of the consequences the new understanding
would have on our societies. Focusing on social theory, Nathan Emmerich then suggests reinterpreting the findings of empirical moral psychology within the terminological framework of Pierre Bourdieu. He especially makes use of the terms ethos, eidos
and habitus by showing how they can be made fruitful to understand the interplay of
morality and ethics within a practice-oriented approach. Alissa MacMillan takes up the
path of pragmatism and reminds us of insights from Richard Rorty. She argues that
whenever it comes to ethical considerations, we should take care to deal with the urgent
ethical topics first and not make the mistake of taking metaphysical questions as more
important. Alexander Stingl and Sabrina Weiss close this section by introducing the
terminology of care ethics into the discussion. They analyze the discourse on morality
in moral psychology by taking seriously a non-reductive understanding of embodiment, the notion of mindfulness and a practical stance that starts with the subject.
Finally, within section four, societal implications of dual-process theories are under
investigation. Jan Stets starts by presenting a sociological perspective on how to take
seriously both the conscious as well as the unconscious process underlying moral behavior and decision making. This, she argues, is the only way to understand individuals
as moral actors who, on the basis of how they see themselves in moral terms, will behave in ways that attempt to verify their self-view along the moral dimension. Ronald
de Sousa discusses the question of how we can make sense of a situation where liberal
positions within ethics – supported by insights from empirical sciences – oppose more
traditional concepts of morality by asking whether antimoralism can avoid moralizing.
David Hall draws the line from moral psychology to political considerations in analyzing the political force of dual-process theories. In doing so, he criticizes some of the
arguments that entered not only political philosophy but also public debates. Alexandra
Retkowski finally investigates how dual-process theories might help to develop programs within educational organizations that serve to diminish the number of cases of
abuse in childcare.
Taken together, the present anthology gives an overview of a wide range of up-to-date research within moral psychology and asks diverse questions about the impact
these insights might, should and shall not have on further discussions in normative as
well as in applied ethics.


Neither this anthology, nor the symposium itself would have been possible without
the support of many people. First, I wish to thank all speakers, workshop supervisors
and participants for an amazing week, exciting discussions, wonderful evenings and the
great collaboration in preparing the anthology. It was a pleasure to work with all of
them and I am already looking forward to meeting them again! My special thanks go
to Margarita Berg for her scrupulous proofreading, her excellent translations, her advice on biological contents and her constant supply of comfort food and drinks. It is
indeed hard to find the right words of thanks for Julia Dietrich, head of the project as
well as of the department of ethics and education. She supported me from beginning to
end in every circumstance with her constant advice as well as her humor that never
failed to get me back on my feet.
I want to thank my colleagues from the IZEW, and especially the members of the research training group "Bioethics", for their input and many discussions on several questions concerning moral psychology that helped a lot to prepare the proposal as well as
the conceptual framework of the symposium.
Concerning the preparation of the symposium, I thank Igor Wroblewski for his
friendly support of all participants. During the symposium, Pia Mozer really covered
my back in some difficult organizational moments; thanks a lot for that, it was invaluable. Furthermore, also in the name of all participants, I wish to thank all student and
graduate assistants who supplied us with lots of coffee, tea, and anything else we needed: Michael Botsch, Lea Schumacher, Elena Schilling, Andri König and Jenny Fadranski. As always, the front office team of the IZEW, Birgit Leweke and Matthias Schlee,
was very helpful: Many thanks for saving me from all the administrative pitfalls.
I want to thank the members of the project management agency of the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt, DLR) for their excellent
support and their friendly advice. Furthermore, I am grateful to Frank Schindler, my
editor at Springer publishing house, for his encouragement and the fruitful dialogue.
Last but not least, I am now allowed not only to name but also to officially thank the
referees, who had the difficult task of selecting the participants from a multitude of
applications: Gertrud Nunner-Winkler, Friedrich Hermanni and Matthias Kettner.
I do hope that this anthology will help to create new momentum, open up new
questions and broaden the social perspective of the fascinating research adventure of
moral psychology.

Part I
On the Relationship Between
Ethics and Empirical Sciences

Dimensions of Moral Intuitions – Metaethics, Epistemology and Moral Psychology
Cordula Brand

Abstract
In everyday life we are permanently confronted with moral problems. But we are also
faced with many difficulties and many different forms of how to come to a moral
judgment. Modern moral psychology, as an interdisciplinary approach, tries to shed
light on moral decision-making processes. Within philosophy it has been discussed
since Hume's sentimentalist approach to ethics what kind of moral judgment is the more convincing one – a judgment that relies on emotions or a judgment that relies on
rational thinking. This debate nowadays heats up again as empirical data enters the
philosophical realm. Psychologists, sociologists, neuroscientists and biologists deliver
studies that seem to support the conviction that moral decision making actually does
not have much to do with rational considerations – instead, it seems to be an intuitive
process that depends on emotions instead of deliberation. In doing so, many researchers opt for a dual process theory that relies on implicit as well as explicit structures of
reasoning. However, in many cases they marginalize the explicit path in their work. I will
argue that this is due to the fact that different kinds of explanation levels are not kept
apart thoroughly. I will start my considerations on the merits and problems of the recent developments in moral psychology from the epistemological point of view and
distinguish this from a psychological and a normative understanding. In the light of
these considerations I will show what kind of problems can occur through an analysis
of Haidt's social intuitionism and draw some conclusions on how we could elaborate a
dual process theory that is theoretically informed as well as empirically supported.

Cordula Brand
University of Tuebingen
International Centre for Ethics in the Sciences and Humanities (IZEW)
cordula.brand@uni-tuebingen.de

© Springer Fachmedien Wiesbaden 2016


C. Brand (Ed.), Dual-Process Theories in Moral Psychology, DOI 10.1007/978-3-658-12053-5_1


1 Struggling With Intuitions

In everyday life we are permanently confronted with moral problems. We have to choose between different alternatives of moral actions. We have to decide whether our
actions or the actions of our fellows are right or wrong, permissible or not permissible.
Furthermore, we have to explain to and teach our children what kind of behavior is
right or wrong and how to act accordingly. However, we are faced with many difficulties and many different forms of how to come to a sound moral judgment. Sometimes
we are very certain about these judgments and sometimes we are not. Sometimes we
feel secure about our judgment, but other people are as secure about the opposite decision. Sometimes we immediately know what is right or wrong, sometimes we have to
think about it and consider different possibilities, and sometimes we realize afterwards
that our immediate action was misguided or – the other way around – that our thorough considerations led to the wrong decision whereas our gut feelings had been right
from the beginning. I guess that all of us are familiar with all these different situations.
The question is, what do we make out of this? Is there any main route to come to moral
decisions?
Within practical philosophy it has been discussed since Hume's sentimentalist approach to ethics what kind of moral judgment is the more convincing one – a judgment
that relies on emotions or a judgment that relies on rational thinking. This debate nowadays heats up again as empirical data enters the philosophical realm, e.g. in the form
of experimental philosophy, neuroscience of ethics, or a new science of morality. Correspondingly, psychologists, sociologists, neuroscientists and biologists deliver studies
that seem to support the conviction that moral decision making actually does not have
much to do with rational considerations instead, it seems to be an intuitive process
that relies on emotions instead of deliberation (e.g. Haidt 2013; Prinz 2007b; Harris
2009). However, if that were the case this insight would change our worldview extensively. Would it still make sense to think about and deliberate on moral decisions or
could we just follow our immediate feelings and therefore be right? This seems a bit
odd, doesn't it? So, we have to ask ourselves and all the scientists working on the topic of moral decision making – no matter whether they are coming from the empirical sciences or the humanities – what role moral intuitions, emotions and rational judgments really do play in everyday life, in dilemma situations and in ethical debates.
It is not that easy to define what we mean when we talk about moral intuitions. In
everyday language, we might refer to those of our moral convictions that we take, more
or less, for granted, like "one has to keep a promise" or "murder is wrong". Within the
philosophical discussion some of the most prominent examples to address the topic of
moral intuitions occur in thought experiments, especially in the so-called trolley cases (Foot 1967; Thomson 1986b).1 The famous trolley problem differentiates between two situations, the bystander case and the fat-man or – politically correct – the footbridge case.
footbridge case. In the bystander case, a person can save a group of people that is standing on a railway line by hitting a switch so that an oncoming trolley will change the
track and therefore kill only one instead of several people. A general intuitive judgment
that might occur is: "It is permissible to hit the switch to divert the trolley." In the second
scenario one can save the whole group by throwing a fat man in the path of the trolley.
The common intuitive answer here is: "It is not permissible to push a person onto the track to stop the trolley." As you can see, even if the outcome of the two acts – hitting the switch and pushing the man off the footbridge – would be the same, people's intuitions vary according to the means and therefore lead to a dilemma situation. So the
question still is: Imagine you can save five people by killing one person: What would
you do? What would be your intuitive answer?
These kinds of thought experiments and the intuitions they are evoking, as well as
empirical studies addressing them and other forms of moral decision making, are nowadays investigated by an interdisciplinary approach that calls itself moral psychology.
Classically understood, moral psychology belongs to the realm of metaethics (Fisher
2011). I will start my considerations on the merits and problems of the recent developments and discussions of the young discipline of moral psychology with some short
explanations of what metaethics is all about and how moral intuitions fit into the picture.2 As we will see, most of the metaethical debates around moral intuitions belong to
the realm of epistemology. Therefore, the main focus of this paper concerns epistemological questions. As will become clear in the course of the arguments, positions within
moral epistemology might be intertwined and intermingled with other metaethical
questions and levels of investigation. So we will have to consider some of those and see
what additionally we will have to take into account when discussing moral reasoning and moral behavior in the light of ethical and empirical arguments. Then we turn to
the most promising theories on the interplay of reason and intuition, the dual process
theories. I will focus on Haidt's (2001) theory of social intuitionism as this is nowadays
most prominently discussed within political (Haidt and Graham 2007; Gunnell 2007)
and educational contexts (e.g. Schmidt 2015). In the light of the metaethical considerations presented at the beginning, I will offer an analysis of this approach and draw
some conclusions on how we could elaborate a dual process theory that is theoretically
informed as well as empirically supported. Finally, I will sum up some open questions
and ideas for further research that arise from my analysis.
1 The discussion of the trolley cases is especially prominent in experimental philosophy (X-Phi, see e.g. Knobe and Nichols 2008). There are so many different papers and positions that one even speaks about "trolleyology" nowadays (Appiah 2008).
2 For their very helpful comments on earlier drafts of the paper I would like to thank Robert Ranisch, Tobias Meilinger and Margarita Berg.


2 How Moral Intuitions Fit Into the Metaethical Picture

We can differentiate between three main areas of ethics: applied ethics, normative ethics and metaethics.3 Applied ethics is concerned with making specific moral judgments
in concrete situations, e.g. whether we should allow preimplantation genetic diagnostics or not. Normative ethics sets the guidelines for specific moral judgments, e.g.
whether we should judge according to what would be best for the greatest number of
people (utilitarianism) or what would most serve a good life (virtue ethics). Metaethics
analyzes the praxis of ethics, i.e., the praxis of making specific moral judgments and the
praxis of setting guidelines for those judgments. It is important to note that whereas
applied and normative ethics render normative judgments, metaethics does not.
Metaethical arguments do not entail normative judgments themselves. However, as it is
equally important to consider the normative level of explanation, I will come back to
this point later (sect. 3.3). Nevertheless, right now I will focus on metaethics for a while.
Almost all other philosophical dimensions of argumentation are included within
metaethics.4 On the epistemological level one asks by which means a moral belief is
justified. How can we know that a moral conviction is adequate or even right? On the
ontological level one discusses whether something like moral facts or properties exists
and if so, whether we are dealing with natural or non-natural properties. Furthermore,
one can investigate how moral language works and what kind of psychological processes are involved when we perform moral judgments. The latter topics are interdisciplinarily discussed, together with linguists, psychologists, sociologists and so forth.
Arguments concerning the role of moral intuitions do take place on the psychological as well as on the epistemological level. On the psychological level, moral intuitions
are investigated in the context of motivation and of characterizing moral beliefs.5 First,

3 An overview of the different topics addressed within the three fields of ethics or moral philosophy can be found e.g. in Gensler (2011).
4 In a traditional understanding of metaethics, all levels of investigation mentioned before are restricted to a priori argumentation (Pust 2014). This means that no empirical data is needed to discuss metaethical topics. We can know by the right way of thinking alone which statements are correct and which are not. Questions within the psychological realm fell under this verdict, too. Nowadays, the understanding of metaethics is much broader and explicitly includes empirical investigations. This is exactly why we are discussing the relationship between empirical and theoretical arguments within metaethics and – beyond that – the relations between metaethics and normative ethics.
5 After some developments within metaethics in general and following the cognitive turn, moral psychology deals generally with understanding the psychological processes of forming moral beliefs. This includes questions of how humans actually think about morality, how they really do make moral judgments and how they behave in moral situations. Within such a broad understanding, moral psychology exceeds its traditional topic and enters all areas of metaethical arguments. Empirical investigations within the expanding field of moral psychology include developmental questions, the nature/nurture discussion, the existence of special character traits, the phenomena of agency and altruism, and many more. A comprehensive collection of examples from the different areas can be found in Sinnott-Armstrong (2008).


the question is addressed whether a special kind of motivation has to be attached to beliefs to actually turn them into moral beliefs. Second, one discusses whether moral
convictions actually are beliefs or some non-belief states, whether they are cognitive or
non-cognitive states. In a broader and more modern understanding of the psychological dimension of moral decision making, one can further investigate how processes of
moral decision making are performed. Nowadays, the most prominent answer to that
last question is that a moral judgment is reached intuitively. This is the point where
intuitions occur most explicitly in the discussion. We will come back to all these considerations later on in sect. 3.1. There, I will also consider what exactly might be understood by the term "intuition".
As already mentioned above, moral intuitions do play another important role in the
area of epistemology. One prominent answer to the epistemological question about the
justification of moral beliefs is the following: A moral belief is justified by moral intuitions. This means that we can know that a moral belief is adequate if it is accompanied
or caused by an intuitive persuasion. To understand what this kind of approach is all
about we have to go a bit more into detail concerning the epistemological background.
Principally, the epistemological question of justification addresses all kinds of beliefs,
e.g. the belief that grass is green or the belief that Pluto is a planet. Moral beliefs are a
sub-group of beliefs. So the question here is how we can justify our moral beliefs. The
notion of justification, included in this question, leads to the classical problem of the
so-called epistemic regress:
To know that a belief is right is to have a belief that is justified. This means we have
to have a good reason to think that it is true. A good reason is, e.g., if you can infer a
moral belief from something else. Let's take the trolley case as an example: "Hitting the switch is right" is a moral belief. We can justify this belief by arguing that it would be wrong to let five people die instead of one. However, this argument entails a moral belief as well. Now we can go on by stating, like e.g. Thomson (1986a) does, that it is permissible to kill the one because it is a mere side effect of the action.6 But, again, this
argument entails a moral belief that has to be justified and so we can go on forever ad
infinitum.
How can we approach this explanatory regress? We have three possibilities. First, we
can choose to become skeptics. We can accept the epistemic regress and consent that in
the end there will never be a final justification for moral beliefs (Sinnott-Armstrong
2011). Then we would have to give up the idea of any moral knowledge. If you do not
want to abandon the possibility of some kind of moral knowledge, you have two other
options: You can commit yourself either to coherentism or to intuitionism. Coherentism states that beliefs are justified if they are integrated into a coherent set of beliefs (Dorsey 2006). One immediate problem of this position is: how and when do you know that
6 In contrast, in the footbridge case it would be a primary act to throw the man onto the track (Thomson 1986a).


this is the case? You have to define criteria for coherence, like Rawls did with his famous reflective equilibrium (Rawls 1971, 48ff.). The problem is to define what exactly
counts as a coherent set of beliefs and to what degree such coherence has to be fulfilled
to speak of a justified belief.
The third and last possibility is to opt for intuitionism.7 Intuitionists state that at
some point in the chain of justification you will need no further inference. You just
know that a belief is true. You are sure that there is no further justification needed. This
awareness, as a kind of immediate knowledge, prepares the ground for all other inferences. So intuitionists state that moral intuitions like "murder is wrong" are non-inferentially justified beliefs. This implies some typical statements like "moral intuitions are the basis of moral knowledge", "moral values are understandable without reflection" or "we have an intuitive knowledge of moral values" (Fisher 2011, 146ff.).
Intuitionism has evoked extended criticism that leaves us with three major challenges. Opponents of intuitionism state that our intuitions are not able to justify our moral
beliefs because they are unreliable, irrelevant and inexplicable. Everyday experiences as
well as philosophical thought experiments (like the trolley cases) show that intuitions
can be inconsistent and even paradoxical – therefore they are unreliable. As Harman
(1977) and later Unger (1996) state, some intuitions are shaped by morally irrelevant
facts. Therefore, they do not reveal anything about moral obligations and accordingly
do not have an additional explanatory benefit. This makes them irrelevant for justificatory questions. Furthermore, as it is not possible to explain why intuitions should be
reliable, Field (1988) argues that they are inexplicable.
Still we might think that intuitionism is on to something, that in some sense and in
some situations, we really can trust our intuitions concerning moral problems. Then we
have two ways of defending the epistemological role of intuitions: a theoretical and an
empirical one (Pust 2014).
The theoretical path – chosen by rationalists like Moore (1903) and Ross (1930) –
works via the argument that intuitions are prima facie justified beliefs. Moral intuitions
in the rationalist sense are like mathematical knowledge; they are immediately grasped
to be true, like 2+2=4.8 As such they are a priori, theoretical, self-evident beliefs. However, this statement is again a self-evident insight, an intuition. Now, the argument goes
that this epistemic circularity which emerges is not necessarily vicious but can be understood as virtuous circularity: intuitions are special exactly because they are both required for a decent epistemology and serve epistemic self-support (Pust 2014;
Huemer 2005).
7 Within the literature, you can also find the triad of foundationalism, coherentism and skepticism. Foundationalism is a more classical and broader term within general epistemology. Within metaethics, the term intuitionism is more often used, so I will stick to this form of the triad.
8 There are several ways of how to characterize intuitions in the rationalistic sense. They can be understood as self-evident beliefs, as dispositions or as sui generis states (see Pust 2014).


In case you are not convinced by this strategy, you can choose the empirical option,
like proponents of a moral sense theory do (e.g. Hume in his Treatise of Human Nature
or Smith 1759). The terms "moral sense theory" and "sentimentalism" are often used ambiguously, or sentimentalism is understood as a generic term that covers different approaches to the role intuitions are playing (e.g. by Prinz 2007b). In the following I will denominate the epistemological theory "moral sense theory" and theories on the psychological processes behind moral reasoning "sentimentalism". As you will see later
on, it is of quite some importance to distinguish between these two kinds of approaches.
Moral sense theory – the epistemological approach – draws an analogy to perception
as well as to aesthetics. There are many possibilities to understand intuition in this
setting. Sometimes those intuitions are addressed as feelings or desires, as affects but
also as plans, or as dispositions to have such things. The point is that it is suggested to
be possible to justify these intuitions by experience or other non-intuitive evidence. In
other words, it is possible to prove inductively that your intuition is reliable. We will
come back to these topics later (sect. 3.2) in more detail.
What is interesting as well as challenging is the problem that the epistemological discussion about intuitions is connected directly or indirectly with the psychological discussion, briefly introduced above. The most direct link leads to the question of how
moral knowledge can be characterized.
Again, rationalists state that moral knowledge is formed by non-inferentially justified moral beliefs. Such moral beliefs have a propositional content as they express a
belief about things or situations in the world. Therefore, rationalists are committed to
cognitivism, stressing that moral convictions must have a propositional content, that
they are thoughts. Moral sense theory often states that the basis of moral judgment is
formed by emotions (or desires, affection etc.). Emotions differ from beliefs in that they
do not have to have a propositional content. Therefore sentimentalists rather seem to
be committed to non-cognitivism – even if they are not obliged to follow that
path.9 As will be shown in further detail later (sect. 3.1), this possible connection between the psychological and the epistemological realm bears some tripping points of its
own.
The question right now is whether recent, empirical insights into moral psychology
can help us to find a way through the maze of positions and problems. Many people,
like e.g. Haidt (2006), Greene (2007) or Prinz (2006), are convinced that this is definitely possible.

9 The debate between cognitivism and non-cognitivism about the character of moral knowledge – beliefs or non-beliefs – leads us into a jungle of metaethical positions, e.g. realism / non-realism, and many standard arguments and counter-arguments. These discussions are complicated because they also entail different notions of cognitivism and non-cognitivism, referring to different levels of investigation. I will come to some of them later. Here you just have to remember that the justification and the characterization of our moral knowledge might interact somehow, which leads to further discussions.


They state that we have to detect the empirical content of a certain philosophical theory and put it
further arguments for challenging or defending the theory in question. So lets have a
look at what kind of help is offered nowadays.10
There are several fields of investigation, from social psychology, cognitive sciences,
and neuroscience to genetics, that are taken into account in modern moral psychology.
For instance, neural correlates of moral intuitions (Greene et al. 2001; Greene et al.
2004) as well as social behavior in general (e.g. Skerry and Saxe 2014; Moll et al. 2002)
are investigated via fMRI studies. Within social science and social psychology, the impact of automatic and implicit cognitive processes on decision making is examined
(Bargh and Chartrand 1999; Greenwald and Banaji 1995). Cognitive scientists engage
in research concerning cognitive dissonance (Chaiken 1980), as well as post hoc causal
explanations (Gazzaniga et al. 1996, Nisbett and Wilson 1977). Last but not least several
interdisciplinary research groups focus on examining questionnaires concerning different moral situations (Turiel et al. 1991; Haidt et al. 1993; Haidt and Hersh 2001) or
thought experiments (Petrinovich 2000; Fischer and Ravizza 1992; Greene et al. 2004).
There are several approaches within metaethics and / or moral psychology to use
these studies to create empirically informed arguments. Researchers like Haidt, Prinz
or Greene all include empirical data in their argumentation concerning the role of
emotions, intuitions and reasoning in moral decision making. Furthermore, most of
them share the theoretical foundation of the dual process theories, assuming a rational and an emotional pathway of decision making (Greene and Haidt 2002). In the following, I will briefly present what dual process theories are all about and illustrate typical strategies these theories follow. Afterwards, I will use Haidt's model of social intuitionism to identify a couple of standard problems. The corresponding list of misunderstandings will then serve to formulate points to consider when engaging in moral psychology research in general and working on a dual process theory in particular.

3 Dual Process Theories in Moral Psychology

The so-called dual process theories have been established especially within social psychology (Petty and Cacioppo 1986) and cognitive science (Evans 1984). The core concept proclaims two different cognitive processes that are engaged in reasoning procedures: an unconscious, implicit automatic process (P1) and a conscious, explicit one
that we can control (P2). The first process is understood as serving as a fast and contextual method that allows quick decision making. It is sometimes imagined as emotionally

10 As you can find many examples not only in the literature in general but also in the present anthology, I restrict my listing here to the most prominent examples.


triggered, sometimes as an intuitive process or as a simple rule-following path in the sense of heuristics. All these variants are found in the literature. The second process is understood as a slow, verbally expressible method that requires cognitive effort. It is often called something like "rational decision making".
Beginning with the metaethical work of Hare (1981), who highlighted that we have
to take into account a critical and an intuitive level of consideration, there are now
several authors – e.g. Greene (2007), Kahneman (2012), Haidt (2001) and Saunders (2009) – who are following the path of two processes to explain moral behavior and moral reasoning processes.11 These dual process models are right now under discussion because they – so the main criticism goes – tend to oversimplify both processes (Evans
and Frankish 2009). However, there are some other points to consider as well.
Within the recent approaches to explain moral decision making, the two mentioned
processes – the unconscious P1 and the conscious P2 – are not only oversimplified but they are evaluated differently. Haidt, e.g., highlights the role P1 is playing; Greene, in contrast, dismisses P1 in favor of P2. Both approaches, as well as many others in the literature that call themselves dual process theory, give the impression of not actually taking seriously the duality of the processes that are involved in moral decision making.12 Either the rational process is completely underrepresented and turned into the tail of the "emotional dog" (Haidt 2001) or the intuitive processes are rendered as irrational and misleading (Greene 2007). Both ways of dealing with moral decision making seem to exaggerate one of the two processes in question and at the same time underestimate the
other one.
Another problem of several approaches within the field is the terminological fuzziness that one has to deal with. One can never be sure what is really talked about and
what has been actually examined by the empirical studies that are drawn on to support
the theories in question – emotions, feelings, unconscious rule following, etc. "Intuition" seems to be used as a kind of umbrella term to collect everything that is somehow non-rational under one and the same roof. This makes it hard to differentiate between processes with or without propositional content, cognitive versus non-cognitive processes as well as conscious and non-conscious processes. Therefore, it is very difficult, if not
impossible to figure out what the different theories are actually talking about and what
the studies that are used to support them actually examine.13 Furthermore, many theories do not differentiate properly between moral behavior, moral judgments and moral
decision-making processes, which makes the fuzziness even more serious. The authors

11 A helpful overview of different approaches to dual process theories, bringing together philosophical theories with social and cognitive psychology as well as developmental considerations, can be found in Evans and Frankish (2009).
12 An interesting and comprehensive analysis of possible relations between P1 and P2 can be found in Liao (2011).
13 Such an analysis concerning the work of Greene and colleagues can be found in Kahane and Shackel (2010).


switch between these concepts like they switch between several understandings of the
term "intuition".
It will be argued in the following that both maneuvers – pushing one of the processes into marginality as well as allowing terminological fuzziness – serve to mix up several
levels of explanation. If one took seriously the boundaries between the psychological,
epistemological and the normative levels of explanation, one could see that this mixture
entails three categorical mistakes.
To demonstrate these points, I will take Haidt's social intuitionism as an example. Social intuitionism states, to summarize briefly in Haidt's own words, that

"(a) [t]here are two cognitive processes at work – reasoning and intuition – and the reasoning process has been overemphasized, (b) reasoning is often motivated, (c) the reasoning process constructs post hoc justifications, yet we experience the illusion of objective reasoning; and (d) moral action covaries with moral emotion more than with moral reasoning" (Haidt 2001, 815).

With these theses in mind, we can try to locate this version of a dual process theory
within the metaethical structure. This maneuver serves to show that we have to consider the different levels of explanation when engaging in thorough research on moral
decision making.

3.1 On the Psychological Level of Explanation

On the descriptive level we ask how humans form a moral judgment, how a moral
judgment can be characterized and what motivates us to perform a moral action. Haidt
states that all kinds of unconscious processes influence our moral judgments. So, what
does he mean by that? On the descriptive psychological level, he seems to state that we
are driven by emotionally influenced gut feelings when we reach moral decisions:
"[R]easoning may be the tail wagged by the dog. The dog itself may turn out to be moral intuitions and emotions such as empathy and love (for positive morality) and shame, guilt, and remorse, along with emotional self-regulation abilities (for negative morality […]). A dog's tail is worth studying because dogs use their tails so frequently for communication. Similarly, moral reasoning is worth studying because people use moral reasoning so frequently for communication. To really understand how human morality works, however, it may be advisable to shift attention away from the study of moral reasoning and towards the study of intuitive and emotional processes" (Haidt 2001, 825).

With the terminological fuzziness and the missing distinctions that have already been
addressed above, namely between cognitive and non-cognitive contents as well as action and behavior, we face the problem that we cannot be sure what is meant by that
statement.


Let's start with the distinction between action and behavior. To demonstrate the
point, consider the following situation you might be familiar with: You are in a hurry
because you have a very important meeting, like a job interview, and you are already a
little late. In front of you, an elderly lady accidentally drops her shopping bag and all
the groceries are spread on the street. Now, what is going on in your head? Maybe
something like this: "Oh dear!" Now the question is, what does this "oh dear" mean? Does it mean that you feel something like fear of missing the meeting – fear wins and so
you pass by to get to your meeting without any form of conscious, propositional
thought? Or does it mean that you are quickly considering your options consciously? In
the first case you would behave, in the latter case you would act.
In Haidt's approach, when we take seriously that all our moral decisions depend on unconscious processes and do not entail conscious considerations in the first place, we never act, we always behave (Anscombe 2000, 5). Our impression that we sometimes consider what to do and weigh pros and cons as well as different alternatives is, according to Haidt, in most cases just a post hoc rationalization which comes with the
illusion of a rational decision. If we followed him in this analysis, we could find an explanation why Haidt does not need to distinguish between moral behavior and moral
acts. There would be just moral behavior. The question is whether we buy this.
Sometimes – and, as some arguments go, almost all of the time – there are no immediate judgments or decisions implied, e.g. when we spontaneously help somebody in an
emergency situation. In such a scenario, we behave morally. Nobody denies that this
form of behavior occurs. In many everyday situations, we might not really think about
what we are doing and might be guided by diverse influences other than reflective deliberation. What is in fact interesting and important to know – and what makes the empirical insights into moral psychology so valuable – is to learn how often, by which means and in which way we are influenced, distracted, guided and misguided in our moral decision-making processes by emotional affairs, external influences and automated processes that we seldom become aware of.14 What is problematic is to reduce all
forms of moral decision-making processes to such a simplified model of behavior.
However, sometimes, moral behavior follows moral decisions that rely at least in
part on moral judgments. Haidt (2001, 818f.) admits that. In this case we could speak
of a moral action – presuming that an act has to be performed willingly, intentionally and on the basis of knowledge about the circumstances, the possible alternatives and the consequences of the act (Ricken 2013, 96ff.). Haidt's argument now is that we might act morally sometimes, but we always do so after we have already reached a moral decision. He names three different ways in which reasoning enters the picture in such a way: the "post hoc reasoning link" (Haidt 2001, 818), the "reasoned persuasion link" (ibid.,
14 One example is how we are influenced in our judgments by disgust, as is shown e.g. by Curtis (2011), Haidt et al. (1994) or Pizarro et al. (2011).


818f.) and the "social persuasion link" (ibid., 819). These links all have in common that
they do not alter or have any influence on the intuitively reached moral judgment;
however, the last one is used to influence the intuitions of other people. However,
Haidt names two further links that really do have a causal effect on our own moral
judgments, the "reasoned judgment link" (ibid.) and the "private reflection link" (ibid.). But, he immediately denies the importance of these two links by stating: "However, such reasoning is hypothesized to be rare, occurring primarily in cases in which the initial intuition is weak […]" (ibid.).
The counter-argument would therefore be that we do really act morally much more
often than Haidt supposes or that moral action and moral behavior go hand in hand as
any moral action is situated in contexts and accompanied by emotions, etc. However
those connections might occur, we have to differentiate between moral behavior as
spontaneous reaction to a given situation and moral action as following a decision that
includes judgments. Only if we do so will we be able to investigate the possible differences between the two sorts. In doing so – and a dual process theory taken seriously can be a fruitful theoretical background here – we might gain a more appropriate
description of the complex situations we are faced with in the actual world and therefore in our everyday life.
Another example of problems that arise out of terminological fuzziness has to do with the umbrella term "intuition". This becomes clear in the following list of five sets of
intuitions, determined by Haidt and Bjorklund (2008, 203):

harm / care (a sensitivity to or dislike of signs of pain and suffering in others […])
fairness / reciprocity (a set of emotional responses related to playing tit-for-tat […])
authority / respect (a set of concerns about navigating status hierarchies)
purity / sanctity (related to the emotion of disgust […])
prejudice / exclusion (for in-group concerns)

Now, consider the situation mentioned before. It becomes almost immediately clear
that you cannot do both, help the lady to collect her groceries and arrive at the meeting on time – is that an intuition? You are in a typical dilemma situation and you have to decide immediately – you do not have the possibility to take your time and think everything through thoroughly. Arguing in Haidt's model or as a sentimentalist, we could
say that our decision to hurry on without helping is driven by the intuition of harm /
care as well as the intuition of authority / respect. Authority then, in that situation,
outweighs harm / care, you might say. But is this the only possible explanation? You
might also argue that some form of rule following is at stake. It might be the case that
you quickly apply a certain kind of heuristic (Gigerenzer and Todd 1999), namely: opt
for the long-run profit of getting the job or for a short sensation of having been helpful.

Dimensions of Moral Intuitions Metaethics, Epistemology and Moral Psychology

31

Or you may have applied a quick sort of cost-risk calculation. But this interpretation differs from an emotional one, as the process is at least more conscious and might entail propositional contents. What really happens might well be a mixture of all the mentioned possibilities. I do not want to argue for or against one of these possibilities here. I just wanted to show that the explanation of unconscious processing of moral decisions, without any kind of conscious, cognitive considerations with propositional contents, seems too simple to cover complex everyday situations.[15]

[15] This insight is far from new; corresponding critique is spreading more and more in the literature (Evans 2009), and a number of authors share this view (e.g. Helion and Pizarro 2015).

What is needed here, on the descriptive psychological level, is much more empirical research. But, and this is an important point, the studies have to be well designed to cope with the complexity of the subject, and they have to be terminologically settled so that it is clear what is being examined: behavior or action, automatic rule-following or emotions or conscious thinking.
Such well-informed studies, in combination with a dual process background, could also shed light on the other psychological questions that are addressed in the classical metaethical discussion on moral psychology, namely the debate between cognitivism and non-cognitivism. If we ask how moral knowledge can be characterized, as belief or as something else, we might no longer be forced to choose between the two. Again: cognitivism, on the one hand, states that moral judgments express beliefs which have a propositional content. Non-cognitivism, on the other hand, stresses that moral judgments express non-belief states which do not have to have a propositional content. If two processes are at stake, they might culminate in something that is both, or neither of the two.[16]

[16] A decent combination might also explain why we do not talk as sentimentalists even when moral behavior is at stake and we have reacted automatically: we say "killing is wrong", not something like "killing: bah".
The same could be true for the debate about the motivation of moral action. Motivational internalism, prominently advocated by Aristotle, Kant and Smith (1995), states that as soon as we discern a good reason for an action, e.g. that the action is virtuous or conforms to the law, we are motivated to act accordingly. Therefore, an action becomes a moral action if a good reason is part of the motivation to act. Motivational externalists doubt such a close connection between reason and motivation. They state that there is a gap between them (e.g. Shafer-Landau 2000; Svavarsdottir 1999). Another motive is needed, something like guilt, fear or self-esteem, to make us act at all and therefore to make us act morally as well. Emotions, unlike beliefs, are essentially motivational. This motivational character would solve the problem, discussed since Hume's approach to moral decision making, of how a moral conviction can have any impact on our acting or, to put it differently, how moral acting can be intrinsically motivated. If we were driven by reason alone, the motivation would have to be added extrinsically, which would diminish, so the argument goes, the moral value (Smith 1995). Again, a mixture of the two processes might address this question in a new way, but this is, at present, only a guess that would have to be worked out properly.
However, and this brings us to the next level of explanation, we have to ask whether, and if so in what way, these insights tell us anything about the justification of our moral beliefs.

3.2 On the Justificatory Level of Explanation

On the epistemic / justificatory level of explanation we ask how we can justify moral judgments. Taking the title of his theory seriously, Haidt opts for intuitionism on the justificatory level (Haidt 2001, 814). This means that our moral judgments can be trusted if they rely on non-inferentially justified moral beliefs. However, as has already been mentioned, within intuitionism there are two options for how exactly to understand the notion of non-inferentially justified beliefs: as self-evident insights (rationalism) or as emotions (moral sense theory).
We can guess which path of justification Haidt follows on the justificatory level by again referring to the title of his prominent article, "The emotional dog and its rational tail" (2001): he is in line with a moral sense theory of justification. This means that, on the epistemological level, Haidt could state that moral beliefs are non-inferentially justified by our emotions. In other words, we can justify our unconsciously built moral judgments if they rely on an emotional basis, like feeling angry, ashamed or self-confident. If Haidt confirmed this statement, as he might do, he would buy the merits as well as the problems of the moral sense theory, which have been intensively discussed in the metaethical literature since David Hume (e.g. Smith 1995; Nichols 2004).[17]

[17] E.g. emotions, unlike referentially justified beliefs, are not truth-apt. They are more similar to taste, which cannot be criticized as it is ultimately subjective. That is why you need external empirical proof for justification, starting the epistemic regress again.
I do not want to go into detail here but stress another point: even if you opt for a sentimentalist approach on the psychological level of investigation, and even if you combine it with non-cognitivism, you are not committed to adopting moral sense theory. You can also follow the rationalistic path on the epistemological level by stating that, in the end, our unconsciously constructed moral judgments are justified by self-evident insights which might be accompanied by emotions or be emotionally tinted. This means that even if empirical studies showed that our moral judgments are the outcome of several non-cognitive or only partially cognitive processes, like feelings and / or automatic rule-following, this would not inevitably mean that we justify these judgments by them as well. You can do so, but you do not have to. You can still maintain a rationalist position. This is why you have to be aware of the difference between the two levels of investigation, the psychological and the epistemological one: you have to argue independently for each of them. You cannot infer directly from sentimentalism on the psychological level to moral sense on the epistemological level or vice versa. If you did so, you would commit a categorical mistake. As Haidt remains notoriously unclear about the differentiation between behavior, action and justification, he at least has to clarify his position in order not to be accused of that mistake.
An interesting question is whether a dual process theory as such can say anything at all about the justificatory level of investigation, even though this is claimed by several authors who use evolutionary debunking arguments (e.g. Kahane 2011 or Joyce 2013). Can we make up a scenario of justification in which we combine rationalistic and moral sense argumentation to reach a dual account? Here, more epistemological research has to be conducted. So far, dual process theories seem to be restricted to the psychological level of explanation and commit a categorical mistake if they switch the level of explanation without any further argument.

3.3 On the Normative Level of Explanation
This brings us to the last point I want to make. As usual when engaging in research on moral topics, we have to be careful to keep the descriptive apart from the normative level of investigation, and in this case the epistemological apart from the normative level as well. When we describe what happens when people make moral judgments, we could take the sentimentalist position and state that they do so intuitively, and combine this with the moral-sense position that these intuitive judgments are justified by basic emotions. But it remains the task of normative consideration to think about which emotions are good ones and which are bad, and therefore which judgments are right and which are wrong in the moral sense (see e.g. Gibbard 2002). The same applies if we choose a rationalistic strategy and opt for intuitions as self-evident beliefs. In case we state that our judgments are justified by self-evident insights, we have to argue normatively which ones count. So, just as we cannot move from the psychological to the justificatory level without an extra argument, equally we cannot move on from there to the normative level. If we drew normative conclusions from psychological investigations without a further normative premise, we would commit the famous naturalistic fallacy.
Interestingly enough, this naturalistic fallacy is denied by most proponents of sentimentalism (e.g. Haidt and Bjorklund 2008, 214ff.; Greene 2003; Prinz 2007a). In doing so, they endorse a program that leads to a naturalization of normative argumentation. This is in line with the claim of moral sense theory that, in the end, emotions are justified by external empirical evidence. Furthermore, it is in line with the strategy of not differentiating between moral behavior and moral action. However, as I pointed out earlier, it would be a categorical mistake as well to infer normative judgments from epistemological insights.
At least on this normative level of investigation, when we argue about the normative implications and question the justification of our non-inferential moral beliefs, be they emotions or self-evident insights, we are not engaging in unconscious cognitive processes but do that quite consciously, as even Haidt admits. When we try to figure out which of our moral acts are right and which are wrong, for what reason, and in which normative system, it seems impossible that we should do that by emotions alone. This is also true for the case in which you try to argue that we could somehow make out empirically what should be right or wrong. Even in that case, as soon as we question the basis of our moral beliefs we are engaging in higher cognitive processes that might be accompanied by emotions but are not entirely emotional themselves (see also Kennett and Fine 2009; Fine 2006; Pizarro and Bloom 2003; Saltzstein and Kasachkoff 2004).
Taking all these considerations together, one can understand the points mentioned above as challenges that you have to be aware of when doing research on moral behavior, acting and judging. Therefore, in the following I will suggest a list of adequacy criteria that researchers should bear in mind. This list can serve to develop a dual process theory that is complex enough to capture all the diverse influences and processes that come together when human beings engage in moral decision making.

Adequacy

In the following, I will collect some challenges that theoretical as well as empirical researchers on the cognitive processes of moral decision making might bear in mind when engaging in disciplinary as well as interdisciplinary studies. This collection is a starting point that summarizes the above-mentioned problems and challenges. One can already distinguish four areas within which adequacy criteria can be classified: a theoretical, a terminological, a methodological and a categorical area.
In the theoretical area we can collect the challenges and pitfalls that stem from the metaethical considerations above. Researchers could be much more aware of the metaethical background in which their theories have to be placed, and this is advisable for the natural sciences as well as for the humanities. Whether you opt for rationalism or moral sense theory, and for cognitivism or non-cognitivism, all possible combinatory variants have several implications in the other metaethical realms, like e.g. ontology. Some of these implications you might want to know about, even more so if you do not like them. Especially if you engage in interdisciplinary research, being aware of the theoretical background and its implications helps a lot in communicating.
Within the terminological area we face the problem that we are dealing with at least two different language games with different terminological sets. The classical philosophical set differs from the modern psychological set in that e.g. the terms 'intuition' and 'emotion' are used differently. 'Intuition' in the philosophical terminology is a technical term that refers to an epistemological level of explanation and as such has a narrow scope, but on the other hand covers all forms of non-inferentially justified beliefs (emotional as well as self-evident beliefs). In the psychological terminology, 'intuition' is broader in the sense that it covers all sorts of spontaneously occurring insights, ideas or thoughts, and narrower in the sense that intuitions are clearly cognitive and have a propositional content (see e.g. Klein 1999). The term 'emotion', again, has a very broad scope in the philosophical community and a quite narrow one in the psychological set. Furthermore, one has to make explicit whether one is talking about behavior or action. In the psychological realm, due to methodological constraints, one talks about and investigates behavior. In the philosophical realm, this might not be the case at all, as philosophers and especially ethicists normally talk about and investigate actions. Finally, one can distinguish between a judgment and the decision-making process it might be the outcome of. I do not want to discuss here which set is the more appropriate variant; what I want to highlight is that you should explicitly choose one set or define your terms according to those sets. This is the only way to make clear to an interdisciplinary community what you are talking about and / or what the study in question investigated.
The terminological criteria of adequacy lead us directly to the general methodological commitments of empirical studies, which have to be made transparent and reflected on. As several general requirements can be found in the literature as well as in the present anthology[18], I only briefly mention two main points. The terminological challenges discussed above might lead to an overgeneralization of the investigated phenomenon, mixing up e.g. emotions, unconscious rule following and justification. Another challenge for designing an adequate study is the complexity of the everyday situations in which we engage in moral decision-making processes. Of course this is due to the fact that almost all psychological studies try to understand real-time behavior, and you have to break down the variables to a point that makes them investigable. But, as for all studies, it applies to moral psychology as well that you have to be aware of the fact that you only describe a part of the real situation.[19] What you can do is, again, to make explicit in the discussion what the limits of explanation are and thereby make them more transparent.

[18] E.g. a collection of general problems of fMRI studies can be found in Christen et al. (2013) or Schleim (2008).
[19] Another point that makes generalization difficult is the fact that one often faces the problem of a low diversity of subjects, like 20 white, male psychology students from the US.
Finally, we have to face the categorical area of adequacy and the connected pitfalls. When investigating moral psychology you are challenged to avoid three different categorical mistakes that occur when you mix up the different levels of investigation. First, you could mix up the psychological with the justificatory (epistemological) level of investigation, thereby committing a genetic fallacy. Second, you could mix up the justificatory with the normative level of investigation. Third, you could mix up the normative with the descriptive level of investigation, thereby committing the naturalistic fallacy.[20]

[20] To complete the list, we would have to add a further category of practical investigation that can easily be mixed up with the other ones as well. Even if we had sufficient answers on all three levels of investigation, we still would not know how to actually change, influence and model our behavior and our actions accordingly. However, this problem has to be addressed more thoroughly another time.
I would like to stress here that all these points and adequacy considerations should not be understood merely as criticism of existing studies and theories. I rather aim at understanding these points as open questions and challenges that should be topics of further interesting and inspiring collaborative research.

A Brief Look Into the Future

Dual process theories of moral decision-making processes seem to be a promising research frame for understanding the complex interplay of reasoning and the other cognitive as well as non-cognitive processes involved. To go on in that direction of explanation requires well-designed studies as well as thoughtful theoretical considerations in an interdisciplinary setting that takes seriously the above-named criteria of adequacy. Engaging in such research could then not only deliver many more fascinating insights into human moral behavior, acting and decision making on the descriptive level but also offer some new pathways within the metaethical jungle. For example, questions about motivation, the character of moral beliefs and the problem of justification could perhaps be reconsidered in an integrative way.
So, how can we profit from what has been done so far? We can reassure ourselves that multi-disciplinary collaborations are indeed taxing, but they are unavoidable, not only but especially in the field of moral psychology, and they are worth the effort. Furthermore, we have gained an idea of how many more well-designed empirical studies are required to shed light on the many different processes and their interactions, so as to get a clue about what is really going on when we make moral judgments. Even more studies are needed when it comes to the question of what we can make, in a practical sense, out of the results. Think about the vast field of educational implications connected with the topic, like e.g. the possibilities to train, change and modulate behavior, action and decision making.
The good news, then, is that you do not have to do all that by yourself. There are many specialists around; just talk to them. And last but not least: there is lots of interesting work to do, so let's get started!
References

Anscombe, G. E. M. (2000). Intention. Cambridge, Mass.: Harvard University Press.
Appiah, A. (2008). Experiments in ethics. Cambridge, Mass.: Harvard University Press.
Bargh, J. A., & Chartrand, T. (1999). The unbearable automaticity of being. American Psychologist 54, 462–479.
Chaiken, S. (1980). Heuristic versus systematic information processes and the use of source versus message cues in persuasion. Journal of Personality and Social Psychology 39, 753–766.
Christen, M., Vitacco, D. A., Huber, L., Harboe, J., Fabrikant, S. I., & Brugger, P. (2013). Colorful brains: 14 years of display practice in functional neuroimaging. NeuroImage, doi: 10.1016/j.neuroimage.2013.01.068.
Curtis, V. (2011). Why disgust matters. Philosophical Transactions of the Royal Society B: Biological Sciences, doi: 10.1098/rstb.2011.0165.
Dorsey, D. (2006). A coherence theory of truth in ethics. Philosophical Studies 127(3), 493–523.
Evans, J. (1984). Heuristic and analytic processes in reasoning. British Journal of Psychology 75, 451–468.
Evans, J. B. T., & Frankish, K. (Eds.) (2009). In two minds: Dual processes and beyond. Oxford, New York: Oxford University Press.
Field, H. (1988). Realism, Mathematics and Modality. Philosophical Topics 16(1), 57–107.
Fine, C. (2006). Is the emotional dog wagging its rational tail, or chasing it? Philosophical Explorations, doi: 10.1080/13869790500492680.
Fischer, J. M., & Ravizza, M. (1992). Responsibility, Freedom and Reason. Ethics 102(2), 368–389.
Fisher, A. (2011). Metaethics: An introduction. Durham: Acumen Pub.
Foot, P. (1967). The problem of abortion and the doctrine of double effect. Oxford Review 5, 5–15.
Gazzaniga, M. S., Bogen, J. E., & Sperry, R. W. (1996). Some functional effects of sectioning the cerebral commissures in man. Proceedings of the National Academy of Sciences 48, 1765–1769.
Gensler, H. J. (2011). Ethics: A contemporary introduction. New York: Routledge.
Gibbard, A. (2002). Wise choices, apt feelings: A theory of normative judgement. Oxford: Clarendon Press.
Gigerenzer, G., & Todd, P. M. (1999). Fast and frugal heuristics: The adaptive toolbox. In G. Gigerenzer & P. M. Todd (Eds.), Simple heuristics that make us smart (pp. 3–34). New York: Oxford University Press.
Greene, J. (2003). Opinion: From neural 'is' to moral 'ought': what are the moral implications of neuroscientific moral psychology? Nature Reviews Neuroscience, doi: 10.1038/nrn1224.
Greene, J. D. (2007). The Secret Joke of Kant's Soul. In W. Sinnott-Armstrong (Ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and Development (pp. 35–79). Cambridge, MA: MIT Press.
Greene, J., & Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, doi: 10.1016/S1364-6613(02)02011-9.
Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The Neural Bases of Cognitive Conflict and Control in Moral Judgment. Neuron, doi: 10.1016/j.neuron.2004.09.027.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI Investigation of Emotional Engagement in Moral Judgment. Science, doi: 10.1126/science.1062872.
Greenwald, A. G., & Banaji, M. R. (1995). Implicit Social Cognition: Attitudes, Self-Esteem, and Stereotypes. Psychological Review 102(1), 4–27.
Gunnell, J. G. (2007). Are we losing our minds? Cognitive science and the study of politics. Political Theory 35(6), 704–731.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108, 814–834.
Haidt, J. (2006). The happiness hypothesis: Putting ancient wisdom and philosophy to the test of modern science. London: Arrow Books.
Haidt, J. (2013). The righteous mind: Why good people are divided by politics and religion. London et al.: Penguin Books.
Haidt, J., & Bjorklund, F. (2008). Social Intuitionists Answer Six Questions about Moral Psychology. In W. Sinnott-Armstrong (Ed.), Moral psychology, Vol. 2: The cognitive science of morality: intuition and diversity (pp. 181–217). Cambridge, Mass.: MIT Press.
Haidt, J., & Graham, J. (2007). When Morality Opposes Justice: Conservatives Have Moral Intuitions that Liberals may not Recognize. Social Justice Research, doi: 10.1007/s11211-007-0034-z.
Haidt, J., & Hersh, M. A. (2001). Sexual Morality: The Cultures and Emotions of Conservatives and Liberals. Journal of Applied Social Psychology, doi: 10.1111/j.1559-1816.2001.tb02489.x.
Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology 65(4), 613–628.
Haidt, J., McCauley, C., & Rozin, P. (1994). Individual differences in sensitivity to disgust: A scale sampling seven domains of disgust elicitors. Personality and Individual Differences, doi: 10.1016/0191-8869(94)90212-7.
Hare, R. M. (1981). Moral thinking: Its levels, method, and point. Oxford, New York: Clarendon Press.
Harman, G. (1977). The nature of morality: An introduction to ethics. New York: Oxford University Press.
Harris, S. (2009). The moral landscape: How science could determine human value. New York: Free Press.
Helion, C., & Pizarro, D. A. (2015). Beyond Dual-Processes: The Interplay of Reason and Emotion in Moral Judgment. In J. Clausen & N. Levy (Eds.), Handbook of neuroethics (pp. 109–125). Dordrecht: Springer.
Huemer, M. (2005). Ethical intuitionism. Basingstoke, New York: Palgrave Macmillan.
Hume, D., Norton, M. J., & Norton, D. F. (2007). A treatise of human nature: A critical edition. Oxford, New York: Clarendon Press.
Joyce, R. (2013). The evolutionary debunking of morality. In J. Feinberg & R. Shafer-Landau (Eds.), Reason and responsibility: Readings in some basic problems of philosophy (pp. 527–537). Boston: Wadsworth.
Kahane, G. (2011). Evolutionary Debunking Arguments. Noûs, doi: 10.1111/j.1468-0068.2010.00770.x.
Kahane, G., & Shackel, N. (2010). Methodological Issues in the Neuroscience of Moral Judgement. Mind & Language, doi: 10.1111/j.1468-0017.2010.01401.x.
Kahneman, D. (2012). Thinking, fast and slow. London: Penguin Books.
Kennett, J., & Fine, C. (2009). Will the real moral judgment please stand up? Ethical Theory and Moral Practice 12(1), 77–96.
Klein, G. A. (1999). Sources of power: How people make decisions. Cambridge, Mass.: MIT Press.
Knobe, J. M., & Nichols, S. (2008). Experimental philosophy. Oxford, New York: Oxford University Press.
Liao, M. (2011). Bias and Reasoning: Haidt's Theory of Moral Judgment. In T. Brooks (Ed.), New waves in ethics (pp. 108–127). Houndmills, Basingstoke, Hampshire, New York: Palgrave Macmillan.
Moll, J., de Oliveira-Souza, R., Eslinger, P. J., Bramati, I. E., Mourão-Miranda, J., Andreiuolo, P. A., et al. (2002). The Neural Correlates of Moral Sensitivity: A Functional Magnetic Resonance Imaging Investigation of Basic and Moral Emotions. The Journal of Neuroscience 22(7), 2730–2736.
Moore, G. E. (1903). Principia ethica. Cambridge: University Press.
Nichols, S. (2004). Sentimental rules: On the natural foundations of moral judgment. Oxford, New York: Oxford University Press.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, doi: 10.1037/0033-295X.84.3.231.
Petrinovich, L. F. (2000). The cannibal within (Evolutionary foundations of human behavior). New York: Aldine de Gruyter.
Petty, R. E., & Cacioppo, J. T. (1986). Communication and Persuasion: Central and Peripheral Routes to Attitude Change. New York, NY: Springer New York.
Pizarro, D. A., & Bloom, P. (2003). The intelligence of the moral intuitions: A comment on Haidt (2001). Psychological Review, doi: 10.1037/0033-295X.110.1.193.
Pizarro, D., Inbar, Y., & Helion, C. (2011). On Disgust and Moral Judgment. Emotion Review, doi: 10.1177/1754073911402394.
Prinz, J. (2006). The Emotional Basis of Moral Judgements. Philosophical Explorations 9, 29–43.
Prinz, J. (2007a). Can Moral Obligations Be Empirically Discovered? Midwest Studies in Philosophy, doi: 10.1111/j.1475-4975.2007.00148.x.
Prinz, J. (2007b). The emotional construction of morals. Oxford, New York: Oxford University Press.
Pust, J. (2014). Intuition. The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/fall2014/entries/intuition/. Accessed 11.11.2014.
Rawls, J. (1971). A theory of justice. Cambridge: Harvard University Press.
Ricken, F. (2013). Allgemeine Ethik (5th edn, Urban-Taschenbücher, Vol. 348). Stuttgart: Kohlhammer.
Ross, W. D. (1930). The right and the good. Oxford: Clarendon Press.
Saltzstein, H. D., & Kasachkoff, T. (2004). Haidt's Moral Intuitionist Theory: A Psychological and Philosophical Critique. Review of General Psychology, doi: 10.1037/1089-2680.8.4.273.
Saunders, L. F. (2009). Reason and Intuition in the Moral Life: A Dual Process Account of Moral Justification. In J. B. T. Evans & K. Frankish (Eds.), In two minds: Dual processes and beyond (pp. 335–354). Oxford, New York: Oxford University Press.
Schleim, S. (2008). Gedankenlesen: Pionierarbeit der Hirnforschung (1st edn, Telepolis). Hannover: Heise.
Schmidt, D. (2015). Intuition und Emotion. Zeitschrift für Didaktik der Philosophie und Ethik (2).
Shafer-Landau, R. (2000). A defense of motivational externalism. Philosophical Studies 97(3), 267–291.
Sinnott-Armstrong, W. (2008). Moral psychology. Cambridge, Mass.: MIT Press.
Sinnott-Armstrong, W. (2011). Moral Skepticism. The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/fall2011/entries/skepticism-moral/. Accessed 11.11.2014.
Skerry, A. E., & Saxe, R. (2014). A Common Neural Code for Perceived and Inferred Emotion. Journal of Neuroscience, doi: 10.1523/JNEUROSCI.1676-14.2014.
Smith, A. (1759). The theory of moral sentiments. Edinburgh: A. Millar; A. Kincaid & J. Bell.
Smith, M. (1995). The moral problem. Oxford, UK; Cambridge, Mass., USA: Blackwell.
Svavarsdottir, S. (1999). Moral Cognitivism and Motivation. The Philosophical Review 108(2), 161–219.
Thomson, J. J. (1986a). Killing, Letting Die, and the Trolley Problem. In W. Parent (Ed.), Rights, restitution, and risk: Essays in moral theory (pp. 78–93). Cambridge, Mass.: Harvard University Press.
Thomson, J. J. (1986b). The Trolley Problem. In W. Parent (Ed.), Rights, restitution, and risk: Essays in moral theory (pp. 94–116). Cambridge, Mass.: Harvard University Press.
Turiel, E., Hildebrandt, C., & Wainryb, C. (1991). Judging social issues: Difficulties, inconsistencies, and consistencies. Chicago: University of Chicago Press.
Unger, P. K. (1996). Living high and letting die: Our illusion of innocence. New York: Oxford University Press.
Williams, B. (1981). Internal and External Reasons. In Moral Luck: Philosophical Papers, 1973–1980 (pp. 101–113). New York: Cambridge University Press.

Where and When Ethics Needs Empirical Facts


Dieter Birnbacher
Heinrich Heine University Düsseldorf, Institute of Philosophy
Dieter.Birnbacher@uni-duesseldorf.de

Abstract
It is argued in this contribution that applied ethics has to incorporate sociological and
psychological data and theories in order to do the work it is expected to do. The necessity of taking into account empirical facts arises, first, from the need to assess the impact of its own principles on the concrete realities which these principles are to influence. Second, it arises from the need to adapt the practice rules proposed to the
norms and attitudes prevalent in their respective contexts of application with a view to
prospects of acceptance, motivation, and forestalling of 'slippery slopes'. It is argued
that this necessity holds alike for foundationalist and non-foundationalist approaches
in applied ethics as well as (though with significant differences) for consequentialist
and deontological basic principles. The relevance of empirical hypotheses for some of
the perennial problems of applied ethics is shown in an exemplary way by the role
played by empirical theories in the relation between utility maximization and (seemingly) independent criteria of distributive justice.

Introduction

An important stimulus to my reflections on the relation between the methodologies of ethics and empirical disciplines like psychology and social science was a paper on the
differences and similarities between philosophical and empirical approaches to issues
of social justice by the Singapore-based sociologist Volker Schmidt in the 1990s.[1] In the paper, Schmidt noted a curious "crossing-over" between both disciplines in point of methodology: Whilst sociological analyses of the concept of justice in social contexts had begun to style themselves "moral science", philosophical studies of the "contexts" or "spheres" of justice (Walzer 1983) had adopted a more or less sociological methodology. Instead of setting themselves the traditional philosophical tasks of conceptual analysis, theory construction and critical evaluation, more and more philosophical studies of social justice followed a more or less descriptive methodology concentrating on the reconstruction of the social meanings of justice in a variety of social contexts. Not surprisingly, this approach was particularly widespread among authors roughly associated with the communitarian and the postmodernist schools in social philosophy. In contrast to these developments, Schmidt insisted on the distinctness of ethics and social science both in aims and in methods: Without denying the complementary role of both approaches in the analysis and solution of practical problems, ethics and social science should work and understand themselves as separate disciplines, the one being concerned with questions of conceptual clarification and normative justification, the other with questions of empirical description, reconstruction and analysis (Schmidt 1994, 318).

[1] Part of this article is an update of Birnbacher 1999.
This separation of disciplinary aims and methods seems sensible not only for sociology but also for psychology and the neurosciences, and in particular in areas where
descriptive and normative questions overlap, such as in moral psychology and the
study of the neurological foundations of morality. It is an open question in what way, if
at all, neuroscientific evidence can have an impact on substantive moral beliefs, or, for
that matter, on metaethical views such as views on the nature of morality in general.
Marc Hauser in his book Moral Minds (Hauser 2006) has claimed that there are universal or near-universal tendencies of moral judgment due to the identity or at least similarity of the brain structures in which moral judgments are generated. To support this
claim he presents impressive evidence from internet questionnaire tests which show
that certain moral judgments and certain kinds of moral differentiation are universally
constant in spite of the cultural diversity of his respondents. This suggests that at least
some fundamental tendencies of moral judgment are hard-wired and transmitted from
one generation to the next by biological pathways. At the same time, it leaves open the
question of how far this fact (if it is a fact) is able to show that the judgments thus generated are adequate, or more adequate than alternative judgments. After all, a tendency
of judgment universally instantiated in all human brains might be nothing more than a
heuristic device leading us to adequate reactions in most, but not in all cases. In the
worst case, it might be nothing more than a prejudice.
In a similar vein, Joshua Greene has argued that brain structures ensure that our intuitive moral judgments are more of a deontological than a consequentialist kind
(Greene 2008). Differently from Hauser, however, he thinks that consequentialist
judgments are in general more trustworthy because they involve cognitive functions to
a greater extent than deontological judgments, which are more or less a matter of emotion. Whereas our spontaneous reactions and heuristics mirror the conditions that
prevailed in the evolution of the human brain, consequentialist reasoning is able to
overcome these restrictions and to face the changed realities of the modern world
(Greene 2008, cf., similarly, Singer 2006, 146 ff.). But, again, it must be doubted whether a far-reaching conclusion like this can be derived from the evidence. The fact that
intuitive judgments should never be the last word in moral matters does not imply that
the last word must be a consequentialist one. It might just as well be a more refined
deontological one, possibly modified by consequentialist elements. Or it might turn
out, as Richard Hare (1981) would have it, that our intuitive judgments can, after all, be
justified on consequentialist principles as far as they are interpreted as secondary rules
that function as useful shortcuts in situations too complex to allow for a comprehensive
calculation of consequences.
This is not to say that neuroethics, understood as the neuroscience of morality,[2] is
irrelevant to ethics. Though neuroscientific findings cannot have a direct impact on our
moral or metaethical beliefs, they may nevertheless be relevant to these beliefs in an
indirect way, e. g. by challenging some of the presuppositions underlying these beliefs.
By offering explanations for the capacity of making moral judgments, and partly even
for the content of these judgments, in completely naturalistic terms, these findings
constitute a challenge to interpretations of moral judgment in terms of supernatural
factors such as divine inspiration or controversial items such as transcendent absolute
values (cf. Churchland 2006, 3). Though neuroethics cannot by itself refute metaphysical conceptions of this kind, it substantially weakens this kind of view by making it
plausible that morality is a product of natural evolution no less than other human capacities for which a supernatural origin is less likely to be assumed. In this way, neuroethics functions in a way analogous to neurotheology (cf. Newberg et al. 2001). Neurotheology can show neither the existence nor the non-existence of transcendent religious objects. Nevertheless, it is indirectly relevant to religious belief by offering naturalistic explanations for its existence and origin. The very possibility of a naturalistic
explanation throws doubt on the assumption typically made by religious believers that
their beliefs and feelings originate in the objects of their beliefs. While neurotheology
inherently supports what David Hume (1956) called Natural History of Religion, neuroethics supports what might be called a natural history of morality.
[2] 'Neuroethics' in this sense should be distinguished from 'neuroethics' as the ethics of neuroscience.

These interrelations between the realm of the descriptive and the realm of the normative do not, however, call into question the fundamental division between the descriptive and explanatory concerns of the sociology, psychology and neuroscience of morals on the one hand, and the normative and meta-ethical concerns of moral philosophy on the other. The close connection that exists between moral psychology and normative ethics is in no way able to weaken the conceptual distinction between the
empirical and the normative.
This holds for 'practical' ethics no less than for theoretical ethics. There is, however, a difference. 'Practical' or 'applied' ethics differs from theoretical ethics in the role played by empirical fact. As will be argued in the following, empirical descriptions,
theories and hypotheses are not only desirable supplements to applied ethics but a necessary part of it. Empirical elements play a variety of roles in applied ethics, their exact
nature depending on the paradigm on which the respective contribution to applied
ethics is modeled.

Empirical Facts as Parts of Applied Ethics

One reason why empirical descriptions, theories and hypotheses form a part of applied
ethics concerns the functional context in which applied ethics is situated. Applied ethics
purports to have practical import. Any serious attempt to influence practice, however,
requires consideration of the pragmatic conditions of putting ethics into practice. Applied ethics cannot limit its view to questions of principle but must enter into questions
related to how these principles are likely to be applied in practice. It must not only assess the contents, presuppositions and implications of the principles it advocates but
also the conditions and consequences of this advocacy itself. To the extent that it steps
outside the ivory tower and aims at influencing reality, it is under an obligation to take
into consideration the repercussions its normative principles are likely to have in reality
once they are publicly declared and advocated. And it has to reflect on these repercussions from the very start, already at the level of theory, which accords well with Kant's
dictum that if a theory proves ill-suited to practice, the blame should be laid not on the
fact that it is a theory but on the fact that there isn't enough theory (Kant 1923, 275).
Part of what is lacking in a theory which is incomplete in Kant's sense is a reflection
on how the principles of the theory, once they are publicly declared, are interpreted (or
misinterpreted), whether they are accepted or rejected, how they are integrated into
individual belief systems and institutional arrangements and procedures, how they
transform attitudes and evaluations, how they influence speech, behavior and policies,
and how far they are suited to the practical ends they are designed to realize. A complex
assessment along these lines necessarily exceeds the capacities of the armchair philosopher. He is bound to draw on the resources of the psychologist or sociologist or, even better, to co-operate with them from the very start. In fact, he is in no other position than the lawyer dealing with proposals of legislation. Just as the lawyer's task is not only
to make sure that a particular proposal of legislation is compatible with constitutional
norms and the general principles recognized in the system of law concerned, but also to

look to the practicability and effectiveness of the proposed piece of legislation (given its
aims), so the applied ethicist has the same dual responsibility. His role is not only to
inquire into the theoretical credits of a proposed norm of practical morality (in terms
of internal consistency, coherence with other rules of social morality, and compatibility
with underlying principles) but also to consider its practical feasibility, its psychological
acceptability and its potential effectiveness in changing attitudes and behavior in the
desired direction. Or, to vary another of Kant's dicta: Sociology and psychology without
ethics is crypto-normative, applied ethics without empirical facts is sterile. Sociology
and psychology without ethics is crypto-normative because it often fails to make explicit the principles underlying its evaluations; applied ethics without empirical facts may
be interesting as a theoretical exercise but easily ineffective or even harmful in practice.
In so far as the practical ethicist is interested not only in explanation and analysis but also in changing views and attitudes, he is well advised to take into account what moral psychology has to say.

Empirical Facts from a Foundationalist Perspective

There are quite a number of paradigms of applied ethics, and empirical facts play different roles in each of them. One paradigm is the contextualist one that starts from the
normative givens of one of the various social contexts in which moral norms operate
and explores their structure and functioning without relating them to more general
principles. Another, which has become prominent in the practice of medical ethics, is
principlism, the formulation of more or less universally accepted principia media on a
more general level which are commonly appealed to in casuistic problem-solving. The
distinctive feature of principlism is that it is indifferent to first principles. The 'principles' in this approach are open to being justified on a variety of different basic principles so that practical agreement becomes possible even when disagreement persists on
fundamentals.
There are serious problems with both these approaches, theoretical as well as practical ones (cf. Birnbacher 1994), so that I will concentrate in what follows on the more
traditional foundationalist paradigm of applied ethics. The foundationalist paradigm
conceives of applied ethics as the application, literally understood, of theoretical principles to real-life cases via middle-range principles and contextual practice rules.
Whereas in contextualism practice rules and in principlism middle-range principles are
taken for granted, foundationalism attempts to derive these rules, as far as it goes, from
more general principles such as the Utilitarian principle of happiness maximization or
the Categorical Imperative. According to foundationalism, applied ethics deals with the
translation, as it were, of theoretical principles into workable social moral rules, making them available for everyday judgments and decisions. It is evident that this program
essentially depends on empirical facts over and above those involved in the practice of
applied ethics generally. Its very program of deducing concrete consequences from a
set of basic principles can be carried out only if these principles are supplemented with
empirical premises.
Within the foundationalist paradigm the task of translating basic principles into
practice rules and of enriching their empirical content takes a different turn with deontological and with consequentialist basic principles. It is characteristic of deontological
principles to leave much less room for considerations of empirical adequacy and efficiency than consequentialist ones. The reason is that in the process of subsuming individual cases under these principles, deontological principles stand in need of a semantic
interpretation, whereas consequentialist principles stand in need of an empirical interpretation over and above the semantic interpretation. Once the exact meanings of the
terms of a principle are fixed, a deontological principle determines more unambiguously than a consequentialist one what is to be done or not to be done in relevant situations. For Kant, this fixity in content of deontological principles was one of the central
arguments in favor of such principles.
With consequentialist principles, the semantic interpretation has to be supplemented with an empirical assessment of how to realize the objectives specified by the principle under the given circumstances. A deontological prohibition to kill another human
being simply does not seem to leave much room for empirical considerations of prospects and probabilities in the way a consequentialist principle of maximizing happiness or, for that matter, aggregate life-years, does. There does not seem to exist any logical gap between the ethical principle and the concrete rule of action which might have
to be filled by empirical considerations. If, to take a famous example, the Kantian absolute prohibition of suicide is upheld, there is no room for taking account of consequences for others or for the suicidal person himself.
This impression is, however, misleading. Absolute prohibitions, like the Kantian verdict on suicide or on telling lies, are, even in deontological systems, the exception rather than the rule. Most deontological theories contain within themselves a consequentialist component according to which the moral rightness of an act depends, among other things, on the moral rightness of the acts (the agent's own or others') following from it. According to
the predominant interpretation of the deontological norms against abortion or against
embryo research, for example, these norms do not only contain an injunction not to
abort a human fetus or not to make human embryos an object of research, but also an
injunction to take appropriate measures to prevent these acts by others. A deontological axiology is combined with a consequentialist normative theory, postulating a moral
duty to prevent actions held to be morally wrong in themselves. Thus, most deontological theories are really hybrids, combining deontological and consequentialist elements.
As far as these consequentialist elements go, empirical elements come in. This is inevi-

Where and When Ethics Needs Empirical Facts

47

table since the relation between the act of doing x and the act of preventing others from
doing x, for example by suitable legislation, is an empirical relation. It is an empirical
question which means are appropriate and efficient to prevent others from doing x.
Under certain circumstances, even x, the act that is ethically prohibited, might be a
means of preventing others from doing x, so that consequentialist consideration might
make doing x legitimate even without overstepping the deontological paradigm. One
such circumstance can be present when killing one innocent is the only means to prevent someone (a criminal, a tyrant, an enemy, nature) from killing a significantly greater number of innocents (see e.g. the Jim case, presented by Williams 1973, 98).
Cases of this kind exemplify a general pattern: that it seems legitimate, or even obligatory, to do something wrong in order to prevent someone else from doing more
wrong. What happens in these cases is that the basic principle is modified, or even
turned upside down, by contingent factors making it counterproductive to follow it as a
reliable guide to practice.

Empirical Elements in Operationalizing Principles

What kinds of empirical elements are called for in order to translate basic principles
into practice rules within the framework of the foundationalist paradigm of applied
ethics? Obviously this depends on the kind of adaptations required:
1. Psychological and other empirical elements go into the process of translating basic
principles into practice rules in order to take account of limited information and limited
rationality. Practice rules must account for limitations of available information, information retrieval, information processing capacities or opportunities and of the capacity
to reflect on what basic principles imply for a given situation. Limitations of rationality
have been exposed especially in the context of probabilistic information and the attitudes to risks (see, e.g., Tversky and Kahneman 1974, 1981; Slovic et al. 1979; Gigerenzer 1999). It is an empirical matter how far these limitations go and what kinds of adaptations are necessary to account for them.
2. Basic principles are often too much at variance with intuitive or everyday standards
to find sufficient acceptance, i. e. acceptance to a degree sufficient to realize the values
inherent in these principles. Practice rules must therefore be formulated in a way that
stresses their continuity with traditional moral beliefs. How this is best done is, again,
an empirical matter.
3. Psychological hypotheses underlie judgments about the extent to which practice
rules can be expected to motivate appropriate attitudes and actions. Rules cannot, by
themselves, compel conformity. All they do is to prescribe, or recommend, a certain course of action. In order to make someone act accordingly they have to rely on further
factors. Moral psychological evidence strongly suggests, for example, that the capacity
to make moral judgements is insufficient for acting in accordance with them (cf., e.g.,
Montada 1993, 268). Besides that, practice rules should demand neither too much nor
too little. Both a tax rate that is set too low and a tax rate that is set too high miss the
aim of taxation. The low tax fails to raise the revenue required; the high tax does the
same by provoking evasion strategies.
4. Sociological and psychological hypotheses underlie assessments of the degree to which practice rules are immune against potential misuse and abuse, and against the threat of slippery slopes leading to applications which are no longer covered by the basic principle, either by excessive tolerance or excessive rigidity.[3]

[3] For example, from the perspective of a utilitarian basic principle, a case of excessive tolerance would be a categorical prohibition of paternalistic acts (in relation to a legitimate principle of respecting personal freedom), a case of excessive rigidity a categorical prohibition of euthanasia (in relation to a legitimate principle of preserving life).
5. In the framework of a consequentialist ethics, the selection of appropriate practice
rules must take account of all morally relevant consequences which the acceptance and
observance of a system of practice rules might have for the individual and for society.
This, again, calls for a great variety of social, psychological and historical assessments:
Is a proposed practice rule liable to confirm or to deepen socially harmful prejudices? Is
there a risk of weakening attitudes and dispositions that are desirable on other
grounds? Is the practice rule compatible with the maintenance of a stable core morality essential to social co-operation and trust? In each case, the way a given basic principle is operationalized depends on empirical considerations no less than on the content
of the principle itself. The reason is that for a consequentialist applied ethics (and for a
deontological applied ethics to the extent that it contains consequentialist elements) the
relation between the content of the basic principle and its corresponding practice rules
is contingent. It is possible, therefore, that this process may sometimes result in considerable qualitative changes and in extreme cases in a downright reversal of content and
direction.
A reversal of content is the exception rather than the rule, but there are two kinds of
constellation in which it may occur. The first constellation is moral heteronomy, the
second a purely functional justification of practice rules. Moral heteronomy occurs
when an agent A subscribes to a universalistic subjectivist axiology which obliges him
to take into account, to a certain extent at least, the preferences of others. In asking
himself what practice rules to follow with regard to a certain domain, his decision will
partly depend on the preferences of others, including their moral preferences. If these
preferences happen to be fundamentally opposed to his own, he may well end up with a
practice rule that reflects the values of others more than his own (though, of course, it still reflects his own values in so far as these enjoin him to honor the preferences of
others).
A contemporary controversy in applied ethics for which this constellation might in
fact obtain is the controversy on research on human embryos. From the viewpoint of an
agent holding a welfarist principle as his basic principle there is no direct moral reason
to adopt a practice rule against embryo research: the embryos subjected to experimentation (up to a stage of development of two weeks, say) cannot be credited with
any kind of conscious experience or subjectivity. This kind of research cannot, therefore, be opposed to the welfare or interests of those directly concerned, especially if, as
is the case, the embryos chosen as objects of experimenting are destined to be discarded
anyway. If it is certain that a human embryo will not reach the stage of development at
which consciousness sets in, it must be indifferent, from the viewpoint of a welfarist
ethics, whether experiments are carried out. It would even seem indefensible to miss
the chance offered by modern reproductive medicine to acquire scientific and medical
knowledge which could not be obtained otherwise.
On the other hand, embryo research meets with substantial negative reactions in a
large proportion of the population and arouses feelings of uneasiness and anxiety of a
sometimes quite powerful kind. Apart from that, this research is opposed to widely
held moral notions of human dignity, at least wherever dignity is interpreted as covering all stages of human development from conception on.
Within the framework of a welfarist or interest-oriented ethics all these adverse reactions must carry weight in exact proportion to the number of third parties opposed to
the research, the intensity of their adverse reactions, and their resilience in regard to
information and appeals to rationality. This weight must be balanced against the prospects of the infringement of vital human interests implied by not doing or prohibiting
embryo research. Given these conditions, such balancing may well lead to the result
that the welfarist should favor a practice rule against embryo research.
This example may, at the same time, serve to bring out a further feature of practice
rules: their relativity. While some of the factors determining the shape of practice rules
are more or less constant (such as limited altruism and limited rationality as two fundamental anthropological givens), others are more dependent on cultural perspectives
and local traditions which are themselves liable to change, for example by the progress
of science and technology. Imagine, for example, that a promising cancer cure is discovered which can only be developed into a standard therapy by extensive embryo
experimentation. It is perfectly possible that the reservations against embryo experimentation would in this case fade away (as the reservations against in-vitro fertilization
have faded away) and that embryo research would not only be held to be permissible
but even obligatory.
The other constellation which is not unlikely to lead to a reversal of content arises
whenever a practice rule is given a purely functional justification, i.e. one that invokes causal mechanisms leading from the observance of the practice rule to the satisfaction
of the basic principle, independently of any semantic or otherwise internal connections
between them. Examples of such purely functional justifications are to be found in
some variants of nature ethics. The most well-known one is the land ethic proposed by
the American pioneer of ecological ethics, Aldo Leopold (1949), advocating a comprehensive respect for all individual members of natural bio-systems as well as for these
systems themselves.4 Though the standard interpretations hesitate to acknowledge the
fact (with the notable exception of one of its commentators, Baird Callicott (1987)),
Leopold's ethics is a multi-layered structure combining a conventional anthropocentric
ethics at the level of basic principles with a decidedly anti-anthropocentric and holistic
ethic at the practice level. Thus, the practice rules of the land ethic can be interpreted as
one comprehensive rule of thumb expected to adapt an underlying interest-based
ethics to a particularly intransparent domain. Leopold himself characterizes the land
ethic "as a mode of guidance for meeting ecological situations so new or intricate, or
involving such deferred reactions, that the path of social expediency is not discernible
to the average individual" (Leopold 1949, 203). As this quotation shows, Leopold's leading motive in proposing the land ethic as a system of non-anthropocentric practice
rules was the limited human capacity to assess indirect and long-term effects of interventions in the biosphere. The land ethic, Leopold thought, might be better suited to
protect nature from excessive, and ultimately suicidal, human interventions than a
purely anthropocentric practical orientation, however enlightened.
In both examples, empirical premises are crucial for the selection of practice rules.
This is evident from the fact that it is far from clear if these premises are really borne
out by reality. Are the negative reactions to embryo research really as deeply entrenched and stable as the argument for a practice rule against embryo research presupposes? Should not the low degree of international consensus on this issue be seen as
proof of the fact that the rejection of this research is bound up with local peculiarities
of perspective and attitude that cannot be taken for granted? Similar uncertainties surround Leopold's implicit assumption that the appeal to ecocentric ecological values is
more motivating with regard to protective behavior than anthropocentric ones. The
soundness of this claim has never been demonstrated. What makes one doubt is the
observation that eco-activists of ecocentric persuasion quite frequently adduce anthropocentric instead of ecocentric reasons for the preservation of biodiversity. The motive
behind this seems to be the conviction that an appeal to anthropocentric reasons is not
less but more effective in gaining acceptance for preservation policies, which in turn has
led to the dilemma that, as David Ehrenfeld complains, conservationists are thereby "provoked into exaggerating and distorting the humanistic values of non-resources" (Ehrenfeld 1978, 193).

In the words of the often-quoted key sentence of Leopold's land ethic: "A land ethic [...] implies respect for his fellow-members, and also respect for the community as such" (Leopold 1949, 204).
Whatever the facts may be in these individual cases, it is clear that the task of clarifying the empirical issues on which the normative positions advocated depend belongs to
psychology and sociology rather than to philosophy. Ethics can become practical only
by a co-operative effort, integrating a priori methods of semantic and normative construction and reconstruction and a posteriori methods of empirical analysis and theory-building. Even now, some of the most controversial debates in applied ethics are not
so much controversies about matters of principle as about matters of empirical consequences.
Take, for example, the debate about active euthanasia and the potential risks of a
slippery slope leading to involuntary euthanasia once the ban on voluntary active euthanasia is lifted. As far as I can see, the majority of philosophers, theologians and lawyers rejecting active euthanasia do so for contingent rather than categorical reasons.
They refer to the threat of a slippery slope and the uncertain, and controversial, lessons to be drawn from the ongoing Benelux experiment. This means that the debate is
essentially transferred to an empirical level, making the question of the permissibility of
active euthanasia amenable to methods of empirical analysis known from other areas of
social policy. Typically, defenders of active euthanasia argue that in some countries
with rigid practice rules and laws prohibiting active euthanasia, active euthanasia is
more frequently practiced than in more permissive countries (cf. Kuhse 1992).

Empirical Elements in Non-Foundationalist Approaches

For some time now, foundationalist approaches have ceased to be the dominant paradigm, in applied ethics as in ethics generally. It is no longer universally
seen as a central, and indispensable, aim of ethics to start from basic principles, nor is it
any longer seen as imperative to integrate the practical maxims we apply, or should apply,
in different contexts, into one overarching system. Both aims are explicitly renounced by
well-known approaches to important domains in applied ethics such as Michael Walzer's
or Michael Sandel's theories of justice (Walzer 1983; Sandel 1982) or Christopher Stone's
"moral pluralism" (Stone 1987). The anti-foundationalism of these methodologies is particularly radical because they not only postulate that establishing basic principles from
which solutions of concrete problems can be deduced is impossible but also that it is undesirable. Ethics should not even try to find a common denominator for the pluralism of
context-dependent practice rules postulated for their respective spheres of application.
The fact that we apply (as we think, rightly) different and mutually incompatible principles of distributive justice in the spheres of economics, education, and social security, say, or the fact that we apply different moral principles in the treatment of humans and in the
treatment of animals (in point of euthanasia, for example), is, in these ethical theories, no
longer seen as a challenge for ethics: a challenge to inquire what the differentiating characteristics are that explain, and possibly justify, the pluralism of values against the background of a limited number of ultimate principles. Instead, the pluralism of rules, ideals,
virtue concepts etc. is left as it is, without an effort at integration into a coherent whole,
and often without critical principles that might counteract the inherent tendency of these
approaches to moral conservatism.
According to the pluralistic picture, the heterogeneity of the criteria applied in practice is irreducible and the plurality of normative considerations should be accepted as a
moral ultimate. According to the monistic picture, the heterogeneity of practice rules is
merely a phenomenon of surface grammar to be explained (and possibly justified) by
the empirical features of their contexts of application. The best illustration for both
models is the different way they explain the content of the concept of justice.
From the perspective of a pluralist theory, such as Nicholas Rescher's conception of a
"canon of claims" (1966, 81ff.), there is an ultimate and irreducible plurality of conflicting claims of justice, with no chance of reducing the conflicts by correlating each type
of claim with one or more of a number of mutually exclusive spheres. Thus, the income
structure of a society is usually judged to be just or unjust by applying not one, but a
variety of criteria, with the criteria of achievement and equality of results occupying a
dominating position. Whilst Rescher's theory leaves the exact weight of the various
criteria undecided, John Rawls' pluralistic theory of justice (Rawls 1971) is more explicit in giving priority to equality in the domain of basic rights, to equality of chances in
the domain of access to public office, and to equality in the domain of income.
The school of thought best known for asserting that the canon of claims made on
behalf of justice can be subsumed under one ultimate principle has been the utilitarian
one. In its early days, utilitarian thinkers such as the Benthamite lawyer John Austin
even went so far as to assert a logical, or semantic, correspondence between the ideas of
justice and collective welfare: "When a positive human rule is styled unjust, [...] justice
is nearly equivalent to general utility" (Austin, quoted after Bedau 1963, 288). The next
generation of utilitarians, including John Stuart Mill, was more cautious in this regard.
In the chapter Utility and Justice of Utilitarianism (Mill 2006, 124 ff.), he presented
himself as one of the first philosophers sensitive both to the plurality of competing
criteria of justice commonly employed in the justification of social distributions (with
irresolvable conflicts in areas such as salaries, taxes and subsidies) and to the need to
look for a common denominator which Mill himself thought to have found in the principle of utility. Unfortunately, Mill never showed how this actually worked out, and
since then, the very possibility of reconciling the principles of distributive justice with
utility maximization has been energetically disputed with reference to the fact that the
principle of utility as a purely aggregative principle is completely, and deliberately, insensitive to the way a given utility is distributed among the members of a society. In
fact, on the purely aggregative criterion all possible distributions are axiologically
equivalent as long as aggregate utility is maximal.
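To make the point concrete with a simple two-person illustration (a schematic restatement rather than Birnbacher's own notation): writing the purely aggregative criterion as the sum of individual utilities,

\[
W(u_1, \dots, u_n) = \sum_{i=1}^{n} u_i ,
\]

the maximally unequal distribution (10, 0) and the perfectly equal distribution (5, 5) both receive the value W = 10 and are therefore ranked as equally good.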
This, however, is true only when comparing isolated time slices of social distributions. From a dynamic perspective, including the short- and long-term consequences of
social distributions, this equivalence no longer holds since certain patterns of distributions will have a greater tendency to be productive of future utility than others. Even
then, however, these patterns will not necessarily correspond to widespread ideas of
equity. The utilitarian principle will in general favor those with the highest chances of
being productive of later utility, not those with the greatest need of additional utility in
order to attain a decent minimum. And it will favor those who are particularly good at
transforming a given quantity of goods into utility (cf. Edgeworth 1881, 157ff.), with
the consequence of concentrating the goods of a society in the hands of the more intelligent and the more sensitive, favoring, again, an uneven distribution.
The controversy between utilitarians and others in point of distributive justice boils
down to the question whether a utilitarian basic principle of utility maximization provides reasons to employ non-utilitarian practice rules of distributive justice in judging
concrete cases. This question can have an adequate answer only with the help of empirical disciplines such as sociology and social psychology. It is an empirical question
whether a distributional strategy applying the utilitarian criterion directly will in fact
lead to maximal utility results. It may well turn out that a non-utilitarian rule is in fact
superior by utilitarian standards. This can come about, for example, because under
real-life conditions important psychological dimensions of a given problem tend to be
overlooked, such as the feelings of discrimination of those disadvantaged by the application of the utilitarian rule.
However that may be, the question does not admit of an a priori answer. All that can
be said a priori is that the answer depends on the empirical facts about the interrelation
of utilities. If the feelings of relative deprivation of those who have less than the average,
say, are significantly stronger than the satisfaction of those who have more than the
average (as is to be expected in most democratic countries), there will be a strong utilitarian reason for a transfer of wealth, income, rights (or whatever else underlies the
gradient in utility status) from the better-off to the worse-off, provided this is possible
without a decrease in aggregate utility. In modern societies assigning equal basic rights
to every individual, a constant pull in the direction of economic and educational
equality of results is to be expected. Initial inequalities of chances are no longer regarded as God-given and equality of chances is recognized as a poor substitute for equality
of results in view of the fact that the potential to profit from equal chances is again
strongly dependent on initial endowment. As a consequence, positional goods and
feelings of relative deprivation have gained considerable importance for self-respect
and individual satisfaction with life.
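The utilitarian condition for such a transfer can be restated schematically (again a sketch, not the author's own formalism): if shifting a small amount of income lowers the better-off person's utility by \(\Delta u_{\text{rich}}\) and raises the worse-off person's utility by \(\Delta u_{\text{poor}}\), then aggregate utility increases, and the transfer is endorsed on purely utilitarian grounds, whenever

\[
\Delta u_{\text{poor}} > \Delta u_{\text{rich}} ,
\]

which is precisely what strong feelings of relative deprivation on the one side and comparatively mild losses of satisfaction on the other would secure.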


These considerations might heighten the attractiveness of an independent equality-promoting distributive rule even for the utilitarian indifferent to distributional values at
the level of basic principles (for a variant of this see Trapp 1990). But whether this is so
is, again, for the social scientist and not for the philosopher to decide.

Conclusion

In conclusion it can be said that although the distinction between applied ethics as a
normative enterprise and moral psychology and moral sociology should be upheld,
there are several reasons why the practical ethicist is under a necessity to take account
of empirical fact and of the empirically based theories of sociology, moral psychology,
and 'neuroethics' in the sense of the neuroscientific study of moral phenomena. One
reason is that, as a rule, he wishes not only to analyze but also to change the public's
moral views and actions. To do this responsibly, he must consider the repercussions his
normative proposals are likely to have in reality once they are declared and advocated.
As a public figure, he must furthermore assess the practical feasibility, the psychological
acceptability and the risks of misuse and abuse of any norm or normative view proposed. For all this, he strongly depends on resources provided by empirical disciplines.
This dependence, however, is not completely asymmetrical. As the example of Joshua
Greene's criticism of Marc Hauser shows, descriptive disciplines like neuroethics depend on philosophical analysis to clarify how far normative conclusions can be validly
drawn from empirical evidence.

References

Bedau, H. A. (1963). Justice and classical utilitarianism. In C. J. Friedrich & J. W. Chapman (Eds.), Justice (pp. 284-305). New York: Atherton Press.
Birnbacher, D. (1994). Two methods of doing bioethics. In H. Pauer-Studer (Ed.), Norms, values, and society (pp. 173-185). Dordrecht: Kluwer.
Birnbacher, D. (1999). Ethics and social science: Which kind of co-operation? Ethical Theory and Moral Practice 2, 319-336.
Callicott, J. B. (1987). The conceptual foundations of the Land Ethic. In J. B. Callicott (Ed.), A companion to A Sand County Almanac. Interpretative and critical essays (pp. 186-217). Madison: University of Wisconsin Press.
Churchland, P. S. (2006). Moral decision-making and the brain. In J. Illes (Ed.), Neuroethics. Defining the issues in theory, practice, and policy (pp. 4-16). Oxford: Oxford University Press.
Edgeworth, F. Y. (1881). Mathematical psychics. London: C. K. Paul & Co.


Ehrenfeld, D. (1978). The arrogance of humanism. New York: Oxford University Press.
Gigerenzer, G., Swijtink, Z., Porter, T., Daston, L., Beatty, J., & Krüger, L. (1999). Das Reich des Zufalls: Wissen zwischen Wahrscheinlichkeiten, Häufigkeiten und Unschärfen. Heidelberg: Spektrum.
Greene, J. D. (2008). The secret joke of Kant's soul. In W. Sinnott-Armstrong (Ed.), Moral Psychology, vol. 3 (pp. 35-80). Cambridge, Mass./London: MIT Press.
Hare, R. M. (1981). Moral thinking. Its levels, method and point. Oxford: Oxford University Press.
Hauser, M. D. (2006). Moral minds. The nature of right and wrong. New York: Ecco.
Hume, D. (1956). The Natural History of Religion. London: Black.
Kant, I. (1923). Über den Gemeinspruch: "Das mag in der Theorie richtig sein, taugt aber nicht für die Praxis". In Kants Werke, Akademie-Ausgabe, vol. 8 (pp. 273-314). Berlin: de Gruyter.
Kuhse, H. (1992). Voluntary euthanasia in the Netherlands and slippery slopes. Bioethics News 11(4), 17.
Leopold, A. (1949). The Land Ethic. In A. Leopold, A Sand County Almanac and Sketches here and there (pp. 201-226). New York: Oxford University Press.
Mill, J. S. (2006). Utilitarianism/Der Utilitarismus. Stuttgart: Reclam.
Montada, L. (1993). Moralische Gefühle. In W. Edelstein, G. Nunner-Winkler, & G. Noam (Eds.), Moral und Person (pp. 259-277). Frankfurt am Main: Suhrkamp.
Newberg, A., d'Aquili, E., & Rause, V. (2001). Why God won't go away. Brain science and the biology of belief. New York: Ballantine Books.
Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.
Rescher, N. (1966). Distributive justice. A constructive critique of the utilitarian theory of distribution. Indianapolis/New York: The Bobbs-Merrill Co.
Sandel, M. J. (1982). Liberalism and the limits of justice. Cambridge: Cambridge University Press.
Schmidt, V. H. (1994). Bounded justice. Social Science Information 33(2), 305-333.
Singer, P. (2006). Morality, reason, and the rights of animals. In F. de Waal (Ed.), Primates and philosophers (pp. 140-158). Princeton: Princeton University Press.
Slovic, P., Fischhoff, B., & Lichtenstein, S. (1979). Rating the risks. Environment 21, 14-39.
Stone, C. D. (1987). Earth and other ethics. The case for moral pluralism. New York: Harper & Row.
Trapp, R. W. (1990). 'Utilitarianism incorporating justice' - a decentralised model of ethical decision making. Erkenntnis 32(3), 341-381.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science 185, 1124-1131.
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science 211, 453-458.
Walzer, M. (1983). Spheres of justice. A defense of pluralism and equality. New York: Basic Books.
Williams, B. (1973). A critique of utilitarianism. In J. J. C. Smart & B. Williams (Eds.), Utilitarianism for and against (pp. 77-150). Cambridge: Cambridge University Press.

Normativity of Moral Intuitions in the Social Intuitionist


Model
Maciej Juzaszek

Abstract*
The aim of this paper is to answer the question of whether moral intuitions, understood in terms of Jonathan Haidt's Social Intuitionist Model (SIM), have any normative
power. The conclusion is that they do not, and there are several separate arguments in favor of this claim.
First, these moral intuitions cannot be objective, justifying reasons that are expected to
arise in the course of making a real moral judgment. Second, we do not even know if
they actually represent the grounds for moral judgments. There are too few reasons to
exclude the possibility that, when we make moral judgments, we unconsciously follow
moral rules, which can be objective moral reasons. Furthermore, in Haidt's terms, moral intuitions are most probably heuristic by nature. But if they are, it is even more problematic for their normativity because they can lead to mistakes. There is also a lacuna in
the research concerning problems with resolving moral dilemmas in which two strong
moral intuitions are involved. Third, philosophers claim that there is some other kind
of justified moral intuitions, and psychologists often mistakenly conflate these two
phenomena. In this paper, all of these arguments will be examined and they will serve
to justify the lack of normativity of moral intuitions in the SIM.

* Maciej Juzaszek
Department of Professional Ethics
Jagiellonian University
Krakow, Poland
maciej.juzaszek@uj.edu.pl
The paper is a result of the research project "Justice in Health Care", funded by the Polish National Science Centre (number 2013/08/A/HS1/00079).


Introduction

This paper is an attempt to answer the question of whether moral intuitions, understood in terms of the influential and enthusiastically debated Social Intuitionist Model
(SIM), the psychological theory of Jonathan Haidt (2001), have any normative power;
that is, whether or not we should form moral judgments on their basis. In the beginning I will present the main assumptions and statements of the SIM and the general
problem of normativity. Then I will move on to present the interesting idea of the real
moral judgment delivered by Jeanette Kennett and Cordelia Fine (2009) and the conception of moral judgment based on normative objective reasons which do not include
moral intuitions in terms of the SIM. Kennett and Fine's theory is a convincing rejection of the arguments from contemporary moral psychology against ethical rationalism, which seems to me a legitimate position worth defending. After
that, I will continue with three minor problems and attempt to identify some gaps in
the research conducted within the SIM, which might justify the claim that moral intuitions are not normative. At the end, I will mention the difference between psychologists' and philosophers' ways of understanding moral intuitions. All these arguments
will lead me to the conclusion that moral intuitions in terms of Haidt's theory are not
good candidates for normativity.

The Social Intuitionist Model (SIM)

According to Jonathan Haidt (2013), the SIM is the result of certain trends that
emerged in late 20th century psychology, from the affective revolution, the rebirth of
cultural psychology, and the automaticity revolution, to the new research in neuroscience and primatology. These developments caused researchers to begin questioning the
rationalistic theory of moral development based on Jean Piaget's (1997) work and further research offered by Lawrence Kohlberg (1981). Kohlberg claimed that moral
judgment is the result of deliberative reasoning carried out by a person who is at one of
the six levels of moral development. He defined moral reasoning as the conscious
process of using ordinary moral language (Kohlberg and Hewer 1983, 69)1. Haidt
(2001) challenged this approach by demonstrating through experimentation that when
people are asked to evaluate a moral case and are then asked to provide reasons for
their moral judgment, they often provide arguments that are irrelevant to the assessed problem. And when the interviewer questions their justification, it becomes apparent that the tested person is unable to give any better argument supporting his or her moral judgment, which turns out to have been based only on gut feelings. Even so, the tested person is still unwilling to withdraw the given assessment. Haidt (2000) calls this phenomenon moral dumbfounding.

Actually, Kohlberg referred to Kantian ethics and understood moral reasoning as a rationalistic inner discourse of an agent using reasons available at one of the six levels of moral development. At the highest, rarely reached level, an agent makes moral judgments on the basis of universal imperatives (cf. Kohlberg 1981).
Let us give an illustration of moral dumbfounding from Haidt's research:
Julie and Mark are brother and sister. They are traveling together in France on summer
vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it
would be a new experience for each of them. Julie was already taking birth control pills,
but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide
not to do it again. They keep that night as a special secret, which makes them feel even
closer to each other. What do you think about that? Was it OK for them to make love?
(Haidt 2001, 814)

Most tested people say that the behavior of the siblings is blameworthy, and they
point out the dangers of inbreeding, only to remember that Julie and Mark used two forms
of birth control. They argue that Julie and Mark will be hurt, perhaps emotionally, even
though the story makes it clear that no harm befell them. Eventually, many people say
something like, "I don't know, I can't explain it, I just know it's wrong" (ibid.).

Therefore, if moral reasoning is not essential for moral judgment, what is, then? Haidt's
answer is: moral intuitions. Before presenting his idea in detail, let us start with a few
important definitions. In the SIM, moral judgments are understood broadly as evaluations (good vs. bad) of the actions or character of a person that are made with respect
to a set of virtues held to be obligatory by a culture or subculture (ibid., 817). Moral
reasoning is just a conscious mental activity that consists of transforming given information about people in order to reach a moral judgment (ibid., 818) and, on the other
hand, intuitions are
judgments, solutions, and ideas that pop into consciousness without our being aware of
the mental processes that led to them. When you suddenly know the answer to a problem
you've been mulling, or when you know that you like someone but can't tell why, your
knowledge is intuitive. Moral intuitions are a subclass of intuitions, in which feelings of
approval or disapproval pop into awareness as we see or hear about something someone
did, or as we consider choices for ourselves (Haidt and Craig 2004, 56).

What is very important is that Haidt's conception is based on the so-called dual-process
theory (cf. Kahneman 2011), which divides the processes occurring in the human brain
into two groups: conscious (most often characterized by slowness, controllability, sequentiality, and requirement of effort; a result of recent and exclusively human evolution)
and unconscious (characterized by speed, uncontrollability, simultaneity, effortlessness;
occurring quite commonly among animals in the early stages of evolution) (Pietrzykowski 2012, 141). Unconscious processes, which include intuitions, are beyond
our control, occur automatically, and allow us to make decisions quickly and thus save
energy for other activities that require more energy input:
It is a result of repeated practice of solving similar types of problems, particularly with the
accompaniment of clear positive or negative feedback. Hence the recognition-primed decision domain contains both common, repetitive, everyday problems and issues which are
the subject of expert practice, repeatedly done by experienced professionals (ibid., 178).

On the other hand, conscious processes are responsible for the control, supervision,
verification, and interpretation of intuitive data provided by unconscious processes.
They can supply ex post rationalizations for intuitive judgments, examine their coherence, and even modify them. According to Haidt, moral judgments are the result of the
former, unconscious processes. Thus, we first have intuitions and then make moral judgments. The
role of reasoning is to give post factum justification for already made moral judgments,
behaving like a lawyer or a press secretary whose job is to defend his or her boss' decision (Haidt 2007, 1000). Of course, it is not always like this. Haidt (2001) does not deny
that we can make our moral judgments after reflection; he just alludes to the rarity of
doing so:
We often engage in conscious verbal reasoning too, but this controlled process can occur
only after the first automatic process has run, and it is often influenced by the initial moral
intuition. Moral reasoning, when it occurs, is usually a post-hoc process in which we
search for evidence to support our initial intuitive reaction (Haidt 2007, 998).

In his mostly psychological works, Haidt usually makes descriptive claims. For instance, he writes that by saying that the grounds for moral judgments are intuitions
rather than reasoning, we make a descriptive claim, about how moral judgments are
actually made. It is not a normative or prescriptive claim, about how moral judgments
ought to be made (Haidt 2001, 815). However, in many of his philosophical or political
works, Haidt claims that his theory of moral intuitions also has normative consequences, especially for policymakers. In The Righteous Mind, Haidt reveals himself as a
"Durkheimian utilitarian", which implies utilitarianism concerned with human connectedness. He confesses:
I don't know what the best normative ethical theory is for individuals in their private lives.
But when we talk about making laws and implementing public policies in Western democracies that contain some degree of ethnic and moral diversity, then I think there is no
compelling alternative to utilitarianism. I think Jeremy Bentham was right that laws and
public policies should aim, as a first approximation, to produce the greatest total good
(Haidt 2012, 272).

Elsewhere he mentions that "a correct understanding of the intuitive basis of moral judgment may therefore be useful in helping decision makers avoid mistakes and in helping educators design programs (and environments) to improve the quality of moral judgment and behavior" (Haidt 2001, 815), and in another place he claims that "[...] a truly utilitarian approach to public policy would take into account many moral goods that may not be obvious to all policymakers; it would have to recruit moral intuition as a guide and ally of reasoning to help it understand the forest of value it is trying to improve" (Haidt and Kesebir 2007, 224). This means that Haidt emphasizes the credibility
of moral intuitions as moral guides. Although moral intuitions often blind our moral
faculty, their main evolutionary aim is to bind people within groups. And this aim goes
hand in hand with the aims of policymakers. According to Haidt, that is the reason why
the creation and implementation of public policies should be rooted in binding intuitions. Why are binding intuitions, according to Haidt, normative? He writes:
[A] Durkheimian version of utilitarianism would recognize that human flourishing requires social order and embeddedness. It would begin with the premise that social order is
extraordinarily precious and difficult to achieve. A Durkheimian utilitarianism would be
open to the possibility that the binding foundations Loyalty, Authority, and Sanctity
have a crucial role to play in a good society (Haidt 2012, 272).

Let us now consider if moral intuitions in terms of the SIM really can serve as normative reasons in moral considerations.

Normativity and Reasons

Although the nature of normativity has been a popular subject of research conducted
by philosophers, neuroscientists, psychologists, and legal scholars (cf. Stelmach et al.
2013), it still remains puzzling. Even if the majority of philosophers agree that there are
entities like normative concepts, there is a dispute about what it means to say that they
are normative (cf. Korsgaard 1996). There are, however, a few things that seem to be
quite well established. There is a difference between descriptive research, which focuses
on characterizing and explaining how people behave and the causes of such behavior,
and normative research, which attempts to answer questions of how people should
behave and to give the justification for such claims. The psychological approach to
moral intuitions is obviously a descriptive research program as it analyzes the practice
of moral judgment and provides us with an explanation of what lies behind it, that is, its mechanisms. But the main problem is whether it also carries normative implications that concern not how things are, but how they should be.
We have already explained the mechanism of making moral judgments and showed
that "intuitions (including moral emotions) come first" (Haidt 2001, 814) and that they
have an affective nature. At the same time we have learned that those put in a position
of moral dumbfounding do not withdraw their moral judgments but are strongly attached to them. This knowledge is sufficient to reach the conclusion that emotions,
which stand for moral intuitions, are constitutive for making any moral judgments.
This means that, when making a moral judgment, we should listen to our gut feelings
(this is a normative claim!), because morality is not a sphere of reason, but a sphere of
emotions. Although I do not share this view, such ideas are consistent with the views of
at least some moral sentimentalists (e.g. Prinz 2007), who believe that our moral thinking is not really rational but sentimental and/or it is essential for moral facts to refer to
our sentiments (cf. Kauppinen 2014). According to this approach, moral intuitions in
terms of the SIM could have normative power.
And this poses the problem of how to bridge the Is-Ought Gap, encapsulated in a
famous excerpt from Hume:
In every system of morality, which I have hitherto met with, I have always remarked, that
the author proceeds for some time in the ordinary ways of reasoning, and establishes the
being of a God, or makes observations concerning human affairs; when all of a sudden I
am surprised to find, that instead of the usual copulations of propositions, is, and is not, I
meet with no proposition that is not connected with an ought, or an ought not. This change
is imperceptible; but is however, of the last consequence. For as this ought, or ought not,
expresses some new relation or affirmation, 'tis necessary that it should be observed and
explained; and at the same time that a reason should be given; for what seems altogether
inconceivable, how this new relation can be a deduction from others, which are entirely
different from it (Hume 1739, 335).

I fully agree with Michael Huemer (2005, 72-83) that, so far, no one has convincingly
explained how it is possible to derive ought from is. And if so, to be able to relate to
the sphere of ought, moral intuitions must be normative, not descriptive. But what does
it actually mean to be normative? To answer this fundamental question, let us first
introduce two other kinds of reasons: explanatory and justificatory (normative). The
former concerns an explanation of how and why a person did what he or she did. For
instance, when we ask for an explanation of why Peter broke the car window, the explaining reason is that he wanted to get inside. But why did Peter want to get into the
car? The explanation is that he wanted to steal the vehicle. Why did Peter want to steal
the car? The explaining reason is probably that he wished to take it for a joyride2. But
justification, especially of the moral sort, is something entirely separate from explanation. We ask for justificatory reasons when we try to determine whether the action is
really what should be (or has been) done. We presuppose here the objective nature of
moral justification, i.e. its mind-independence3 (Kramer 2009, 15-26). This means that there
must be a possibility that the individual is mistaken as to whether the reason actually supports her action or judgment (cf. Pendlebury 2007, 536). Even if we explain Peter's behavior entirely, and even if he himself believes that what he did was justified, he is not justified in his action if he wanted to joyride. He just lacks objective moral reasons. One example of such a reason would be that he was stealing the car out of necessity, to transport a dying person to the hospital.

This explanation could probably go deeper and deeper.

And there is no need to make a metaphysical decision whether it is the strong or weak version of mind-independence.
Let us imagine a situation in which a father demands respect from his teenage
daughter. In return, she asks, "But why do I have to treat you with respect?" She seeks
justification for a normative reason to obey her father's demand. If the father cannot
provide any good reason, then the daughter can tell him, "So I don't see why I should respect you." Still, the reason must be objective. Otherwise, for instance when the father
answers, "Because I say so", the daughter, even if she is rational and reasonable, has no
reason to obey her father. It is perhaps a good reason for him, but it is certainly not
objective. The nature of objective moral reasons is perfectly grasped by Christine
Korsgaard in the following passage:
Why should I be moral? [...] Even those who are convinced that 'it is right' must be in itself
a sufficient reason for action may request an account of rightness that this conviction will
survive. [] When we seek a philosophical foundation for morality, we are not looking
merely for an explanation of moral practices. We are asking what justifies the claims that
morality makes on us (Korsgaard 1996, 9-10).

Although the SIM can provide explanatory reasons for making a moral judgment, and
although it could probably lead us to good motivating reasons for justifying moral
judgment by moral intuitions, such judgment seems to lack objective moral reasons.
In a critical response to the SIM, Kennett and Fine propose the idea of real moral
judgment as the judgment the agent would have made in a more reflective or cognitively resourced situation (Kennett and Fine 2009, 93) in contrast to the unreflective
and unconscious judgment of the SIM. The real moral judgment is ultimately the one that the
agent can reflectively endorse (ibid.). In other words, the concept of real moral judgment is the concept we appreciate and presuppose while making moral judgments, but
it need not be the concept we actualize and realize in everyday moral practice (if the
conclusions from Haidt's research are true).
At this point it is worth considering Karen Jones' (2003) distinction between reason-tracking and reason-responding. The former is an automatic and fast process of following reasons, with only its results visible to consciousness and without any need of
having the concept of reason. The latter is a capability of tracking reasons in virtue of
responding to them as reasons (ibid., 81), which is conscious, slower and deliberative.
According to Haidt, reason-tracking is what matters most for making moral judgments. Kennett and Fine do not question that we often track reasons, but argue that our concept of moral judgment (the 'real' moral judgment) is based on reason-responding. We
see that the adjective 'real' does not mean any metaphysical reality here, but rather the way people understand the concept of moral judgment (in contrast to the way they
apply it). Why should only real moral judgment have a normative force? Kennett and
Fine answer:
When an individual makes a moral judgment, it is plausible to suppose that the reasons
implied or adduced in support of the judgment must be reasons which the agent herself
(albeit perhaps mistakenly) takes to justify and not merely to explain the judgment. Otherwise the judgment can have no normative authority for her (ibid., 81).

What is interesting is that such a view on the normativity of moral judgment is based
on reasons that emerge from Haidt's research, which resulted in the discovery of the
phenomenon called moral dumbfounding. In these studies, researchers tried to investigate the process of reason-tracking, but they asked participants to deliver justificatory reasons (Haidt 2001, 817). By doing so, they presupposed that the concept of moral judgment is based on reason-responding. The problem was that the respondents
were not able to meet the requirement of providing the expected moral justification,
constituted by objective normative reasons.
The possible opponents of Kennett and Fine could retort that perhaps if those tested
do not give objective moral reasons that could justify their judgments and if the tested
people stand by their moral intuitions, the standards of justification assumed by the
real concept of moral judgment are too difficult to fulfill. These standards were presupposed by Haidt and other researchers, who questioned all the reasons given as a
justification for moral judgments made by examined people. People taking part in the
research could not give any further justificatory reasons and they were simply morally
dumbfounded. It would be convincing if we assumed that people are not able to overcome the problem of justification or that objective moral reasons are inaccessible. But
this does not seem to be true. Even Haidt (2003) does not deny that people engage in
moral reasoning, the objects thereof undoubtedly being objective moral reasons. There
is an empirical question of how frequently people use moral reasoning, but it is not
disputed that we have the ability to use it. And if we have such ability, the normative
concept of moral judgments is not unattainable; it is, at least, demanding, but not insurmountable for normative theory. In conclusion, it seems reasonable to concur with
Kennett and Fine that our concept of moral judgment is that of ('real') judgments justified by
objective reasons. If so, the discovery that we deliver moral judgments based on moral
intuitions does not automatically make these intuitions justifiers of moral judgments.
To justify moral judgment, it is necessary to give objective reasons. And moral intuitions lack this feature.


Unconscious Rules

After considering the general problem of normativity and objective reasons, we can
move now to the next three problems with the normativity of moral intuitions, the
minor ones. They concern some gaps in empirical research regarding the making of
moral judgments. These gaps allow various interpretations, and one of them is the possibility that moral intuitions lack normative power. First, let us look once more at
Haidt's above-mentioned example of moral dumbfounding with incest between siblings. Those who assessed the given case as morally blameworthy and were then asked to justify their statements delivered arguments concerning the possibility of conceiving a genetically defective child, subsequent psychological problems, or a public scandal.
But none of these reasons corresponds with the circumstances of the case. Hence,
Haidt's conclusion is that the respondents relied on their moral intuitions (Haidt 2001,
823).
However, it seems that there is another possible hypothesis. The tested people could
rely on some objective moral reasons: moral rules justified in advance, which are applied unconsciously. It is possible that the majority of those who believe that incest is
blameworthy and cannot give a justification for such moral judgment in this particular
case simply unconsciously follow the rule prohibiting incest, which is generally justified
by the aforementioned high possibility of conceiving a genetically defective child, subsequent psychological problems, or a public scandal. Of course, we can imagine situations
(Haidt's incest case is a good example) in which the risks of negative consequences are
minimized or even completely neutralized. However, these are rare cases. Moreover, it
is often difficult to precisely determine the level of risk of adverse effects. Therefore, it
is more profitable to apply the general rule in any case. Investigation of whether all the
reasons justifying the rule also apply to the particular case's situation would be extremely difficult and inefficient for the agent due to limitations of time and his intellectual abilities.
A persuasive analogy can be found in the law. Traffic regulations in many countries
prohibit driving under the influence of alcohol, as it is highly risky behavior. Of course,
in many cases the drunk driver is no threat to anyone on the road, such as if one is
driving down an empty road in the middle of the night in an open space with no one
around. The driver does not even need to fear sanctions because there is probably no
police officer within a number of kilometers. The prohibition, however, is still binding,
and the drunken person behind the wheel commits a crime.
Since the rule is well justified, we should adhere to it in every situation, as making
exceptions may lead to its complete negation. If this is so, why should the respondents
not just admit that they follow a rule that is, in most cases, justified? Perhaps because
following the rules does not have to be a conscious process. This position is presented by Ron Mallon and Shaun Nichols (2010, 304), who write that "because Haidt's attack on conscious reasoning leaves the door wide open to rational, rule-governed inference at the unconscious level, his critique doesn't address whether moral rules play a role in moral judgment". If they are right, then maybe the most common and important
grounds for moral judgments are not moral intuitions but moral rules justified in advance, which are objective normative reasons. However, they are internalized to such
an extent that they can be applied unconsciously. If it turned out that this hypothesis is
true, we could demonstrate that unconsciousness does not exclude the possibility of
justification based on objective reasons... But this problem requires further research on
the mechanisms of both the unconscious processes of decision-making and rule-following in order to verify the abovementioned hypothesis.

Moral Heuristics

The second gap in research on moral intuitions concerns the question of whether moral intuitions (at least some of them) have a heuristic nature. If they do, we should have
serious doubts about their normativity. Let us start by defining what heuristics are.
Generally, we can bracket together any mental short-cuts or rules of thumb that generally work well in common circumstances but also lead to systematic errors in unusual
situations4 (Sinnott-Armstrong et al. 2010, 250), but in a narrower sense, the mechanism of heuristics usage is unconscious attribute substitution, explained by Sinnott-Armstrong et al. in the following way:
A person wants to determine whether an object, X, has a target attribute, T. This target attribute is difficult to detect directly, often due to the believer's lack of information or time.
Hence, instead of directly investigating whether the object has the target attribute, the believer uses information about a different attribute, the heuristic attribute, H, which is easier to detect. The believer usually does not consciously notice that he is answering a different question: Does object, X, have heuristic attribute, H? instead of Does object, X, have
target attribute, T? The believer simply forms the belief that the object has the target attribute, T, if he detects the heuristic attribute, H (ibid., 250).

As mentioned above, the use of moral rules, similarly to rules of thumb, need not be justified in each particular situation. The difference between the former and the latter lies in the fact that moral rules are generally justified by objective normative reasons in most of the situations in which they are applied. The use of moral heuristics is justified primarily by their efficiency of decision-making in difficult conditions, with limited resources of time and energy. Moral heuristics are not justified by objective reasons and, as we shall see, are not good candidates for justifiers of moral judgments under normal conditions.

To understand how heuristics work in practice, let us imagine a woman called Joanna (cf. Kahneman and Tversky 1974). She is 23 years old, has dreadlocks and pierced eyebrows, and typically wears long dresses with floral prints and lots of bracelets around
her wrists. What do you think about Joanna? Is it more likely that she is a librarian, or a
librarian and activist of an NGO fighting for the rights of animals (conjunction)? Presumably the majority of people would say that she is probably a librarian and activist
rather than only a librarian. However, this is wrong because the group of librarians who
at the same time are activists is of course a subset of all librarians, so it has to be smaller. Instead of answering the question about the target attribute (the probability), we try
to answer another question about the heuristic attribute, which in this case is our
judgment of representativeness of people with an appearance similar to Joanna among
animal rights activists. This is a typical instance of the representativeness heuristic. The recognition heuristic is another example (see Gigerenzer 2008). When we ask European people
which city is bigger, Washington or Memphis, probably most of them would answer
Washington. But only a few of them really know how big these two cities are (so they
do not have access to the target attribute). Rather, their answers are based on a heuristic
attribute, which is public recognition of both cities. They have probably heard about
Washington more frequently than about Memphis and that is why this city sounds
more familiar to them. So, if they do not know which city is bigger, it is easier for them
to use information they already have, namely that Washington is better known.
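The conjunction point in the Joanna case can be made explicit with a line of elementary probability (a worked restatement added for clarity, not part of the original vignette): for the events A ("Joanna is a librarian") and B ("Joanna is an animal rights activist"),

\[
P(A \wedge B) = P(A)\,P(B \mid A) \le P(A), \qquad \text{since } 0 \le P(B \mid A) \le 1 ,
\]

so judging the conjunction to be more probable than the single conjunct violates the probability calculus, however representative the description may feel.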
However, if moral intuitions really are heuristic in nature, the question arises: what
are the target and the heuristic attributes in decisions concerning moral blameworthiness? Let us start with the latter. Sinnott-Armstrong et al. (2010, 257-260) examined several different possibilities but ultimately came to the conclusion that the best candidates for heuristic attributes are emotions. Moral intuitions are therefore affective heuristics, according to the rule "if thinking about the act (whatever the act might be) makes you feel bad in a certain way, then judge that it is morally wrong" (ibid., 260).
Hence, when we try to figure out if some act was blameworthy or not, it is easier for us
to look for heuristic attributes, that is to look deep into our feelings.
Coming back to the first part of the question: what is the target attribute? In the case
of non-moral heuristics, it seems that in a particular case we always have a benchmark for saying that using heuristics leads us to mistakes: we always have some external way of determining whether our decision based on heuristics is correct or not. For instance, when we ask the question about Memphis and
Washington, there is a definite way to check which city is really bigger (e.g. in statistical
yearbooks). Is there something similar in the case of questions about moral blameworthiness? The most probable hypothesis is that the target attribute would be moral wrongness (ibid., 255). However, the nature of wrongness is embroiled in a long-standing controversy. Similarly, we often do not try to answer the question, "Is the drunk driver morally blameworthy for killing a person in an accident?" but a rather different one, one that concerns our feelings of outrage, because the answer to the
former question is typically epistemically inaccessible (i.e. we do not know whether the pedestrian ran into the street, or whether the brakes of the car broke, and perhaps a
sober driver could not have prevented the accident). Even if we assume that some moral dilemmas have objective answers (this does not necessarily imply any metaphysical
assumptions), we must admit that they are highly inaccessible (ibid., 256-257). This is
probably the reason why we use moral heuristics to make moral judgments, even if
using them exposes us to the risk of failure.
If the target attribute in moral considerations is usually rightness or wrongness, what
do these terms designate? We do not really know and this is the point. The access to
these concepts is very difficult, at least according to the majority of the most common
ethical theories (ibid., 255-257). This seems to be confirmed by the facts of moral disagreement and moral pluralism, which would not occur if rightness and wrongness were
easily accessible. So if we make a minimal assumption of moral objectivity, there is
some benchmark for our moral judgments, but achieving it is so difficult (e.g. because
of lack of adequate knowledge) that we look for some heuristic attributes. In the domain of moral psychology, good candidates for such heuristic attributes are moral
emotions. When we assess a person's bad behavior, its real wrongness can be inaccessible for us, but our feelings of outrage are easily accessible (ibid., 260-261).
Some authors, e.g. Gerd Gigerenzer (2008), claim that heuristics are fast and frugal
and ecologically rational, which means that in some conditions (e.g. epistemically limited or time-limited ones), decision-making based on heuristics can be as effective as fully informed decision-making. Other researchers point out, however, that relying on heuristics can lead
to many mistakes (Sunstein 2008). When we apply it to morality, moral heuristics can
be useful in an environment of social conditions of everyday life, when we often do not
have enough information or time to make deliberate moral judgments. But it does not
guarantee that we will make the right decisions, still assuming that there is some moral
objectivity. In this sense, moral intuitions understood as moral heuristics can be ecologically rational in some situations and even overlap with the moral target attribute
rightness or wrongness. But ecological rationality makes the normativity of moral heuristics very narrow if it is limited only to extreme situations. When we ask ourselves the
normative question "What should we do?" in a situation when we run out of time or
energy to deliberate and make 'real' moral judgments, then moral heuristics can have
some normative power (let us call it ecological normativity). But it is only because we
have no better reasons at the moment of decision and it is better to follow our heuristics than make a completely random decision. Although we frequently encountered
such situations at the beginning of human evolution and development (when heuristics
evolved), nowadays most people usually can spend more time thinking about the moral
problems that they face. In this situation, the use of such an uncertain instrument for
making moral decisions seems unjustified in the majority of cases. They just do not
meet the requirement for objective reasons to act, which directly determines the target
attribute. However, more research is still needed on the concept of ecological normativity and whether moral heuristics can be objective moral reasons in extreme situations.

Conflicts Between Intuitions

The third problem is also associated with the lack of sufficient research on moral intuitions and concerns conflicts between them (and actually corresponds with Haidt's
skepticism concerning moral intuitions). Haidt (2001, 823) demonstrates that, when
making moral judgments, we usually use the emotional moral intuitions that appear
unconsciously in our heads. But there are sometimes situations in which we find ourselves facing a moral dilemma, i.e. we have two (or even more) strong feelings that
support opposite judgments. For instance, when we say that we have mixed feelings, we
are inhibited from forming an explicit judgment or, after making up our minds, we feel
bad. Psychological research on the SIM paradigm has not yet provided a satisfactory
explanation of how people solve this kind of dilemma between two equally strong emotions. Are those who lean towards an intuition in such a situation more consistent with
their other intuitions and feelings, or with their image of themselves? Or do they perhaps seek a much wider reflective equilibrium? We simply do not know the answer,
and we can only offer a few hypotheses.
However, Richard Hare (1981) addressed precisely this issue, proposing his theory of two levels of moral thinking. This metaethical conception with normative consequences (ibid., 5) is an attempt to work out a compromise theory which would, on the one hand, lack the weak points of simple act-utilitarianism or simple rule-utilitarianism and, on the other hand, be resistant to the objections from other ethical
theories, such as intuitionism. Hare's idea was that human beings operate at two levels
when making moral decisions. On the first one, the intuitive level, we follow prima
facie rules, which have properties very similar to moral heuristics (cf. Sect. 5), i.e. they are 'a practical guide [...] unspecific enough to cover a variety of situations all of which have certain salient features in common' (ibid., 36) and
an indispensable help in coping with the world (whether we are speaking of moral decisions or of prudential or technical ones, which in this respect are similar), namely the formation in
ourselves of relatively simple reaction-patterns (whose expression in words, if they had
one, would be relatively simple prescriptive principles) which prepare us to meet new contingencies resembling in their important features contingencies in which we have found
ourselves in the past (ibid.).

Although upbringing and education, as well as life experience, may be the
sources of many moral intuitions, in 1981 Hare could not know what we know today
about some innate mechanisms developed during evolution that are compatible with his view on moral intuitions. The simplicity of prima facie rules becomes a problem
when we encounter situations in which we face moral dilemmas:
Although the relatively simple principles that are used at the intuitive level are necessary
for human moral thinking, they are not sufficient. Since any new situation will be unlike
any previous situation in some respects, the question immediately arises whether the differences are relevant to its appraisal, moral or other. If they are relevant, the principles
which we have learnt in dealing with past situations may not be appropriate to the new
one. So the further question arises of how we are to decide whether they are appropriate
(ibid., 39).

The simplicity of prima facie rules means that they can come into conflict with each
other: the principle of "do not kill" may come into conflict with the principle of "protect
your friends". When we meet common moral problems, all the prima facie rules at the
intuitive level are equally justified by our upbringing and past moral decisions. None of
them has priority over any others.
In this case, to solve a moral dilemma (and pragmatically, we are often forced to
make some decision), we must appeal to some other reasons. These are the rules of the second level of moral thinking, the critical one. They may be very complicated but, most importantly, they lead to one objectively right moral decision determined
by the principles of the logic of moral language. This sense of objectivity is deeply embedded in Hare's theory of prescriptivism and assumes that every agent who makes
moral judgments from the impersonal standpoint and according to the rules of universalization will form the same, correct moral judgment (cf. Hare 1981, 206-214). To
better illustrate his theory, Hare uses two characters: the prole, whose mind is computationally very limited and who thus uses only intuitive moral rules, and the archangel,
who does not have any limitations and hence uses only critical rules and always makes
objectively right decisions (ibid., 44-64).
Now, let's assume that particularized affective heuristics (cf. Sect. 5) work like Hare's prima facie rules: 'if thinking about the act makes you feel disgust, then judge that it is morally wrong'; 'if thinking about the act makes you feel angry, then judge that it is morally wrong'; 'if thinking about the act makes you happy, then judge that it is morally right'; etc. This leads to the following question: if moral intuitions are based on emotions and may come into conflict with each other, how can such conflicts be resolved? This is the third gap in the research on making moral judgments.
From the descriptive point of view, we do not have sufficient data about the way individuals resolve such dilemmas. However, the normative question remains: how should people resolve such conflicts? If we are dealing with two equally strong moral intuitions, how should we choose between them? Referring to Hare's aforementioned idea, we cannot appeal to yet another moral intuition, because that would lead to a regress. Maybe this is the moment when our conscious and slow moral reasoning comes
to play a big role? We need other arguments, and these arguments may be objective reasons for action. It is worth noting that normative moral considerations have a practical purpose; that is, they lead us to solve the moral dilemma we face. Sometimes we find ourselves in situations where we cannot simply throw up our hands: we have to decide something. We have to find something other than intuitions. This perhaps opens up a field for conscious moral reasoning in which we make decisions based on reasons, but this problem requires even broader empirical research on the mechanisms of making moral judgments. One hypothesis is that in the situation of a moral dilemma people switch from following unconscious intuitions to conscious moral reasoning and look for other reasons to solve the dilemma (in Hare's terms, they change the level of moral thinking and think more like the archangel than the prole). If this is true, then we can claim that moral intuitions in terms of the SIM are not real normative reasons; they just help to make moral judgments in easy situations. We look for normative reasons when we do not really know what to do or how to decide, e.g. in the case of a moral dilemma.
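To make the structure of this hypothesis more tangible, the following toy sketch (in Python) may help. It is purely illustrative and not part of Haidt's or Hare's accounts: the rule set, the function names and the optional critical-level tie-breaker are all assumptions introduced here. It shows how particularized affective heuristics deliver a verdict in easy cases, and why, when two of them conflict, appealing to yet another intuition cannot settle the matter.

# Toy sketch: affective heuristics as simple rules, plus the conflict case.
# Everything here is invented for illustration; nothing is taken from Haidt or Hare.
AFFECTIVE_HEURISTICS = {
    "disgust": "wrong",      # 'if the act elicits disgust, judge it morally wrong'
    "anger": "wrong",        # 'if the act elicits anger, judge it morally wrong'
    "happiness": "right",    # 'if the act elicits happiness, judge it morally right'
}

def intuitive_level(elicited_emotions):
    # Prole-style judging: apply every triggered heuristic and collect the verdicts.
    return {AFFECTIVE_HEURISTICS[e] for e in elicited_emotions if e in AFFECTIVE_HEURISTICS}

def judge(elicited_emotions, critical_level=None):
    # Return a verdict; fall back to critical-level reasoning only when intuitions conflict.
    verdicts = intuitive_level(elicited_emotions)
    if len(verdicts) == 1:
        return verdicts.pop()                      # easy case: the intuitions agree
    if critical_level is not None:
        return critical_level(elicited_emotions)   # archangel-style reasoning breaks the tie
    return "unresolved"                            # another intuition would only restart the regress

print(judge(["disgust"]))                # -> wrong
print(judge(["disgust", "happiness"]))   # -> unresolved: a dilemma in the SIM sense

On this sketch, the empirical question raised above is precisely whether, and when, people actually switch to something playing the role of the critical level.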

Varieties of Moral Intuitions

Nearing the end, I would like to add one more general remark concerning the lack of
one single concept of moral intuitions. I am going to go way out on a limb and say that
the relationship between different concepts of moral intuitions is like family resemblance in the Wittgensteinian sense. They are all similar to one another, although there
is no core of features shared by all of them. Moral intuitions in the psychological sense
are usually not the same as those in the philosophical sense. Rationalistic philosophers
would probably say that unconscious beliefs, gut feelings, or immediate emotional reactions are 'surface' moral intuitions rather than 'robust' moral intuitions, the latter being of real interest from the epistemological point of view.5

5 The distinction between surface and robust moral intuitions was adopted from Kauppinen (2007).
Among philosophical conceptions of moral intuitions one may distinguish, after
Antti Kauppinen (2013, 13), at least three types: (a) self-evidence intuitionism, (b)
seeming-state intuitionism, and (c) coherentism. The first approach is influenced by
W.D. Ross (2002) and is represented today by Robert Audi (2004). According to Audi,
there are different kinds of moral intuitions; however, he is interested only in one special type, the most important from the justificatory point of view: moral intuitions that
meet four conditions: (1) they are non-inferential, so the intuited proposition is not, at the time it is intuitively held, held on the basis of a premise (Audi 2004, 34); (2) they are the result of moderately firm cognition (ibid.), i.e. beliefs should be hard to overcome by doubts or counter-evidence; (3) their holder has at least a minimal understanding of their content; and (4) they are independent of any former theories and
cannot be theoretical hypotheses themselves (ibid., 33-36). For W.D. Ross (2002, 29-30), some examples of such intuitions are prima facie moral rules. As Kauppinen (2013,
12) writes, new versions of self-evident intuitionism neither need to appeal to special
intuitive cognition nor claim that intuitions must be self-evident for everyone. The
most important aspect is that they are justified by mere understanding; and, after all,
not everyone has to understand them to the same extent (ibid.). According to Audi,
self-evident propositions must satisfy two conditions:
(a) in virtue of having that understanding, one is justified in believing the proposition (i.e.,
has justification for believing it, whether one in fact believes it or not); and (b) if one believes the proposition on the basis of that understanding of it, then one knows it (Audi
1999, 206).

Therefore, Audi's moral intuitions are beliefs that are non-inferentially justified simply
by their self-evidence.
The second conception of moral intuitionism refers to moral intuitions as intellectual appearances. Michael Huemer (2005, 102) defines them as follows: 'An intuition that p is a state of its seeming to one that p that is not dependent on inference from other beliefs and that results from thinking about p, as opposed to perceiving, remembering, or introspecting.' An example of such a p may be the statement 'If A is better than B and B is better than C, then A is better than C' (ibid.). Before we think about arguments for and against this statement, we can say that it seems to be true, unlike a proposition such as 'Assisted suicide is immoral', which is inherently controversial. What is important is that the state of an utterance seeming true is a result of just thinking about the statement. Huemer introduces here what he calls the principle of phenomenal conservatism, which assumes that it is reasonable for us to believe that things are as they appear, unless we have strong grounds for doubting this. According to this principle, 'if it seems to S that p, then, in the absence of defeaters, S thereby has at least some degree of justification for believing that p' (Huemer 2007, 30).
The third moral intuitionist theory is so-called coherentism, which has been primarily represented by John Rawls with his famed idea of reflective equilibrium (RE).
The method of justification consists in searching for coherence between our considered
judgments about particular cases, the principles that we follow, and our theoretical
considerations. It is important that we be prepared to modify each of these elements. In one of the variants of RE, the wide version, the RE should be the end of a deliberative process in which we revise our initially credible, considered judgments so that they cohere not only with substantial moral principles but also with social science and with ideals of the person and of society (Sinnott-Armstrong 1996, 33). What are these considered judgments? Rawls writes that we can bracket together various beliefs 'from those about particular situations and institutions up through broad standards and first principles to formal and abstract conditions on moral conceptions' (Rawls 1974, 8). Because the nature of the initial credibility of these starting points is not clear, I will not discuss them in detail here.
To conduct adequate psychological research, awareness of conceptual distinctions is
needed, and these distinctions are often very subtle. Otherwise, we (philosophers and
psychologists) might think that we are examining some important problem, not noticing that the object of our study is really only a segment of a larger phenomenon, or we
might just investigate different things and talk past each other. This leads, on the one hand, to the incompleteness of conclusions and, on the other, to conceptual chaos, which eventually makes rational and conclusive discussion impossible. Participants in the discussion think that they are discussing the same issue and criticize their opponents for being completely wrong or not properly understanding the problem. But
actually the issues they talk about are different. For instance, some psychologists and
philosophers claim that ethical theories are undermined by evolutionary debunking
arguments (e.g. Greene 2013; Kahane 2012; Singer 2005; Joyce 2006; Kauppinen 2014)
or by being based on non-consequentialist moral intuitions that are sensitive to morally irrelevant features (Kauppinen 2013, 6). Opponents of such theses rebut that, first,
non-consequentialist moral theories are not based on moral intuitions (e.g. Wood
2008) and that, second, even if they were, moral intuitions which are grounds for moral
theories are not intuitions in the psychological sense (Kauppinen 2007, 2013). Most
importantly, for philosophers moral intuitions really matter if they are bearers of justification. As presented, moral intuitions understood in the SIM are only interesting
descriptive phenomena without any normative implications.

Conclusion

To conclude, all of the presented arguments support the thesis that moral intuitions,
understood through the SIM paradigm, do not have normative power. First, even if
they are decent explanatory reasons, they do not meet the requirements of objective
moral reasons set by the normative concept of real moral judgment, the judgment that
we (and Haidt) expect from people. Second, there are some gaps in the research provided by the SIM that question the role of moral intuitions in forming moral judgments: we do not know if we unconsciously follow moral rules as objective reasons
rather than moral intuitions; we do not know how exactly we solve moral dilemmas
with two equally strong moral emotions involved; and, finally, we have to check whether the hypothesis about the heuristic nature of moral intuitions is really true; if it is, moral heuristics are not good candidates for objective moral reasons. Third, philosophers have
their own reflective moral intuitions, which are completely different entities from psychologists' moral intuitions. I look forward to further studies, but for now I can state
something with certainty: as long as philosophers look for normative answers to how
we should behave instead of, simply, how we do behave, psychology will not replace
ethics.

References

Audi, R. (1999). Self-Evidence. Noûs 33(s13), 205-228.


Audi, R. (2004). The good in the right: A theory of intuition and intrinsic value. Princeton:
Princeton University Press.
Chudnoff, E. (2013). Intuition. Oxford: Oxford University Press.
Gigerenzer, G. (2008). Moral Intuition = Fast and Frugal Heuristics? In W. Sinnott-Armstrong (Ed.), Moral Psychology: The cognitive science of morality: intuition and diversity. Vol. 2 (pp. 1-26). Cambridge, MA: The MIT Press.
Gigerenzer, G., & Todd, P. (1999). Simple heuristics that make us smart. Oxford: Oxford University Press.
Greene, J. (2003). From Neural Is to Moral Ought: What Are the Implications of Neuroscientific Moral Psychology? Nature Reviews Neuroscience 4, 847-850.
Greene, J. (2008). The Secret Joke of Kant's Soul. In W. Sinnott-Armstrong (Ed.), Moral Psychology: The cognitive science of morality: intuition and diversity. Vol. 2 (pp. 35-80). Cambridge, MA: The MIT Press.
Greene, J. (2013). Moral Tribes: Emotion, Reason and the Gap Between Us and Them. London:
Atlantic Books.
Haidt, J. (2001). The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychological Review 108(4), 814-834.
Haidt, J. (2003). The emotional dog does learn new tricks: A reply to Pizarro and Bloom. Psychological Review 110(1), 197-198.
Haidt, J. (2007). The new synthesis in moral psychology. Science 316, 998-1002.
Haidt, J. (2012). The Righteous Mind. New York: Random House.
Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus 133(4), 55-66.
Haidt, J., & Kesebir, S. (2007). In the Forest of Value. Why Moral Intuitions Are Different From Other Kinds. In H. Plessner, C. Betsch, & T. Betsch (Eds.), Intuition in judgment and decision making (pp. 209-230). New York: Psychology Press.
Hare, R. M. (1981). Moral thinking. Its levels, method and point. Oxford: Clarendon Press.
Huemer, M. (2005). Ethical intuitionism. New York: Palgrave Macmillan.
Huemer, M. (2007). Compassionate Phenomenal Conservatism. Philosophy and Phenomenological Research 74, 30-55.
Hume, D. (1739). A Treatise of Human Nature. London: John Noon.
Jones, K. (2003). Emotion, weakness of will, and the normative conception of agency. Royal
Institute of Philosophy Supplement 52, 181-200.

Joyce, R. (2006). The evolution of morality. Cambridge, MA: MIT Press.


Juzaszek, M. (in progress). Moral Intuitions, Moral Luck and Normativity.
Kahane, G. (2012). On the Wrong Track: Process and Content in Moral Psychology. Mind and
Language 27(5), 519-545.
Kahneman, D. (2011). Thinking, fast and slow. New York: Macmillan.
Kahneman, D., & Tversky, A. (1974). Judgment under uncertainty: Heuristics and biases. Science 185(4157), 1124-1131.
Kauppinen, A. (2007). The rise and fall of experimental philosophy. Philosophical Explorations 10(2), 95-118.
Kauppinen, A. (2013). Moral Intuition in Philosophy and Psychology.
https://www.academia.edu/2306073/Moral_Intuition_in_Philosophy_and_Psychology.
Accessed 30 January 2014.
Kauppinen, A. (2014). Moral Sentimentalism. Stanford Encyclopedia of Philosophy.
https://www.academia.edu/2306073/Moral_Intuition_in_Philosophy_and_Psychology.
Accessed 30 May 2014.
Kennett, J., & Fine, C. (2009). Will the real moral judgment please stand up? Ethical Theory and
Moral Practice 12(1), 77-96.
Kohlberg, L. (1981). The philosophy of moral development: Moral stages and the idea of justice.
San Francisco: Harper & Row.
Kohlberg, L., & Hewer, A. (1983). Moral stages: A current formulation and a response to critics.
Basel, Switzerland: Karger.
Korsgaard, C. (1996). The sources of normativity. Cambridge: Cambridge University Press.
Kramer, M. H. (2009). Moral realism as a moral doctrine. John Wiley & Sons.
Mallon, R., & Nichols, S. (2010). Rules. In J. Doris and the Moral Psychology Research Group (Eds.), The Moral Psychology Handbook (pp. 297-320). New York: Oxford University Press.
Pendlebury, M. (2007). Objective reasons. The Southern Journal of Philosophy 45(4), 533-563.
Piaget, J. (1997). The moral judgement of the child. New York: Simon and Schuster.
Pietrzykowski, T. (2012). Intuicja prawnicza. W kierunku zewnętrznej integracji teorii prawa (Legal intuition. Towards an outward integration of legal theory). Warszawa: Difin.
Prinz, J. (2007). The emotional construction of morals. Oxford: Oxford University Press.
Rawls, J. (1974). The independence of moral theory. In Proceedings and Addresses of the American Philosophical Association. Vol. 48. American Philosophical Association.
Ross, W. D. (2002). The right and the good. Oxford: Oxford University Press.
Saja, K. (2008). Język etyki a utylitaryzm. Filozofia moralna Richarda M. Hare'a (The Moral Language and Utilitarianism. Moral Philosophy of Richard M. Hare). Kraków: Aureus.
Singer, P. (2005). Ethics and Intuitions. Journal of Ethics 9, 331-352.
Sinnott-Armstrong, W. (1996). Moral skepticism and justification. In W. Sinnott-Armstrong and M. Timmons (Eds.), Moral Knowledge? New Readings in Moral Epistemology (pp. 3-48). New York: Oxford University Press.
Sinnott-Armstrong, W., Young, L., & Cushman, F. (2010). Moral intuitions. In J. Doris and the Moral Psychology Research Group (Eds.), The Moral Psychology Handbook (pp. 246-272). New York: Oxford University Press.
Stelmach, J., Brożek, B., & Hohol, M. (2013). The Many Faces of Normativity. Kraków: Copernicus Center Press.
Sunstein, C. (2005). Moral Heuristics. Behavioral and Brain Sciences 28, 531-542.

Sunstein, C. (2008). Fast, Frugal, and (Sometimes) Wrong. In W. Sinnott-Armstrong (Ed.), Moral Psychology: The cognitive science of morality: intuition and diversity. Vol. 2 (pp. 27-46). Cambridge, MA: The MIT Press.
Wood, A. (2008). Kantian ethics. Cambridge: Cambridge University Press.

Psychology Instead of Ethics?


Why Psychological Research Is Important but Cannot
Replace Ethics
Janett Triskiel
Research Center for Neurophilosophy and Ethics of Neurosciences, Ludwig-Maximilians-Universität München
Janett.Triskiel@campus.lmu.de

Abstract
Recent research in cognitive and moral psychology suggests that our judgments and
decisions are primarily driven by intuitions and that giving reasons is a matter of post-hoc rationalization or even confabulation, thus challenging the ethical self-conception
held by common sense and many philosophers. Do these empirical findings prompt us
to abandon the belief that we act and decide on the basis of reasons? I will point to our
everyday practice and use the heuristic approach of decision making to argue that they
do not. We have at least two good reasons to answer the question 'Psychology instead of Ethics?' in the negative. The heuristic approach is not only compatible with a rationalist position; it specifies the underlying rules of moral judgments. Combining my arguments from common sense reasoning with the heuristic approach allows me to reinterpret the empirical findings as being about application errors and systematic errors (bias) of
otherwise adaptive heuristics. My proposal for a reconciliation of the philosophical and
psychological positions will lead to the conclusion that normative and psychological
questions are mutually dependent. While not sufficient on their own, both positions are
necessary for an informed picture about our reasoning abilities as well as for our ethical
self-conception.

Introduction

The appeal to reasons is widely shared by common sense and philosophical reasoning.
Common sense refers to reasons to explain and to justify behavior. According to rationalism in moral philosophy, (the right kind of) reasoning, i.e., deliberating (the right
kind of) reasons, is the basis of our moral judgments. This reliably leads to correct
judgments. I assume that the ethical self-conception shared by common sense and rationalism is based on the belief that the link between reasons or reasoning1 and judgment making is not just accidental.

1 By reasoning I refer to a cognitive process where to follow reasons is to meet normative standards.
This ethical self-conception is called into question by a new line of research that can
be called the affective program. Various researchers claim on the basis of their findings
that the established picture of how humans act and decide must be reconsidered. Conscious reasoning is reported to play at best a minor role in judgment making. Crucial
for the challenging line of argumentation is the conviction that the described ethical
self-conception rests on false assumptions about our reasoning capacities:
Moral judgments are not sufficiently determined by reasoning processes. Rather
than by deliberation prior to a judgment, moral behavior is determined by a compound
of automatic unconscious processes and external cues. I sum this up as concerns regarding the truth source of our moral judgments.
Our self-reports are less reliable than we think they are. Rather than recalling our
motives, we either confabulate reasons or we rationalize our initial intuition in a post-hoc manner. I sum this up as concerns regarding the reliability of our reporting abilities.
In what follows, I will argue that the empirical findings do not indicate that we have
to reframe our ethical self-conception. There are different constructive strategies to rebut the challenge. One can raise methodological objections with regard to ecological validity or point to counterevidence. In this article, I pursue the strategy of reinterpreting the data to offer alternative conclusions. I try to reconcile the rationalist
philosophical and recent psychological position (i.e., the affective program) to show that
normative and psychological questions are mutually dependent.
Therefore, I will firstly spell out how ordinary people reason in everyday life. I will
introduce what I call the Game of Reasons (GoR). In addition, I will set out which claims a philosophical rationalist is committed to. I will distinguish between Psychological Rationalism on the one hand and Justificatory Rationalism on the other. I will then present the heuristic approach of decision making as a psychological explanatory model of moral judgment. This approach is compatible with the introduced philosophical rationalist positions and additionally promising in predicting when moral judgments will go wrong. However, within the heuristic approach a normative criterion is needed to distinguish whether a heuristic reliably leads to the morally right judgment. I will point out that it is moral philosophy that offers us different criteria for being morally right. Thus psychology has to be informed by philosophy, which is the first reason to answer the question 'Psychology instead of Ethics?' in the negative. The heuristic approach allows me to reinterpret the empirical findings as application errors and systematic errors
(bias) of otherwise adaptive heuristics. If the application of heuristics produces systematic errors (bias), it is important to investigate whether we can avoid these errors. This is an empirical question. As ought implies can, normative philosophy cannot demand from individuals what they are unable to achieve. In this sense, philosophy has to be informed by psychology, which is the second reason to answer the question 'Psychology instead of Ethics?' in the negative.

Explaining Reasoning

In this section, I will introduce the two main addressees of the challenge regarding the
role reasoning processes and reasons play in our decisions, namely common sense
reasoning and moral rationalism.

2.1 Reasoning from a Common Sense Perspective

I will now spell out in more detail the assumption that the reasons-talk is deeply rooted
in our daily practice and plays a fundamental role in our lives. We can consider the
practice of common sense reasons-talk as a Game of Reasons (GoR). Participating successfully in this game consists in the preparedness to give reasons, in understanding the
norms of giving reasons, and in the willingness to be influenced by reasons. The
game-metaphor with regard to reasons goes back to Sellars (1997), who introduced the game of giving and asking for reasons, which was then elaborated by Brandom (1994). While Sellars and Brandom made valuable contributions to the philosophy of language, I do not commit myself to any of their specific claims. But I share their conclusion that the game of giving and asking for reasons is part of our social, normative practice of explaining and justifying behavior.
I believe it is hard to argue that reasons do not play any role in our lives. We all often find ourselves in contexts of justification. Being an agent implies being able to provide
vindicatory reasons, because human behavior, where rational, functions on the basis
of reasons (Searle 2000, 106). Rationality, in turn, is a prerequisite of making sense of
human behavior,
[f]or we could not begin to decode a man's sayings if we could not make out his attitudes
towards his sentences, such as holding, wishing, or wanting them to be true. Beginning
from these attitudes, we must work out a theory of what he means, thus simultaneously
giving content to his attitudes and to his words. In our need to make him make sense, we
will try for a theory that finds him consistent (Davidson 1980, 222).


Attributing consistent reasons, and expecting any of our peers to be able to give consistent reasons for their actions, is thus an integral part of everyday conduct. I conclude that the preparedness to give reasons is a fundamental part of the GoR and therefore of our social practice; without it, we could neither ascribe the status of agent nor explain actions.
Like any other game, the GoR is governed by norms and rules that can either be kept or violated. Sellars distinguishes between justifying as a practical activity and being justified as a normative status (Brandom 1997, 157). Brandom (1994) has taken up and extended this idea in his famous notion of deontic scorekeeping. When making a claim, an agent actively endorses that claim. She commits herself to provide reasons for the claim. Other agents, the scorekeepers, evaluate whether the speaker was entitled to make that claim. They evaluate whether the provided reasons suffice to justify the claim. Commitment and entitlement are normative states that are constituted by the game of giving and asking for reasons (Brandom 1994, 141f.). Understanding the norms of reason giving is the second fundamental part of the GoR.
The third part of the GoR consists in the willingness to be influenced by reasons. Mercier and Sperber (2011) link reasoning to communication. They argue that we mainly
reason to convince other people. This public action of verbally producing arguments,
evaluating, and accepting conclusions (ibid., 59) presupposes the willingness to be influenced by reasons.

2.2 Reasoning from a Moral Rationalist Perspective

Joyce (2008) distinguishes psychological, conceptual, and justificatory rationalism.


According to psychological rationalism, moral judgments and deliberations flow from
a rational faculty (ibid., 377). Conceptual rationalism is the view that it is a conceptual
truth that moral transgressions are transgressions of practical rationality (ibid., 380).
Justificatory rationalism claims that moral transgressions are rational transgressions
(ibid., 388). For the purpose of this article, the crucial positions are psychological and justificatory rationalism, because both of them are targets of the challenge.
Psychological rationalism regarding moral judgment is an etiological thesis since the
position is solely interested in the sources, motives and mechanisms that produce moral judgment (ibid., 16). According to this position, moral judgments flow from a rational faculty. This version of rationalism is addressed by the first part of the challenge:
Moral judgments are not sufficiently determined by reasoning processes. Rather than
by deliberation prior to a judgment, moral behavior is determined by a compound of
automatic unconscious processes and external cues.


Psychological rationalism comes in different versions. A modest psychological rationalist claims that rational activity is necessary for moral judgments. In a stronger
version, such activity is necessary and sufficient. In a synchronic version, this activity
must occur at the time of judgment, while a representative of a diachronic version allows for its occurrence at any moment in the past (ibid., 6).
Justificatory rationalism regarding moral judgment is a normative thesis since the
position is solely interested in the principles that justify the contents of moral judgment
(ibid., 16). According to this position, moral judgments are justified by reasons. This
version of rationalism is therefore addressed by the second part of the challenge: Our
self-reports are less reliable than we think they are. Rather than recalling our motives
we either confabulate reasons or we rationalize our initial intuition in a post-hoc manner.
Note that justificatory rationalism is not only compatible with any discovery about
the origins of moral judgment, but is also silent about the motivation of moral judgments. A representative of justificatory rationalism can even accept that the motive was
not a moral one at all (ibid., 16).
Neither psychological nor justificatory rationalism has to commit itself per se to a
specific approach of moral judgment. However, I propose that both, at least in some
versions, are consistent with the heuristic approach, which opens up the opportunity to
meet the challenge.

Explaining Moral Judgment: The Heuristic Approach

Moral judgment and decision making has been extensively studied by developmental,
cognitive and moral psychologists, who have all gained a lot of valuable insights.
Here, I will restrict myself to those findings that are concerned with the underlying
mechanism of judgment and decision making. The affective program favors the view
that moral judgments are triggered automatically and are mainly driven by intuitions
and unconscious processes. It stands in contrast to a view that focuses almost exclusively on conscious moral reasoning and justification. There are at least five psychological
theoretical approaches of moral judgment and decision making. They are competing in
some respects, but focus on different aspects (Waldmann et al. 2012)2. One of the five
approaches is the Heuristic Program. Like the other approaches, the heuristic research
program is not a unified field. However, for my purpose it is sufficient to concentrate
on the uncontroversial claims.
2 Dual-Process Theory, the Moral Heuristics Program, Moral Grammar Theory, emotion-based theories, and Kohlberg's Rationalist Theory. While, e.g., Kohlberg's theory focuses on the justification of moral judgment, the Moral Grammar Theory spells out how certain inputs are processed into a specific moral judgment.
Heuristics can be understood as principles to reduce complex tasks (Tversky and Kahneman 1974, 1124), as mental short cuts or rules of thumb (Sunstein 2005, 531), as mental strategies which ignore information (Gigerenzer 2008, 5), or as simplifying mechanisms (Payne and Bettman 2004, 112): due to cognitive restrictions, they help us to deal with problems by reducing complex information to a manageable size. They are embodied, situated, and allow us to act fast (Kahneman 2003; Sunstein 2005; Gigerenzer 2008). Within the heuristic account, moral judgments are considered to be rule-based, i.e., based on heuristics. The rules can be learned implicitly or explicitly, be derived from individual experiences, or be hard-wired (Payne and Bettman 2004, 126). These rules can be accepted consciously, but very often people cannot name them and do not even know that they apply them. We can think of heuristics as internalized knowledge. A lot of the rules that underlie our behavior and judgments have already been discovered. They have been described for logical and economic tasks (e.g. trial and error, anchoring, the availability heuristic, the representativeness heuristic) as well as for moral settings (e.g., 'Punish and do not reward betrayals of trust', 'Do not tamper with nature', 'Do not knowingly cause a human death'). I will describe some of them later in more detail.
The heuristic approach is compatible with a mild, diachronic version of psychological rationalism. It spells out the idea that diachronic reasoning can lead to decision
rules, i.e., heuristics that are - whenever triggered - applied. In particular, it specifies the
underlying rules of moral judgments better than any other theory (Waldmann et al.
2012, 284). It is additionally compatible with justificatory rationalism. The application
of internalized rule knowledge does not preclude reconstructing the reasons that once
explicitly justified the judgment. Justificatory as well as psychological rationalism do
not even have to insist that the user of heuristics underwent the justificatory process
herself. This is compatible with the assumption that we can adopt entire rules as already justified. However, it is mandatory from the perspective of a justificatory rationalist that we, when asked for reasons, are able to state reasons that sufficiently justify
our judgment.
This is also mandatory from the GoR-perspective:
[For] a noninferential report to express knowledge (or the belief it expresses to constitute
knowledge), the reporter must be able to justify it, by exhibiting reasons for it. This is to say
that the reporter must be able to exhibit it as a conclusion of an inference, even though that
is not how the commitment originally came out (Brandom 1997, 158, my emphasis).

Additionally, common sense, justificatory rationalism, and a mild diachronic psychological rationalism share the assumption that the agent does not have to undergo the
justificatory process herself.
In conclusion: The heuristic approach is not only compatible with versions of philosophical moral rationalism, but also embraces the normative practice of the GoR.

3.1 Application of Heuristics in Probability Tasks

Tversky and Kahneman (1974) described, among others, the so-called Availability
Heuristic: When people assess the probability of an event, they do so by 'the ease with which instances or occurrences can be brought to mind' (ibid., 1127). Whether it is the probability of a heart attack, a plane crash or the success of a business venture: the probability is evaluated by recalling tokens of the event in question. People who have recently heard of heart attacks will evaluate their probability as higher than those who cannot recall such cases.
Heuristics are mainly considered to be helpful tools to deal with decision problems.
The authors state: 'Availability is a useful clue for assessing frequency or probability, because instances of large classes are usually recalled better and faster than instances of less frequent classes' (ibid., 1127). However, applying the heuristic, people also tend to
evaluate the probability of a plane crash as much higher than it in fact is. This may be
due to the intense media reports after plane crashes.
It is widely agreed that heuristics can perform well in some cases, but that they operate poorly in others. The downside is that they tend to overgeneralize beyond their context (Tversky and Kahneman 1974; Payne and Bettman 2004; Larrick 2004; Sunstein 2005; Gigerenzer 2008). This analysis presupposes a criterion in order to distinguish when heuristics work well from when they do not. This objective criterion is usually provided by experts who, e.g., use reliable statistics to evaluate the accuracy of
the Availability Heuristic.

3.2 Application of Heuristics in Moral Settings

Sunstein (2005) is especially concerned with heuristics that apply to moral questions.
Among many others he describes the so-called Do not knowingly cause a Human Death
Heuristic. This heuristic is generally sound and quite useful. Say, if we were to learn that our actions were to result in human death, we would (ceteris paribus) stop them. However, with the following example Sunstein tries to show that this heuristic also has its
downside and leads to an inappropriate judgment (ibid., 536):
Company A knows that its product will kill ten people. It markets the product to its
ten million customers with that knowledge. The cost of eliminating the risk would have
been $100 million.
Company B knows that its product creates a one in one million risk of death. Its
product is used by ten million people. The cost of eliminating the risk would have been
$100 million.
Sunstein predicts that people will tend to punish company A more severely than
company B. According to him, people will apply the Do not knowingly cause a Human Death heuristic and will therefore evaluate the two cases differently. He concludes that
our judgment misfires, as there is no difference between A and B; both companies ignore the risk of ten people dying. Heuristics are considered to be highly context sensitive, but according to Sunstein, knowing for sure and knowing of a risk misleadingly
indicate a difference.
Again, just like for probability tasks, a normative criterion is needed to distinguish
when the heuristic performs well and when our moral judgments misfire. And Sunstein
is right: When we put the bare numbers on paper, there is no difference between the
cases, i.e., there is no statistical difference. From a statistical perspective, the application
of the heuristic leads to different evaluations of identical cases.
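To make explicit why the two cases are statistically identical, the expected number of deaths implied by Sunstein's figures can be written out (a simple reconstruction of the arithmetic; Sunstein himself does not display it in this form):

\[ E[\text{deaths}_A] = 10, \qquad E[\text{deaths}_B] = 10{,}000{,}000 \times \frac{1}{1{,}000{,}000} = 10. \]

Both companies thus accept an expected toll of ten deaths rather than spend $100 million on eliminating the risk, which is the precise sense in which the cases are identical on paper.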
However, I doubt that Sunstein's conclusion is correct that this already shows that our moral heuristics can misfire. To draw this conclusion, he has to show that there is, in addition, no moral difference involved which leads to the different evaluation. This means he has to show that knowing for sure and knowing of a risk indeed do not make any moral difference.
Therefore, we have to look further, since in the moral domain one can only analyze
the situations in which a heuristic is ecologically rational if a normative criterion is
introduced (Gigerenzer 2008, 20). And statistical correctness is clearly not the currency of morality.

The Need for a Baseline Criterion: Moral Philosophy

The discussion in the preceding section shows the necessity of a criterion to label a
heuristic as adaptive, reliable or well-working. Once one adopts the heuristic approach
to explain moral judgments, it is unavoidable to ask normative questions in order to
analyze one's data. For moral questions, a reliable heuristic should lead to the morally right judgment; therefore, it is necessary to have a concept of moral rightness (and wrongness) to specify whether the application of a heuristic leads to right judgments. Referring to statistics does not help here. Providing such a criterion for the moral domain
usually falls within the scope of philosophers.

4.1 Application of Moral Principles

In philosophy, it is far from clear what the correct criterion for being morally right is,
because, as opposed to, e.g., logic, we find many conflicting accounts for that in moral
philosophy. Unlike statistical facts, we do not have any objective recordings of moral rightness. As Sunstein says: 'If certain fast and frugal heuristics are defensible on utilitarian or consequentialist grounds, they might still be objectionable from the moral point of view' (Sunstein 2005, 30). I do not wish to argue for one particular philosophical account of rightness. I assume that it is possible to accept the heuristic approach for
moral judgments without accepting a specific normative claim. Instead, I will argue that
there is no neutral way to work in the research field of moral heuristics.
I will illustrate my point with two examples that were discussed by Sunstein. He
claims that the Act and Omission Heuristic is often applied when moral issues are complex and difficult to assess. This heuristic reduces complex settings to the act-omission distinction: the tendency to favor harms of omission over harms of action. Consider a
doctor in two different settings:
Patient A suffers from a terminal illness and asks the doctor not to provide life-sustaining measures any longer. The doctor agrees and the patient dies.
Patient B suffers from a terminal illness and asks the doctor to administer an overdose of morphine. The doctor agrees and the patient dies.

Sunstein (2005) states that many people have the correct intuition that the former case
is morally legitimate while the latter is not (ibid., 540). On that ground he concludes
that the act and omission heuristic is generally sound and makes useful distinctions:
A murderer is typically more malicious than a bystander who refuses to come to the aid of
someone who is drowning; the murderer wants his victim to die, whereas the bystander
need have no such desire. In addition, a murderer typically guarantees death, whereas a
bystander may do no such thing (ibid., 540).

He argues further that the Act and Omission Heuristic has its downside when it suggests
a moral difference where there is none. To illustrate this, he uses the example of parents
who do not vaccinate their children due to the risks of the vaccination. According to
Sunstein, these parents show an omission bias and favor inaction over 'statistically preferable' action (ibid., 540, my emphasis). In this case, he concludes that the parents produce 'moral error' (ibid., 540, my emphasis).
In order to draw this conclusion, he has to apply a normative principle that accounts
for moral wrongness. Otherwise he could not speak of a moral error that is produced
by the heuristic. If Sunstein claims that vaccination is statistically preferable but remains silent about why it is also morally demanded, his own comparison of the two
cases only shows that a single heuristic can distinguish well what is morally relevant but
fails to do so when it comes to statistics. This interestingly leads us back to the company example from the previous section.
Sunstein claims that both settings, company A and B, are of the same kind and
therefore that the Do not knowingly cause a Human Death Heuristic merely suggests a difference where there is none, and produces an error. As pointed out before, he is right that there is no statistical difference, which is why our statistical judgment is wrong. Alternatively, people
could have applied the Act and Omission Heuristic, in which case the different evaluation of company A and B would not count as distorted moral judgment anymore.


Company A acts on explicit knowledge that its product will kill ten people but markets
it anyway. Company B omits to decrease or to eliminate the risk. Seen this way, the
company example is analogous to the example with the doctor. Therefore the Act and Omission Heuristic leads in both cases (company and doctor) to the right moral judgment, but in the vaccination case to the wrong one. In order to conclude this, we have to have a concept of moral rightness, i.e., here, that omissions of harm are indeed morally preferable to acts of harm.

4.2 Psychology Needs Philosophy

To draw the conclusion that heuristics lead to real error and significant confusion
(Sunstein 2005, 542) or that they can perform well (Gigerenzer 2008, 10), one must
apply a criterion. There has been an intense controversy over the virtues and vices of
heuristics (Sunstein 2005, 533). This controversy comes to a head for moral heuristics,
because there is no unified criterion for moral rightness. But once one adopts the heuristic approach as an explanatory model, one plainly and simply has to decide whether
heuristics qualify as guidelines for moral actions or if they are only second-best solutions.
Gigerenzer (2008) is right in pointing out that heuristics are not good or bad per se.
'Do what all the others do' can be good advice under certain circumstances but has horrible consequences in others: 'One and the same heuristic can produce actions we might applaud and actions we condemn, depending on where and when a person relies on it' (ibid., 4). I believe that, even if far from agreement, the ongoing discussion in moral philosophy helps to clarify one's normative foundation. The normative stance we
take determines the direction of the heuristic research program.
So far, I have introduced certain versions of rationalism and the heuristic approach
of moral judgment. In the following section I will critically examine findings in the
affective program. My reinterpretation of the empirical findings from the GoR perspective and the heuristic approach perspective will hopefully show that they do not undermine
certain types of rationalism.

Reinterpreting Empirical Findings

5.1 Concerns Regarding the Truth Source of Our Moral Judgments

The claim that reasoning does not determine our moral judgments comes in at least
two versions. The first one holds that our judgments are mainly based on intuitions or
emotions (e.g., Haidt 2001; Greene et al. 2001; Moll et al. 2002a; Moll et al. 2002b; Heekeren et al. 2003; Harenski and Hamann 2005; Greene 2007). The second version
claims that our judgments are determined by automatic processes triggered by our
social environment (e.g., Darley and Batson 1973; Doris 1998; Mazar and Zhong 2010).
Particularly induced emotions, especially disgust, are reported to seriously influence
our judgments (e.g., Wheatly and Haidt 2005; Valdesolo and de Steno 2006; Schnall et
al. 2008a; Schnall et al. 2008b; Jones and Fitness 2008; Horberg et al. 2009; Horberg et
al. 2011; Eskine et al. 2011; Inbar et al. 2012).

5.1.1 Intuition- and Emotion-Based Judgments

The basis of moral judgments has been extensively studied in experiments that use fMRI scans. Brain regions associated with emotional activation (e.g., medial frontal gyrus, superior temporal sulcus, orbitofrontal cortex, amygdala) have been shown to be active while moral statements (e.g., Moll et al. 2002a), moral judgments (e.g., Heekeren et al. 2003) or moral pictures (e.g., Harenski and Hamann 2005) were presented to healthy participants. Therefore it has been concluded that moral judgments are mainly based on emotions. In line with this conclusion are the results of a series of experiments done with patients who suffer from brain damage in the ventral medial prefrontal cortex (e.g., van den Bos and Güroğlu 2009) or patients with diagnosed mental pathologies (e.g., Blair 1995; Sommer et al. 2010). Both groups of patients have difficulties simulating emotional experience, show less activation in brain regions associated with emotion processing, and score significantly lower in social and moral judgment tasks.
It is far from clear what kind of statement these results exactly support. They have
been used for various conclusions, from supporting moral sentimentalism (Prinz 2006) to the claim that emotions are necessary for moral judgment (Greene et al. 2001) or that emotions are necessary for our capacity to make moral judgments (Blair 1997). Huebner and colleagues (2008) argue that none of the claims is sufficiently supported by empirical data. They instead suggest, on the basis of the findings, that our moral judgments are mediated by a fast, unconscious process that operates over causal-intentional representations. The most important role that emotions might have is in
motivating action (ibid., 5).
The much weaker claim that emotions mediate, motivate, or accompany moral
judgment and actions is compatible with the GoR. Nothing substantial about reasons is
stated that would undermine our social practice. The same applies to psychological
and justificatory rationalism. Psychological rationalism holds in its modest synchronic
variant. In order to conflict with psychological rationalism, evidence is needed that
reasoning does not take place at all. As mentioned before, justificatory rationalism accords with every finding about the sources of moral judgment, anyway.

5.1.2 Situational Factor as Decisive Predictor for the Resulting Judgment

It has been shown that immediately after having found a dime in a telephone box, significantly more people helped a woman who dropped a folder full of paper (Doris
1998). If the act of helping depended on character traits, deliberation or pure knowledge of norms, finding a dime should not make a difference. It has been concluded that not prior deliberation but factors of the situation are decisive when it comes to moral judgments and actions. Further evidence for this claim comes from a great many different experiments, mainly of the kind in which emotions are induced in participants who are to make some sort of moral judgment. It has been shown that a disgusting smell or just sitting at a dirty desk influences how people evaluate different moral scenarios (e.g., Schnall et al. 2008b). Participants in the disgust condition made significantly more severe moral judgments than those in the neutral one. Let us refer to this recent research as the manipulation paradigm. Within this paradigm it has been claimed that whenever particular factors external to the subject are manipulated, it is possible to predict the subject's behavior and (alarming) shifts in judgments.
Following the results of these manipulation studies, it has been concluded that participants used their feelings of disgust (attached only to a word, not to the act in question) as information about the wrongness of the act (Wheatly and Haidt 2005, 781),
that Hume's famous statement that reason is the slave of passion is supported (ibid.),
that moral judgments often derive from gut-level emotion-based intuition (Horberg et
al. 2009), that disgust underlies moral processing (Eskine et al. 2011), and that moral
judgment does not reside solely in responses evoked by the considered dilemma, but
also resides in the affective characteristics of the environment (Valdesolo and de Steno
2006, 477).
Every new finding that provides further empirical evidence that we can be distracted
(e.g., by a disgusting smell) is consistent with common sense knowledge as well as with a rationalist approach in moral philosophy.
Common sense already contains explanations for impaired moral judgments: Time
pressure and external emotional arousal (e.g., anger) are considered to be confounders of appropriate moral considerations. Philosophers, too, know well about conflicting desires and confounding variables that are able to set aside or distract moral considerations.3

3 E.g. the weakness of will debate.
Research that shows that external emotions influence our moral judgments is more
than welcome, as it specifies the effect of confounders. Note that in none of the studies has a spillover effect from morally wrong to morally right, in comparison with the neutral condition, been recorded. While it has been shown that, e.g., induced emotions can
shift moral judgment, i.e., make it more or less severe, it has not been shown that they
are influential enough to turn the judgment from morally right to morally wrong or
vice versa.
Even if we assume a stronger claim, namely that these findings allow for the conclusion that deliberation and reasons do not play any role at the time of the judgment,
they are still consistent with a GoR-perspective. What counts is the justification for the
judgments (which is usually not prompted in the studies) and the willingness to be
influenced by reasons.
The results are additionally consistent with a modest, diachronic version of psychological rationalism. That no reasoning was involved at the time of the judgment does
not rule out that it had been involved at some time in the past. The vignettes used in the
experiments to test moral judgments typically describe cases that very likely have
crossed the participants' minds before. Since a justificatory rationalist is silent about the
psychological source of moral judgments anyway, she is not hard-pressed to explain
matters.
Still, the findings suggest a strong connection between triggered emotions and shifts
in the severity of moral judgments, which is not random. These shifts have been consistently reproduced and can be predicted. A reliable connection between cues and
judgments is what especially the heuristic approach of judgment and decision making
focuses on. Using emotional states as information to make quick judgments is consistent with the heuristic approach. And indeed, some researchers discuss the possibility that the recorded effects of emotions on moral judgments hint at an underlying
strategy: 'Our findings lead us to conclude that affectively-laden moral intuitions are often useful [...]' (Schnall et al. 2008b, 1108). Or, at least, that the effect of emotions optimizes or biases the resulting decision (Valdesolo and de Steno 2006). If a certain emotional arousal leads reliably to certain judgments, we can re-describe this connection as a heuristic. This is the case if, e.g., anger reliably indicates that something morally
wrong has happened. While heuristics can be expected to be reliable in most of the
cases, as mentioned before, they can go wrong.
There are two ways to describe the unreliability of heuristics: as misapplication and
as bias. The error that results from a misapplication is random and non-systematic
(Larrick 2004, 316), i.e., it is usually unpredictable. A bias can be identified where the
descriptive behavior falls systematically short of normative ideals (ibid., 316). This is
the case if, e.g., a disgusting odor predictably leads to harsher moral judgments because
odor should not have any influence on the evaluation of moral settings.
The research paradigm I have been referring to, i.e., the manipulation paradigm,
might test the downsides of heuristics, namely biases, and thereby reveal further evidence that heuristics can lead to moral errors, i.e., heuristics can be in conflict with
ideals. Here are some more examples: Odor caused participants to evaluate gay men
more negatively (Inbar et al. 2012). Eskine and colleagues (2011) showed that the induction of bitter taste leads to harsher judgments than in the control group. The experiments in which good mood (dime in a telephone box) as well as disgust (e.g., by a dirty
desk, disgusting smell, bitter taste) was induced show that emotions have an influence on moral judgment. This result is in line with the heuristic approach, which claims that certain cues trigger certain judgments. That is because heuristics operate as mental short-cuts, as a cognitive abridgement between a cue (that represents the whole scenario) and the corresponding judgment. Which cue represents which kind of scenario is either learned, based on individual experience, or hard-wired. What the experiments additionally show is that heuristics can come into conflict with ideals: the cue that triggers a certain moral judgment should be part of the moral scenario, not external to it like, e.g., a dirty desk. Whenever the heuristic is triggered by an external cue, but the ensuing judgment is about a certain scenario, the heuristic turns into a bias. Therefore the
manipulation paradigm can be seen as part of the research on biases.
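The cue-to-judgment logic just described can be made concrete in a small toy simulation. The following sketch is purely illustrative and not drawn from any of the studies cited here; the 1–7 severity scale, the numbers, and the weighting of the "external disgust" cue are invented assumptions, chosen only to show how a random misapplication error differs from a systematic bias.

```python
import random

def moral_judgment(harm_level, external_disgust=0.0):
    """Toy heuristic: map a scenario cue (harm_level) to a severity judgment (1-7).

    noise models misapplication (random error that averages out over many judgments);
    the external_disgust term models a bias (a systematic shift caused by a cue that
    is not part of the moral scenario, e.g., a bad odor in the room).
    """
    noise = random.gauss(0, 0.5)
    bias = 0.8 * external_disgust
    raw = 3.0 + harm_level + noise + bias
    return max(1, min(7, round(raw)))

random.seed(1)
clean = [moral_judgment(harm_level=2) for _ in range(5000)]
odor = [moral_judgment(harm_level=2, external_disgust=1.0) for _ in range(5000)]

# Misapplication scatters judgments around the same mean; the external cue
# shifts the whole distribution -- the systematic departure that marks a bias.
print(sum(clean) / len(clean))   # close to the "ideal" severity for this harm level
print(sum(odor) / len(odor))     # reliably harsher, although the scenario is unchanged
```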
I have tried to show that the concerns regarding the truth source of our moral judgments can be dispelled. If we adopt the perspective of the heuristic approach, the empirical findings are compatible with common sense and certain types of rationalism. In what
follows, I will try to show that this is also true for the concerns regarding the reliability of
our reporting abilities.

5.2  Concerns Regarding the Reliability of Our Reporting Abilities

Findings used as evidence for the confabulation claim come from research on split-brain patients (e.g., Gazzaniga and LeDoux 1978), causal attribution (e.g., Nisbett and Wilson 1977) and causal inference (e.g., Wegner and Wheatley 1999; Pronin et al. 2006). Evidence for our tendency to rationalize our judgments mainly post hoc is provided by a series of experiments done by Nisbett and Wilson (1977) and by Haidt (2001).

5.2.1  Confabulation

Confabulation is a process that occurs when people are not aware of the causes of their
actions (Gazzaniga and LeDoux 1978). Nisbett and Wilson (1977) claim that whenever
we do not know about causes for effects, we confabulate causal reasons or use causal
strategies to infer them. We do not use our direct prior experience, trying to remember
which cause led to the present situation. Rather, we apply or generate causal theories
about effects and judge how plausible it is that the stimulus would have influenced the
situation (ibid., 248).
Two studies aim to show that even our feeling of having acted can be merely the result of the application of causal strategies and not of direct experience. In the I-spy study (Wegner and Wheatley 1999) participants believed that they had chosen a figure on a computer screen by mouse click. They had to think about a special figure (the figure was named over headphones) a few seconds before they were to choose one of
many figures. However, it was not the participant but a confederate who stopped the
pointer at the particular figure. When asked how strongly they felt that they had caused the stop, most participants answered that they strongly felt that they themselves had willingly stopped the pointer. In the Voodoo study (Pronin et al. 2006) students believed that they had caused another person's pain through a voodoo curse, if bad feelings about this person had previously been induced.
While Nisbett and Wilson do not imply that all causal theories are wrong in general,
we should nevertheless, according to other authors, worry about the fact that "[c]auses that escaped our attention, causes that are not easily remembered, and causes that are within our known range of causes will never be cited" (Sie and Wouters 2010, 126). The concern seems to be that we confabulate unsystematically, that we are opaque to ourselves, and that verbal reports about causes are therefore unreliable. Sie and Wouters
admit that these studies do not show that we do not act for reasons, but that the process
of providing reasons does not necessarily recollect the motives that drove our actions.
Rather, we infer them on the basis of information we do have (ibid., 127). The studies
done by Wegner and Wheatley, and Pronin et al. were interpreted as challenging too,
because, so it seems, people are also applying causal strategies rather than using direct
experience when it comes to the sense of agency. To sum up the general worry about
these findings: The mistakes indicate that the process of providing reasons is quite
different from what it seems and only loosely connected to the processes that generate
the actions. Initially one might think that when we give reasons we recollect the motives that drove our actions (Sie and Wouters 2010, 127).
First of all, we have to note that the experiments done by Wegner and Wheatley, and Pronin et al. are mainly about naming causes and not about reporting reasons. Furthermore, Nisbett and Wilson (1977, 233) claim that a priori theories give reliable estimates of the real causes, and Wegner and Wheatley (1999, 490) add that the application of the principles of causal inference usually leads to the correct identification.
We can call these a priori theories heuristics about causal inferences, which are generally sound and lead to the correct answer. Note that the act of confabulation is not the same as the use of heuristics. I do not believe that the two studies reveal anything about confabulation; rather, they reveal something about the use of heuristics. It is plausible to assume that general
knowledge about the connection of particular causes with particular effects has been
applied by the participants in the I-spy study. Usually, when we click the mouse in order to, say, save a document, the belief that it was our clicking that caused the saving is correct.
Even though the experiments need not worry us with regard to our ability to reliably report reasons, the Voodoo study interestingly reveals the downside of the causal heuristic that operated in the I-spy study. This bias is a tendency to overestimate personal influence (Pronin et al. 2006), which is a systematic error in causal inference.

5.2.2  Post-Hoc Rationalization

Post-hoc rationalization labels the fact that while our motives for actions are based on
unconscious automatic processes or intuitions, we nevertheless try to find reasons to
justify our judgment. By now, Haidt's incest setting (2001) is famous not only for the influence of intuitions on judgments, but also for post-hoc rationalization. After their initial judgment that the sex between the siblings was clearly wrong, participants were asked to provide reasons for their judgment. But all the reasons the participants gave (e.g., someone will be emotionally hurt, inbreeding is dangerous, no mutual agreement) had already been ruled out by the vignette. The participants were left dumbfounded, claiming that it is just wrong. Haidt claims that his Social Intuitionist Model allows that people know by intuition that something is wrong without knowing the reasons why. He explains the observed "moral dumbfounding" by pointing out that most of
our moral judgments are based on intuitions and unconscious affects instead of reasons. Reasons do not motivate the judgment but are provided in a post-hoc manner to
justify the initial judgment. Similar conclusions have been drawn by Nisbett and Wilson (1977), who conducted experiments in a parallel fashion long before Haidt. In one
experiment, participants had to choose from identical pairs of nylon stockings the one
with the best quality. Nisbett and Wilson used the setting to show that position has a large effect on choice. And indeed, "[t]here was a pronounced left-to-right position effect, such that the rightmost object in the array was heavily over-chosen" (ibid., 244).
When asked about the reasons for the choice, the position was not mentioned. The
participants instead praised the better feel or quality of the pair they had chosen (Newstead 2001).
The participants in Haidt's experiment show that they participate in the GoR. When asked, they provided reasons for their judgment. The presented reasons are the normative, widely accepted reasons which sufficiently justify that incest is morally wrong. In real cases, usually someone is emotionally hurt, inbreeding is dangerous, and incest is not based upon mutual agreement. I believe this explains why the participants did not go on to provide new reasons but repeated the ones they had already brought up. It is because the confederate rejected all the reasons that usually justify the prohibition of incest that they became dumbfounded. This state is the result of the rejection and of the failed justification. This line of argumentation also holds for Nisbett and Wilson's experiment.
The participants refer to normative, widely accepted reasons which usually justify the
choice of a certain item, namely better feel or quality.
The justificatory rationalist will not worry about these results. The belief that incest
is morally wrong is sufficiently justified by the right kind of reasons. People do refer to
these reasons and thereby repeat the justificatory relation between particular reasons
and the corresponding judgment. The justificatory rationalist does not mind that the
judgment might have been driven by intuitions since all he cares about is the process of justification. This is in line with Gigerenzer: "Moral intuitions can be based on reasons, even if the latter are unconscious. These reasons, however, need not to be the same as those given post hoc in public" (Gigerenzer 2008, 16).
Psychological rationalism, in its synchronic or diachronic version, does not have to
accept the conclusion that the judgment was primarily driven by intuitions. The experimental setting did not rule out that the participants had deliberated immediately prior
or years before they had made their judgment.
What Haidt's experiment reveals is an application error which can be explained within the heuristic approach to judgment and decision making. Sunstein (2005) considers the incest taboo a form of a "Do not tamper with nature" heuristic (ibid., 539f.) which also operates in the context of cloning and the genetic engineering of food. The rule that incest is wrong is internalized as a heuristic but nevertheless contains the abridged justificatory relation between reasons and judgment. The participants apply the heuristic in the experiment and, after being asked for reasons, they start to expand this relation. As mentioned before, heuristics tend to overgeneralize beyond their context. Siblings having sex is clearly an instantiation of incest and therefore triggers the related heuristic. The information in the scenario which is new with regard to the concept of incest is therefore disregarded. A justificatory rationalist can accept the occurrence of
application errors without giving up her substantial claims.
To defuse Nisbett and Wilson's results, we have to distinguish between reasons and causes. The position effect, which has been investigated, had been unknown to the participants. It is very likely that this effect caused the choice of the item. As a psychological mechanism it operates unconsciously but can be made conscious. There is an important difference between psychological facts (e.g., mechanisms like the position effect) and normative facts (reasons). While both can be used to explain the judgment,
they are part of different levels of description.
Reasons are part of the normative realm and bound to a first-person perspective.
The essential point is that "in characterizing an episode or a state as that of knowing, we are not giving an empirical description of that episode or state; we are placing it in the logical space of reasons, of justifying and being able to justify what one says" (Sellars 1997, 76). A justification cannot be ruled out or even replaced by a psychological description. And this again means that a differing psychological description does not
suffice to show that the participants make a mistake. But causes can enter the realm of
reasons. Psychological facts are potential reasons and become reasons if we take an
affirmative stance toward them (Ladwig 2003, 552-557). But this has not been the case
in the experiment: "And, when asked directly about a possible effect of the position of the article, virtually all subjects denied it" (Nisbett and Wilson 1977, 244).
Rebutting the supposed challenge to our ethical self-concept by means of the heuristic approach has generated an unavoidable new one. The new challenge is to answer the question of whether and how we can exercise control over heuristics.

6  Can We Get Heuristics Under Control?

If heuristics work well in most cases, i.e., produce satisfactory outcomes (Payne and Bettman 2004, 113), lead to quicker decisions (ibid., 121; Kahneman 2003, 1464) and are adaptive (Gigerenzer 2008), but also lead to systematic and predictable errors (Payne and Bettman 2004, 129; Sunstein 2005, 535), the question of whether we can prevent biases or control for overgeneralization becomes quite pressing. From an ethical standpoint this is a fundamental issue, but one that can only be answered by the empirical sciences. If biases systematically influence our judgments, i.e., if our judgments systematically fall short of normative ideals, they should be avoided. Under which circumstances we are biased is an empirical question. But as said before, in order to examine whether we can prevent or control biases, we first have to define, on the basis of a normative criterion, what counts as a heuristic and what counts as a bias. A large body of research has emerged to investigate the options and limits of debiasing strategies.

6.1  Debiasing Strategies

The question whether we can control our heuristics and therefore our biases is currently under debate, and we can find optimists (Gigerenzer) as well as pessimists (Kahneman, Tversky, Sunstein, Wilson, Brekke). Gigerenzer (2008), for example, is remarkably optimistic and favors a double-track strategy: "Because of their simplicity and transparency, however, heuristics can be easily made conscious, and people can learn to use or to avoid them" (ibid., 10). Sunstein (2005), in contrast, is remarkably pessimistic and points out that the reluctance to acknowledge that we are tricked by a heuristic is a product of unreflective, insistent intuitions (ibid., 538).
However, empirical findings suggest that a more fine-grained approach is needed to
take a firm stand. The work by Larrick (2004) seems to support Sunsteins view rather
than Gigerenzers. With Kahneman (2003), Larrick points out that there are reasons to
doubt that individuals can de-bias themselves. On the one hand they do not realize
their poor judgments. They attribute good outcomes to their skills and bad ones to
situational factors (Larrick 2004, 318). Additionally, people do resist being debiased
because they do not want to be told that they have done wrong (ibid., 331). On the
other hand he doubts that a simple training in biases without accompanying recognition skills would help (ibid., 326). Groups as error checking systems also do not help
much because shared training and discussions tend to lead to similar world views and
similar blind spots (ibid., 326f.). To improve decisions, he suggests using groups with highly diverse experiences and a training procedure in which every member must formulate her
own judgment independently before working in a group (ibid., 327). Nevertheless, he
points to research results which show that a class of decision rules can indeed be taught, especially statistical, logical, and economic principles (ibid., 324f.). Cheng and colleagues (1986) successfully trained students to reason with the "if p, then q" conditional by using familiar, pragmatic rules instead of abstract ones.
Payne and Bettman (2004) point out that "helping individuals manage attention is critical for improving decisions" (ibid., 112), which means that teaching attention skills could serve as a long-term debiasing strategy. However, they have reservations regarding the success to be expected: "[T]he potential biases or errors in reasoning that result should not be viewed as fragile effects that can easily be made to disappear; they are important regularities in decision behavior" (ibid., 114). These regularities, the biases,
are the downsides of otherwise useful heuristics that help to manage attention.
Wilson and Brekke (1994) distinguish two types of biases: failure of rule knowledge
or application errors and mental contamination (ibid., 118). They claim that for the
first type of bias, learning and training the rules increase the accuracy of judgments
(e.g., Sunk Costs, Law of Large Numbers), but that for the second type, improvement is
very difficult, if not impossible (Halo Effect, Anchoring) (ibid., 119). On the one hand,
the impossibility of controlling for mental contamination is due to the nature of human
cognition: Very often we are not aware of our mental processes and even if we are, we
still have limited control over them. On the other hand, it is due to the nature of lay
theories about the human mind: We tend to be highly overconfident about our own
skills (ibid., 120).

6.2  Philosophy Needs Psychology

Empirical research has already revealed important insights about how heuristics work
and which kinds of biases are particularly persistent. Future research will hopefully further differentiate and deepen this knowledge. Moral philosophers especially should pay close attention.
In order to be psychologically robust, a normative ethical theory has to reflect on
who the addressees of which kinds of ought-statements are. From a theoretical point of view, usually all humans are addressed. But, as I will assume, if it holds that ought implies can, then we cannot demand from individuals what they cannot achieve.
If it turns out that a robust effect like the Anchoring Effect, i.e., "the assimilation of a numeric estimate to a previously considered standard" (Mussweiler et al. 2000, 1142), affects (nearly) all of us, this is worthy of our attention. First, we have to find out
in which contexts this influence additionally has morally unwanted effects. There might
be contexts where the effect is negligible from a moral perspective but there might still
be some other contexts where it should not be ignored (e.g., courtrooms). In a second
step, psychological research can inform us whether individuals can control for the effects and if so, what the necessary environmental conditions and helpful strategies are.

96

Janett Triskiel

In this sense a psychologically informed philosophy has to overcome the notion that
a virtuous character or a good will alone suffices for correction, since biases are systematic errors over which we have only limited influence.
This information can, and I think should, impact ought-statements and shift the
scope of responsibilities. Of course, from empirical evidence concerning a robust bias it
does not follow that no one is any longer responsible for anything. People who work in
morally relevant contexts should inform themselves (and should be informed) about
biases that systematically mislead their judgments and should take appropriate action.
To sum up: Moral philosophy should be informed by psychological research on this
topic because it is an empirical question which kinds of heuristics we apply in a given
situation and under which circumstances they turn into a bias. Furthermore, only empirical experiments can reveal the circumstances under which we can control them and
identify the most helpful strategies to this end.

References

Blair, R.J.R. (1995). A cognitive developmental approach to morality: Investigating the psychopath. Cognition 57, 1–29.
Blair, R. (1997). Moral reasoning and the child with psychopathic tendencies. Personality and Individual Differences 22, 731–739.
Brandom, R. (1994). Making it explicit. Reasoning, representing, and discursive commitment. Cambridge: Harvard University Press.
van den Bos, W., & Güroğlu, B. (2009). The role of ventral medial prefrontal cortex in social decision making. The Journal of Neuroscience 29(24), 7631–7632.
Cheng, P.W., Holyoak, K.J., Nisbett, R.E., & Oliver, M. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology 18, 293–328.
Darley, J.M., & Batson, C.D. (1973). From Jerusalem to Jericho: A study of situational and dispositional variables in helping behavior. Journal of Personality and Social Psychology 27(1), 100–108.
Davidson, D. (1980). Essays on actions and events. Oxford: Clarendon Press.
Doris, J.M. (1998). Lack of character. Personality and moral behavior. New York: Cambridge University Press.
Eskine, K.J., Kacinik, N.A., & Prinz, J.J. (2011). A bad taste in the mouth: Gustatory disgust influences moral judgment. Psychological Science 22(3), 295–299.
Gazzaniga, M.S., & LeDoux, J.E. (1978). The integrated mind. New York: Plenum Press.
Gigerenzer, G. (2008). Moral intuitions = fast and frugal heuristics? In W. Sinnott-Armstrong (Ed.), Moral psychology. Volume 2: The cognitive science of morality: Intuition and diversity (pp. 1–26). Cambridge: MIT Press.
Greene, J.D., Sommerville, R.B., Nystrom, L.E., Darley, J.M., & Cohen, J.D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science 293, 2105–2108.
Greene, J.D. (2007). Why are VMPFC patients more utilitarian? A dual-process theory of moral judgment explains. Trends in Cognitive Sciences 11(8), 322–323.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108(4), 814–834.
Harenski, C.L., & Hamann, S. (2005). Neural correlates of regulating negative emotions related to moral violations. NeuroImage 30, 313–324.
Heekeren, H.R., Wartenburger, I., Schmidt, H., Schwintowski, H.-P., & Villringer, A. (2003). An fMRI study of simple ethical decision making. Cognitive Neuroscience and Neuropsychology 14(9), 1215–1219.
Horberg, E.J., Oveis, C., Keltner, D., & Cohen, A.B. (2009). Disgust and the moralization of purity. Journal of Personality and Social Psychology 97(6), 963–976.
Horberg, E.J., Oveis, C., & Keltner, D. (2011). Emotions as moral amplifiers: An appraisal tendency approach to the influences of distinct emotions upon moral judgment. Emotion Review 3(3), 237–244.
Huebner, B., Dwyer, S., & Hauser, M. (2008). The role of emotion in moral psychology. Trends in Cognitive Sciences 13(1), 1–6.
Inbar, Y., Pizarro, D., & Bloom, P. (2012). Disgusting smells cause decreased liking of gay men. Emotion 12(1), 15.
Jones, A., & Fitness, J. (2008). Moral hypervigilance: The influence of disgust sensitivity in the moral domain. Emotion 8(5), 613–627.
Joyce, R. (2008). What neuroscience can (and cannot) contribute to metaethics. In W. Sinnott-Armstrong (Ed.), Moral psychology. Volume 3: The neuroscience of morality: Emotion, brain disorders, and development (pp. 371–394). Cambridge: MIT Press.
Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics. The American Economic Review 93(5), 1449–1475.
Ladwig, B. (2003). Autonomie als Antwortfähigkeit. Unpublished conference paper, talk at the 5th GAP Conference in Bielefeld: http://www.gap5.de/proceedings/pdf/547-559_ladwig.pdf. Accessed 31 Jan 2014.
Larrick, R.P. (2004). Debiasing. In D.J. Koehler & N. Harvey (Eds.), Blackwell Handbook of Judgment and Decision Making (pp. 316–337). Oxford, UK: Blackwell Publishing.
Mazar, N., & Zhong, C.-B. (2010). Do green products make us better people? Psychological Science 21(4), 494–498.
Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences 34, 57–111.
Moll, J., Oliveira-Souza, R., Eslinger, P.J., Bramati, I.E., Mourao-Miranda, J., Andreiuolo, P.A., & Pessoa, L. (2002a). The neural correlates of moral sensitivity: A functional magnetic resonance imaging investigation of basic and moral emotions. The Journal of Neuroscience 22(7), 2730–2736.
Moll, J., Oliveira-Souza, R., Bramati, I.E., & Grafman, J. (2002b). Functional networks in emotional moral and nonmoral social judgments. NeuroImage 16, 696–703.
Mussweiler, T., Strack, F., & Pfeiffer, T. (2000). Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin 26(9), 1142–1150.
Newstead, S. (2001). Introspection: A new look? The Psychologist 14(1), 34.
Nisbett, R.E., & Wilson, T.D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review 84(3), 231–260.
Payne, J.W., & Bettman, J.R. (2004). Walking with the scarecrow: The information-processing approach to decision research. In D.J. Koehler & N. Harvey (Eds.), Blackwell Handbook of Judgment and Decision Making (pp. 110–132). Oxford, UK: Blackwell Publishing.
Prinz, J. (2006). The emotional basis of moral judgment. Philosophical Explorations 9(1), 29–43.
Pronin, E., Wegner, D.M., McCarthy, K., & Rodriguez, S. (2006). Everyday magical powers: The role of apparent mental causation in the overestimation of personal influence. Journal of Personality and Social Psychology 91(2), 218–231.
Schnall, S., Benton, J., & Harvey, S. (2008a). With a clean conscience: Cleanliness reduces the severity of moral judgments. Psychological Science 19(12), 1219–1222.
Schnall, S., Haidt, J., & Jordan, A.H. (2008b). Disgust as embodied moral emotion. Personality and Social Psychology Bulletin 34(8), 1096–1109.
Searle, J. (2000). Mind, language and society: Philosophy in the real world. London: Phoenix.
Sellars, W. (1997). Empiricism and the philosophy of mind (ed. by R. Brandom). Cambridge: Harvard University Press.
Sie, M., & Wouters, A. (2010). The BCN challenge to compatibilist free will and personal responsibility. Neuroethics 3(2), 121–133.
Sommer, M., Rothmayr, C., Döhnel, K., Meinhardt, J., Schwerdtner, J., Sodian, B., & Hajak, G. (2010). How should I decide? The neural correlates of everyday moral reasoning. Neuropsychologia 48, 2018–2026.
Sunstein, C. (2005). Moral heuristics. Behavioral and Brain Sciences 28(4), 531–573.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science 185(4157), 1124–1131.
Valdesolo, P., & de Steno, D. (2006). Manipulations of emotional context shape moral judgment. Psychological Science 17(6), 476–477.
Waldmann, M., Nagel, J., & Wiegmann, A. (2012). Moral judgment. In K.J. Holyoak & R.G. Morrison (Eds.), The Oxford Handbook of Thinking and Reasoning (pp. 364–389). New York: Oxford University Press.
Wegner, D.M., & Wheatley, T. (1999). Apparent mental causation: Sources of the experience of will. American Psychologist 54, 480–491.
Wheatley, T., & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science 16(10), 780–784.
Wilson, T., & Brekke, N. (1994). Mental contamination and mental correction: Unwanted influences on judgments and evaluations. Psychological Bulletin 116(1), 117–142.

Part II
Empirical Approaches in Recent Moral Psychology Research

Young Children's Concern for Others' Well-Being as a Core Motive for Developing Prosocial Behavior
Robert Hepach

Abstract
This paper investigates the underlying motive of prosocial behavior in young children, particularly the function of benevolent feelings. On an evolutionary scale, the human capacity for genuine other-oriented behavior significantly contributed to a group's survival as a whole. Studies on the ontogeny of prosocial behavior suggest that the motive of young children's helping behavior is a genuine concern for another's well-being. By the second year of life, children engage in various ways on behalf of others, including fulfilling others' goals and comforting those who are hurt. A brief review of this developmental work is provided with a focus on specifying the intrinsic motivational mechanism of children's prosociality. Not only do children show signs of genuinely selfless behavior, but their concern for others develops to include more flexible sympathetic responses, such that children help less if a request for help is unjustified. Children's sympathetic helping is driven by an assessment of the person's actual need. Such insights into justified and unjustified requests for help may represent one crucial step toward children's more flexible forms of prosociality, including moral behavior.

NB: This paper is based on the author's Ph.D. dissertation (Hepach 2012). Therefore,
several passages in the present text are adapted from the introduction and conclusion
of the unpublished work.

Robert Hepach
Max Planck Institute for Evolutionary Anthropology
Department of Developmental and Comparative Psychology
hepach@eva.mpg.de

Springer Fachmedien Wiesbaden 2016


C. Brand (Ed.), Dual-Process Theories in Moral Psychology, DOI 10.1007/978-3-658-12053-5_5

On January 2, 2007, a New York construction worker named Wesley James Autrey
performed a heroic act (Radiolab 2014; Carnegie Hero Fund 2014). He was waiting for
a subway train when a man standing next to him had a seizure and fell onto the train
track bed. Mr. Autrey, who was with his daughters at the time, did not hesitate and
jumped onto the tracks in an attempt to rescue the young man. He persisted despite the
fact that a train was arriving at the station. It became clear that Mr. Autrey would not
be able to move the victim in time before the train got to them. However, instead of
leaving the track level to bring himself to safety, Mr. Autrey decided to lie on top of the
victim and pressed their bodies to the ground such that the train could pass over them.
Both Mr. Autrey and the young man survived.
A remarkable feature of heroic acts is that people often cannot explain what propelled them to help. For example, when people such as Mr. Autrey are interviewed,
they fail to give an ad hoc explanation for their prosocial behavior (see also Haidt 2001
for similar post-hoc rationalizations of intuitive judgments). One is inclined to think
that such genuine and altruistic behavior must be the result of a thorough and deliberative process. After all, the benefactor is risking her/his life to save a complete stranger. However, in interviews heroes do not recall extensive deliberation but rather report having acted on
something like instinct, a feeling that compelled them to act on behalf of the person in
need (Radiolab 2014). Such acts of altruism can be thought of as an extreme manifestation of a general human propensity to care about others, even strangers. It has been
suggested that empathic feelings elicit altruistic behavior where the main motive is to
care about the well-being of another person (Batson 2010). Most people admire Mr.
Autreys behavior and consider it something they would have done themselves if, at the
time, they had had the courage to do so. The question is where does our motivation to
help others come from? More specifically, how does prosocial behavior emerge in human ontogeny? Why do humans help each other at all?
The present chapter seeks to address this question of the core motives underlying
human helping behavior. First, human behavior (and animal behavior in general) is the
product of evolutionary processes. Therefore, the first section is dedicated to a brief
summary of theories regarding the evolution of prosocial behavior and its survival
benefits. On the evolutionary (ultimate) level, helping is in fact an adaptive behavior
given that the benefactor may profit in the long run through various forms of reciprocation from beneficiaries and observers (Milinski et al. 2002; Nowak and Sigmund
1998; Trivers 1971). On the proximate level, the question is how helping is actually
motivated and in particular its earliest occurring forms early in ontogeny (Tomasello
2009; Warneken and Tomasello 2009). The distinction between the two levels of analysis is important when explaining behavior. A child at the age of two may not be fully
aware of the return benefits ultimately provided when helping others. However, this
insight is not a necessary condition for prosocial behavior because the proximate
mechanisms, i.e., the actual motives such as empathic feelings, can elicit helping more or less independently of its ultimate function. Therefore, this chapter reviews recent
research findings on the ontogeny of prosocial behavior in young children with a particular focus on the motives underlying simple forms of helping such as completing
others goals and comforting those who were harmed. Finally, an outlook is provided
on how young childrens prosocial motives may develop into more complex forms of
prosociality, including moral behavior.

The Evolutionary Significance of Prosocial Behavior

The theory of evolution has offered a biologically rooted way of thinking about the
functionality of behavior (Darwin 1859). In Darwins original formulation, those individuals who survive are better adapted to their environment. One challenge for the
theory of natural selection has been to explain other-benefitting behavior, which could
conceivably reduce the benefactors fitness. Hamilton (1964) expanded Darwins original theory by shifting the focus from the survival of the individual to the perspective of
genes. From a gene-centered point of view, altruistic behavior, especially toward kin,
serves to promote the survival of the genes the benefactor shares with those that are
related to him. Crucially, a behavior then is not only judged in terms of costs and rewards, but also by the degree of relatedness between the benefactor and the beneficiary.
Empirical evidence for Hamiltons theory has been provided in a number of domains,
including social psychology and evolutionary psychology (see Simpson and Beckes
2010). When presenting individuals with a hypothetical situation in which they have to
decide whether to help another person, people are more willing to carry out costly prosocial behavior (altruism) toward kin rather than strangers (Burnstein et al. 1994).
A further approach to explaining the evolution of altruistic behavior has come from
focusing on groups as the target of natural selection pressures. According to group
selection theory, the combined prosocial effort of group members leads to the group as
a whole being better prepared against environmental influences, animal predators, and
other competing groups (Darwin 1874; Sober and Wilson 1999). The central point is
that an individuals chances of survival are directly dependent on the groups survival.
This, in turn, allows for the possibility of individuals performing other-benefitting behavior that strengthens the group. Groups consisting of altruists fare better than groups
comprising only individual egoists, where egoistic, in a narrow sense, means not
providing benefits for others (Sober and Wilson 1999).
Within groups, certain group-level mechanisms have been proposed that ensure
helping behavior also benefits the benefactor. Trivers (1971) suggested that other-benefitting behavior can evolve through a process by which helpful acts are reciprocated among individuals over time. In this theory of reciprocal altruism, or direct reciprocity, individuals are repaid in kind by those whom they have helped (see also Axelrod 2006). An alternative group-mechanism is indirect reciprocity. Benefactors can
obtain their return benefits indirectly via an enhanced reputation conferred by observers of the process (Milinski et al. 2002; Nowak and Sigmund 1998; Panchanathan and
Boyd 2004; Rockenbach and Milinski 2006). Therefore, a helpful person gains benefits
from the cost of helping because her reputation in the group is enhanced and she may
be preferred by partners in future collaborative activities. Similarly, the group-level
mechanism of strong reciprocity holds that human cooperation is maintained through
a process in which an individual helps anyone in the group as long as someone has
helped him in the past (Fehr and Fischbacher 2003). On such an account, individuals
help others not because the beneficiary could directly reciprocate but rather to maintain the integrity of the group in the face of competition with other groups (Barta et al.
2011; Fehr and Fischbacher 2005; Gintis et al. 2008). It is important to note that on all
of these accounts, reciprocity does not refer to the psychological motive that propels
individuals to help others. Rather, the concepts of direct, indirect, and strong reciprocity refer to mechanisms that ultimately provide return benefits to the benefactor either
because she may be helped in the future and / or because the integrity of the group as a
whole is strengthened.
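For direct reciprocity in particular, the cooperation-theory literature offers an equally compact condition; it is not stated in this chapter, but it captures when repeated interaction can make helping pay. With cost $c$ to the helper, benefit $b$ to the recipient, and probability $w$ of a further encounter between the same two individuals, reciprocated helping can be favored when

$$ w \;>\; \frac{c}{b}, $$

that is, when repeat interactions are likely enough that today's cost can be expected to be repaid.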
Further theories on cooperation in humans have focused on another group-level
mechanism, namely that, among primates, humans are perhaps unique in their interdependence. In their evolutionary past, human individuals foraging success, e.g., the
hunting of large prey, required coordinated collaboration with others, which led to
caring directly about the welfare of the collaborative partners (Roberts 2005; Tomasello
2009; Tomasello et al. 2012). According to the theory of interdependence, individuals
help one another because the benefactor can directly profit from the well-being of the
beneficiary; if the latter is a skilled hunter in the group, then having him well and
healthy will increase the chances of the group successfully hunting for food in the future.
All of the above theories explain behavior on the evolutionary, sometimes termed
ultimate, level (Mayr 1963). In other words, helping behavior is explained by the fact
that it ultimately pays to help other individuals, especially those in ones own group.
However, from evolutionary theories of prosocial behavior, it is not clear what actually
motivates individuals to carry out such behavior, i.e., to help in the first place. Within the context of prosociality, the term ultimate refers to mechanisms by which return benefits can be provided to the benefactor. The term proximate describes the actual psychological mechanisms that motivate prosocial behavior from the perspective of the benefactor. Therefore, the question of underlying motives addresses the proximate (psychological) level of behavior.

The Proximate Reality of Prosocial Behavior

Darwin himself noted that humans are equipped with a social instinct that complements the suite of other basic instincts, such as feeding and mating (Darwin 1874). As
with all instincts, this social instinct is crucial for the survival of groups. According to Darwin, sympathy is shared with other animals, but the degree to which it is developed in
humans is unique in the animal world. Kropotkin argued that sympathetic behavior is
evident in the mutual aid within animal groups, e.g., wild Siberian horses forming a
protective ring against predators (Kropotkin 1910, p. 7). Likewise, Preston and De
Waal (2002) pointed out that humans share core empathic dispositions with other animals, but that humans evolved additional cognitive skills, such as perspective taking,
rendering them more capable of sophisticated cognitive empathy. Thus, human sympathy is part of a phylogenetic continuum of animals caring for conspecifics.
Philosophers have put forth similar arguments. Aristotle argued in favor of a social,
communal sense that made individuals seek the company of fellow human beings. This
sense of togetherness was inherent to humans. Furthermore, human nature was described as being in a state of tension between the emotions or desires on the one hand
and reason on the other (see Ozinga 1999, chapter 2). Rousseau claimed that humans
are good-natured and that the laws and rules of society should be a natural continuation of the prosocial tendencies humans carry in themselves from birth (Rousseau
2010). David Hume argued for a benevolent view of human nature given the fact that
the very source of what humans judge to be good or bad is rooted in our moral feelings
of approval and disapproval of others conduct (Hume 2002, chapter 2). Moreover,
Adam Smith agreed with what Hume referred to as human benevolence by further
specifying that inherent in the human feeling of sympathy rests an appreciation of the
circumstances under which others may experience harm, such that we naturally sympathize less if anothers despair appears to be unjustified (Smith 1994, chapter 1). This
observation by Smith highlights that humans take into account whether anothers request for help is reasonable and justified given the circumstances.
In addition to evolutionary approaches and philosophical inquiries, empirical researchers have investigated the effect of empathy on altruistic behavior. In particular,
when adults feel empathic toward another person they will persist in helping even
when they could easily escape the situation (Batson et al. 1981; Coke et al. 1978). Batson and colleagues have argued that empathetic concern is the cause of genuine altruistic behavior (Batson 2010). In those studies, empathetic concern is elicited in a variety
of contexts. For example, adults feel empathetic when they are encouraged to take another persons perspective (Coke et al. 1978) and when the person shares similar values
and attitudes (Batson et al. 1981).

In sum, the human capacity to be motivated to help others has profound consequences for the coherence of groups. Evolutionary approaches have outlined its survival value for individuals within a group of collaborators. Social instincts motivate behavior that benefits others. The question that arises is the following: If social instincts have
evolved over evolutionary time, when do they develop in ontogeny? In particular, what
motives underlie helping behavior in young children?

Ontogeny of Prosocial Behavior

Helping behavior in young children can be grouped into three categories depending on
the type of need the helper is responding to: 1) sharing resources including information, 2) comforting, and 3) instrumental helping (Dunfield et al. 2011; Warneken
and Tomasello 2009). The most basic form of helping, which will be the focus of this
section, is fulfilling others instrumental goals. Rheingold (1982) found that children at
the ages of 18, 24, and 30 months readily helped an adult in everyday household tasks.
Furthermore, Warneken and Tomasello (2006) presented 18-month-old toddlers with
situations in which an adult was struggling to overcome a physical obstacle, such as
trying to get to an out-of-reach object. Children helped the adult in a majority of instrumental tasks, e.g., picking up dropped objects and opening cabinet doors for the
adult. Importantly, the study also included control conditions to ensure that children
did not find the tasks of overcoming obstacles enjoyable by themselves. In these control
conditions, children showed helping behavior significantly less often, suggesting that
they were motivated to act only in situations where the adult needed help. This behavior has also been shown in 14-month-old children (Warneken and Tomasello 2007).
Furthermore, Warneken, Hare, Melis, Hanus, and Tomasello (2007) showed that 18-month-olds would persist in helping an adult even if they had to overcome physical
obstacles in order to reach the adult. In another study, Svetlova, Nichols, and Brownell
(2010) demonstrated that toddlers at the ages of 18 and 30 months persisted in bringing objects to the adult until the relevant object was among them. The child would bring
items as long as the adult verbalized her need. Finally, helping behavior in infants as
young as 21 months is already selective. Children prefer to help an adult who was previously willing to help them, even if unsuccessful, over an adult who was previously
unwilling to help (Dunfield and Kuhlmeier 2010). Taken together, these findings show
that children at a very early age demonstrate robust helping behavior toward adults.
But what triggers children to carry out these actions?
Warneken et al. (2007) found that children will help equally in situations, regardless
of whether they were rewarded by an adult. Moreover, in another study, Warneken and
Tomasello (2008) further investigated the effects of rewards on children's helping behavior. In this study, 20-month-old infants were presented with three different situations. In one situation, an adult needed help reaching an object and would not reward
participants if they picked up the object for her. In a second situation, the adult praised
children for helping, thus providing a social reward. A third group of children received
a material reward for helping from the adult. After this treatment phase, children in all
three conditions could help the adult in nine additional instances. The results showed
that those children who had received a material reward for their helping were less likely
to help the adult in the subsequent situations. This was not the case in the praise condition where childrens rate of helping remained high. The authors reasoned that childrens motivation to help must be intrinsic given that extrinsic material rewards undermined it, a phenomenon known as the overjustification effect (Lepper et al. 1973).
In a more recent study, Warneken and Tomasello (2013) showed that at the age of two,
childrens helping is not influenced by the presence or absence of their parents. That is,
childrens motivation to help an adult is not affected by whether their parents actively
encourage them to help.
In sum, by the age of two years, children appear to be naturally motivated to help
others achieve their goals (Tomasello 2009; Warneken and Tomasello 2009). But we
can still ask what triggers children to help in those situations. The problem is, how does
one test hypotheses regarding the underlying motivation of childrens behavior?

Young Children's Concern for the Well-Being of Another Person in Need - The Core Motive

To address the actual motivation underlying young childrens helping behavior, a recent study used a novel methodology to measure childrens internal arousal state during a helping task (Hepach et al. 2012). Specifically, changes in childrens pupil dilation
were measured as an indicator of their level of internal arousal and its reduction. Increases in sympathetic activity result in greater changes in pupil dilation. The kinds of
psycho-sensory stimulation known to induce pupil dilation include a variety of phenomena, e.g., viewing or listening to emotionally charged stimuli (Bradley et al. 2008;
Partala and Surakka 2003), mentally adding numbers (Kahneman and Beatty 1966),
and anticipating rewards (Bijleveld et al. 2009). The latter finding is relevant for present
purposes because it documents an important property of pupil dilation. More specifically, a stimulus that bears motivational significance, such as a reward in the form of
money (Bijleveld et al. 2009), can trigger increases in pupil dilation (see also Nieuwenhuis et al. 2010). This suggests that the measure of pupil dilation can index changes
of sympathetic activity (internal arousal) underlying motivational states. Recent studies employing violation-of-expectation paradigms have applied the measure of pupil dilation in infancy research and show that infants respond with greater pupil dilation to irrational social events (Gredebäck and Melinder
2010), impossible physical actions (Jackson and Sirois 2009), and when others perform
actions incongruent with their emotional display (Hepach and Westermann 2013).
In their study on prosocial motivation, Hepach et al. (2012) measured changes in 2-year-olds' internal arousal in response to different resolutions of an actual helping situation. Participants saw an adult reaching for an item he had accidentally dropped (see
Fig. 1). The adult needed the object to resume a task that was interrupted by the
dropped object. Crucially, he sat behind a desk in such a way that he could not reach
the object without help. One group of children subsequently got the opportunity to
help the adult (Help condition). They responded within seconds by picking up the
dropped object and handing it to the adult. A second group of children saw the same
situation but were held back by their parents (No-Help condition). Therefore, they
could not personally help and the situation remained unresolved for the adult.
The prediction was that childrens internal arousal would be greater in the second
situation when no help was provided. This could be because children may be motivated
to provide the help themselves. This would allow them to potentially get credit from
the beneficiary who can reciprocate in future interactions. Another possibility is that
children were genuinely concerned with the adults well-being, and did not primarily
care about providing the help themselves. Therefore, if childrens initial motive was
indeed to get credit, they should remain aroused if another person provided the help
for them. However, if it was anothers well-being they were concerned about, then internal arousal should also decrease when another person helps. This was tested with a
third condition in which parents held back their children, but in this condition they
watched as another adult handed the object to the adult in need (Third-Person-Help
condition). Childrens internal arousal, i.e., pupil dilation, was measured both before
and immediately after the situation. The results showed that the average increase in
pupil dilation was greater in the No-Help condition compared to both the Help condition and the Third-Person-Help condition. There was no difference in childrens
arousal between the Help condition and the Third-Person-Help condition. That is,
childrens internal arousal decreased equally when they themselves provided the help
and when another person helped. In addition, the more children were aroused after
witnessing the problem, the quicker they were to help the adult (Hepach et al. 2013a).
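To make the logic of this comparison concrete, the following sketch shows how such change scores could, in principle, be compared across the three conditions. It is not the authors' analysis pipeline; the data are synthetic placeholders generated only to mirror the qualitative pattern described above (a greater arousal increase when no help is provided), and the group sizes and effect sizes are invented.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)

def change_scores(n_children, mean_increase):
    """Synthetic per-child change in pupil diameter (post minus baseline, in mm)."""
    baseline = rng.normal(3.5, 0.3, n_children)
    post = baseline + rng.normal(mean_increase, 0.15, n_children)
    return post - baseline

# Invented group sizes and means, chosen to echo the reported qualitative pattern.
help_self = change_scores(12, 0.05)     # child provides the help
no_help = change_scores(12, 0.25)       # situation stays unresolved
third_party = change_scores(12, 0.05)   # another adult provides the help

# If only "getting credit" mattered, arousal should stay elevated whenever the
# child cannot act herself; if seeing the person helped is what matters, the
# Help and Third-Person-Help groups should look alike and only No-Help stands out.
print([round(group.mean(), 3) for group in (help_self, no_help, third_party)])
print(f_oneway(help_self, no_help, third_party))
```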

Fig. 1: Experimental Conditions of Hepach et al. (2012)

An illustration of the experimental design used in Hepach et al. (2012). Children in all
conditions saw an adult reaching for an object he needed to continue a task (top panel).
In the Own-Help condition children were able to pick up the object for the adult (bottom
left panel). In the No-Help condition children were held back by their parents and could
not help and no help was provided at all for the adult (bottom center panel). In the
Third-Person-Help condition children could not help but saw another adult provide the
help (bottom right panel).
These results suggest that the motivation underlying young childrens helping behavior
is not to perform the behavior themselves, and thus to get credit for it, but rather to
see the person in need being helped. Therefore, it seems unlikely that concerns for reciprocity are a crucial component of early spontaneous helping behavior in young children. It may be that the motive of getting recognition for ones helping emerges later in
ontogeny. The core motive for children to help others is to see the person in need being
helped. The question then is, how does this motive develop and how does it transfer to
other situations?

Instrumental help, such as picking up objects that have dropped, is arguably a low-cost form of helping behavior. A different type of prosocial behavior is comforting.
During the second year of life, childrens feelings of empathy when others are in distress can result in comforting behavior (Zahn-Waxler et al. 1992). Furthermore, children become increasingly sensitive to the situation in which another individual shows
distress (Bischof-Khler 1991). As Roth-Hanania, Davidov, and Zahn-Waxler (2011)
pointed out, young childrens responses become indicative of a more cognitive component of empathy where they attempt to interpret the situation of the person in need.
Moreover, Eisenberg and Fabes (1990) refer to this form of empathy as sympathy. At
this point in development, childrens emotional response to others in distress has a
clear motivational component; children who experience sympathy (e.g., decrease in
heart rate) are more likely to help others, whereas those more prone to respond with
negative distress (e.g., increase in heart rate) will be less likely to help (Eisenberg and
Miller 1987; see also Hastings et al. 2006). In a recent study, Vaish, Carpenter, and Tomasello (2009) found that 18- and 25-month-old toddlers respond with a concerned
facial expression to an adult being harmed even if that adult did not display any emotional cues. Moreover, toddlers concern was positively correlated with their prosocial
behavior toward the adult. The greater childrens concern while watching the transgression, the more they helped the adult on a later occasion.
Taken together, children's motivation to help others includes situations where others cannot achieve a goal and where another person is hurt or in distress. Furthermore,
such prosocial behavior emerges early in ontogeny during the second year of life. However, by the age of three, children will likely have interacted with other peers and seen
situations in which peers show emotional distress. One prototype would be that one
child (A) has an object that the other child (B) wants. If child B takes the object, child A
will most likely be upset. Alternatively, if child A does not want to share, child B may
begin to cry because she did not get child As object. However, the emotional distress of
child B would elicit a different response in an observer. Whereas the case of child A
being upset after her toy is taken away is justified, the second scenario, where child B
becomes upset after not getting the other childs toy, seems unjustified. That is, child B
does not have good reasons to be upset because it was not her toy to begin with (see
also Leslie et al. 2006). This raises an interesting point: Instead of sympathizing automatically with others, children, in fact, respond flexibly to the justifiedness of the others distress.

Children Become Sensitive to Whether Another's Emotional Distress Is Justified

To address the question of whether children would sympathize less in response to unjustified distress, Hepach, Vaish, and Tomasello (2013b) explored 3-year-olds sympathetic responses to an adult displaying emotional distress, either after he was genuinely
harmed (e.g., he got his fingers caught in a cardboard box), after experiencing only a
minor inconvenience (e.g., his sleeve got caught in the box), or when the source of the
distress was unknown to the children. In the latter situation, the children were briefly
turned away and thus did not see any harm being caused. Therefore, the amount of
harm caused to the adult was systematically varied while keeping constant his emotional cues. The question was if children would respond automatically to the emotional
distress or if they would take into account the context in which the distress was displayed. The authors coded childrens facial expression from video following a predefined coding scheme (e.g., Zahn-Waxler et al. 1992) as well as their helping behavior
toward the adult.
The results showed that childrens motivation to help the adult was not the same
across conditions as would have been the case if they responded automatically to distress cues regardless of context. Instead, childrens concern, as measured through their
facial expression, was the lowest in the condition where they did not know the cause of
the adults emotion, and increased with the degree of harm they witnessed, such that
the greater the harm, the more concern children showed. Furthermore, children helped
and reengaged the adult significantly more often in both the harm and no-harm conditions compared to the minor-harm condition. In addition, children in the harm and
no-harm conditions checked more often on the adult in a subsequent task where he
expressed emotional distress behind a barrier. Therefore, 3-year-old children do not
automatically sympathize and act prosocially towards any individual displaying distress. Rather, they take into account the context and whether the distress is appropriate
given the harm caused. They show less concern and less prosocial behavior toward an
individual who displays unjustified emotional distress, indicating that the childs prosocial behavior is not an automatic response to the amount of harm caused or to witnessing emotional distress. Finally, childrens degree of concern was negatively correlated with their latency to help the adult in a later situation. If showing emotional distress is taken as a request for help, then children by the age of three can flexibly adjust
their sympathetic response in cases when the distress is unjustified. This is an important cognitive development because it allows children to modulate their behavior
and respond efficiently to others in need of help.

Conclusion

From as early as two years of age, children's motivation to help others shows signs of benevolence. Children are motivated to increase and maintain the well-being of those around them. Their motivation appears selfless and automatic, as it is not influenced by the presence or absence of a parent and does not depend on the degree of parental encouragement (Warneken and Tomasello 2013). Rather, children's motivation is intrinsic and driven by a concern to see others in need being helped. If early helping behavior can be viewed as simple acts of kindness, then one crucial developmental step is for children to control their prosocial tendencies and to act on them flexibly. In the case of sympathy, children, by the age of three, become sensitive to the justifiedness of another person's distress. They help less and show less concern if the situation in which an adult is showing emotional distress does not justify it, i.e., if the adult is overreacting. This insight allows children to distinguish between when help is needed and when it is not needed.
Of course, children's prosocial behavior encompasses more than instrumental and empathic helping. One task for future research is to study whether the motivation to see others helped applies also to other domains of prosocial behavior, e.g., sharing, where a resource needs to be divided.
From an evolutionary point of view, it paid off for individuals to care for one another. Groups consisting of altruists had higher chances of survival in the face of environmental challenges and competing groups (Sober and Wilson 1999). Because group
members are interdependent, they have an active stake in each other's welfare (Roberts
2005; Tomasello et al. 2012). But how might a motivational mechanism to help others
have evolved? All animals likely have some form of tension systems that guide their
behavior toward fulfillment of the organism's basic needs. At the non-social level, an
individual interacting with the environment will be in states of tension if there is something to be gained, i.e., if there is an incentive, such as a desirable object. If this goal is
frustrated, then tension remains aroused because the need was not fulfilled. Tension is
reduced if the goal is obtained. In a social species, individuals will frequently encounter
others who are observed pursuing goals. It is therefore conceivable that an individual's own tension is aroused by someone else's goal (Hornstein 1982).
As humans evolved to become obligate cooperators (Tomasello et al. 2012), there
was an increase in group size, with social interactions becoming more numerous and
collaborative activities increasing in complexity and commitment. Evolution naturally
created a selection pressure; individuals that became involved in others' struggles and who could only reduce tension by providing help themselves would then be at a disadvantage. Therefore, evolution built on top of mechanisms that previously worked, now favoring individuals whose tension was aroused by another person's needs and would also decrease if a third party helped. This is the logical consequence of a species that
lives in increasingly demanding social environments. Evolutionarily, humans' motivation to be genuinely prosocial is not a maladaptive trait. By maintaining cultures of
ever-increasing social complexity, humans have created a natural selection pressure for
biological adaptations to provide the motivational basis for our collaborative nature. It
is therefore possible that prosocial motivation evolved from an individual's tension becoming linked to others' needs (see also Hepach et al. 2013a).
The fact that young children's motivation to help is specifically linked to the fulfillment of the other's need suggests that a mechanism evolved which rendered individuals good and efficient helpers (not in the sense of their motivation being good or bad, but in terms of their helping being efficient and targeted). One could argue that it does not matter whether groups have good-natured altruists amongst them. What matters is that individuals within the group are helped appropriately. If groups only consisted of individuals who were kind-hearted but never responded appropriately to others' needs, then those group members would not have survived for long. A group in which individuals are inefficient at helping one another will likely not stand the test of time.
Finally, the core motivation to see others helped may serve a crucial function for
more complex forms of prosocial behavior, such as moral development. Having no
morals equates to having no regard for others. Many of the moral dilemmas we face
include deciding on an appropriate response, even when the response may be at odds
with the individual's interests. One aspect of morality is having to orchestrate individual interests into the collective interest of the group. This would not be possible if individuals were not generally motivated to curb their own self-interests and care about the well-being of others. If there were no fundamental motivation to help, no genuine concern for others, how could we take an individual's concerns into account when deciding on the moral or right course of action? Therefore, the motivation of young children to see others helped can be seen as a starting point of a later-developing moral compass to care about the well-being of others.
Whereas very young children appear selfless in their willingness to act on behalf of others, their motivation to help may gain the extra quality of not merely acting on one's automatic instinct but taking on more mature forms of human helping behavior that have more deliberative elements when deciding whom and why to help. Mature moral development rests on the fact that humans have the capacity to care not only about themselves but also about the well-being of others. It is a fundamental human challenge to integrate these prosocial tendencies from the start into a culture where each member profits from the benevolence of others.


References

Axelrod, R. (2006). The evolution of cooperation: Revised edition. Cambridge, MA: Basic Books.
Barta, Z., McNamara, J., Huszár, D., & Taborsky, M. (2011). Cooperation among non-relatives evolves by state-dependent generalized reciprocity. Proceedings of the Royal Society B: Biological Sciences 278(1707), 843–848.
Batson, C. (2010). Empathy-induced altruistic motivation. In M. Mikulincer & P. R. Shaver (Eds.), Prosocial motives, emotions, and behavior: The better angels of our nature (pp. 15–34). Washington, DC: American Psychological Association.
Batson, C., Duncan, B., Ackerman, P., Buckley, T., & Birch, K. (1981). Is empathic emotion a source of altruistic motivation? Journal of Personality and Social Psychology 40(2), 290.
Bijleveld, E., Custers, R., & Aarts, H. (2009). The unconscious eye opener: Pupil dilation reveals strategic recruitment of resources upon presentation of subliminal reward cues. Psychological Science 20(11), 1313–1315.
Bischof-Köhler, D. (1991). The development of empathy in infants. In M. E. Lamb & H. Keller (Eds.), Infant development: Perspectives from German speaking countries (pp. 245–273). Hillsdale, NJ: Erlbaum.
Bradley, M., Miccoli, L., Escrig, M., & Lang, P. (2008). The pupil as a measure of emotional arousal and autonomic activation. Psychophysiology 45(4), 602–607.
Burnstein, E., Crandall, C., & Kitayama, S. (1994). Some neo-Darwinian decision rules for altruism: Weighing cues for inclusive fitness as a function of the biological importance of the decision. Journal of Personality and Social Psychology 67(5), 773–789.
Carnegie Hero Fund. (2014). http://carnegiehero.org/search-awardees, 28.7.2014.
Coke, J., Batson, C., & McDavis, K. (1978). Empathic mediation of helping: A two-stage model. Journal of Personality and Social Psychology 36(7), 752.
Darwin, C. (1874). The Descent of Man and Selection in Relation to Sex. London, UK: John Murray.
Darwin, C. (1909). The Origin of Species. New York, NY: Collier & Son. (Original work published 1859)
Dunfield, K., & Kuhlmeier, V. (2010). Intention-mediated selective helping in infancy. Psychological Science 21(4), 523–527.
Dunfield, K., Kuhlmeier, V. A., O'Connell, L., & Kelley, E. (2011). Examining the diversity of prosocial behavior: Helping, sharing, and comforting in infancy. Infancy 16(3), 227–247.
Eisenberg, N., & Fabes, R. (1990). Empathy: Conceptualization, measurement, and relation to prosocial behavior. Motivation and Emotion 14(2), 131–149.
Eisenberg, N., & Miller, P. (1987). The relation of empathy to prosocial and related behaviors. Psychological Bulletin 101(1), 91.
Fehr, E., & Fischbacher, U. (2003). The nature of human altruism. Nature 425, 785–791.
Fehr, E., & Fischbacher, U. (2005). Human altruism: Proximate patterns and evolutionary origins. Analyse & Kritik 27(1), 6–47.
Gintis, H., Henrich, J., Bowles, S., Boyd, R., & Fehr, E. (2008). Strong reciprocity and the roots of human morality. Social Justice Research 21(2), 241–253.
Gredebäck, G., & Melinder, A. (2010). Infants' understanding of everyday social interactions: A dual process account. Cognition 114(2), 197–206.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108(4), 814.
Hamilton, W. D. (1964). The genetical evolution of social behaviour. I. Journal of Theoretical Biology 7(1), 1–16.
Hepach, R. (2012). The motivation of early benevolence: An investigation into a causal mechanism of cooperation. (Unpublished doctoral dissertation). University of Leipzig, Germany.
Hepach, R., & Westermann, G. (2013). Infants' sensitivity to the congruence of others' emotions and actions. Journal of Experimental Child Psychology 115(1), 16–29.
Hepach, R., Vaish, A., & Tomasello, M. (2012). Young children are intrinsically motivated to see others helped. Psychological Science 23(9), 967–972.
Hepach, R., Vaish, A., & Tomasello, M. (2013a). A new look at children's prosocial motivation. Infancy 18(1), 67–90.
Hepach, R., Vaish, A., & Tomasello, M. (2013b). Young children sympathize less in response to unjustified emotional distress. Developmental Psychology 49(6), 1132–1138.
Hastings, P. D., Zahn-Waxler, C., & McShane, K. (2006). We are, by nature, moral creatures: Biological bases of concern for others. In M. Killen & J. Smetana (Eds.), Handbook of moral development (pp. 483–516). New Jersey: Lawrence Erlbaum.
Hume, D. (2002). Eine Untersuchung über die Prinzipien der Moral. Stuttgart, Germany: Reclam. (Reprinted from Enquiries concerning Human Understanding and concerning the Principles of Morals, 1777)
Jackson, I., & Sirois, S. (2009). Infant cognition: Going full factorial with pupil dilation. Developmental Science 12(4), 670–679.
Kahneman, D., & Beatty, J. (1966). Pupil diameter and load on memory. Science 154, 1583–1585.
Kropotkin, P. (1910). Mutual Aid: A Factor of Evolution. London, UK: William Heinemann.
Lepper, M., Greene, D., & Nisbett, R. (1973). Undermining children's intrinsic interest with extrinsic reward: A test of the "overjustification" hypothesis. Journal of Personality and Social Psychology 28(1), 129.
Leslie, A. M., Mallon, R., & DiCorcia, J. A. (2006). Transgressors, victims, and cry babies: Is basic moral judgment spared in autism? Social Neuroscience 1, 270–283.
Mayr, E. (1963). Animal species and evolution. Cambridge, MA: Harvard University Press.
Milinski, M., Semmann, D., & Krambeck, H. J. (2002). Reputation helps solve the tragedy of the commons. Nature 415(6870), 424–426.
Nieuwenhuis, S., De Geus, E., & Aston-Jones, G. (2010). The anatomical and functional relationship between the P3 and autonomic components of the orienting response. Psychophysiology 48(2), 162–175.
Nowak, M. A., & Sigmund, K. (1998). Evolution of indirect reciprocity by image scoring. Nature 393(6685), 573–577.
Ozinga, J. R. (1999). Altruism. Westport, CT: Praeger Publishers.
Panchanathan, K., & Boyd, R. (2004). Indirect reciprocity can stabilize cooperation without the second-order free rider problem. Nature 432(7016), 499–502.
Preston, S., & De Waal, F. (2002). Empathy: Its ultimate and proximate bases. Behavioral and Brain Sciences 25(1), 1–20.
Partala, T., & Surakka, V. (2003). Pupil size variation as an indication of affective processing. International Journal of Human-Computer Studies 59(1), 185–198.
Rheingold, H. (1982). Little children's participation in the work of adults, a nascent prosocial behavior. Child Development 53(1), 114–125.
Radiolab. (2014). http://www.radiolab.org/story/104009-i-need-a-hero/, 28.07.2014.
Roberts, G. (2005). Cooperation through interdependence. Animal Behaviour 70(4), 901–908.
Rockenbach, B., & Milinski, M. (2006). The efficient interaction of indirect reciprocity and costly punishment. Nature 444(7120), 718–723.
Roth-Hanania, R., Davidov, M., & Zahn-Waxler, C. (2011). Empathy development from 8 to 16 months: Early signs of concern for others. Infant Behavior and Development 34(3), 447–458.
Rousseau, J.-J. (2010). Abhandlung über den Ursprung und die Grundlagen der Ungleichheit unter den Menschen. Stuttgart: Reclam. (Original work published 1755)
Simpson, J. A., & Beckes, L. (2010). Evolutionary perspectives on prosocial behavior. In M. Mikulincer & P. R. Shaver (Eds.), Prosocial motives, emotions, and behavior: The better angels of our nature (pp. 35–54). Washington, DC: American Psychological Association.
Smith, A. (1994). Theorie der moralischen Gefühle. Hamburg, Germany: Meiner. (Reprinted from The Theory of Moral Sentiments, 1759)
Sober, E., & Wilson, D. (1999). Unto others: The evolution and psychology of unselfish behavior. Harvard University Press.
Svetlova, M., Nichols, S., & Brownell, C. (2010). Toddlers' prosocial behavior: From instrumental to empathic to altruistic helping. Child Development 81(6), 1814–1827.
Tomasello, M. (2009). Why we cooperate. Cambridge, MA: MIT Press.
Tomasello, M., Melis, A. P., Tennie, C., Wyman, E., & Herrmann, E. (2012). Two key steps in the evolution of human cooperation. Current Anthropology 53(6), 673–692.
Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology 46(1), 35–57.
Vaish, A., Carpenter, M., & Tomasello, M. (2009). Sympathy through affective perspective taking and its relation to prosocial behavior in toddlers. Developmental Psychology 45(2), 534.
Warneken, F., & Tomasello, M. (2006). Altruistic helping in human infants and young chimpanzees. Science 311(5765), 1301–1303.
Warneken, F., & Tomasello, M. (2007). Helping and cooperation at 14 months of age. Infancy 11(3), 271–294.
Warneken, F., & Tomasello, M. (2008). Extrinsic rewards undermine altruistic tendencies in 20-month-olds. Developmental Psychology 44(6), 1785.
Warneken, F., & Tomasello, M. (2009). The roots of human altruism. British Journal of Psychology 100(3), 455–471.
Warneken, F., & Tomasello, M. (2013). Parental presence and encouragement do not influence helping in young children. Infancy 18(3), 345–368.
Warneken, F., Hare, B., Melis, A., Hanus, D., & Tomasello, M. (2007). Spontaneous altruism by chimpanzees and young children. PLoS Biology 5(7), e184.
Watts, D. P., & Mitani, J. C. (2002). Hunting behavior of chimpanzees at Ngogo, Kibale National Park, Uganda. International Journal of Primatology 23(1), 1–28.
Zahn-Waxler, C., Radke-Yarrow, M., Wagner, E., & Chapman, M. (1992). Development of concern for others. Developmental Psychology 28(1), 126.

Moral Argumentation Skills and Aggressive Behavior.
Implications for Philosophical Ethics
Michael von Grundherr

Abstract
Much recent research on moral judgment making has focused on quick one-shot judgments. Explicit reasoning has been shown to play a minor role in these cases. However, these results do not generalize to real moral conduct, which often includes the iterative adaptation of long-term behavioral strategies. I suggest using school bullying as an ecologically valid model for moral conduct and refer to studies that show that moral reasoning competence is negatively correlated with immoral aggressive behavior. Taken together, these results suggest a rather strict division of labor between reasoning on the one hand and automatic processes on the other hand. I suggest that moral reasoning is part of a long-term learning process, which sets the parameters for quick intuitive decision making. Philosophical ethics can play an important role by systematizing and reflecting on this learning process.

Introduction

Subjects in experiments on ethical decision-making regularly face fictional emergency situations such as the following: "[A] trolley threatens to kill five people. You are standing next to a large stranger on a footbridge […]. [T]he only way to save the five people is to push this stranger off the bridge" (Greene 2001, 2105). Ought you to do this?
Stimulus stories of this type are likely to put the subjects in a certain cognitive mode:
They suggest that subjects have to make a single quick decision on a complex and highly important matter. Under such conditions, people are likely to rely on automatic and
proven judgment schemas (cf. Bargh and Chartrand 1999). This is no time to think about the difference between variants of the categorical imperative or the relative merit of act- vs. rule-utilitarianism. Neither is it possible to start with one type of behavior and to adapt it iteratively on the basis of feedback. Rather unsurprisingly, there is increasing experimental evidence showing that conscious moral reasoning in the style of an explicit philosophical argument does not figure prominently in judgments of this type. This has been dubbed the "automaticity hypothesis" (Cushman et al. 2011).
These findings about moral judgment have a counterpart in research about moral action: In real emergency situations, moral reasoning is not likely to play a significant role. A central finding of moral courage research illustrates this: people who intervene and help the victim of an attack usually use a repertoire of specific and automatic action patterns for the relevant situation (Jonas and Brandstätter 2004). They often report in retrospect that they acted "like robots".1

1 Compare for instance reports about a tourist who intervened when a man was attacked in a Berlin subway station (Hamburger Abendblatt 2011), or the report on a driver who rescued a young woman from a burning car (Dahlkamp et al. 2013).
Stories like the trolley cases and first aid or civil courage cases are attractive paradigms in moral psychology research because they are clearly morally relevant as well as
experimentally accessible. However, many cases of moral conduct differ from these
paradigms. Being a helpful colleague, bullying classmates or committing a series of
minor criminal acts all extend over a long time, and require repeated decisions. These
behaviors result from an iterative learning process with many feedback loops.
In order to be a meaningful challenge for ethics, the psychological story about moral
behavior should not only rely on lab experiments about one-shot decisions. I will show
that field studies on school bullying provide important complementary data. On this
basis I will argue that moral reasoning may not be highly effective in isolated one-shot
judgments and actions, but may play a more important role for the choice and adaptation of long-term behavioral strategies.
To prevent misunderstandings common in this interdisciplinary debate, it is useful
to introduce some terminological conventions. They are not meant to make any substantive philosophical claim, but to delineate a shared terminological background.

– Unconscious processes are not accessible to introspection (e.g. most processing of visual input). They are, however, accessible via third-person inspection, e.g. by psychology or neuroscience.
– Intuitions are consciously accessible outputs of unconscious processes (e.g. most emotions, memory recall).
– Explicit processes are conscious and verbal or verbalizable (e.g. making a mathematical calculation).
– While some authors allow for reasoning to be both conscious and unconscious (Harman et al. 2012), I regard reasoning as explicit inference (Mercier and Sperber 2011, 57), i.e. the production of new conscious representations (conclusions) on the basis of other conscious representations (premises).
– Moral judgments are occurrent explicit beliefs and can be either intuitions or results of reasoning. Note that reasoning is not the only way to arrive at justifiable moral judgments. Intuitive judgments are often reason-responsive even if they are not based on explicit reasoning (Railton 2009, 82–83, 98).

State of the Debate – Empirical Findings and Implications for Ethics

2.1 Moral Psychology and Moral Reasoning

Moral psychology started out as a branch of developmental psychology. Two accounts dominated the field during the last decades of the 20th century: Kohlberg's global stage model (Kohlberg 1984) and the social-domain theory of moral development (Turiel 1983). Piaget's work is the common root of both. According to Piaget (1997), children
generate moral norms and develop moral reasoning skills through interaction with
their peers. Developmental psychologists in this tradition tend to regard moral reasoning and verbal justification of moral judgments as central elements of moral competence, albeit with different focuses:
Kohlberg's cognitive developmental approach focuses on verbal moral reasoning.
Kohlberg holds that good reasoning leads to adequate moral judgments. According to
Kohlberg, children and youths move through a sequence of stages when they develop
their moral reasoning skills. They start out to reason in a self-interested way and then
proceed to reasoning in terms of conventional social norms. Finally, they reach a stage at which their moral judgments are independent of the influence of authorities or conventions and follow general moral principles (Kohlberg 1984). Kohlberg also argues
that being on a higher level of moral reasoning skills predicts more consistent moral
behavior, i.e. more consistency between judgment and action (cf. Kohlberg and Candee
1983). He explains this by a lower tendency to accept pseudo-obligations and excuses
on the higher moral stages. There is also more recent evidence that moral reasoning
effectively influences behavior in certain contexts: Gummerum et al. (2008) show in a
study with high school students that students with high moral reasoning ability, as measured by Kohlberg's moral judgment interview, were more influential than other
students in group discussions and group decisions in an economic distribution game.
As opposed to Kohlberg, social-domain theorists (e.g. Turiel (1983), Nucci (1981),
Smetana (2006), Killen (2007), Nunner-Winkler (2009)) do not claim that children
move through a sequence of stages, but that judgment skills in three domains (psychological/personal, conventional, moral) develop in parallel. On this account, moral judgment does not necessarily require elaborate moral reasoning, but in a recent paper, Richardson et al. (2012) argue that reasoning increases the quality of moral judgments and may be essential to adjudicate between the different domains. They claim that "if someone is encouraged to reason about or justify her thinking then this will likely activate the representational system and allow for more nuanced judgments" (ibid., 14).
Both approaches share the optimistic view that individuals can learn a way of reasoning that leads to correct moral judgments and eventually guides action. On this
account, discursive engagement (talking to and debating with others) is important for
moral development. Children should get the opportunity to train their moral reasoning
skills and must learn to evaluate arguments independently of their personal preferences. Lind (2003) has developed a program of moral education that implements these
ideas. Participants (usually high school students) discuss moral dilemmas in a supervised group. In these dilemmas, the protagonists face a situation in which moral norms
are in conflict with each other. In one of the stories, Susanne observes her friend Uli
who steals an expensive blouse from a shop and escapes. Susanne must then decide
whether she should give Ulis name to a store detective, who has seen both of them
together and interrogates Susanne (ibid., 138). Susanne must weigh the requirements of
loyalty and honesty and eventually decides for honesty, i.e. she does tell the detective
Ulis name. In a structured discussion of this story, students learn to argue for their
position and to evaluate counterarguments to their own position impartially.
In the framework of the cognitive developmental approach and the social domain
theory, philosophical ethics is effective and important. It turns out to be an elaborate
version of common moral reasoning and can seamlessly interact with it. If common
moral reasoning reaches an impasse, professional ethical reflection may take over with methods that are more sophisticated but not different in principle. It is completely continuous
with the skills that high-school students learn in the moral reasoning training.
This view has been challenged by a more recent interdisciplinary research program
in moral psychology (see Doris 2010 for a recent overview). Several proponents of this
new moral psychology tend to attribute effective moral judgment to unconscious processes and claim that at least a significant proportion of moral reasoning amounts to
ineffective post-hoc rationalization. Haidt's social intuitionist model is probably the
most pronounced and influential example (Haidt et al. 2000; Haidt 2001, 2012). In a
famous experiment Haidt and colleagues asked their experimental subjects to evaluate
a case of consensual incest between siblings who had taken all possible precautions
against pregnancy, infections or damage to their reputation. Most subjects intuitively
judged this as morally wrong. When asked, they gave reasons (e.g. genetic defects of
potential children) for their judgment, but were reminded by the experimenter that
these reasons did not apply in this case (e.g. due to birth control). However, most subjects stayed with their original opinion and stated something like: "I don't know, I can't explain it, I just know it's wrong" (Haidt 2001, 814). They were, as Haidt puts it, "morally dumbfounded". Haidt concludes from this finding that unconscious intuitive processes usually determine moral judgment and reasoning comes later: "Under […] realistic circumstances, moral reasoning is not left free to search for truth but is likely to be hired out like a lawyer by various motives, employed only to seek confirmation of preordained conclusions" (ibid., 822).
According to Haidt and colleagues, children are born with a set of cognitive moral
modules that are fine-tuned by cultural imitation (Graham et al. 2012). Children are
prepared to detect, punish and avoid, for instance, violations of fairness. What counts
as fair in a society and whether fairness is important compared to other areas of norms
(such as care, loyalty, authority etc.), is learned by adaptation to social standards. This learning process is mainly unconscious, and people often do not know the decision procedures and norms they learn and apply. For instance, it has been shown that people distinguish between harm that is intended as a means and harm that is accepted as a side effect. But mostly, they cannot use this principle to justify their judgment (Cushman et al. 2006; Hauser et al. 2007). Of course, people can name some principles that they use, such as the principle that requires distinguishing between
action and omission. But even in these cases, recent imaging data shows that reasoning
with these principles is mostly secondary and mirrors automatic processes (Cushman
et al. 2011). Moreover, behavioral studies show that training in moral reasoning may
not help to correct errors of intuitive judgment. Schwitzgebel and Cushman (2012)
presented their subjects with two versions of the trolley dilemma mentioned in the
introduction of this article. They found that lay people judged the cases differently depending on the order in which the versions were presented. From a moral point of view
this is an unwanted bias. As the cases are independent, one should also evaluate them
independently. Professional philosophers should know this, but they were no less biased than lay people.
Philosophical ethics has some difficulty finding its place in this picture of moral judgment: Isn't it just a sophisticated lawyer under the dictate of intuition? To be fair, Haidt's claim should not be overinterpreted. Haidt (2001, 819) concedes that ethical
reasoning can play a role, especially if embedded in social discussion, and he himself
argues for a utilitarian normative theory. More moderate proponents of new moral
psychology give reasoning a clear role and assume that it can be overriding in some
cases (cf. Cushman et al. 2011). In any case, however, the interface between ethical
theory and practical moral reasoning is not as seamless as cognitive developmental and
social-domain theories claim.

2.2 Limits of the Automaticity Hypothesis

Over the last few years, critics have warned against premature conclusions from the automaticity findings, on both conceptual and empirical grounds.

2.2.1 Conceptual Criticism

Several authors have argued that the fact that automatic processes do most of the work
in moral decision-making is compatible with plausible moderate claims of rationalists.
They agree that metaethical rationalists would be in trouble if reasoning and philosophical ethics were purely epiphenomenal. But in order to guide moral judgment effectively, reasoning does not have to be involved all the time, or so they argue. A two-stage argument has been put forward in various versions:
First, automatic processes suffice to decide in a morally adequate way most of the
time. Kennett and Fine (2008), using a distinction made by Jones (2003), argue that
automatic processes can track moral reasons, even if the agent is not consciously reason-responsive: an agent may do automatically what there is most reason to do without
being able to name all or even some relevant reasons. Moreover, such automatic responses are not just makeshift solutions. They provide enormous processing capacity
and thus enable a degree of speed and context-sensitivity that is not available to reasoning. By taking over most of the cognitive workload, they also free up reasoning for non-routine tasks. As Railton (2009, 81) puts it, automated processes are essential for "fluent agency", which requires both automatic habitual behavior and reflection.
Second, rationalism is warranted even if reason only selectively intervenes in the
working of automated processes. Craigie (2011, 67ff.) suggests making use of Pettit's (2007) concept of "virtual control", according to which an agent can be regarded as
reason-responsive if reason monitors automatic decisions and only takes over control if
something is going wrong. In order to control automatic prejudice, e.g. a racial prejudice of an interviewer in a series of job interviews, reasoning does not need to replace
all automatic judgment making (cf. Kennett and Fine 2008, 79, 88ff.). It needs only to
correct the effects of biases selectively, and it must only intervene in cases that may trigger the prejudice, for instance when interviewees have a certain ethnic background.
To summarize: reasonable rationalists hold that "[t]he real moral judgment is ultimately the one that the agent can reflectively endorse" (ibid., 93; cf. Sauer 2012). Not
every real moral judgment must be actively endorsed in a reasoning process. Reasonable rationalists are satisfied with possible endorsement. Thus most automaticity findings are not strictly in conflict with a reasonable rationalist position.

2.2.2 Empirical Counterevidence

The critical conceptual points reported above suggest looking more closely at the empirical data and asking (a) whether the experiments on which the reasoning-critical positions such as Haidt's social intuitionist model are based provide enough triggers for reasoning, and (b) whether they are able to record potential effects of reasoning.
Haidt et al.'s influential moral dumbfounding experiment seems to show that non-involved, third-person judgments, which are even discussed with another person, are produced by intuition. Isn't this a good model for the kind of decision in which rationalists would expect reason to intervene? I suspect that this appearance is deceptive. Haidt and Björklund confront their subjects with a well-prepared confederate who challenges their clear intuitive decision about the consensual incest case with a battery of arguments in a live and recorded discussion. This is not a typical case of relaxed deliberation. The subjects had better argue for what is most likely true instead of liberally testing alternative hypotheses and making a fool of themselves. Consequently, the experiment only shows that people under pressure are likely to stick to their intuition and to use reasoning for lawyer-like ex post argumentation only.
Furthermore, subjects may find it difficult to adopt the instruction of the experimenter immediately and blank out all risks of incest. What the experimenter says is just
not credible information for a well-trained automatic process. The automatic system
might simply not have enough time to reset its parameters so fundamentally. Thus the
experiment cannot disprove that intuition learns from reasoning slowly and iteratively,
including time-consuming quality and coherence checks.
A recent reprise of the dumbfounding experiment operationalizes these concerns.
Paxton et al. (2012) confront their subjects with Haidt's original incest story. In addition, they give them a written statement and let them deliberate about it without time pressure. The statement informs the subjects that they are likely to have strong intuitions, but then explains why these intuitions are not adequate in this specific case and should be disregarded. This variation causes a breakdown of the dumbfounding effect:
Paxton et al. found that the experimental group changed their opinion significantly
more often than people in the control groups, who had to read a flawed argument or
did not have time to deliberate.
The dumbfounding experiment is of course not the sole basis for the automaticity
hypothesis. Many relevant studies use the trolley-case paradigm (Hauser et al. 2007;
Valdesolo and DeSteno 2006). As mentioned already in the introduction, typical trolley
experiments force one-shot decisions and suggest the context of an emergency situation. Rationalists would not expect much reasoning in such situations anyway. However, even in these cases there is evidence that deliberate processes are at work. Greene
has famously argued in a series of papers that those subjects who make consequentialist
decisions in trolley studies employ cognition rather than emotion (Greene 2007, 40). Kahane disagrees with Greene about the interpretation of these findings and claims
that not consequentialist, but counterintuitive results lead to employment of deliberate
processes (Kahane 2012). Both agree, however, that more cognitive processes play a
role. I am reluctant to rely on these observations as decisive evidence for the effectiveness of reasoning, as it is unclear whether controlled cognition as observed by Greene and
Kahane can count as reasoning in the strict sense at all. Nevertheless, these findings cast
at least initial doubt on strong versions of the automaticity hypothesis.
Another motive for accepting the automaticity hypothesis comes from research on
the influence of unconscious emotional states on moral judgment. For instance, Valdesolo and DeSteno (2006) showed that watching a comedy clip changes the evaluation
of trolley dilemmas and Wheatley and Haidt (2005) found that disgust induced during
hypnosis makes moral judgments more severe. Prinz (2006) draws far-reaching philosophical conclusions from these results and claims that they force one to accept
metaethical sentimentalism. However, it can be argued that the results only warrant
much weaker interpretations: the fact that moral judgment may be slightly influenced
by induced emotion does not show that emotion is the main determinant of moral
judgment (May 2014) or that no other cognitive process can affect it. Furthermore,
rationalists can resort to the literature on prejudice control, which shows that people can compensate for unconscious biases in their judgment by various metacognitive
processes including reasoning (Kennett and Fine 2008, 88ff.). Knowledge about susceptibility to emotional influence is certainly important for ethical practice, but it cannot
decide the general question about the role of reasoning.
On a more general level, critics of the automaticity hypothesis seem to converge on
two points: (a) intuitive (or emotional) processes may often or even necessarily (cf.
Sauer 2012) be involved in moral judgments, especially in quick one-shot judgments,
(b) but this cannot prove that reasoning is inefficient. These results are far from spectacular and this, in turn, may indicate that the range of data on which the discussion
has been based is too narrow to warrant more interesting generalizations. Researchers
interested in implications for ethics should look for a broader and more valid database,
or so I will argue. According to my hypothesis, actual moral conduct is mostly long-term and involves iterated decisions. There is good reason to expect that effects of reasoning are more visible in this type of data. Reasoning is most likely to be triggered in
long-term learning processes. One important trigger for reasoning and learning is
feedback, which is only available after a first action. Moreover, the most probable effect
of reasoning is to be seen in the long-term choice of social strategies, social roles and
behavioral patterns.
Taken together, conceptual and empirical doubts about the importance of the automaticity findings motivate the following interactionist research hypothesis: reasoning and unconscious inference interact in the production of moral judgment and action. Depending on the task, the relative importance will vary: in single actions, people apply moral schemes and strategies automatically and without much reasoning. In the
long run, however, moral reasoning (reflection, control, learning) gains importance
and has an influence on action.

Long-Term Behavior and Reasoning as a Learning Mechanism

The critical discussion reported in the last sections shows that empirical results from
lab experiments must be handled with care and cannot be easily generalized to all moral decision-making and action. In what follows I will demonstrate how data from field
studies may put the debate about the role of moral reasoning into perspective. As these
studies investigate action in real-life contexts, they are also more directly relevant for
applied ethics.

3.1 Empirical Evidence for the Role of Reasoning in Long-Term Behavior

A large body of research on the role of moral reasoning in long-term behavior comes
from research on delinquency. In a comprehensive meta-analysis, Stams et al. (2006)
show that lower levels of moral judgment correlate with delinquency in juvenile delinquents. It is disputable, however, whether delinquency is a good prototype for what we usually
understand as immoral behavior, even though it is quite likely a subclass.
In studies that my colleagues and I have conducted, we investigate a more typical case of immoral behavior, viz. school bullying. According to Olweus' standard definition, "[a] student is being bullied or victimized when he or she is exposed, repeatedly and over time, to negative actions on the part of one or more other students" (Olweus 1993, 9). Bullies, their assistants and supporters use aggression against the victim in order to improve their social status in the peer group; they instrumentalize the victim to reach social dominance. In other, more Kantian terms, bullies use the victim as a means to reach self-interested goals. Victims suffer severely both from immediate physical and mental harm and from long-term effects, such as lower self-esteem, increased emotional loneliness and difficulties in maintaining friendships (Schäfer et al. 2004). Using others as means and causing severe harm have been consistently categorized as morally wrong by psychologists (e.g. Turiel 1983) and philosophers in the Kantian tradition (cf. Kant 2012; Korsgaard 1996), a tradition which strongly influences moral and legal culture in Germany, where we conducted our studies. Thus we regard bullying as an adequate prototype of immoral behavior.
Bullying is widespread and therefore an important phenomenon in itself. Moreover
it shares many characteristics with other important cases of immoral behavior. This is obvious for bullying in other contexts. According to Smith et al. (2003), workplace
bullying is remarkably similar to school bullying in many respects. Going further, I
think it is a plausible hypothesis that many cases of discrimination or unfair treatment
of social groups follow similar dynamics. This claim needs empirical backup and for
the time being it is enough to record the fact that bullying itself is a typical case of immoral behavior.
Bullying differs from cases with runaway trolleys, moral courage situations or awkward interviews. It is long-term by definition. Aggression that does not appear repeatedly and over time does not count as bullying. When a new class is formed, it takes
some weeks until students establish a social hierarchy and find their roles. Once these
roles are established, students enact them rather consistently. Students do not decide at
a single point in time whether to bully another student. Instead, they run through an
iterative process, in which they act aggressively or prosocially, find out if they are successful or not, register feedback from their peers, justify their behavior to teachers,
peers and parents and adapt their behavioral strategies for the next iteration. In other
words, they engage in an extended learning process. The most promising approach to
the prevention of bullying therefore modulates this learning process: if class or school
culture and credible authorities make aggression unsuccessful, students who aim at
social dominance are much more likely to adopt pro-social strategies (Schäfer et al. in
press).
In our study, bullying roles were determined by a well-established peer-nomination procedure (Salmivalli et al. 1996; Schäfer and Korn 2004). A typical item for the bully role in this questionnaire was: "Who regularly insults others?". There were also items for other roles, such as defenders, outsiders or victims. The students were asked to nominate a number of classmates for each of these questions. Based on the number of times a student had been nominated in the different categories, behavioral types (roles) could be determined. In addition to this, we measured moral reasoning skills with the
Moral Judgment Test (MJT) developed by Lind (e.g. Lind 2008). The MJT has been
developed out of Kohlberg's original moral judgment interview (Colby and Kohlberg
1987). Subjects read two moral dilemma stories. In one of these stories, factory workers
break into the main office of their company in order to prove that the management has
been illegally spying on employees through an intercom system.2 Subjects must first
indicate whether they think that the workers acted rightly or wrongly. Then they are
asked to indicate how strongly they accept or reject six moral arguments that support
the workers' decision and six arguments that speak against their decision. Each of the arguments belongs to one of Kohlberg's levels of moral judgment, which range from a
purely hedonistic level to a level of principle-guided universal reasoning. By summing up the acceptance score of the four items per judgment level (one pro- and one contra-argument per story), one could determine the subject's preference for a Kohlbergian judgment level.

2 The other story is about a doctor who agrees to administer a potentially deadly dose of morphine to a terminally ill patient (Lind 2008, 197).
However, the test design allows for a more sophisticated scoring. The so-called C-Score (Competence Score) measures whether a subject judges the quality of the arguments independently of his or her own decision in the case. Let me illustrate this with the workers' dilemma. The pro-argument on level 6 claims that the workers have acted rightly, because "trust between people and individual dignity count more than the firm's internal regulations" (Lind 2008, 198; Lind 2009). The contra-argument on level 6 appeals to basic property rights, which can only be violated if universal moral principles allow doing so. Contrary to these principle-guided considerations, the arguments on level 1 appeal to the self-interest of the involved parties. The pro-argument on level 1 refers to the fact that the workers did not cause much damage to the company (ibid.), while the contra-argument on the same level dwells on the fact that the workers helped their colleagues more than themselves. Subjects who consistently rate pro- and contra-arguments on their preferred level as acceptable get a high competence score. These
subjects accept arguments based on their quality, not on their content. Subjects who
choose only pro- or only contra-arguments across levels, e.g. accept the level 6 and the
level 1 pro-argument, but reject the level 6 and the level 1 contra-argument, score low
on the test. The choice behavior of these subjects indicates that they accept arguments
that support their own opinion, but do not so much care about the quality of the arguments.
The C-Score is in principle independent of Kohlberg's hierarchy of judgment levels. As a matter of fact, however, it seems that people who judge consistently and get a high C-Score often prefer the more principle-based arguments on the higher judgment levels (cf. Lind 2008). Only rarely do people get a high score by consistently accepting pro- and contra-arguments on low levels.
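To make this scoring idea concrete, the following is a minimal, purely illustrative sketch in Python (it is not Lind's published scoring algorithm). It assumes, for illustration only, acceptance ratings on a scale from -4 to +4, keyed by story, level, and argument direction, and it computes (a) the summed level preferences described above and (b) a toy consistency index that is high when ratings track the level of an argument and low when they merely track its pro or contra direction. All function names and the two example subjects are hypothetical.

    from itertools import product
    from statistics import mean

    # Illustrative test structure: two dilemma stories, six Kohlbergian argument
    # levels, and one pro- and one contra-argument per story and level.
    STORIES = ("workers", "doctor")
    LEVELS = range(1, 7)
    DIRECTIONS = ("pro", "contra")

    def level_preferences(ratings):
        """Sum the acceptance ratings of the four items per judgment level
        (one pro- and one contra-argument per story), as described in the text."""
        return {lvl: sum(ratings[(s, lvl, d)] for s, d in product(STORIES, DIRECTIONS))
                for lvl in LEVELS}

    def consistency_index(ratings):
        """Toy consistency score in [0, 100]: the share of rating variance explained
        by argument level. Subjects who rate arguments by their level score high;
        subjects who simply endorse all pro- (or all contra-) arguments score low."""
        values = list(ratings.values())
        grand = mean(values)
        ss_total = sum((v - grand) ** 2 for v in values)
        if ss_total == 0:
            return 0.0
        n_per_level = len(STORIES) * len(DIRECTIONS)
        prefs = level_preferences(ratings)
        ss_level = sum(n_per_level * (prefs[lvl] / n_per_level - grand) ** 2
                       for lvl in LEVELS)
        return 100.0 * ss_level / ss_total

    # Hypothetical "lawyer-like" subject: accepts every pro-argument and rejects
    # every contra-argument, regardless of level.
    lawyer_like = {(s, lvl, d): (4 if d == "pro" else -4)
                   for s, lvl, d in product(STORIES, LEVELS, DIRECTIONS)}

    # Hypothetical "level-driven" subject: ratings depend only on the argument level.
    level_driven = {(s, lvl, d): lvl - 3
                    for s, lvl, d in product(STORIES, LEVELS, DIRECTIONS)}

    print(consistency_index(lawyer_like))   # 0.0
    print(consistency_index(level_driven))  # 100.0

On this toy operationalization, the lawyer-like pattern described in the text receives the minimum score, while a subject whose ratings track argument quality receives the maximum, regardless of which level he or she happens to prefer.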
The C-Score is highly interesting for the discussion about automaticity and the role
of explicit reasoning. In the MJT, people are presented with a moral question and are asked to make a spontaneous judgment. Only after triggering this judgment does the test force people to engage in explicit reasoning about the case, i.e. to evaluate a list of more or less sophisticated arguments. The C-Score then indicates whether people are able to reason independently of their spontaneous judgment. To use Haidt's (2001) terms, subjects with a
high C-Score are able to avoid reasoning like a lawyer who only justifies given intuitive
judgments. However, the test does not show that impartial moral reasoning is actually
effective in people with a higher C-Score. They might be able to appreciate the quality
of arguments, but fail to make use of this ability in morally relevant situations. It is an empirical question whether good performance in the test leads to good moral judgment and eventually to morally good behavior.


Our results provide evidence for a link between moral reasoning competence and
long-term moral behavior in bullying contexts. We find evidence that high school students (aged 11–17) with lower C-Scores are more likely to take aggressive roles, i.e. to behave in a morally wrong way (von Grundherr et al. 2015). This finding provides support for the following picture of cognitive processes underlying long-term immoral behavior: The typical student – even if involved in bullying – thinks that bullying is morally
wrong and undesirable in general (Whitney and Smith 1993). However, this knowledge
does not generally translate into behavior; many students fail to live up to their own
moral standards. This divergence can be due to a lack of moral motivation or concern
for the moral principles. The moral motivation of bullies might be weak and break
down as soon as immoral behavior promises high benefits in terms of social status in
the peer group. However, our findings suggest that an additional mechanism might
play a role. Differences in the ability to judge situations in an impartial way (as measured by the C-Scores) may at least partially explain the divergence between moral attitudes and behavior. Students with a low C-Score might fail to apply their general anti-bullying attitude to their own behavior, while students with high C-Scores succeed in
evaluating the specific situation in an impartial and unbiased way according to their
general moral standards.
This interpretation gains plausibility when an often-observed feature of bullying dynamics is taken into account. Bullies usually succeed in establishing a coherent set of
wrong beliefs and pseudo-moral arguments in their group. They manage to make a
large part of the class, maybe also themselves, believe that their victim has morally deserved their plight (Schäfer et al. in press). Students with a higher C-Score may be better able to adjudicate between these pseudo-moral arguments and proper moral considerations in a complex situation in which much is at stake for them. Students with a lower C-Score may be less able to uphold their initial (anti-bully) view of the situation over
time, especially if it is regularly challenged by influential peers and acting on it turns
out to be inconvenient. Moral reasoning competence may block this erosion of adequate situation-specific moral judgments. It reliably ties them back to general moral
standards.
This explanation is in line with Lind (2003), who emphasizes that the C-Score
measures an application competence (i.e. the ability to apply moral standards to specific situations). Lind claims that interventions should foster application competence
instead of providing value education. He refers to classical studies by Levy-Suhl (1912)
showing that criminals do not differ from average citizens in their preference for universal moral principles.
To summarize, the ability to reason impartially about moral issues and to reliably
evaluate the quality of moral arguments is associated with long-term moral behavior
(role-taking) in bullying situations. I take this as evidence that reasoning is relevant for
effective moral judgments and moral conduct. However, as we have investigated only stable long-term behavior, we cannot derive any claim about the role of reasoning in
single moral decisions, which are investigated in the majority of experiments on moral
judgment.

3.2 The Division-of-Labor-Hypothesis

Having reviewed the current debate about the relative influence of reason and intuition
(or emotion) on moral judgment, Helion and Pizarro suggest: "Perhaps shifting the question from simply asking if reason influences moral judgment, and toward when and how reasoning influences moral judgment yields more nuanced insight" (Helion and Pizarro in press).
Dual process theories have been used to capture the interplay between intuitive and
explicit moral judgment making. In one version, dual process models simply assume a
parallelism of two cognitive processes, which fulfill the same task with different means.
For instance, Greene (2014) compares the two systems for moral judgment with two
modes of a camera. In the manual mode, the user defines exposure and aperture, while
in the automatic mode, a preprogrammed algorithm defines the settings in question.
Approaches at the other end of the spectrum assume complementary rather than
overlapping processes. Craigie (2011), for instance, highlights the fact that in a differentiated dual process model such as the one developed by Kahneman and Frederick
(2002), automatic and deliberate processes have different tasks: automatic processes
drive first-order decision making while explicit processes exert metacognitive control
over automatic processes. Let us call this the division-of-labor-hypothesis.
Taken together, the evidence from studies on short- and long-term moral behavior is
highly compatible with the division-of-labor-hypothesis in a very strict form, which
claims that reasoning and unconscious inference do not overlap. According to this
view, every decision is a result of unconscious inference. Reasoning plays an important
role in learning processes and can modify the automatic system, without ever making a
judgment itself.3 The details of the experiment by Paxton et al. (2012, 8) are revealing.
Subjects adapted their judgment about the case of consensual and riskless incest when
they had time to reflect and read a strong argument, which explained that automatic
feelings of disgust toward incest had made sense for a large part of human evolutionary
history, but had lost their relevance today. The option of contraception had changed
the environment and therefore ones feelings of disgust were not a good basis for a
judgment anymore. This is essentially an argument about the information that one
3

Parallel constraint satisfaction theory (PCS) as developed by Glckner and Betsch (2008) is one way to spell out
this thesis more formally. Most of the time, our cognitive systems include new information into an existing constraint network and thus quickly find a solution that is good enough for most purposes. Permanent coherence
maximization is an unconscious process. Reasoning does not participate in this process, but it can change its
parameters by setting, for instance, strategies for searching, producing or changing information (ibid., 223).

132

Michael von Grundherr

should take into account when thinking about this case. It does not make a direct point
about the case itself.
This fits well with the observations in studies on bullying. Bullying behavior is integrated in complex social situations. It is highly implausible that children make behavioral choices in the bullying context on the basis of moral reasoning. It is also naïve to
expect that a single moral argument may change the situation, even if it is a very good
argument. An otherwise inactive teacher who spends one lesson on discussing the
problematic atmosphere in the class is likely to make bullying worse, no matter how
well he argues. On the other hand, children and youths who are able to activate impartial moral reasoning are much less likely to behave aggressively in the long run. Training moral judgment competence may thus be an indirect metacognitive, but effective
intervention.
Let me finally situate this position in the broader context of moral psychology: The
position that emerges from our research does not fundamentally contradict Haidt's social intuitionist model, but it certainly stresses different causal links. It also evaluates mechanisms in an importantly different way. Haidt agrees that social (and in some cases private) reasoning can influence intuitions and thus have an indirect effect on judgment and behavior, but calls this – somewhat disrespectfully – "reasoned persuasion" (Haidt 2001, 818). On a division-of-labor-model, however, there is nothing defective or problematic with this kind of indirect impact – indeed, reasoning can only influence
judgment via intuition. Moreover, on a long-term view on iterative learning processes,
there is nothing bad about post-hoc reasoning, as reasoning after one decision is reasoning before a series of other decisions.
The findings from our bullying research are, of course, even more obviously compatible with the more cognitivist traditions in moral psychology. Our studies show that
there is a link between a neo-Kohlbergian measure of moral reasoning and behavior – although it is clearly only one determinant among others. Our findings are also in line
with very recent developments in social domain theory of moral judgment, such as the
approach of Richardson et al. (2012) who claim that the representational system kicks
in in complicated cases, e.g. cases of inter-domain conflicts.

Implications for Philosophical Ethics

I have argued that real-life moral behavior is long-term and embedded in complex
social contexts. School bullying is a better prototype for it than helping in emergency
situations. While explicit considerations in the style of a philosophical argument are
not likely to figure in moral judgment and action in emergency cases, recent data from
bullying research shows that explicit reasoning competence is correlated with the mid- and long-term choice of aggressive behavior among high-school students. These observations are compatible, and even mutually supporting, if one assumes a division of labor
between intuitive and explicit processes. According to this picture, moral reasoning is a
second-level process that sets parameters and strategies for an unconscious decision
process without making behavioral choices itself.
Moral reasoning may therefore be more effective and important for moral practice
than Haidt and like-minded proponents of the intuitionist turn in moral psychology
claim. If this is true, not only folk moral reasoning but also philosophical ethics is clearly important. The division-of-labor hypothesis assigns a specific place to philosophical ethics, however. Ethics can provide structured and precise reflection that supports the second-level system, but it contributes to decision-making only indirectly.
This picture is compatible with the actual practice in moral philosophy. Although ethicists sometimes talk as if they could derive a single moral decision from theory alone, this is not what they usually do, nor what they're good at. Complete deductive arguments that lead to a clear judgment in a specific case are rare in ethics. Many arguments
in ethical discussions are rather higher-level reflections about how one should make
more specific judgments. They highlight coherence links (e.g. if humans have rights
because they are sentient beings, animals also have rights) or recommend integrating
additional information. I think one does not do grave injustice to the large theoretical
schools such as utilitarianism or deontology by holding that they mainly serve to systematize the parameters and decision strategies of the automatic system and extrapolate
them to new areas of application.
In the long history of philosophy, many have claimed that intuition cannot be replaced completely by principle-guided reasoning.4 The division-of-labor hypothesis explains why intuitive judgment plays an integral role in everyday moral decision-making and in philosophical discourse. It implies that without intuition we cannot
make any moral judgment that does justice to real-life situations. Intuition is highly
sensitive to subtle differences and able to integrate single decisions in a larger network
of context factors. It does not follow, however, that philosophers should use a reflective
equilibrium methodology, which gives equal authority to intuition and reasoning.5 The division-of-labor hypothesis is compatible with the claim that in some contexts, individual or social reasoning may have the sole normative authority to determine how we should make moral judgments. If we, as in Haidt's incest case, do not have any reason to trust our intuitive judgment, it is not easy to see why the fact that we have a
4 Aristotle, for instance, emphasizes that explicit theoretical knowledge of virtue is neither sufficient nor necessary for good moral judgment and action. He claims that practical experience is necessary to judge single cases correctly (Aristotle 2014, 1141b). The particularist account (e.g. Dancy 2006) provides a modern, elaborated version of principle-skeptical ethics.
5 Originally suggested by Rawls (1971), the method of reflective equilibrium consists in mutually adapting case-specific intuitions, general principles and theoretical considerations in an iterative process of deliberation.


strong intuition to the contrary has more than heuristic import. Division of cognitive
labor between reasoning and intuition implies that both systems have far-reaching
authority in their respective domains.

Perspectives for Future Research

I have sketched a model of moral cognition that is based on specific empirical findings
and coheres with a large body of research in related areas. More research is desirable. In
the following, I name a few examples, not an exhaustive list:

Longitudinal studies of bullying behavior and moral reasoning skills are necessary
to clarify the direction of causal influence between these two constructs.
I have hypothesized that the type of argument that participants receive in the Paxton et al. (2012) study plays a role. I think it is promising to investigate this further. Does it make a difference whether the argument is a meta-argument that challenges implicit premises or an argument that reaches a concrete conclusion in the case?
On the theoretical side, modeling moral judgment on a two-level parallel-constraint-satisfaction model, for instance with computer simulations, would be interesting in order to better understand the cognitive mechanisms at work and to lend support to the division-of-labor hypothesis (a minimal sketch of what such a simulation could look like is given below).
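To make the last suggestion more concrete, the following minimal sketch shows one way such a two-level constraint-satisfaction simulation could be set up: level one settles a small network of intuitive constraints into a coherent judgment, while level two only re-weights the links, as the division-of-labor hypothesis proposes for reasoning. The node labels, weights, parameter values and update rule are illustrative assumptions made for this sketch; it is not the model of Glöckner and Betsch (2008) itself.

```python
# Minimal two-level parallel-constraint-satisfaction (PCS) sketch.
# Node labels, weights and parameter values are illustrative assumptions only.
import numpy as np

def settle(weights, clamped, steps=200, decay=0.05, floor=-1.0, ceil=1.0):
    """Level 1 (intuition): iteratively update activations until the network settles."""
    a = np.zeros(weights.shape[0])
    for idx, value in clamped.items():
        a[idx] = value
    for _ in range(steps):
        net = weights @ a
        growth = np.where(net > 0, net * (ceil - a), net * (a - floor))
        a = np.clip(a * (1 - decay) + growth, floor, ceil)
        for idx, value in clamped.items():  # evidence nodes stay clamped
            a[idx] = value
    return a

# Nodes: 0 = "act is disgusting", 1 = "act causes harm", 2 = judgment "act is wrong"
W = np.zeros((3, 3))
W[0, 2] = W[2, 0] = 0.7   # disgust supports a "wrong" judgment
W[1, 2] = W[2, 1] = 0.8   # harm supports a "wrong" judgment

evidence = {0: 1.0, 1: -0.3}   # clearly disgusting, little evidence of harm
print("judgment before reflection:", round(settle(W, evidence)[2], 2))

# Level 2 (reasoning) does not output a judgment itself; it only resets a parameter,
# e.g. after an argument that discounts disgust as evidence of wrongness.
W[0, 2] = W[2, 0] = 0.1
print("judgment after reflection: ", round(settle(W, evidence)[2], 2))
```

In this toy configuration the judgment node settles at a clearly positive ("wrong") activation before the re-weighting and at a negative one afterwards, which is the qualitative pattern the division-of-labor picture would predict for the Paxton et al. (2012) incest case.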

Moral psychology has undoubtedly made an enormous leap forward in the last decade.
Many new results show that explicit moral reasoning is only one among many determinants of moral judgment. However, if the research suggested above corroborates the results of our field studies on bullying and supports the cognitive model I have suggested, explicit moral reasoning will turn out to be influential and authoritative in an important area: such reasoning, including philosophical ethics, may sustainably change the way in which people make their intuitive choices, at least if they have time for integrating feedback and for learning.

References

Aristotle (2014). Nicomachean Ethics (trans: C.D.C. Reeve). Indianapolis, Cambridge: Hackett.
Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist 54, 462–479. doi:10.1037/0003-066X.54.7.462.
Colby, A., & Kohlberg, L. (1987). The Measurement of Moral Judgement: Volume 2, Standard Issue Scoring Manual. Cambridge University Press.


Craigie, J. (2011). Thinking and feeling: Moral deliberation in a dual-process framework. Philosophical Psychology 24(1), 53–71. doi:10.1080/09515089.2010.533262.
Cushman, F., Young, L., & Hauser, M. (2006). The Role of Conscious Reasoning and Intuition in Moral Judgment: Testing Three Principles of Harm. Psychological Science 17(12), 1082–1089. doi:10.1111/j.1467-9280.2006.01834.x.
Cushman, F., Murray, D., Gordon-McKeon, S., Wharton, S., & Greene, J. D. (2011). Judgment before principle: engagement of the frontoparietal control network in condemning harms of omission. Social Cognitive and Affective Neuroscience 7(8), 888–895. doi:10.1093/scan/nsr072.
Dahlkamp, J., Friedmann, J., Ulrich, A., & Windmann, A. (2013). Ich oder keiner. Der Spiegel 2013(11), pp. 58–65.
Dancy, J. (2006). Ethics Without Principles. Oxford: Oxford University Press.
Doris, J. M. (2010). The Moral Psychology Handbook. Oxford: Oxford University Press.
Glöckner, A., & Betsch, T. (2008). Modeling option and strategy choices with connectionist networks: Towards an integrative model of automatic and deliberate decision making. Judgment and Decision Making 3(3), 215–228.
Greene, J. D. (2001). An fMRI Investigation of Emotional Engagement in Moral Judgment. Science 293, 2105–2108.
Greene, J. D. (2007). The secret joke of Kant's soul. In W. Sinnott-Armstrong (Ed.), The Neuroscience of Morality: Emotion, Brain Disorders, and Development (pp. 35–79). Cambridge, Mass.: MIT Press.
Greene, J. D. (2014). Beyond point-and-shoot morality: Why Cognitive (Neuro)Science Matters for Ethics. Ethics 124(4), 695–726.
Gummerum, M., Keller, M., Takezawa, M., & Mata, J. (2008). To Give or Not to Give: Children's and Adolescents' Sharing and Moral Negotiations in Economic Decision Situations. Child Development 79(3), 562–576. doi:10.1111/j.1467-8624.2008.01143.x.
Haidt, J. (2001). The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychological Review 108(4), 814–834. doi:10.1037//0033-295X.108.4.814.
Haidt, J. (2012). The righteous mind: why good people are divided by politics and religion. London: Allen Lane.
Haidt, J., Björklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reasons. Unpublished manuscript.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., & Ditto, P. H. (2013). Moral Foundations Theory: The Pragmatic Validity of Moral Pluralism. Advances in Experimental Social Psychology (47), 55–130. doi:10.1016/B978-0-12-407236-7.00002-4.
Hamburger Abendblatt (2011, August 25). U-Bahnschläger: Keiner half dem mutigen Retter. Hamburger Abendblatt. Retrieved from http://www.abendblatt.de/vermischtes/article2003639/U-Bahnschlaeger-Keiner-half-dem-mutigen-Retter.html.
Harman, G., Mason, K., Sinnott-Armstrong, W., & Doris, J. M. (2012). Moral Reasoning. In J. M. Doris et al. (Eds.), The Moral Psychology Handbook (pp. 206–245). Oxford: Oxford University Press.
Hauser, M., Cushman, F., Young, L., Kang-Xing, J., & Mikhail, J. (2007). A Dissociation Between Moral Judgments and Justifications. Mind & Language 22(1), 1–21.


Helion, C., & Pizarro, D. A. (in press). Beyond dual-processes: The interplay of reason and emotion in moral judgment. In N. Levy & J. Clausen (Eds.), Handbook of Neuroethics. Springer.
Jonas, K. J., & Brandstätter, V. (2004). Zivilcourage. Zeitschrift für Sozialpsychologie 35(4), 185–200. doi:10.1024/0044-3514.35.4.185.
Jones, K. (2003). Emotion, weakness of the will and the normative conception of agency. In A. Hatzimoysis (Ed.), Philosophy and the Emotions (pp. 181–200). Cambridge: Cambridge University Press.
Kahane, G. (2012). On the Wrong Track: Process and Content in Moral Psychology. Mind & Language 27(5), 519–545.
Kahneman, D., & Frederick, S. (2002). Representativeness Revisited: Attribute Substitution in Intuitive Judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and Biases: The Psychology of Intuitive Judgment (pp. 49–81). Cambridge: Cambridge University Press.
Kant, I. (2012). Groundwork of the Metaphysics of Morals (trans: M. Gregor & J. Timmermann). Cambridge: Cambridge University Press. (Original work published 1785).
Kennett, J., & Fine, C. (2008). Will the Real Moral Judgment Please Stand Up? Ethical Theory and Moral Practice 12, 77–96. doi:10.1007/s10677-008-9136-4.
Killen, M. (2007). Children's Social and Moral Reasoning About Exclusion. Current Directions in Psychological Science 16(1), 32–36. doi:10.1111/j.1467-8721.2007.00470.x.
Kohlberg, L. (1984). Essays on Moral Development. Vol. II: The Psychology of Moral Development. San Francisco: Harper & Row.
Kohlberg, L., & Candee, D. (1983). The Relationship of Moral Judgment to Moral Action. In L. Kohlberg (Ed.), The Psychology of Moral Development: The Nature and Validity of Moral Stages (pp. 498–581). San Francisco: Harper & Row Publishers [1984].
Korsgaard, C. M. (1996). The Sources of Normativity. Cambridge: Cambridge University Press.
Levy-Suhl, M. (1912). Die Prüfung der sittlichen Reife jugendlicher Angeklagter und die Reformvorschläge zum § 56 des deutschen Strafgesetzbuches. Zeitschrift für Psychotherapie, 232–254.
Lind, G. (2003). Moral ist lehrbar: Handbuch zur Theorie und Praxis moralischer und demokratischer Bildung. München: Oldenbourg.
Lind, G. (2008). The meaning and measurement of moral judgment competence. A dual-aspect model. In D. Fasko & W. Willis (Eds.), Contemporary philosophical and psychological perspectives on moral development and education (pp. 185–220). Creskill: Hampton Press.
Lind, G. (2009). Moral Judgment Test (MJT) English Version. Available from the author upon request (contact: http://www.uni-konstanz.de/ag-moral/mut/mjt-engl.htm).
May, J. (2014). Does Disgust Influence Moral Judgment? Australasian Journal of Philosophy 92(1), 125–141.
Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences 34(02), 57–74. doi:10.1017/S0140525X10000968.
Nucci, L. (1981). Conceptions of Personal Issues: A Domain Distinct from Moral or Societal Concepts. Child Development 52(1), 114–121. doi:10.2307/1129220.
Nunner-Winkler, G. (2009). Prozesse moralischen Lernens und Entlernens. Zeitschrift für Pädagogik 55(4), 528–548.


Olweus, D. (1993). Bullying at School: What we know and what we can do. Oxford, Cambridge, Mass.: Blackwell.
Paxton, J. M., Ungar, L., & Greene, J. D. (2012). Reflection and Reasoning in Moral Judgment. Cognitive Science 36(1), 163–177. doi:10.1111/j.1551-6709.2011.01210.x.
Pettit, P. (2007). Neuroscience and Agent Control. In D. Ross, D. Spurrett, H. Kincaid, & G. L. Stephens (Eds.), Distributed Cognition and the Will: Individual Volition and Social Context (pp. 77–91). Cambridge, Mass., London: A Bradford Book.
Piaget, J. (1997). The Moral Judgment of the Child (trans: M. Gabain). New York: Free Press. (Original work published 1932.)
Prinz, J. (2006). The Emotional Basis of Moral Judgement. Philosophical Explorations 9, 29–43.
Railton, P. (2009). Practical competence and fluent agency. In D. Sobel & S. Wall (Eds.), Reasons for action (pp. 81–115). Cambridge: Cambridge University Press.
Rawls, J. (1971). A Theory of Justice. Cambridge, Mass.: Harvard University Press.
Richardson, C. B., Mulvey, K. L., & Killen, M. (2012). Extending Social Domain Theory with a Process-Based Account of Moral Judgments. Human Development 55(1), 4–25. doi:10.1159/000335362.
Salmivalli, C., Lagerspetz, K., Björkqvist, K., Österman, K., & Kaukiainen, A. (1996). Bullying as a group process: Participant roles and their relations to social status within the group. Aggressive Behavior 22(1), 1–15.
Sauer, H. (2012). Psychopaths and Filthy Desks. Ethical Theory and Moral Practice 15(1), 95–115.
Schäfer, M., & Korn, S. (2004). Bullying als Gruppenphänomen. Zeitschrift für Entwicklungspsychologie und pädagogische Psychologie 36(1), 19–29.
Schäfer, M., Korn, S., Smith, P. K., Hunter, S. C., Mora-Merchán, J. A., Singer, M. M., & Meulen, K. (2004). Lonely in the crowd: Recollections of bullying. British Journal of Developmental Psychology 22(3), 379–394. doi:10.1348/0261510041552756.
Schäfer, M., von Grundherr, M., & Sellmaier, S. (in press). Mobbing. In J. Sautermeister (Ed.), Moralisches Können: Transdisziplinäre Perspektiven und ethische Herausforderungen der Moralpsychologie. Stuttgart: Kohlhammer.
Smetana, J. G. (2006). Social-cognitive domain theory: Consistencies and variations in children's moral and social judgments. In M. Killen & J. G. Smetana (Eds.), Handbook of moral development (pp. 119–154). Mahwah: Erlbaum Associates.
Schwitzgebel, E., & Cushman, F. (2012). Expertise in Moral Reasoning? Order Effects on Moral Judgment in Professional Philosophers and Non-Philosophers. Mind & Language 27(2), 135–153.
Smith, P. K., Singer, M., Hoel, H., & Cooper, C. L. (2003). Victimization in the school and the workplace: Are there any links? British Journal of Psychology 94(2), 175–188.
Stams, G. J., Brugman, D., Deković, M., Rosmalen, L., Laan, P., & Gibbs, J. C. (2006). The Moral Judgment of Juvenile Delinquents: A Meta-Analysis. Journal of Abnormal Child Psychology 34(5), 692–708. doi:10.1007/s10802-006-9056-5.
Turiel, E. (1983). The Development of Social Knowledge: Morality and Convention. Cambridge: Cambridge University Press.
Valdesolo, P., & DeSteno, D. (2006). Manipulations of emotional context shape moral judgment. Psychological Science 17(6), 476–477.


von Grundherr, M., Geisler, A., Stoiber, M. & Schfer, M. (2015). School bullying and moral
reasoning competence. Manuscript submitted for publication.
Wheatley, T., & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science 16(10), 780–784.
Whitney, I., & Smith, P. K. (1993). A survey of the nature and extent of bullying in junior/middle and secondary schools. Educational Research 35(1), 3–25. doi:10.1080/0013188930350101.

Psychologys Contribution to Ethics: Two Case Studies


Liz Gulliford

Abstract
This paper contends that psychology cannot replace ethics. However, it will be argued,
with reference to two case studies, that the empirical investigation of human morality
can offer an important contribution to ethics. First, an empirical approach can illuminate matters of definition. Normative ethicists often make distinctions between concepts that do not reflect lay usage, and may seek to refine or reclaim the true meaning
of words to prevent the erosion of conceptual distinctions. However, it might be argued
that they should hold no privileged place when it comes to defining the terms of language as it is used. It is essential that philosophers take seriously the question of what
laypeople understand by ethical concepts, partly because the cultural and social differences such analyses reveal are interesting in themselves, but also because there are implications for the relationship between laypeople and the academy. The first case study
thus shows that psychology can make a contribution towards defining ethical concepts.
Secondly, it will be shown that psychology can elucidate the processes by which ethically desirable ends might be facilitated. Psychological approaches to forgiveness may, for
example, help to expedite a goal which may seem remote from the human dynamics of
forgiveness. Psychological interventions focus not on when forgiveness is appropriate
or fitting (as a normative ethical account might) but on how this goal can be promoted.
These methods do not replace ethics, but they do complement it in elucidating how
certain ethically desirable ends might be progressed.

Liz Gulliford
Jubilee Centre for Character and Virtues
School of Education
University of Birmingham, UK
e-mail: l.z.gulliford@bham.ac.uk


Case Study 1: Defining the Phenomenon of Interest (Gratitude)

In the first section of the paper it will be shown that psychological methods can be used
to illuminate ethical concepts. It will be suggested that a conceptual understanding of
gratitude can be enriched and even challenged by empirical research that enables lay
understandings of gratitude to be canvassed. Should expert definitions 'from above' or more democratic definitions 'from below' carry the day, or is there not in the end a place for both types of enquiry?
It will be argued that there are good reasons for supplementing expert analyses of
gratitude with attempts to define the concept from a lay perspective. Findings from a
recent prototype analysis of gratitude (Morgan et al. 2014) will be used to illustrate
that laypeople in the UK associated gratitude with more negative features than one
might expect, challenging received wisdom about gratitude's putatively unambiguously
positive nature. This same study found no support for the view that gratitude is characterized by awe or wonder (as some definitions suggest) and thereby raises the question of whether gratitude is sometimes constructed in ways which, in some respects,
jar with the experience of ordinary people.
In the final section of the first case study, a vignette questionnaire to access lay conceptual understanding of gratitude, developed at the Jubilee Centre for Character and
Virtues, will be described. This questionnaire enables conceptual controversies rehearsed in the gratitude literature to be manipulated to see which factors influence the
amount of gratitude laypeople report they would feel in response to the scenarios described, and whether this follows the contours of expert philosophical analyses of gratitude. Such an approach may also shed light on the question of whether gratitude is a
unitary concept, or whether there are a number of sub-types (or species) of gratitude
in lay usage which share family resemblances.

1.1 The Wise and the Many

Who is deemed qualified to hold forth about ethical matters? Should this be left to the
Wise, people who have reflected at length on the meaning of ethical concepts and the
normative question of when, say, gratitude or courage is excellent (virtuous), or should
the views of the Many also be surveyed in order to produce a broader (if not, perhaps, a
deeper) understanding of virtue concepts and the practice of virtue in the world?
In a recent paper, Robert C. Roberts distinguishes between two sorts of question.
First, the question of what a virtue (in this case, gratitude) is, and second, when a virtue
(gratitude) is excellent (Roberts, in press, 17). Roberts is right to make this distinction.
However, the two questions are related and the second question supervenes on the first.
In the first step of defining one's terms, it must be acknowledged that ethicists frequently make distinctions between concepts that do not reflect lay usage. Should this be
a cause for concern? There are plenty of philosophers who are unruffled by this state of
affairs: laypeople are deemed to be using the terms incorrectly, and part of the job of the
philosopher is to refine or reclaim the true meaning of words to prevent the erosion of
important conceptual distinctions. In this connection, a philosopher might cite the
widespread lay confusion between the concepts of jealousy and envy, between sympathy and empathy, and the confounding of the intrapersonal state of forgiveness with
the interpersonal encounter involved in reconciliation.
There is much to be said for trying to achieve clarity in these matters of definition.
However, it could be argued that normative ethicists should hold no special place when
it comes to defining the terms of language as it is actually used. It might be claimed that
learned and nuanced understandings (interesting though they may be) represent the
views of a minority and should, on that basis, be dropped in favor of a more mainstream view. There is clearly a place for both types of enquiry. Historically, the focus
has been on the reflections of the Wise, though with the advent of social science and a
battery of methods and sampling techniques, we are perhaps now more able to take on
board lay understandings than we were in the past. Now that we are in a position to
contrast the insights of experts with those of laypeople, should we then do so?
To my mind, it now seems essential that philosophers take seriously the question of
what ordinary people understand by virtue concepts, such as gratitude. One immediately obvious reason is that there are clear differences in understanding virtues across
social and cultural boundaries. These cultural and social differences are interesting in
themselves. What is more, they demonstrate that there is no universally accepted understanding of the concepts involved. Even within a culture, virtues mold to different
contours depending on the dynamics of power in the relationships in which these virtues are exhibited. For instance, gratitude may be experienced differently by those who
are primarily benefactors, in comparison with those who usually assume the role of
beneficiary. Aristotle, it will be recalled, did not regard gratitude as a virtue exhibited
by the great-minded because being the receiver of a benefit placed that individual in an
inferior position to the giver (Nicomachean Ethics, 1124b 10-15). Whilst there is widespread disagreement with Aristotle, for many individuals the concept of gratitude has
irretrievably negative connotations.
On this point, tension exists between the values of gratitude and of justice. It could
be argued that some benefits ought to be ours by right, not by good fortune or blessing.
A culture of gratitude could engender an unhealthy dependency on benefactors, minimizing personal autonomy and individual agency. There are undoubtedly power issues
at stake in interactions involving gratitude which may impact on the way in which the
concept is understood both within and between cultures. Gratitude could mean different things to parents and children and to rich and poor. We need to go beyond a superficial assumption that we know what gratitude (or any virtue) is or that it takes the
same shape in every society. How might this be done?
In this first section, a means of accessing lay understandings of concepts (in this case
the virtue concept of gratitude) will be presented. Attention will be turned to the words
laypeople associate with gratitude. It should be acknowledged that this method does
not replace a conceptual analysis. However, it affords interesting linguistic and potentially cross-cultural insights, and sheds light on the question just raised of whether laypeople regard gratitude as an unambiguously positive concept.

1.2 A Prototype Analysis of Gratitude

One method of gathering meanings and descriptions of concepts is to ask laypeople what features they associate with a given concept, and which of these features they
think are most important to that concept. In the first stage of prototype analysis, participants name features they think are typical of instances of a concept (actions, feelings,
consequences, determinants etc.). In this way, a nucleus of central concept features can
be established, around which relatively peripheral or marginal concept features can be
identified. Prototype analysis is particularly useful for comparing cross-cultural differences and has been used by psychologists to examine concepts such as emotion (Fehr
and Russell 1984), love (Fehr and Russell 1991), nostalgia (Hepper et al. 2012), modesty
(Gregg et al. 2008) and forgiveness (Kearns and Fincham 2004). A prototype analysis
can make a contribution towards defining the concept of gratitude from a lay perspective.
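To make the mechanics of this method concrete, the sketch below shows, in Python, the kind of aggregation step a prototype analysis relies on: counting how often a feature is freely listed and averaging its valence ratings. The responses and numbers in the sketch are invented for illustration and do not reproduce the data of the studies discussed next.

```python
# Hypothetical sketch of the aggregation step in a prototype analysis.
# The responses below are invented examples, not data from the studies cited.
from collections import defaultdict

# Each participant freely lists features of gratitude and rates each from
# 1 (very negative) to 5 (very positive).
responses = [
    {"happy": 5, "thankful": 5, "indebtedness": 2},
    {"happy": 5, "relieved": 4, "guilt": 1},
    {"thankful": 4, "happy": 4},
]

counts = defaultdict(int)
valences = defaultdict(list)
for participant in responses:
    for feature, rating in participant.items():
        counts[feature] += 1
        valences[feature].append(rating)

n = len(responses)
for feature in sorted(counts, key=counts.get, reverse=True):
    freq = 100 * counts[feature] / n
    mean_valence = sum(valences[feature]) / len(valences[feature])
    print(f"{feature:<14} named by {freq:5.1f}%  mean valence {mean_valence:.2f}")
```

The two statistics this produces, naming frequency and mean valence, are the figures reported for the UK and US samples in what follows.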
In two studies of gratitude in the US and UK (Lambert et al. 2009; Morgan et al.
2014), participants were asked to identify features of gratitude, then rate how positive
or negative these features were (their valence rating). This begins to address the question of whether gratitude is evaluated by laypeople as an entirely positive concept. Indeed, the most common feature associated with gratitude in the UK (Morgan et al.
2014) was the emotional category of happy, which was named by 65.28% of a sample
of 108 student participants at the University of Birmingham. The second most frequently named feature was thankful (listed by 50% of the sample). All of the five most
frequently named features were accorded a high mean average positivity rating (scored
from 1-5, where 1 is very negative and 5 is very positive) across the 108 participants,
ranging from 4.63 for grateful (fifth most frequently named) to 4.79 for happy.
Counter to this, however, almost a third of the sample (29.17%) named either obligation or indebtedness a feature of gratitude and it was accorded an average valence
rating of 2.26. Furthermore, 16.67% of the people in the Birmingham study associated
gratitude with guilt, giving it an average valence score of just 1.71 (markedly below the
midpoint of 3). Based on these figures, it would seem that gratitude is not considered


unambiguously positive, at least within the UK student sample studied. There are clearly features of gratitude, identified by this lay sample, which are rated negatively.
In contrast with the findings of the UK study, an earlier examination (also conducted with undergraduates) in Florida, USA (Lambert et al. 2009) showed an overall less
negative understanding of gratitude (as indicated by feature ratings) than that which
was found in the UK. Furthermore, the features of guilt and embarrassment/awkwardness were not named in the US sample. The composite feature of obligation/indebtedness was mentioned by the US sample, but far less frequently, and was
accorded a higher mean valence rating of 3. Therefore, those who did list obligation or
indebtedness as features of gratitude in the US sample rated them neutrally rather
than negatively.
While there was a significant overlap of common features in the two studies, there
were differences between the UK and USA in terms of the frequency with which negative features were named. This could be because UK respondents have a more negative
perception of gratitude than Americans. On the other hand, it could be that UK participants were more ready to acknowledge negative aspects of gratitude than their US
counterparts. The American culture of positivity and optimism, documented in Barbara Ehrenreich's Smile or Die (2009), may place social pressure on Americans to limit
reporting negative appraisals of putatively positive virtues, such as gratitude.
It should be noted, however, that differences between observed features of gratitude in the USA and UK were less marked when, in a separate (and arguably less spontaneous) task, a fresh group of UK participants rated the centrality to the concept of gratitude of features listed in the first study. Feature centrality was rated on a scale of 1-8 (where 8 equated to 'extremely central'). At this stage, the composite feature 'obligation/indebtedness' slipped from its previous ranking of 6 (of 63 main features) to a rank of 48. Similarly, 'guilt', which emerged at 12 in terms of frequency ratings, went
down 46 places to 58. It seems likely that participants monitored features of gratitude at
this stage, making more considered judgments about whether a given characteristic
should be a feature of gratitude. Given time to reflect, perhaps laypeople are less ready
to acknowledge these negative features as characteristics of gratitude. Nonetheless, the
fact that there are noteworthy differences between the more immediate, freely generated listings of features of gratitude in the USA and UK, speaks to the earlier points about
whether gratitude is entirely positive and whether there may be cross-cultural differences in understanding the concept.
It has been proposed here that there is much to be said for examining what laypeople mean by gratitude in order to avoid straying too far from ordinary language, or
offering an overly narrow (or even esoteric) view of the concept under consideration.
Should experts have an exclusive right to define what is meant by gratitude, constructing measures based on their own definitions, or should definitions map lay usage more
closely?


Including features in a description or measure of gratitude is only appropriate if laypeople actually associate gratitude with those features. Consider the claim made by
Emmons and Shelton (2002, 460) that gratitude is characterized by 'a felt sense of wonder, thankfulness and appreciation for life'. In both studies in which lay participants listed features of gratitude (Lambert et al. 2009 in the USA and Morgan et al. 2014 in the UK), the characteristics of awe or wonder were not named. Measures which
include items about purported features of gratitude may effectively construct gratitude
in a way that is at odds with the experience of most people.

1.3 Species of Gratitude

A further question about the range of meaning of gratitude concerns whether it is construed as a specifically interpersonal exchange involving a benefactor, a benefit and a
beneficiary (what we have called 'triadic' gratitude, see Gulliford et al. 2013), or whether people can be grateful without attributing benefits to a benefactor. An example of the latter would be gratitude for one's health, good weather or the beauty of nature. We have labelled this 'dyadic' gratitude (see Gulliford et al. 2013). The distinction
shows that it cannot be taken for granted that people mean the same thing by gratitude,
which seems to take two distinct forms (a specifically interpersonal benefit-triggered
variety and a more general, intrapersonal sense of appreciation).1
In the words of the adage, is it truly 'the thought that counts'? Philosopher McConnell (1993, 44) has suggested that if someone tries to give another a significant benefit but fails, gratitude may still be owed, a sentiment echoed by psychologists Bartlett and DeSteno (2006). However, do laypeople believe they would experience the
benefit but fails, gratitude may still be owed, a sentiment echoed by psychologists Bartlett and DeSteno (2006). However, do laypeople believe they would experience the
same degree of gratitude for a benefit that fails to materialize, or for a benefit that was of
no real value to them (such as an unwanted gift)? Our prototype study of gratitude
indicated that, despite gratitude being deemed 'the quintessential positive psychology trait' (Wood et al. 2009, 43), it is also associated with negative affect (guilt, embarrassment or feelings of indebtedness, for instance). Thus it is possible to develop and challenge expert understandings and circumscriptions of gratitude by examining the views
of laypeople.
To this end, my colleagues and I have also designed a vignette questionnaire which
manipulates the conceptual controversies outlined above, to illuminate what factors
laypeople think influence the amount of gratitude they believe they would feel (and
1 In this recent interdisciplinary paper (Gulliford et al. 2013), we also review further conceptual controversies that attend the concept of gratitude. Alongside the distinction between dyadic and triadic gratitude, we have also highlighted debates about whether gratitude must involve supererogation, as Roberts (2004) suggests and with which McConnell (1993) disagrees, and whether gratitude is only warranted where there is a clear and benign intention to provide a benefit (the 'intentionality condition'). In this connection, there has been debate about the degree to which ulterior and even malicious motives undermine gratitude.


should feel) in response to an unrealized benefit, or a benefit that was bestowed for
ulterior motives. We are currently examining the results from this questionnaire, having canvassed the views of over 500 laypeople. In using questionnaires that probe the
intuitions of ordinary people, our work embodies the principles of the emerging field of
experimental philosophy, complementing traditional armchair conceptual analysis
with empirical data.
We believe that it is imperative to examine lay understandings of gratitude in order to assess the degree to which they map onto current psychological and philosophical conceptions. As previously noted, there is a risk that the definitions of experts may not represent lay views in all their complexity, and could end up constructing the concepts in a
way that is at odds with the experience of normal people. We argue that the normative
study of gratitude carried out by a few must be complemented by empirical research
that involves the many. We are mindful of the fact that this position may not be shared
by other researchers in this field. However, we have been concerned to represent the
views of laypeople in our research and we express reservations about leaving conceptual analysis solely to an elite of experts (be they philosophers or psychologists).
This endeavor will not resolve the ethical question of when gratitude is excellent or
virtuous, though it does highlight complexities that may impact on this, such as when a
benefit has been given with ulterior motives. An empirical psychological approach can,
therefore, do much to clarify the nature of the concept of gratitude, as this first case
study has shown.

Case Study II: Practical Benefits of Psychological Research in Ethical Domains: A Case Study Involving Forgiveness

The second case study presented in this chapter describes a different way in which psychology, far from replacing ethics, can make a contribution to ethics. It will be shown
that psychology can elucidate the processes by which the ethical ideal of forgiveness
might be realized. In this section a number of psychological interventions will be presented to illustrate how the ethical ideal of forgiveness, a goal that may sometimes seem
unattainable, may be expedited by practical psychological insights. In contrast to Case
Study 1, the focus in this section is not on definition (and how an empirical approach
can shed light on lay understanding) but on the 'how' of forgiveness: what psychology
tells us about the process of forgiveness. Over the course of this section two main therapeutic approaches to forgiveness will be appraised. The first kind of intervention is
predicated on cognitive foundations and is based on the assumption that attributions of
blame towards offenders can be consciously revisited and reworked to make the task of


forgiveness easier. The second kind of intervention focuses on generating empathy for
offenders as a means of facilitating forgiveness.
Psychology, in this case study, does not replace the ethical consideration of forgiveness, including discussion about when and under what circumstances forgiveness
might be deemed an appropriate response. However, it is suggested here that psychology offers insights as to how the process of forgiveness occurs, what psychological impediments to forgiveness might exist, and the means by which forgiveness could be
promoted through psychological interventions.
Notwithstanding the concerns about defining ethical concepts raised in the first case
study, it seems important to offer some clarification of the meaning of forgiveness in
order to distinguish it from concepts with which it has been confused. The following
summary goes over both philosophical and psychological literatures on the meaning of
forgiveness. However, it is not meant to exhaust discussion about the conceptual understanding of forgiveness which, it is proposed here, could adopt a similar approach to
that which we have applied to gratitude, and which was discussed in the previous section. It was noted earlier that a prototype study of forgiveness has been conducted by
Kearns and Fincham (2004), which could be seen as a first step towards examining lay
understandings of forgiveness. A vignette questionnaire, based on conceptual controversies in the forgiveness literature, would further clarify the contours of laypeople's
understanding of the concept of forgiveness, a project which must, at present, remain
on the horizon.
A review of the literature of experts concerning forgiveness suggests that the concept should be differentiated from the following: pardoning (Downie 1965; Horsburgh
1974; McGary 1989), excusing (McGary 1989), condoning (Downie 1965; Lewis 1980)
and forgetting (McGary 1989). Beginning with the last, the adage 'forgive and forget' is perhaps responsible for the greatest degree of confusion. If a person forgets a wrong against her, she has no need to forgive it; only remembered offences may call for forgiveness. As I have put it elsewhere: '[…] while forgetting may be a symptom of forgiveness, we do not need to forget in order to forgive, especially where hurt runs deep' (Gulliford 2013, 293).
Secondly, a pardon is given by a person who stands in a specific role towards an offender who has violated a specific law (or laws) over which the pardoner has jurisdiction. A pardon, therefore, may be given by a judge or monarch (see Downie 1965;
Horsburgh 1974). Forgiveness, on the other hand, describes the overcoming of offence
in an interpersonal relationship, rather than within a legal or social context, and it is the
offended party alone who possesses the right to forgive. In this connection, there has
been debate as to whether the work of South Africa's Truth and Reconciliation Commission in the mid-1990s involved forgiveness or whether the meaning it placed on forgiveness was closer to a pardon (see Cherry 2004).


Forgiveness must be differentiated from excusing or condoning, for when a person condones or excuses an attitude or behavior, she has dealt with offence without the
need for forgiving it. Essentially, condoning and excusing represent an attempt to explain why something occurred and how, within a certain situation, it was understandable.
Distinguishing forgiveness from the above concepts may be a helpful step towards
circumscribing it. However, forgiveness must also be defined positively; what forgiveness is as opposed to what it is not. There is much discussion on this point. The
eighteenth-century Anglican bishop, Joseph Butler, conceived of forgiveness as the
forswearing of resentment, where resentment is a negative feeling (anger, hatred)
toward another who has done one moral injury (see Butler 1970, 80-89). Philosopher
Norvin Richards (1988) objects to this conception, however, pointing out that a person
could stop resenting an offender whilst maintaining a different sort of hostile attitude
incommensurate with forgiveness. He suggests that forgiveness should, therefore, be
defined as the abandonment of negative feelings in general. However, as the following
shows, one should perhaps go further than a view that sees forgiveness purely as a matter of damping down negative feelings.
A number of definitions of forgiveness encompass the dimension of 'unforgiveness',
which has been described as a combination of delayed emotions, including resentment,
bitterness, hatred, hostility, anger and fear that develops after a transgression and can
motivate desires for retaliation against or avoidance of the offender (see Worthington,
Sandage & Berry; Worthington and Wade 1999). Wade and Worthington (2003)
demonstrated empirically that forgiveness is not simply the reduction or elimination of
negative feelings. They showed that people can lower their level of measured unforgiveness, without necessarily becoming any more forgiving towards offenders.
Forgiveness seems also to include an emotional rehabilitation of feelings towards a
transgressor (see McGary 1989). Therefore Enright et al. (1998, 46f.) suggest forgiveness is:
[…] a willingness to abandon one's right to resentment, negative judgement, and indifferent behaviour toward one who unjustly injured us, while fostering the undeserved qualities
of compassion, generosity, and even love toward him or her.

This definition encompasses cognitive, conative and behavioral aspects of forgiveness, whilst incorporating both negative and positive affect, and is the basis of a number of
interventions to promote forgiveness to which attention will shortly be turned.

2.1 Forgiveness and Psychology

This brief summary about conceptual issues highlights that forgiveness presupposes a
number of psychological processes. It assumes some sort of cognitive reappraisal


(judgment) of an offender that may, or may not, precede a renewal of emotions towards the perpetrator. Given that forgiveness exhibits these clearly psychological characteristics, it seems reasonable to assume that psychology could shed light on how these
processes might naturally occur, identifying forgiveness as a process that incorporates
a number of elements. Furthermore, and more relevant to the purposes of this paper,
the insights of psychology can elucidate how forgiveness might be consciously promoted by psychological interventions.
Philosophers might debate whether forgiveness in a given situation is appropriate or
fitting, whether to forgive in a particular situation would be a manifestation of virtue
(Downie 1965; Neblett 1974; Twambley 1976; North 1987; Richards 1988). Normative
ethicists might debate whether forgiveness is at odds with justice (Lewis 1980) and
which of these should be privileged in considering how to respond in a given situation.
There is clearly a place for these moral objections to forgiveness.
However, let us suppose that the question of whether one ought to forgive (either
because one sees it as one's duty, one's religious calling or because one believes forgiveness to be a virtue to be emulated at all times) has been resolved in the affirmative.
How, then, does an individual go about bringing the desired goal of forgiveness to fruition? It is clearly important to reflect on whether forgiveness is an appropriate response
to offending behavior. However, having considered whether it is, psychological understanding of the process of forgiveness and interventions specifically created to foster
forgiveness may help enormously in bringing the desired end about. Psychology cannot
replace ethics, but it can help to realize ethically desirable ends.
Psychological examination about how forgiveness comes about has demonstrated
that it is a hard-won process. Forgiveness, for all but minor infractions, takes time and
is far from instantaneous. Having arrived at a point where forgiveness is sought, psychology can be enormously helpful in illuminating what I have called elsewhere the 'human side of forgiveness' (Gulliford 2004a, 83). Psychology sheds light on how forgiveness occurs in default of any explicit therapeutic intervention to promote it, highlighting the normal reactions people experience in the wake of an offence, such as anger or resentment, which are impediments to forgiveness. These inhibiting psychological factors bring to awareness that any ethical decision to forgive is subject to emotional
limitations. Furthermore, psychological studies have shown that victim and perpetrator
accounts of interpersonal conflict differ markedly (Baumeister et al. 1990)2, which has
significant psychological implications for the processes of forgiveness and reconciliation.

2 Victims are likely to see perpetrators' actions as arbitrary, gratuitous and incomprehensible, whereas perpetrators are disposed towards seeing their behaviour as meaningful and comprehensible. Victims tend to see events as open and ongoing. Perpetrators are more likely to have closure on the same incident.


Psychology can also help identify particular developmental issues that may underlie
difficulties people might face in forgiving others (Coate 2004). Furthermore, as I have
argued elsewhere (Gulliford 2013), forgiveness presupposes a certain degree of cognitive (as well as emotional) development in order for an individual to understand the
complexities of attributing culpability to others. The experience of having received
forgiveness oneself and the role of social modeling are also central. Psychology, therefore, can also illuminate socio-emotional factors that may play a part in whether forgiveness is readily considered as a possible course of action in the wake of an interpersonal offence.
However, in addition to psychology illuminating the process of forgiveness as it
might naturally occur, a number of interventions to facilitate forgiveness as a therapeutic goal have emerged in recent decades. These methods have been applauded in
some camps but deprecated in others. For instance, there have been criticisms that
these therapeutic interventions distort the Christian understanding of forgiveness and
emphasize forgiveness for the emotional release of the person seeking to forgive, rather
than for the sake of repairing a fractured relationship and effecting reconciliation between the estranged parties (Augsberger 2000; Jones 1995).
Others have observed that psychological models for approaching forgiveness may
promote a consumer's mindset, the view that forgiveness will do us good if we buy into it (Watts 2004, 65). Critics are worried that the concept of forgiveness risks being
reduced to a purely instrumental, rather than an intrinsic good (Holloway 2002). I address these criticisms at length elsewhere and will not rehearse them in detail here
(Gulliford 2004a, 88f.). Suffice it to be acknowledged that these methods have been
shown to promote forgiveness and should not be dismissed out of hand. Furthermore,
there is surely nothing wrong with approaching forgiveness in a pragmatic or instrumentalist way; it is perfectly legitimate to want to forgive someone because their behavior has had, and perhaps continues to have, a negative effect on us that we seek to drive
out. This initially self-pertaining (but not selfish) motivation for forgiveness (see
McGary 1989) might, in any case, change over time, with forgivers gradually becoming more able to consider forgiveness for the perpetrator's benefit.
Psychological approaches to forgiveness fall into two main categories (Gulliford
1999; 2004a; 2004b; 2013). First, there are interventions which advocate cognitive reframing techniques to gain a different perspective on the person one is seeking to forgive. There are also approaches which promote empathic identification with the offender in order to facilitate forgiveness. Both offer ways in which what might seem like
an abstract ideal of forgiveness may be made more concrete and attainable.
I suggested (Gulliford, 1999) a third therapeutic approach towards forgiveness which
complements the first two. It was suggested that while cognitive reframing techniques
locate healing by revisiting the past, and empathic approaches to forgiveness aim to
transform feelings in the present, the anticipatory role-taking approach to forgiveness


allows the anticipation of a fully realized future forgiveness to sustain hope in situations
where forgiveness is especially difficult.

2.2 Psychological Interventions to Facilitate Forgiveness Focusing on Cognitive Reattribution

A number of psychological interventions that incorporate cognitive reframing have been devised to expedite forgiveness. Cognitive reattribution (reappraisal or reframing)
has proved effective in treating a range of clinical disorders, including anxiety and depression. In essence, the process involves systematically examining attributions people
make that sustain beliefs they have about the world, themselves and other people. For
instance, one aspect of a cognitive interpretation of depression posits that people who
are depressed attribute the causes of personal success and personal failure in characteristically negative ways which are maintained by dysfunctional patterns of thinking.
Cognitive therapies for depression intervene at the level of revisiting and reworking
the attributions that sustain these thought patterns, inviting the individual to challenge
implicit assumptions they hold about themselves. The process enables the individual to
stand back from their own thinking and to examine their beliefs. For instance, a depressed person may believe everything is their fault. In forms of cognitive and cognitive-behavioral therapy (CBT), the individual would be asked to review this belief and
to consider attributing the reason for failure to external circumstances outside their
control. Attributing failure to the self (making internal attributions) is a characteristic
attribution bias of depressed people. However, the bias can be modified and the distorted and dysfunctional thinking that has a hand in sustaining depression can, to some
extent, be remedied.
Fritz Heider (1958) initiated attribution theory which later informed this kind of
clinical intervention. He suggested that people attribute the behavior of others (and
themselves) to both internal dispositions and external situations. The degree to which
these internal or external attributions guide our interpretations of the behavior of other
people has clear implications for interpersonal relationships. If a colleague snaps at us,
and we attribute the cause to their character (they are a rude and inconsiderate person),
it is perhaps harder for us to get over the slight than it would be if we put their shortness down to external or transient factors (a period of stress they are undergoing).
The relevance of this thinking to psychological interventions that promote forgiveness is that if we can rework our attributions of blame towards people who have
offended us, making our attributions less determined by the offender's internal disposition (character traits) and more by external circumstances, we would find forgiveness easier. In the last three decades or so, a number of psychological interventions
have aimed to promote forgiveness by this underlying means. These have variously


referred to the process as reappraising, reframing, reattribution and even seeing the offence with 'magic eyes' (Smedes 1984). However, they all share the process of revisiting the causes we have imputed to others' hurtful behavior to take account of contextual or situational factors that may have played a part in the episode(s) for which forgiveness is sought.
The process of reframing the offender may involve considerations about the proximal and distal causes of their behavior (recent stressors or a difficult family background, for instance), which will help to loosen the grip of attributions which locate the cause of offending behavior in the offender's relatively settled internal state of character.
Forms of reframing are pivotal in the forgiveness interventions proposed by Robert Enright, author of the self-help book Forgiveness is a Choice (2001) and co-author
of Helping Clients Forgive: An Empirical Guide for Resolving Anger and Restoring Hope
(2000), and in the approach of the late Lewis Smedes (Forgive and Forget: Healing the
Hurts We Don't Deserve, 1984, and The Art of Forgiving, 1997). Outside the context of
individual therapy, reframing approaches have been incorporated into the interventions of marital and family therapists (see Coleman 1998).
Bob Enright's (2001) intervention is predicated on a model of forgiveness that consists of four key stages that together make up a twenty-unit process model. In the first 'Uncovering' stage, the person seeking to forgive examines psychological defenses, confronts their anger and gains insight into the effects of the injury. In the following 'Decision' phase, the client examines their understanding of forgiveness (in order to distinguish forgiveness from forgetting, excusing and other concepts). At the end of this stage, the forgiver makes a decision to commit to forgiveness that is potentiated in the third phase (the 'Work' phase) by engaging in a process of reframing (the client reflects on the offender's behavior and aims to revisit the causes to which they have attributed his or her conduct). This process is seen as the means by which empathy for the offender is generated. In the final 'Deepening' phase, clients are encouraged to find meaning in their suffering and to continue deepening positive affect towards the offender.
This process model has been used in controlled interventions with a range of clients
(see Hebl and Enright 1993; Al-Mabuk et al. 1995; Freedman and Enright 1996; Coyle
and Enright 1997). Participants in experimental groups were taken through all, or part,
of Enrights process model (Units 1-11). Control participants were given support that
did not specifically incorporate forgiveness. The intervention has been shown to variously lower anxiety, anger and depression, and increase forgiveness3 and hopefulness. It
should be noted, however, that there are questions concerning the degree to which the
process model accurately reflects participants' retrospective reports of their own forgiveness
3 Measured with the 30-item Psychological Profile of Forgiveness Scale and the 16-item Willingness to Forgive Scale; see Hebl and Enright (1993).


experiences (Knutson et al. 2008), with the suggestion that the model might
benefit from revisions based on these empirical findings.
To return to the earlier point, psychological interventions centering on cognitive reframing have therefore been empirically shown to help people to forgive. The ideal of
forgiveness is fleshed out and broken down into concrete and practicable stages (confronting the harm done, labelling the emotional effects of the wrongdoing, considering
what might have led the offender to behave the way they did in order to reframe the
perpetrator). The model illustrates that the process of forgiveness involves a number of
stages or steps through which the person seeking to forgive progresses, thereby reinforcing the view that forgiveness is not instantaneous. The psychological approach does
not in any sense replace the ethical question of whether forgiveness is a fitting response
to a situation. It should be noted that in Unit 9 of Enright's 20-unit model, the client is
asked to reflect on alternatives to forgiveness (such as revenge) and then to explore
forgiveness and how it differs from excusing, condoning or forgetting the offence (Unit
10). The model therefore invites people to contemplate responses other than forgiveness and, as such, affords some space for ethical considerations about the appropriateness of forgiveness in a given circumstance, even if this reflection is not its major
focus. It must be acknowledged that these psychological interventions centering on
cognitive reappraisal, which have been shown to expedite forgiving offenders, do not
(as part of the process) coerce people into forgiveness.

2.3   Psychological Interventions to Facilitate Forgiveness Focusing on Generating Empathy for the Offender

It was noted previously that alongside interventions which advocate forgiveness through cognitive reframing, there are alternative psychological approaches which
promote empathic identification with the offender to facilitate forgiveness. The former tend to assume that empathy for the offender is generated as a result of having engaged in a process of reframing. The latter intervene directly at the affective level by encouraging people who are seeking to forgive to place themselves in their offender's shoes, imagining how badly the offender must feel for the harm and distress they have caused by their offending behavior (Worthington 1998). Clients are also encouraged to reflect on their own guilt for transgressions in order to generate sympathy and fellow-feeling for the offender. Everett Worthington Jr. (1998) is the originator of this five-stage Pyramid model, which is better known by the acronym REACH. Each letter stands for a process involved in the model (Recall the hurt; Empathize with the one who hurt you; Altruistic gift; Commit to forgive; Hold onto forgiveness).
This approach may assume parity between the offences of the forgiver and those of
the perpetrator she or he is seeking to forgive. However, there may be a huge disparity

between them. It may prove extremely difficult to identify with an offender who has
behaved in a particularly heinous way. In situations involving abuse, for instance, recalling one's own failings may only serve to underline the gap between one's own transgressions and the magnitude of the hurt one is attempting to forgive.
Nonetheless, there is also empirical evidence to support the effectiveness of empathy
in promoting forgiveness. McCullough and Worthington (1995) compared two forgiveness groups (forgiveness for self-enhancement and forgiveness for interpersonal
harmony) with a wait-list control group in a brief therapy session lasting an hour, administering the Wade Forgiveness Scale (1989) before and after the session and at a six-week follow-up. Sessions consisted of teaching, exercises to promote empathy and a discussion. The self-enhancement group were given the rationale that forgiveness might improve their state of mind, whilst those in the interpersonal group were advised that forgiveness would benefit their relationship with the offender and with other people. Results showed that both experimental groups (self-enhancement and interpersonal) reported a lessened desire for revenge, more positive feelings towards the offender and a greater inclination towards reconciliation than the control group.
Furthermore, these gains endured six weeks after the brief therapy session. Interestingly, better outcomes were achieved in the self-enhancement group than in the interpersonal group,
suggesting that people may be more motivated to forgive for their own sense of release
than for the sake of a relationship, though this motivation may lessen as people progress through the process of forgiveness.
Further supporting the empathy model of forgiveness, McCullough et al. (1997)
conducted a small-scale study in which they manipulated the degree of empathy participants experienced. In the empathy group, participants were explicitly told that empathy facilitated forgiveness and were encouraged to feel empathy towards their offender.
A comparison group were encouraged to forgive but were not encouraged to engage in
a process of generating empathy for those they were seeking to forgive. Alongside these
two groups was a wait-list control group. Participants were randomly assigned to a group in a repeated-measures design, and follow-up measures were taken six weeks
post-intervention.
Results post-intervention showed that the empathy group yielded the highest forgiveness scores and that this group had significantly greater affective empathy than the
comparison group, which in turn evinced no greater affective empathy than the wait-list control group. The results support the view that systematically manipulating empathy has an effect on forgiveness, though it should be acknowledged that at the six-week
follow-up, there was no significant difference in forgiveness scores between the empathy and comparison groups.

2.4   Reviewing the Efficacy of Forgiveness Interventions

A number of meta-analytic reviews of forgiveness interventions have been conducted
(Baskin and Enright 2004; Lundahl et al. 2008; Wade et al. 2014). The first of these, based on nine published studies, concluded that interventions with individuals following Enright's process model showed larger effects than interventions that had utilized Worthington's REACH model (which is administered to groups), and than group interventions with Enright's process model (Baskin and Enright 2004). Further support for the effectiveness of individually delivered programs was found by Lundahl et al. (2008). Their meta-analysis (involving 14 published reports of forgiveness interventions) also highlighted that the efficacy of the intervention was influenced by factors other than treatment modality (individual or group), such as time spent in treatment and the total number of sessions. They also found that the specific forgiveness intervention mattered: Enright's model significantly outperformed studies based on Worthington's REACH model.
Lundahl et al. (ibid.) also showed that forgiveness interventions give rise to increased
positive affect, decreased negative affect and improved self-esteem, findings that were
echoed in the most recent meta-analysis of forgiveness interventions (Wade et al.
2014), which reported greater changes in depression, anxiety and hope in forgiveness
treatment groups in comparison with participants receiving alternative treatments.
This last study, involving 54 published and unpublished reports of forgiveness interventions, found that differences between treatment approaches dropped out of the
picture when significant moderators were controlled. When dosage and treatment modality (group or individual) were controlled, treatment model did not predict study
effect size, and the apparent superiority of the Enright-model interventions could be
explained by the fact that the Enright model is usually administered in an individual
format and over a longer period of time. At all events, psychological interventions
promoting forgiveness seem to share more in common than they differ (Wade and
Worthington 2005), and both the Enright and REACH models have demonstrated that
psychological interventions can help bring about the goal of forgiveness.
This second case study shows that while psychology cannot settle the question of whether it is advisable, desirable or appropriate to forgive in a given situation, it can shed light on the mechanisms that might facilitate forgiveness, making this goal less abstract and more attainable. I have argued elsewhere (Gulliford 2013) that these different mechanisms may reflect forgiveness of the head (cognitive approaches) and forgiveness of the heart (empathic approaches), or what Wade and Worthington (2003) labeled decisional and emotional forgiveness respectively. Psychology can thereby afford a nuanced understanding of the nature of forgiveness, incorporating both cognitive and affective change.


It ought not to be overlooked that the word "forgiveness" is applied both to receiving forgiveness oneself and to extending forgiveness to others. Whilst the focus in this paper has been on how a psychological approach to forgiveness may help individuals to forgive other people, psychology might also illuminate the processes by which an individual appropriates the forgiveness bestowed on them by others.
Thus it has been argued here that psychology complements ethics in showing how
forgiveness might be expedited by therapeutic endeavors. It does not (and cannot) replace the ethical question of whether, and under what circumstances, forgiveness is
virtuous, though it should be acknowledged that some of the therapeutic interventions
reviewed here encourage people to reflect on alternatives to forgiveness, thereby incorporating a degree of ethical reflection.
It could be suggested that we may be particularly in need of psychological approaches to forgiveness since we inhabit a secular world where the traditional theological resources that may have helped people with the process of forgiveness in the past are now
less likely to be drawn upon. With the erosion of belief in a God that is the final source
of forgiveness, people are thrown back on themselves in the task of forgiving others.
Believers in God pray for assistance in the task of forgiveness, believing that what is not
humanly forgivable may be divinely so. This, in effect, reframes forgiveness beyond the
human realm. For those who do not believe in God, however, the buck stops with our own human resources. There is insufficient space to examine this interesting question further here. Suffice it to say that psychology is perhaps making an increasingly significant contribution towards equipping people with resources to help them
forgive.

Concluding Remarks

This paper has argued, with reference to two case studies, that psychology cannot replace ethics. It has nevertheless been argued that psychology can make a significant contribution towards defining ethical concepts such as gratitude, particularly from a lay
perspective, even if it cannot adjudicate on the question of precisely when and under
what conditions gratitude might be deemed a virtue. Similarly, though the focus of the
discipline of psychology does not lie with the matter of when and whether forgiveness is
ethically desirable, psychology can play a demonstrably important role in elucidating
the human dynamics of forgiveness, which can be intentionally applied in therapeutic
interventions that aim to promote the ethical ideal of forgiveness.


References

Al-Mabuk, R., Enright, R. D., & Cardis, P. (1995). Forgiveness education with parentally love-deprived college students. Journal of Moral Education 24, 427–444.
Augsberger, D. (2000). The new freedom of forgiveness. Chicago: Moody.
Bartlett, M., & DeSteno, D. (2006). Gratitude and pro-social behaviour: Helping when it costs you. Psychological Science 17 (4), 319–25.
Baskin, T. W., & Enright, R. D. (2004). Intervention studies on forgiveness: A meta-analysis. Journal of Counselling and Development 82, 79–90.
Baumeister, R. F., Stillwell, A. M., & Wotman, S. R. (1990). Victim and perpetrator accounts of interpersonal conflict: Autobiographical narratives about anger. Journal of Personality and Social Psychology 59, 994–1005.
Butler, J. (1970). Upon forgiveness of injuries. In T. A. Roberts (Ed.), Butler's Fifteen Sermons (pp. 80–89). London: SPCK.
Cherry, S. (2004). Forgiveness and reconciliation in South Africa. In F. Watts & L. Gulliford (Eds.), Forgiveness in Context: Theology and psychology in creative dialogue (pp. 160–177). London: T&T Clark.
Coate, M. A. (2004). The capacity for forgiveness. In F. Watts & L. Gulliford (Eds.), Forgiveness in Context: Theology and psychology in creative dialogue (pp. 123–143). London: T&T Clark.
Coleman, P. (1998). The process of forgiveness in marriage and the family. In R. D. Enright & J. North (Eds.), Exploring forgiveness (pp. 75–94). Madison: University of Wisconsin Press.
Coyle, C. T., & Enright, R. D. (1997). Forgiveness intervention with post-abortion men. Journal of Consulting and Clinical Psychology 65, 1042–1046.
Downie, R. S. (1965). Forgiveness. Philosophical Quarterly 15, 128–134.
Ehrenreich, B. (2009). Smile or die: How positive thinking fooled America and the world. London: Granta.
Emmons, R. A., & Shelton, C. M. (2002). Gratitude and the science of positive psychology. In C. R. Snyder & S. J. Lopez (Eds.), Handbook of positive psychology (pp. 459–471). Oxford: Oxford University Press.
Enright, R. D. (2001). Forgiveness is a Choice. Washington: APA LifeTools.
Enright, R. D., & Fitzgibbons, R. P. (2000). Helping Clients Forgive. Washington: APA.
Enright, R. D., Freedman, S., & Rique, J. (1998). The psychology of interpersonal forgiveness. In R. D. Enright & J. North (Eds.), Exploring Forgiveness (pp. 46–62). Madison: University of Wisconsin Press.
Fehr, B., & Russell, J. A. (1984). Concept of emotion viewed from a prototype perspective. Journal of Experimental Psychology 113 (3), 464–486.
Fehr, B., & Russell, J. A. (1991). The concept of love viewed from a prototype perspective. Journal of Personality and Social Psychology 60 (3), 425–438.
Freedman, S. R., & Enright, R. D. (1996). Forgiveness as an intervention goal with incest survivors. Journal of Consulting and Clinical Psychology 64, 983–992.
Gregg, A. P., Hart, C. M., Sedikides, C., & Kumashiro, M. (2008). Everyday conceptions of modesty: A prototype analysis. Personality and Social Psychology Bulletin 34 (7), 978–992.


Gulliford, E. Z. (1999). Theological and psychological aspects of forgiveness. Unpublished MPhil thesis, University of Cambridge.
Gulliford, L. (2004a). Intrapersonal forgiveness. In F. Watts & L. Gulliford (Eds.), Forgiveness in context: Theology and psychology in creative dialogue (pp. 83–105). London: T&T Clark.
Gulliford, L. (2004b). The healing of relationships. In F. Watts & L. Gulliford (Eds.), Forgiveness in context: Theology and psychology in creative dialogue (pp. 106–122). London: T&T Clark.
Gulliford, L. (2013). The head and the heart of the matter in hope and forgiveness. In F. Watts & G. Dumbreck (Eds.), Head and heart: Perspectives from religion and psychology (pp. 273–312). West Conshohocken, PA: Templeton Press.
Gulliford, L., Morgan, B., & Kristjánsson, K. (2013). Recent work on the concept of gratitude in philosophy and psychology. The Journal of Value Inquiry 47 (3), 283–317.
Hebl, J. H., & Enright, R. D. (1993). Forgiveness as a psychotherapeutic goal with elderly females. Psychotherapy 30, 658–667.
Heider, F. (1958). The psychology of interpersonal relations. New York: Wiley.
Hepper, E. G., Ritchie, T. D., Sedikides, C., & Wildschut, T. (2012). Odyssey's end: Lay conceptions of nostalgia reflect its original Homeric meaning. Emotion 12, 102–119.
Holloway, R. (2002). On forgiveness: How can we forgive the unforgiveable? Edinburgh: Canongate.
Horsburgh, H. J. (1974). Forgiveness. Canadian Journal of Philosophy 4 (2), 269–289.
Jones, L. G. (1995). Embodying forgiveness: A theological analysis. Grand Rapids, Michigan: William B. Eerdmans.
Kearns, J. N., & Fincham, F. D. (2004). A prototype analysis of forgiveness. Personality and Social Psychology Bulletin 30 (7), 838–855.
Knutson, J., Enright, R., & Garbers, B. (2008). Validating the developmental pathway of forgiveness. Journal of Counseling and Development 86, 193–199.
Lambert, N. M., Graham, S. M., & Fincham, F. D. (2009). A prototype analysis of gratitude: Varieties of gratitude experiences. Personality and Social Psychology Bulletin 35 (9), 1193–1207.
Lewis, M. (1980). On forgiveness. Philosophical Quarterly 30, 236–245.
Lundahl, B. W., Taylor, M. J., Stevenson, R., & Roberts, K. D. (2008). Process-based forgiveness interventions: A meta-analytic review. Research on Social Work Practice 19, 465–478.
McConnell, T. (1993). Gratitude. Philadelphia, PA: Temple University Press.
McCullough, M. E., & Worthington Jr., E. L. (1995). Promoting forgiveness: A comparison of two brief psycho-educational group interventions with a waiting-list control. Counseling and Values 40 (1), 55–68.
McCullough, M. E., Worthington Jr., E. L., & Rachal, K. C. (1997). Interpersonal forgiving in close relationships. Journal of Personality and Social Psychology 73, 321–336.
McGary, H. (1989). Forgiveness. American Philosophical Quarterly 26 (4), 343–351.
Morgan, B., Gulliford, L., & Kristjánsson, K. (2014). Gratitude in the UK: A new prototype analysis and a cross-cultural comparison. Journal of Positive Psychology 9 (4), 291–294.
Neblett, W. (1974). Forgiveness and ideals. Mind 73, 269–275.
North, J. (1987). The ideal of forgiveness: A philosopher's exploration. In R. D. Enright & J. North (Eds.), Exploring forgiveness (pp. 15–34). Madison: University of Wisconsin Press.
Richards, N. (1988). Forgiveness. Ethics 99, 79–97.


Roberts, R. C. (2004). The blessings of gratitude: A conceptual analysis. In R. A. Emmons & M. E. McCullough (Eds.), The Psychology of Gratitude (pp. 58–78). Oxford: Oxford University Press.
Roberts, R. C. (in press). The normative and the empirical in the study of gratitude. Res Philosophica.
Smedes, L. B. (1984). Forgive and forget: Healing the hurts we don't deserve. New York: Harper and Row.
Smedes, L. B. (1997). The art of forgiving: When you need to forgive and don't know how. NY: Ballantine Books, Random House.
Twambley, P. (1976). Mercy and forgiveness. Analysis 36, 84–90.
Wade, S. H. (1989). Forgiveness scale. In P. C. Hill & R. W. Hood (Eds.), Measures of religiosity (pp. 422–425). Birmingham, AL: Religious Education Press.
Wade, N. G., & Worthington Jr., E. L. (2003). Overcoming interpersonal offences: Is forgiveness the only way to deal with unforgiveness? Journal of Counseling and Development 81 (3), 343–53.
Wade, N. G., & Worthington Jr., E. L. (2005). In search of a common core: A content analysis of interventions to promote forgiveness. Psychotherapy: Theory, Research, Practice, Training 42 (2), 160–177.
Wade, N. G., Hoyt, W. T., Kidwell, J. E. M., & Worthington Jr., E. L. (2014). Efficacy of psychotherapeutic interventions to promote forgiveness: A meta-analysis. Journal of Consulting and Clinical Psychology 82 (1), 154–170.
Watts, F. (2004). Christian theology. In F. Watts & L. Gulliford (Eds.), Forgiveness in context: Theology and psychology in creative dialogue (pp. 50–68). London: T&T Clark.
Wood, A. M., Joseph, S., Lloyd, J., & Atkins, S. (2009). Gratitude influences sleep through the mechanism of pre-sleep cognitions. Journal of Psychosomatic Research 66, 43–48.
Worthington Jr., E. L. (1998). The pyramid model of forgiveness: Some interdisciplinary speculations about unforgiveness and the promotion of forgiveness. In E. L. Worthington Jr. (Ed.), Dimensions of Forgiveness (pp. 107–137). Radnor, Pennsylvania: Templeton Foundation Press.
Worthington Jr., E. L., & Wade, N. G. (1999). The social psychology of forgiveness and unforgiveness and the implications for clinical practice. Journal of Social and Clinical Psychology 18, 358–415.

Moral Judgments and Moral Integrity – Three Empirical Studies

Mariola Paruzel-Czachura

Abstract
The paper is an attempt to answer the question of how information concerning the behavior, views and emotions of another person influences the judgment of that person's morality. For the purposes of the paper, the author defines the concepts of vertical integrity and its lack in positive and negative terms, thus taking part in the extensive debate concerning integrity in general. Study no. 1 made it possible to determine the key sets of ethical dilemmas among young adults: To have or to be? Should I conform to religious dogmas if my views are different? Should I tell the truth or lie? The dilemmas thus obtained made it possible to develop a questionnaire for studies no. 2 and no. 3, in which the respondents rated various hypothetical situations in terms of the degree of their morality. The studies confirmed the significant influence of information about an individual's emotions and views on the judgment of that individual's morality, so there are empirical grounds to recognize the existence of the phenomenon of vertical integrity/lack of vertical integrity with regard to morality. The person considered the most moral was always the individual who demonstrated the highest integrity in the positive sense, both in the Polish and in the international study sample, regardless of religious faith or lack thereof.

Introduction

According to the Polish philosopher and ethicist Ija Lazari-Pawłowska (1992), human
morality is a multifaceted phenomenon due to, among other things, the conflicts of
roles which are inextricably linked with conflicts of norms. Firstly, the conflicts may

Mariola Paruzel-Czachura
Institute of Psychology
University of Silesia
mariola.paruzel@us.edu.pl



manifest themselves in the individual playing various roles whose moral norms are mutually contradictory (e.g. friend and boss). Secondly, they can appear in an intra-role conflict, when contradictory interests exist within a single role. For instance, patients' health and respect for their free will are values which are important in clinical ethics. Certain situations may occur, however, in which the pursuit of one of those values makes it impossible to be guided by the other. Alasdair MacIntyre (1996) shares the views of Lazari-Pawłowska. In one of his comments on MacIntyre's philosophy, Bruce W. Ballard describes the situation as "moral schizophrenia" and compares it to wearing different hats (Ballard 2000, 17-20). For instance, when I am wearing my friendship hat, I behave with respect, goodwill and care. When I put on the work hat, however, I am likely to start treating other people instrumentally. The fragmentary nature of our
lives was already accurately described by Diogenes:
Men strive at digging and kicking to outdo one another, but no one strives to become a
good man and true. And he would wonder that the grammarians should investigate the ills
of Odysseus, while they were ignorant of their own. Or that the musicians should tune the
strings of the lyre, while leaving the dispositions of their own souls discordant; that the
musicians should gaze at the sun and the moon, but overlook matters close at hand; that
the orators should make a fuss about justice in their speeches, but never practice it (Diogenes Laertios 1925, 321).

Researchers often point to environmental factors as a source of our different moral behaviors. According to Tadeusz Tyszka (2010), important factors in the process of judging include not only the emotional and rational systems but also environmental conditions, such as the smell or cleanliness of the room we are in, which can make people feel disgust or remind them of an unpleasant event immediately before they make the judgment. Recent research conducted by Nina Mazar, On Amir and Dan Ariely (2008) confirms that the very fact of reminding respondents of moral norms leads to a decline in the tendency to be dishonest. The impact of external factors was already recognized by Émile Durkheim, who wrote:
If it [morality] is as it is at any given moment, it is because the conditions in which men
are living at that time do not permit it to be otherwise. The proof of this is that it changes
when these conditions change, and only in that eventuality (Durkheim 1997, xxxvi).

Psychologists seeking to investigate this problem often ask about the integrity of moral behaviors and the lack thereof (cf. Zylicz 2010). Such integrity may be vertical or horizontal. The first type of integrity can be defined as the coherence between our behavior, views and emotions (cf. Zylicz 2010; Paruzel 2011). In psychological research so far it was connected only with behavior and views (e.g. Darley and Batson 1973;


Zylicz 2010). The second type of integrity is related to behaving consistently in various situations.1
Contemporary researchers point to the importance of addressing the issue of vertical integrity once again, but taking into account the subject's emotional states. These, in light of the most recent research in the field of moral psychology, prove to be significant elements of it (Gołąb 1975; Styczen and Szostek 1974; Rozin et al. 1999; Haidt 2001; Koenigs 2007; Prinz 2007; Smilansky 2009; Huebner et al. 2009; Tyszka 2010; Paruzel 2011; Paruzel-Czachura 2011). It is recognized that the emotional sphere is linked with the moral sphere, i.e. the breach of specific norms leads to the appearance of emotions (cf. Huebner et al. 2009). For instance, we most often react angrily to injustice, and with contempt when an obligation is not met (cf. Rozin et al. 1999). However, the direction of the dependency has not yet been proven in a definitive manner, i.e. we do not know whether feelings or emotions are the cause of moral judgments and behavior, or rather their effect (cf. Tyszka 2010). It is worth emphasizing that my studies did not focus on that problem but investigated the area of judgments (not their sources). Both problems are relevant for moral psychology, yet in my studies only the judgments were crucial. Jonathan Haidt and Craig Joseph (2004) also noticed the role of affective functioning and distinguished five spheres of morality constituting psychological systems.
The first sphere is linked with harm and care, the second one with fairness and reciprocity, the third one with belonging to a group and loyalty, the fourth one with authority and respect, and the fifth one with purity and sanctity (cf. Graham et al. 2011).
In each of those spheres, consideration of the values listed above makes us act positively in the moral sense. The role of emotional reactions is significant here; for instance, the wish to retain purity may be linked with the reaction of disgust (ibid.).
The author of this paper decided that empirical research on the lack of vertical integrity, taking into account an individual's behavior, views and emotions, should be preceded by an empirical verification of the existence of the phenomenon of vertical integrity in the respondents' opinion. This paper contains the results of three empirical studies conducted on diversified samples of young adults, whose aim was to examine moral judgments about the morality of people demonstrating high and low vertical integrity, i.e. coherence between moral behavior, moral views and moral emotions.

1 It has various names in the literature: local traits (Merritt et al. 2010; Doris 2010), moral schizophrenia (Paruzel 2011), mixed traits (Miller 2013a, 2013b), and narrow dispositions (Kamtekar 2013). It has been the subject matter of many empirical studies (including Latane and Darley 1968; Hartshorne and May 1928; Zylicz 1995, 1996; Narvaez and Lapsley 2005; Mazar et al. 2008; Annas 2011).


Operationalization of Variables, Sample and Research Question

This study concerns the phenomena of vertical integrity and lack thereof with reference
to human morality. Moral integrity (at the vertical level) is defined here as the coherence between two or three aspects (behavior, views and emotions). For instance, I am
morally integral if:

- I tell the truth and I feel happy,
- I believe we should be faithful to our partner and at the same time I feel guilty in relation to an affair,
- I do not obey religious principles, but I believe that we do not always have to obey them and I feel happy (in that case, I am probably an atheist).

If there is coherence between all three elements, one can speak of complete moral integrity. Lack of moral integrity occurs when there is a lack of coherence between two or three aspects, e.g. I believe that one should always tell the truth and at the same time I tell lies. If there is a lack of integrity between all three elements, one can speak of a complete lack of moral integrity. It is therefore clear that moral integrity and the lack thereof are not negative or positive in themselves in ethical terms. The concept of moral integrity is not normative, and my research comprises only descriptive studies of people's judgments. Two types of integrity may therefore additionally be distinguished (and, in the same way, two types of its lack): moral integrity in the positive and in the negative sense. The former occurs when positive behavior is coherent with the individual's emotions and/or views (e.g. I am faithful to my partner and I believe that we should be faithful). The latter arises when negative behavior is coherent with emotions and/or views (e.g. I am unfaithful to my partner and I believe that we may be unfaithful if we want to). Behavior, emotions and views were also broken down into positive and negative ones for the purposes of the study, with the following definitions:
a) positive behavior – behavior regarded as good according to generally accepted norms, e.g. telling the truth, obeying religious principles;
b) negative behavior – behavior regarded as bad according to generally accepted norms, e.g. stealing, killing, infidelity;
c) positive emotions – emotions which should appear during, before or after behaving in a specific manner, according to generally accepted norms, e.g. feeling guilty after stealing something or happiness when one is faithful to one's partner;
d) negative emotions – emotions which should not appear during, before or after behaving in a specific manner, according to generally accepted norms, e.g. sexual desire for someone other than our life partner, feeling happy about stealing;


e) positive views – views coherent with generally accepted norms, e.g. stealing is bad, one should not be unfaithful to one's partner;
f) negative views – views incoherent with generally accepted norms, e.g. we should not always obey religious principles, infidelity is acceptable.
It needs to be emphasized that the above definitions are of a black-and-white nature.
They were created for the purposes of the research in order to obtain clear results, but
ultimately the author is not interested in the content of the judgments, but in their
form (the integrity between behavior, views and emotions or its lack), and consequently in whether or not the phenomenon of vertical integrity or lack thereof exists. The
specific types of behavior are not judged in a normative sense here (the terms "positive" and "negative" are descriptive). When categorizing our behaviors, emotions and views,
the author referred to norms recognized by most societies. There are many different
scientific studies concerning the hierarchy of values in various societies (cf. Oles and
Pluzek 1990; Zalewska 2002; Schwartz and Rubel 2005); after all, the majority of people
living on Earth recognize certain universal values (Brzozowski 2005).
All three studies presented in this paper were conducted on a group of young adults,
given that they were considered to have mature views on the sphere of morality.2
The author asked herself the following research question: How is the morality3 of a person judged on the basis of information concerning vertical integrity or the lack thereof (within the scope of views, emotions and behavior) from the observer's point of view?
In order to answer that question, study no. 1 was first carried out, making it possible to
distinguish the most important ethical dilemmas appearing among young adults. Said
dilemmas were used to develop the questionnaire used in studies no. 2 and no. 3.

2 The moral development of children has already been covered quite well in psychological research (cf. Piaget 1966; Kohlberg 1969; Rest 1979; Hoffman 2006; Gibbs 2010). It seems, however, that it is more difficult to study the phenomenon of vertical integrity in that group from the methodological point of view. Also, as Andrzej Gołąb (1975) suggests, our integrity increases as we get older. In the future, it would be worthwhile to extend research concerning vertical integrity to include people of various ages (from childhood to old age).
3 As a psychologist, I use the term morality simply to refer to a phenomenon that exists for people. I do not make any assumptions about what morality is like, because this is exactly what I want to investigate.


The Studies

3.1   Study No. 1

3.1.1   Description: Research Question, Tools and Study Group

The question asked in the first study was whether ethical dilemmas accompanied young
people, and if so, what they concerned and where their sources could be sought. The
purpose of the study was to analyze the ethical dilemmas and to find those most significant for the group of young adults, which could then be used in further research concerning the phenomenon of lack of vertical integrity. It was assumed that the essence of
an ethical dilemma was that none of the potential solutions satisfied us completely
(Jones et al. 2005).
150 first-year students at the Faculty of Pedagogy and Psychology of the University
of Silesia participated in the study (conducted between October 2011 and January
2012). The final analysis covered 67 filled-out questionnaires (including 6 by male respondents), while the remaining respondents had not been able to answer the question
or had responded that they had no ethical dilemmas in their lives. The average age of the respondents was 20.41 (SD = 2.75).4

4 Originally, the study was supposed to have been conducted in the form of interviews recorded with a sound recorder, but in order to guarantee full anonymity it was decided that the written form would encourage the respondents to be more honest.
The respondents were given the following instruction: "In daily life, so-called ethical dilemmas may appear, i.e. situations when we have to choose between various values. Ethical dilemmas are characterized by the fact that we find none of the potential solutions to be fully satisfactory. Please describe in detail the one most important ethical dilemma occurring in your life." It took the respondents an hour on average to carry out the instruction.

3.1.2   Results

Qualitative data were analyzed by independent, neutral judges (not familiar with the
idea of my studies and the respondents). At the first stage of the analyses, the author of the paper created working descriptive categories which the respondents' answers could match, and at the subsequent stage five competent judges rated, on a scale of 0 to 5, the degree to which each specific dilemma matched its category. The judges were given the opportunity of suggesting their own name for the category. The average rating given by the judges ranged from 3 to 5 for the various dilemmas (M = 4.44 for all the items), while
the standard deviation ranged from 0 to 1.73 (average SD = 0.77). The coefficient of concordance among the judges was satisfactory (Kendall's W = 0.416; N = 5; p = 0.000; chi-square = 137.302; df = 66); a brief computational sketch of this coefficient is given after the list of dilemmas below. The three most frequent ethical dilemmas distinguished were:
I.   To have or to be?
II.  Should I obey religious dogmas if my views are different?
III. Should I tell the truth or lie?
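For readers unfamiliar with this statistic, the following minimal sketch shows how Kendall's coefficient of concordance W can be computed from a judges-by-items matrix of ratings. The function name and the randomly generated demonstration data are illustrative assumptions, not the study's materials; the figures reported above are consistent with the standard approximation chi-square = m(n - 1)W for m = 5 judges and n = 67 dilemmas.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """Kendall's coefficient of concordance W for an (m judges x n items) rating matrix."""
    ratings = np.asarray(ratings, dtype=float)
    m, n = ratings.shape                                   # m judges, n rated items
    ranks = np.vstack([rankdata(row) for row in ratings])  # rank the items within each judge
    rank_sums = ranks.sum(axis=0)                          # total rank of each item across judges
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()        # squared deviations of the rank sums
    w = 12.0 * s / (m ** 2 * (n ** 3 - n))                 # classical formula, no tie correction
    chi_square = m * (n - 1) * w                           # approximate chi-square with df = n - 1
    return w, chi_square, n - 1

# Hypothetical example: 5 judges rating 67 dilemma descriptions on a 0-5 scale.
rng = np.random.default_rng(0)
demo_ratings = rng.integers(0, 6, size=(5, 67))
w, chi2, df = kendalls_w(demo_ratings)
print(f"W = {w:.3f}, chi-square = {chi2:.3f}, df = {df}")
```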

Tab. 1 shows examples of the respondents' statements.


Apart from the above dilemmas, six other categories of dilemmas were mentioned, shown in Fig. 1. The number stated for each category is the number of people describing the specific ethical dilemma. An abstract dilemma means that the individual was describing a problem which did not concern them directly, but which was very important in their opinion, e.g. the problem of abortion.
Fig. 1: Ethical dilemma categories (number of respondents describing each dilemma)
- To have or to be? (25)
- Should I obey religious dogmas if my views are different? (13)
- Should I tell the truth or lie? (9)
- Should I act for my own good or for the good of others? (5)
- Should I live obeying my family's principles or my own views? (5)
- Should I help another person or not (thus forcing them to be independent)? (4)
- Which side of the conflict should I be on? (3)
- Abstract dilemma (2)
- Should I accept a gay person? (1)


Tab. 1: Examples of ethical dilemmas reported according to the respondents' statements

Dilemma category: To have or to be?
Excerpts from respondents' statements:
- "I think I'd rather kill myself than end up like this. Dreams and ambitions, the great big world of the capital, going to see theatre plays, the best musicals, ballet at the National Opera, to the cinema, to great restaurants, buying wonderful clothes I could never afford, trips abroad I could never go on."
- "Today, the world is in pursuit of money and this is what matters much more now, allowing people to make their dreams and plans come true."
- "Should I stay in Poland, my home country, or leave and pursue a more comfortable and easier life?"
- "I like him, but I don't want to be with him, although on the other hand I'd be well-off with him. In short, I'd have what I'd like to have."
- "Should I invest in experience, in drawing benefits and pleasure from day-to-day life, and accept the position of a poor wayfarer through life?"

Dilemma category: Should I obey religious dogmas if my views are different?
Excerpts from respondents' statements:
- "Sometimes, after all, it's hard for me to reconcile what I feel with what I believe in."
- "Should I obey all the rights and rules of the Church if I'm a believer, even though I not infrequently disagree with them?"
- "My great ethical problem is whether I should believe blindly in everything the Church says, even though it doesn't agree with my own views."
- "Sex, which is everywhere and is attractive for a young person, especially in a relationship lasting several years, is considered evil in the Catholic religion. Should I have sex or not?"

Dilemma category: Should I tell the truth or lie?
Excerpts from respondents' statements:
- "The most important ethical dilemma which occurs in my life is connected with hiding my sexual orientation from my friends, and especially family."
- "A friend told me in secret that she'd cheated on her fiancé, who was also my friend. On the one hand, I didn't want to let her down, on the other hand, I couldn't lie to him."
- "My ethical dilemma is the question whether telling the truth is worthwhile and obligatory in every situation. Sometimes it hurts others."

3.1.3   Summary

The study made it possible to determine three key sets of ethical dilemmas among young adults, which were grouped under the working headings of "to have vs. to be", "religious dogmas vs. own views", and "telling the truth vs. lying".

3.2   Study No. 2

3.2.1   Description: Research Question, Tools and Study Group

The dilemmas selected were then used to develop the questionnaire items for study no. 2, whose nature was both qualitative and quantitative. Due to the specific nature of Poland (where the Roman Catholic faith prevails), it was decided that conducting the research on an international sample of young adults (study no. 2) and separately on a Polish sample of young adults (study no. 3) would be worthwhile.
Using a recently self-constructed questionnaire, the author attempted to answer the question of whether young people of different nationalities vary in their perceptions of what is (and is not) moral in the light of vertical integrity (e.g. to what extent is it moral to perform some negative behavior, feel positive emotions and have positive views, etc.). The sample consisted of 33 medical students from Europe, America, Asia, and
Africa (not from Poland), including 7 women, the average age of the participants being
22.33 (SD=3.68). The research participants included people of various faiths (Christian,
Catholic, Hindu, Muslim, Sikh, Jewish, Buddhist) and atheists (N=7).
In the first part, the survey respondents were asked to answer a general question, namely "What does it mean to you to be a moral human being?" The question was asked to see which thought categories (behavior, views, emotions) were activated when considering morality. In the second part of the questionnaire, the respondents were asked to rate the degree of morality of people (on a scale of 0 to 5, with 0 meaning immoral and 5 very moral) after obtaining information about their behavior (e.g. that they told the truth), views (e.g. that they believed one should always tell the truth) and emotions (e.g. that they felt anger because they had told the truth). In total, the respondents rated four different categories of ethical dilemmas, selected on the basis of the results obtained in study no. 1:
a) telling the truth or lying (related to the dilemma Should I tell the truth or lie?),
b) obeying or not obeying religious principles (in relation to the dilemma Should I
obey religious dogmas if my views are different?),
c) stealing or not stealing (related to the dilemma To have or to be?),
d) being faithful or unfaithful to one's partner (this category was added because it
may be connected with all three ethical dilemmas indicated in study no. 1).


In relation to each category, the respondents' task was to rate eight hypothetical situations (cf. Tab. 2), which consisted of various configurations of information concerning the emotions, views and behavior of an anonymous person. For instance, a situation demonstrating the phenomenon of complete moral integrity was one in which the individual in question told the truth, believed that one should always tell the truth, and felt happy. One who lacks moral integrity, on the other hand, does not tell the truth and feels happy, at the same time believing that we should always tell the truth. The average time taken to fill out the questionnaire was one hour.
Tab. 2: The fragment of the questionnaire connected with telling the truth (Behavior; Emotions; Views)

1.a  Tells the truth; feels anger; believes that we do not always have to tell the truth
1.b  Doesn't tell the truth; feels guilty; believes that we do not always have to tell the truth
1.c  Doesn't tell the truth; feels happy; believes that we should tell the truth
1.d  Tells the truth; feels happy; believes that we should tell the truth
1.e  Doesn't tell the truth; feels happy; believes that we do not always have to tell the truth
1.f  Tells the truth; feels happy; believes that we do not always have to tell the truth
1.g  Doesn't tell the truth; feels guilty; believes that we should tell the truth
1.h  Tells the truth; feels anger; believes that we should tell the truth
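The eight situations in Tab. 2 amount to all combinations of positive or negative behavior, emotions and views, with the concrete emotion wording depending on the behavior (happiness or anger after telling the truth; guilt or happiness after not telling it). A minimal sketch of how such a 2 x 2 x 2 set of vignettes could be generated is given below; the function name and phrasing are illustrative assumptions rather than the author's original instrument.

```python
from itertools import product

def truth_vignettes():
    """Generate the eight behavior/emotion/view configurations for the truth-telling category."""
    behavior = {"positive": "Tells the truth", "negative": "Doesn't tell the truth"}
    views = {"positive": "believes that we should tell the truth",
             "negative": "believes that we do not always have to tell the truth"}
    # The concrete emotion label depends on whether the described behavior is positive or negative.
    emotion = {("positive", "positive"): "feels happy",    # good deed, fitting emotion
               ("positive", "negative"): "feels anger",    # good deed, unfitting emotion
               ("negative", "positive"): "feels guilty",   # bad deed, fitting emotion
               ("negative", "negative"): "feels happy"}    # bad deed, unfitting emotion
    return [f"{behavior[b]}, {emotion[(b, e)]} and {views[v]}"
            for b, e, v in product(["positive", "negative"], repeat=3)]

for i, vignette in enumerate(truth_vignettes(), start=1):
    print(i, vignette)
```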

3.2.2   Results

Initially, the author of the paper analyzed the respondents' statements to find out which category (behavior, emotions, views or combinations thereof) appeared in the respondents' understanding of morality. The qualitative data from the first, open, descriptive part of the second study were analyzed by five independent competent judges. They evaluated the extent to which the relevant category matched the statements of each specific respondent.5 The final results are shown in Fig. 2.

5 The majority of the judges (a minimum of 3) decided that only four out of all the statements made by the respondents had been classified wrongly by the author of the paper, so a new category was attributed to them in accordance with the judges' suggestions. The coefficient of concordance for the judges is satisfactory (Kendall's W = 0.352; N = 5; p = 0.005, chi-square = 56.292; df = 32).


Fig. 2: The types of categories that appeared in the understanding of morality among the respondents (number of respondents per category)
- Behaviors: 13
- Behaviors and views: 11
- Behaviors, views and emotions: 5
- Behaviors and emotions: 4
- Emotions: 0
- Views: 0

It can be seen that the respondents most often (N=13) applied the behavior category to describe morality (e.g. [quotations here and below are left as they were provided in English, without correcting the mistakes] "As a moral person, you should not injury somebody. You should listen till the end before you talk. Don't kill anyone. If you don't steal or cheat on your partner"), followed by behavior and views (N=11) (e.g. "In my opinion a moral man is a man that knows is right from wrong, and he is not afraid to do what is right, also treat others with respect and treats others the way he would want to be treated"). The combination of behavior and emotions was applied more rarely (N=4) (e.g. "A moral man is a man who is sympathetic and loving in his actions from his deepest care of being"), and so was the combination of behavior, views and emotions (N=5) (e.g. "Being a moral man to me is making sure I put religion first. Also being a caring, loving person, putting others before you always brings good karma. You are supposed to treat others in the same way you would like to be treated"). A total of 27% of the respondents mentioned emotions. None of the respondents decided that emotions or views exclusively characterized the sphere of morality.
Descriptive statistics concerning the category related to telling the truth or lying are presented in Tab. 3, with the hypothetical situations ranked from most moral to least moral.6
6 The Friedman test confirmed the existence of differences between the ratings (N = 33, chi-square = 118.671, df = 7, p = 0.00).
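As a rough illustration of the test reported in the footnote above, the sketch below runs a Friedman test over eight related columns of ratings using SciPy; the data are randomly generated placeholders and do not reproduce the study's results.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical ratings: 33 respondents x 8 situations, each rated on a 0-5 scale.
rng = np.random.default_rng(1)
ratings = rng.integers(0, 6, size=(33, 8))

# Each respondent is a block; the eight columns are the related samples being compared.
stat, p_value = friedmanchisquare(*[ratings[:, j] for j in range(ratings.shape[1])])
print(f"chi-square = {stat:.3f}, df = {ratings.shape[1] - 1}, p = {p_value:.3f}")
```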


Tab. 3: Descriptive statistics concerning the category related to telling the truth or lying (M and SD of the morality ratings)

1. Tells the truth, feels happy and believes that we should tell the truth (M = 4.70, SD = 0.59)
2. Tells the truth, feels happy and believes that we do not always have to tell the truth (M = 3.76, SD = 0.97)
3. Tells the truth, feels anger and believes that we should tell the truth (M = 3.64, SD = 0.93)
4. Tells the truth, feels anger and believes that we do not always have to tell the truth (M = 3.15, SD = 0.91)
5. Doesn't tell the truth, feels guilty and believes that we should tell the truth (M = 3.03, SD = 1.33)
6. Doesn't tell the truth, feels guilty and believes that we do not always have to tell the truth (M = 1.97, SD = 1.59)
7. Doesn't tell the truth, feels happy and believes that we do not always have to tell the truth (M = 1.73, SD = 1.31)
8. Doesn't tell the truth, feels happy and believes that we should tell the truth (M = 1.55, SD = 1.12)

According to the respondents, the most moral person seems to be the one who demonstrates complete moral integrity in the positive sense, i.e. one who told the truth, felt happy and believed we should always tell the truth (1st place in Tab. 3). As the least moral
person, respondents indicated one who did not tell the truth, felt happy and believed we
should always tell the truth, i.e. one who lacked moral integrity (8th place). A higher rating
was given even to one demonstrating complete moral integrity in the negative sense, i.e.
one who did something bad, was happy and believed that one was allowed to do bad
things (7th place). Moreover, the most significant aspect for the respondents was behavior
(positive behavior was always rated higher than negative behavior), followed by emotions
(since the coherence between positive behavior and positive emotions proved to be more
significant than the coherence between positive behavior and positive views).
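A short sketch of how the rankings in Tab. 3 to Tab. 6 can be derived from raw questionnaire responses is given below: compute each situation's mean and standard deviation and sort by descending mean. The array and labels are placeholders, not the study's data.

```python
import numpy as np

# Placeholder ratings: rows are respondents, columns are the eight situations (0-5 scale).
rng = np.random.default_rng(2)
ratings = rng.integers(0, 6, size=(33, 8))
labels = [f"situation 1.{chr(ord('a') + j)}" for j in range(8)]

means = ratings.mean(axis=0)
sds = ratings.std(axis=0, ddof=1)  # sample standard deviation

# Rank the situations from most moral (highest mean rating) to least moral.
for rank, j in enumerate(np.argsort(-means), start=1):
    print(f"{rank}. {labels[j]}: M = {means[j]:.2f}, SD = {sds[j]:.2f}")
```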


Tab. 4: Descriptive statistics concerning the category related to obeying or not obeying religious principles (M and SD of the morality ratings)

1. Obeys religious principles, feels happy and believes that we should have to obey them (M = 4.03, SD = 1.14)
2. Obeys religious principles, feels happy and believes that we do not always have to obey them (M = 3.26, SD = 1.37)
3. Doesn't obey religious principles, feels happy and believes that we do not always have to obey them (M = 3.13, SD = 1.65)
4. Doesn't obey religious principles, feels guilty and believes that we should have to obey them (M = 2.97, SD = 1.00)
5. Obeys religious principles, feels anger or desire and believes that we should have to obey them (M = 2.97, SD = 1.22)
6. Obeys religious principles, feels anger or desire and believes that we do not always have to obey them (M = 2.77, SD = 1.33)
7. Doesn't obey religious principles, feels guilty and believes that we do not always have to obey them (M = 2.61, SD = 1.61)
8. Doesn't obey religious principles, feels happy and believes that we should have to obey them (M = 2.00, SD = 1.36)

Subsequently, the respondents rated situations related to obeying or not obeying religious principles (cf. Tab. 4).7 According to the respondents, the most moral person was
one who obeyed religious principles, felt happy and believed we should always obey
those principles, i.e. one who demonstrated complete integrity in the positive sense.
The respondents judged as the least moral person one who lacked moral integrity, i.e.
one whose negative behavior (similarly to the previous category) was coherent only
with emotions but incoherent with views. Interestingly, the respondents judged a person demonstrating complete moral integrity in the negative sense as rather moral (3rd place). This means that if we do not obey religious principles, but this is coherent with our emotions (conscience) and views, the respondents would find us rather moral.

7 The Friedman test confirmed the existence of differences between the ratings (N = 29, chi-square = 30.958, df = 7, p = 0.00).
The respondents subsequently judged a situation related to stealing or not stealing.8
As in the previous categories, someone demonstrating complete moral integrity in the
positive sense was judged as the most moral person. The results concerning the least
moral person are slightly different, since in that case the person demonstrating complete moral integrity in the negative sense was judged as the least moral (the incoherence regarded as the least moral in the previous categories came fifth this time). Tab. 5
contains descriptive statistics for all the hypothetical situations.
Tab. 5: Descriptive statistics concerning the category related to stealing or not stealing (M and SD of the morality ratings)

1. Doesn't steal, feels happy and believes that we shouldn't steal (M = 4.79, SD = 0.65)
2. Doesn't steal, feels anger or desire and believes that we shouldn't steal (M = 3.39, SD = 1.37)
3. Steals, feels guilty and believes that we shouldn't steal (M = 2.67, SD = 1.49)
4. Doesn't steal, feels happy and believes that we may steal if we want to (M = 2.30, SD = 1.51)
5. Steals, feels happy and believes that we shouldn't steal (M = 1.31, SD = 1.47)
6. Steals, feels guilty and believes that we may steal if we want to (M = 1.15, SD = 1.28)
7. Doesn't steal, feels anger or desire and believes that we may steal if we want to (M = 1.06, SD = 1.22)
8. Steals, feels happy and believes that we may steal if we want to (M = 0.91, SD = 1.65)

In the last situation, related to being faithful or unfaithful to one's partner, the first and last place were the same in terms of form as in the category related to stealing. Detailed data can be found in Tab. 6.9

8 The Friedman test confirmed the existence of differences between the ratings (N = 32, chi-square = 11.779, df = 7, p = 0.00).
9 The Friedman test confirmed the existence of differences between the ratings (N = 33, chi-square = 107.897, df = 7, p = 0.00).


Tab. 6: Descriptive statistics concerning the category related to being faithful or unfaithful to one's partner (M and SD of the morality ratings)

1. Is faithful to one's partner, feels happy and believes that we should be faithful (M = 4.55, SD = 1.09)
2. Is faithful to one's partner, feels anger or desire and believes that we should be faithful (M = 3.48, SD = 1.35)
3. Is faithful to one's partner, feels happy and believes that we may be unfaithful if we want to (M = 2.61, SD = 1.34)
4. Is unfaithful to one's partner, feels guilty and believes that we should be faithful (M = 2.03, SD = 1.53)
5. Is faithful to one's partner, feels anger or desire and believes that we may be unfaithful if we want to (M = 1.52, SD = 1.48)
6. Is unfaithful to one's partner, feels guilty and believes that we may be unfaithful if we want to (M = 1.39, SD = 1.48)
7. Is unfaithful to one's partner, feels happy and believes that we should be faithful (M = 1.30, SD = 1.38)
8. Is unfaithful to one's partner, feels happy and believes that we may be unfaithful if we want to (M = 1.09, SD = 1.55)

3.2.3   Summary

A qualitative data analysis leads to the conclusion that the first thought category appearing in statements concerning the characterization of a moral human being is behavior, followed by views, although 27% of the respondents also mentioned emotions (cf. Fig. 2). However, when the category of emotions was activated in the second part of the research, respondents considered it relevant information for the evaluation of another's morality.
The following conclusions can be drawn after analyzing all the hypothetical situations. Firstly, in all situations, the respondents found that the most moral person was one
keeping complete moral integrity in the positive sense (positive behavior in accordance
with generally acceptable norms, positive emotions and views coherent with them).
Secondly, the least moral person according to the respondents was either one
demonstrating complete moral integrity in the negative sense (negative behavior vs.


generally acceptable norms, and coherent emotions and views), which can be seen in the case of judgments on the situations of stealing and cheating on one's partner, or one whose negative behavior was coherent with emotions but contradicted views (e.g. I don't tell the truth and I'm happy, even though I believe that we should always tell the truth), in the case of lying and not obeying religious principles. In the case of lying, integrity in the negative sense was also given a low (penultimate) rating, and, conversely, in the case of cheating on one's partner, the incoherence of views with emotions and behaviors was also rated seventh. The conclusion may therefore be drawn that the incoherence between behaviors and views is usually evaluated as more immoral than the incoherence between behaviors and emotions.
Thirdly, the judgment related to obeying religious principles was the one that deviated most from the judgment of other categories. This probably results from the facts
that not all people are religious and that religious people tolerate atheists. Thus, people
who do not obey religious principles, believe that we do not always have to obey these
principles and feel happy about it, were judged as rather moral (3rd place).
Fourthly, despite the above differences concerning the religious sphere, those judged
as rather moral (2nd place) were those whose positive behavior was coherent with emotions or whose positive behavior was coherent with views.
Fifthly, it is interesting that positive behavior does not always determine whether one
is judged as a moral person or not. In the case of telling the truth, respondents did indeed state that it was more important to tell the truth than to lie (the four highest ratings were given to positive behavior, and the lowest four to negative behavior), but in
the remaining three cases the results were different. In the case of stealing, for instance, the respondents believed that it was more moral to steal but regret it and believe one should not steal (3rd place) than not to steal but be tempted to steal and believe that one may steal if one wishes to (7th place). Similarly, in the situation of cheating on one's partner, the respondents found that it was more moral to cheat but regret it and believe that we should be faithful (4th place) than not to cheat but feel desire and believe that one is allowed to be unfaithful (5th place).
To recapitulate, the data obtained confirmed that information about another person's emotions and views (and not only about behavior, as used to be the case in psychological research) changed the respondents' judgments of morality, so there are empirical grounds for recognizing the existence of the phenomenon of lack of vertical integrity with regard to morality. This result refutes the traditional psychological thinking about defining people in terms of moral integrity and avoiding the lack of it (e.g. Bandura's concept of dissonance, see Bandura 1986). The phenomenon of moral disintegrity explains many contemporary issues, such as brutal violence among people who appreciate values like helping others and compassion. Further empirical research, not only on moral judgments but also measuring people's behavior, views and emotions, is relevant in this area.

3.3   Study No. 3

3.3.1   Description: Research Question, Tools and Study Group

The same procedures and tools as in study no. 2 were used in study no. 3. The sample included students of various Polish universities, N = 238 (including 129 males). Part of the sample was collected electronically (N = 117; students of various universities and faculties across Poland); the others filled out the questionnaire in hard copy (students of the Faculty of Pedagogy and Psychology of the University of Silesia). The average age of the respondents was 20.47 (SD = 3.32). The respondents included 190 Catholics, 38 atheists, 2 agnostics, 2 Protestants, 3 who declared themselves Evangelical, and one each who indicated the following faiths: Lutheran, Jehovah's Witnesses and Jediism (Jedi Religion). All the atheists also declared that they had been brought up in the Catholic faith.

3.3.2 Results

The respondents' answers to the question "What does it mean to you to be a moral
human being?" were analyzed by ten competent judges and rated on a scale of 0 to 5. At
the first stage, five judges assigned the respondents' answers to the specific categories.
The decision on including a statement in a specific category had to be made by the
majority of judges (at least 3). At the second stage, five other competent judges rated
the extent to which a statement matched the individual category (on a scale of 0 to 5).
The coefficient of concordance for the judges is satisfactory (Kendall's W=0.237; N=5;
p=0.03, chi-square=264.325; df=223).
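To make the reported concordance figure easier to interpret, the following minimal Python sketch shows one way Kendall's W can be computed for a ratings matrix; the toy data, the omission of the tie correction, and the function name kendalls_w are illustrative assumptions, not the study's actual data or analysis script.

import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """Kendall's coefficient of concordance for an (items x judges) matrix.

    Minimal sketch without the correction for tied ranks; data are hypothetical.
    """
    ratings = np.asarray(ratings, dtype=float)
    n_items, n_judges = ratings.shape
    # Rank each judge's ratings across the items (average ranks for ties).
    ranks = np.apply_along_axis(rankdata, 0, ratings)
    rank_sums = ranks.sum(axis=1)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    w = 12.0 * s / (n_judges ** 2 * (n_items ** 3 - n_items))
    chi2 = n_judges * (n_items - 1) * w  # chi-square approximation for testing
    return w, chi2

# Toy data: 6 statements rated by 5 judges on a 0-5 scale.
toy = [[5, 4, 5, 4, 5],
       [3, 3, 4, 2, 3],
       [1, 0, 1, 2, 1],
       [4, 5, 4, 5, 4],
       [2, 2, 3, 1, 2],
       [0, 1, 0, 0, 0]]
w, chi2 = kendalls_w(toy)
print(f"W = {w:.3f}, chi-square = {chi2:.2f}, df = {len(toy) - 1}")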
It can be seen in Fig. 3 that the vast majority of the respondents stated that behavior
and views were the most significant for them (e.g. "Being moral is to be guided in life
by values such as honesty, truthfulness, altruism, faithfulness and goodness. It's a way
of life which consists in living in harmony with oneself and with others"), followed by
behavior (e.g. "Don't hurt others, help them."). Full coherence between behavior, views
and emotions was only third in terms of importance, similarly to study no. 2 (e.g. "Being moral, i.e. complying with certain ethical principles one follows, i.e. being all right
with oneself. Behaving in accordance with one's views and consequently feeling good").
A few Polish respondents stated that views in themselves were important (e.g. "Being
moral for me means having and applying the ability of telling good from evil in
a specific culture or religion"). A total of 14% of the respondents pointed to the role
emotions played in the phenomenon of morality (an example in the case of the behavior and emotions category may be the following statement: "Being moral means living
in harmony with oneself. Doing things and saying things in such a way so as not to
have a guilty conscience afterwards."). Nobody stated, however, that the emotional
sphere alone was sufficient to talk about being a moral human being. 17 respondents
didn't answer the question.
Fig. 3: The types of categories that appeared in the understanding of morality among
the respondents [bar chart of the number of respondents per category: Emotions; Views; Behavior and emotions; Behavior; Behavior and views; Behavior, views and emotions]

Let us now proceed to an analysis of the second part of the research, i.e. the questionnaire. Descriptive statistics concerning the category related to telling the truth or lying
are presented in Tab. 7, with the hypothetical situations ranked from most moral to
least moral.10 The respondents made judgments nearly identical to those in study no. 2; the
only difference was that in study no. 3 the situation rated 5th in study no. 2 was rated 4th,
and vice versa. Both situations, however, were in the middle of the hierarchy.

10 The Friedman test confirmed the existence of differences between the ratings (N=237, chi-square=825.305, df=7, p=0.00).
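For illustration, a Friedman test of this kind could be run with SciPy roughly as follows; the ratings shown are invented for the sketch and are not the study's data.

from scipy.stats import friedmanchisquare

# Hypothetical data: each row holds one respondent's 0-5 ratings of the
# eight "telling the truth / lying" situations.
ratings = [
    [5, 4, 3, 3, 3, 2, 1, 1],
    [5, 3, 4, 2, 3, 3, 2, 0],
    [4, 4, 3, 3, 2, 2, 1, 1],
    [5, 4, 4, 3, 3, 2, 2, 1],
]
# The test expects one sequence per situation (i.e., per repeated measure).
situations = list(zip(*ratings))
stat, p = friedmanchisquare(*situations)
print(f"chi-square = {stat:.3f}, df = {len(situations) - 1}, p = {p:.4f}")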


Tab. 7: Descriptive statistics concerning the category related to telling the truth or lying

No.  The hypothetical situation  M  SD
1.  Tells the truth, feels happy and believes that we should tell the truth  4.56  0.95
2.  Tells the truth, feels happy and believes that we do not always have to tell the truth  3.68  1.07
3.  Tells the truth, feels anger and believes that we should tell the truth  3.42  1.22
4.  Doesn't tell the truth, feels guilty and believes that we should tell the truth  3.00  1.31
5.  Tells the truth, feels anger and believes that we do not always have to tell the truth  2.91  1.25
6.  Doesn't tell the truth, feels guilty and believes that we do not always have to tell the truth  2.48  1.18
7.  Doesn't tell the truth, feels happy and believes that we do not always have to tell the truth  1.78  1.50
8.  Doesn't tell the truth, feels happy and believes that we should tell the truth  0.98  1.03

Subsequently, the respondents rated situations related to obeying or not obeying religious principles (cf. Tab. 8).11 The results proved to be very similar to those obtained in
the previous study; only two situations were judged differently. The situation which
had been rated third in study no. 2 was fifth here, and vice versa. Poles decided, therefore, that it was more moral to obey religious principles, feel anger or desire and believe
that we should always obey religious principles than not to obey religious principles,
feel happy and believe that we do not always have to obey religious principles. This
result is probably linked to the larger emphasis placed on religious values in Poland and
to the larger percentage of believers in the studied Polish sample (in study no. 2, 21%
were atheists, and in study no. 3 only 16%).

11 The Friedman test confirmed the existence of differences between the ratings (N=235, chi-square=626.262, df=7, p=0.00).


Tab. 8: Descriptive statistics concerning the category related to obeying or not obeying religious principles

No.  The hypothetical situation  M  SD
1.  Obeys religious principles, feels happy and believes that we should always obey them  4.51  1.03
2.  Obeys religious principles, feels happy and believes that we do not always have to obey them  3.41  1.25
3.  Obeys religious principles, feels anger or desire and believes that we should always obey them  3.03  1.29
4.  Doesn't obey religious principles, feels guilty and believes that we should always obey them  2.80  1.23
5.  Doesn't obey religious principles, feels happy and believes that we do not always have to obey them  2.64  1.79
6.  Obeys religious principles, feels anger or desire and believes that we do not always have to obey them  2.46  1.21
7.  Doesn't obey religious principles, feels guilty and believes that we do not always have to obey them  2.42  1.12
8.  Doesn't obey religious principles, feels happy and believes that we should always obey them  1.23  1.17


Tab. 9: Descriptive statistics concerning the category related to stealing or not stealing

No.  The hypothetical situation  M  SD
1.  Doesn't steal, feels happy and believes that we shouldn't steal  4.66  1.06
2.  Doesn't steal, feels anger or desire and believes that we shouldn't steal  3.27  1.26
3.  Doesn't steal, feels happy and believes that we may steal if we want to  2.32  1.32
4.  Steals, feels guilty and believes that we shouldn't steal  2.19  1.33
5.  Doesn't steal, feels anger or desire and believes that we may steal if we want to  1.65  1.27
6.  Steals, feels guilty and believes that we may steal if we want to  1.29  1.01
7.  Steals, feels happy and believes that we may steal if we want to  0.85  1.40
8.  Steals, feels happy and believes that we shouldn't steal  0.81  1.00

The respondents subsequently judged a situation related to stealing or not stealing (cf.
Tab. 9).12 The results differ from those obtained on the international sample in five
places. The respondents agreed on the first two and on the sixth position on the list. The
person considered most moral was therefore fully integral in positive terms (1st place)
or their views and behavior were coherent (2nd place). Positions 3 and 4 switched places.
Polish respondents judged, therefore, that ultimately not doing something negative, even
when believing that we were allowed to do something wrong if we wanted to, was after
all slightly more moral than doing something wrong, but feeling guilty and judging it as
unacceptable. The situation rated 7th had been rated 8th in the previous research, so the
difference can be considered rather small. The biggest difference between average ratings concerned the situation in which the individual did not steal, felt anger or desire,
and believed that we were allowed to steal if we wanted to. In this case, the respondents
rated it 5th, while in the previous study it had been rated 7th, i.e. less moral. A difference
also appeared with regard to the situation in which the individual stole, felt happy and
believed one should not steal. The Polish respondents judged that as the worst situation, while respondents from other countries judged it as average, i.e. 5th.
12 The Friedman test confirmed the existence of differences between the ratings (N=237, chi-square=924.207, df=7, p=0.00).


Tab. 10: Descriptive statistics concerning the category related to being faithful or unfaithful to one's partner

No.  The hypothetical situation  M  SD
1.  Is faithful to one's partner, feels happy and believes that we should be faithful  4.75  0.90
2.  Is faithful to one's partner, feels anger or desire and believes that we should be faithful  3.18  1.21
3.  Is faithful to one's partner, feels happy and believes that we may be unfaithful if we want to  2.52  1.34
4.  Is unfaithful to one's partner, feels guilty and believes that we should be faithful  2.07  1.29
5.  Is faithful to one's partner, feels anger or desire and believes that we may be unfaithful if we want to  1.65  1.30
6.  Is unfaithful to one's partner, feels guilty and believes that we may be unfaithful if we want to  1.15  1.08
7.  Is unfaithful to one's partner, feels happy and believes that we may be unfaithful if we want to  1.03  1.56
8.  Is unfaithful to one's partner, feels happy and believes that we should be faithful  0.75  0.99
In the last situation, concerning cheating on one's partner, the results were very similar
to those in study no. 2 (cf. Tab. 10).13 In the judgments of the Polish respondents, the
last two positions (7th and 8th) switched places compared to the international group. It
can therefore be stated that the results of studies no. 2 and no. 3 concerning the sphere
of cheating were very similar.

13 The Friedman test confirmed the existence of differences between the ratings (N=237, chi-square=938.005, df=7, p=0.00).

3.3.3 Summary

The qualitative data analysis leads to the conclusion that the first category appearing in statements concerning being moral is behavior, followed by views, although
some respondents also mentioned emotions (14%). The fact that the emotions category
was activated in the second part of the research points, however, to the important role
they play in the phenomenon of morality, apart from behaviors and views.
The research demonstrated that the Polish respondents made moral judgments very similar to those in study no. 2, especially in relation to telling the truth/lying and being faithful or unfaithful to
one's partner. The situation of obeying religious principles was also rated very similarly. Nevertheless, Polish respondents decided that it was more moral to maintain coherence between positive behavior and views than to demonstrate complete negative integrity
(i.e. it is more moral to obey principles and believe in them than not to obey them, not
to recognize them and be happy, i.e. be a non-believer). The difference, however, was
not very big, since the ratings were not extreme (3rd and 5th place). The biggest differences appeared in relation to the judgment of stealing. In the Polish sample, the ratings
proved to be more similar to the judgments expressed in relation to the other situations. The coherence between negative behavior and negative emotions or complete
integrity in the negative sense were judged as least moral. As in the previous situations,
the coherence between emotions and negative views which were inconsistent with positive behavior was given an average rating (5th place).
To recapitulate, the data confirmed, similarly to the case of study no. 2, that information on another person's emotions and views changes the judgment of that individual's morality, so there are empirical grounds to recognize the existence of the phenomenon of vertical integrity/lack of vertical integrity with regard to morality.

A New Definition of Morality?

The data obtained may help one to answer the general question: "What does it actually
mean to be moral?" The respondents answered the question directly (the open-ended
question in studies no. 2 and no. 3), but also indirectly, by judging the specific situations.
The results described in this paper confirm that all manifestations of the phenomenon
of morality are worth taking into account when it is studied. The data-based approach
to the way morality is understood presented here, assuming at least three levels or three aspects, seems to be the most comprehensive and scientific one possible, taking into
account the current state of the art in the field of moral psychology. One may therefore
state that morality is an attitude whose constituents are: our behavior (Do I help others?
Have I ever stolen anything?), our view of the world (Which values do I subscribe to?
What do I think about my friend's affair?), and our emotions (What do I feel when I tell
a lie? What do I feel when I help someone?). The purpose of the present study was to
verify the hypothesis about the role an individual's feelings and emotions play in the
perception of their morality. The empirical data obtained confirmed the role of information about emotions and views in judgments concerning another person's morality.


In subsequent studies, it would be worth adding moral reasoning as well as motivation to the above aspects. Further research should also increase the sample
size and apply other methods, e.g. experimental ones. A further step may be an
attempt to describe an individual's morality scientifically, taking into account their
behavior, emotions and views, for instance with reference to a single selected situation.
In the even longer term, it would be worth looking at horizontal integrity, in order to
subsequently move to the normative level. This, perhaps, should rather be the task of
ethics, and not of psychology, in accordance with the views of Mark Johnson (1996),
who stated that empirical data obtained in the psychology of morality might help to
shape a life which is wise in ethical terms.

References

Annas, J. (2011). Intelligent Virtue. Oxford: Oxford University Press.
Ballard, B. W. (2000). Understanding MacIntyre. Lanham, New York, Oxford: University Press of America.
Bandura, A. (1986). Social foundations of thought and action: A social-cognitive theory. Upper Saddle River, NJ: Prentice-Hall.
Brzozowski, P. (2005). Uniwersalna hierarchia wartosci – fakt czy fikcja? Przeglad Psychologiczny 48(3), 261–276.
Darley, J. M., & Batson, C. D. (1973). From Jerusalem to Jericho: A study of situational and dispositional variables in helping behavior. Journal of Personality and Social Psychology 27, 100–108.
Diogenes Laertios (2004). Zywoty i poglady slynnych filozofow. Warsaw: PWN. English edition: Diogenes Laertios (1925). Lives of the Eminent Philosophers. London: Heinemann.
Doris, J. (2010). The Moral Psychology Handbook. Oxford: Oxford University Press.
Durkheim, E. (1997). The Division of Labour in Society. New York: Free Press.
Gibbs, J. C. (2010). Moral development and reality: Beyond the theories of Kohlberg and Hoffman. Boston: Pearson Allyn & Bacon.
Golab, A. (1975). Problemy psychologii moralnosci. In H. Jankowski (Ed.), Etyka (pp. 121–177). Warsaw: PWN.
Graham, J., Iyer, R., Nosek, B. A., Haidt, J., Koleva, S., & Ditto, P. H. (2011). Mapping the Moral Domain. Journal of Personality and Social Psychology 101(2), 366–384.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108, 814–834.
Haidt, J., & Joseph, C. (2004). Intuitive ethics: how innately prepared intuitions generate culturally variable virtues. Daedalus 133(4), 55–66.
Hartshorne, H., & May, M. A. (1928). Studies in the nature of character. Vol. 1: Studies in deceit. New York: Macmillan.
Hoffman, M. L. (2006). Empatia i rozwoj moralny. Gdansk: GWP.
Huebner, B., Dwyer, S., & Hauser, M. D. (2009). The role of emotion in moral psychology. Trends in Cognitive Sciences 13, 1–6.
Johnson, M. (1996). How moral psychology changes moral theory. In L. May, M. Friedman, & A. Clark (Eds.), Mind and Morals: Essays on Cognitive Science and Ethics (pp. 45–68). Cambridge, Massachusetts: MIT Press.
Jones, C., Shillito-Clarke, C., Syme, G., Hill, D., Casemore, R., & Murdin, L. (2005). Co wolno, a czego nie wolno terapeucie. Gdansk: GWP.
Kamtekar, R. (2013). Narrow Dispositions. University of Gdansk Conference "Is there such a thing as moral character?" (5/11/13).
Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to the prefrontal cortex increases utilitarian moral judgments. Nature 446, 908–911.
Kohlberg, L. (1969). Stage and sequence: The cognitive development approach to socialization. In D. Goslin (Ed.), Handbook of socialization theory and research (pp. 347–480). Chicago: Rand McNally.
Latane, B., & Darley, J. M. (1968). Group inhibition of bystander intervention in emergencies. Journal of Personality and Social Psychology 10, 308–324.
Lazari-Pawlowska, I. (1992). Etyka. Pisma wybrane. In Z. Kalita (Ed.) (2001), Etyka w teorii i praktyce. Antologia tekstow (pp. 37–47). Wroclaw: Wydawnictwo Uniwersytetu Wroclawskiego.
MacIntyre, A. (1996). Dziedzictwo cnoty. Studium z teorii moralnosci. Warsaw: PWN. English edition: MacIntyre, A. (2007). After Virtue. A Study in Moral Theory. Notre Dame, Indiana: University of Notre Dame Press.
Mazar, N., Amir, O., & Ariely, D. (2008). The Dishonesty of Honest People: A Theory of Self-Concept Maintenance. Journal of Marketing Research 45(6), 633–644.
Merritt, M., Doris, J., & Harman, G. (2010). Character. In J. Doris (Ed.), The Moral Psychology Handbook (pp. 335–401). Oxford: Oxford University Press.
Miller, Ch. B. (2013a). Character and Moral Psychology. Oxford: Oxford University Press.
Miller, Ch. B. (2013b). Moral Character: An Empirical Theory. Oxford: Oxford University Press.
Narvaez, D., & Lapsley, D. K. (2005). The psychological foundations of everyday morality and moral expertise. In D. K. Lapsley, & C. Power (Eds.), Character Psychology and Character Education (pp. 140–165). Notre Dame: University of Notre Dame Press.
Oles, P., & Pluzek, Z. (1990). Osobowosc a system akceptowanych wartosci – analiza zaleznosci. Przeglad Psychologiczny, Tom XXXIII, 2, 313–324.
Ossowska, M. (1963). Podstawy nauki o moralnosci. Warsaw: PWN.
Ossowska, M. (2002). Motywy postepowania. Z zagadnien psychologii moralnosci. Warsaw: Ksiazka i Wiedza.
Paruzel, M. (2011). Moral psychology. Subject, possibilities and limitations. In D. Czajkowska-Zbiorowska (Ed.), Academic areas of scientific knowledge (pp. 162–181). Poznan: Akademicki Instytut Naukowo-Wydawniczy Altus.
Paruzel-Czachura, M. (2011). Psychologia moralnosci. Odpowiedzi jakich nie znajdzie filozofia. Archeus. Studia z bioetyki i antropologii filozoficznej 12/2011, 293–316.
Piaget, J. (1966). Rozwoj ocen moralnych dziecka. Warsaw: PWN. French edition: Piaget, J. (1937). La construction du réel chez l'enfant. Paris: Delachaux et Niestlé.
Prinz, J. (2007). The emotional construction of morals. Oxford: Oxford University Press.
Rest, J. (1979). Development in judging moral issues. Minneapolis: University of Minnesota Press.
Rozin, P., Lowery, L., Imada, S., & Haidt, J. (1999). The moral-emotion triad hypothesis: a mapping between three moral emotions (contempt, anger, disgust) and three moral ethics (community, autonomy, divinity). Journal of Personality and Social Psychology 76, 574–586.
Schwartz, S. H., & Rubel, T. (2005). Sex differences in value priorities: Cross-cultural and multimethod studies. Journal of Personality and Social Psychology 89(6), 1010–1028.
Smilansky, S. (2009). 10 moralnych paradoksow. Cracow: WAM.
Styczen, T., & Szostek, A. (1974). Uwagi o istocie moralnosci. Roczniki Filozoficzne KUL 22(2), 19–33.
Tyszka, T. (2010). Decyzje. Perspektywa psychologiczna i ekonomiczna. Warsaw: Scholar.
Zalewska, A. (2002). System wartosciowania a zadowolenie z zycia pracownikow w nowym miejscu pracy. Przeglad Psychologiczny 45(2), 177–196.
Zylicz, P. O. (1995). Problematyka moralna w psychologii humanistycznej. Roczniki Filozoficzne. Psychologia 43, 75–90.
Zylicz, P. O. (1996). Samoaktualizacja a integracja moralna. Warsaw: ATK.
Zylicz, P. O. (2010). Psychologia moralnosci. Wybrane zagadnienia. Warsaw: Academica.

Moral Intuitionism and Empirical Data


Jonas Nagel / Alex Wiegmann

Abstract
In this article, it is argued that empirical data can undermine normative arguments
generated by intuitionist methodologies that involve a step of inducing an abstract
principle from a set of case-based moral intuitions. The use of case-based intuitions in
normative theory construction is conceptualized here as an inductive inference procedure in which philosophers draw conclusions from introspectively observable data
(their intuitions) to the state of a latent variable (what morality actually requires). We
argue that such a procedure can only generate valid output if it can be applied objectively in the sense that its outcome is independent of the person who carries it out. This
requirement is only met when fundamental case-based intuitions are intersubjectively
shared to a relevant degree. At this point, empirical data come into play. They are needed to
assess the degree to which specific intuitions are actually intersubjectively shared. In
contexts in which this requirement is not met, principles resulting from this method
cannot be argued to be valid representations of what morality actually requires. We
illustrate this argument with a concrete example from the literature in which a specific
normative principle is called into question on the basis of psychological data on laypeople's moral intuitions. Furthermore, we defend the argument against potential objections, and we discuss its relationship to other criticisms of moral intuitionism as well
as its implications for intuitionist methodologies in general.

Jonas Nagel (corresponding author)
Department of Psychology, University of Göttingen, Germany
jnagel1@uni-goettingen.de

Alex Wiegmann
Department of Psychology, University of Göttingen, Germany
Alex.Wiegmann@psych.uni-goettingen.de



1 Introduction

The past decade has seen an increased interest in the empirical investigation of laypeople's moral judgments and of the psychological processes underlying these judgments
(see Haidt and Kesebir 2010; Waldmann et al. 2012, for reviews). Although this emerging field is still in its infancy compared to many other areas in cognitive science, as is
evident for example in a relatively low degree of theoretical formalization and in an
abundance of experimental paradigms that are only loosely related, some progress has
already been made in the construction of evidence-based descriptive accounts of our
moral judgment capacity (for recent examples, see Cushman 2013; Greene 2013; Haidt
2012; Mikhail 2011).
Most moral psychologists conceive of their endeavor as a primarily descriptive one.
They look at moral beliefs as objects of scientific inquiry, and this is arguably interesting in its own right. Another important question, less often seriously asked by psychologists, is whether empirical findings about the contents of laypeople's moral judgments can be relevant for normative questions as well. Do all the interesting new insights about what moral beliefs people do hold have any bearing on what moral beliefs
they ought to hold?1

1 Some psychologists claim that their empirical work has implications for the policies a society should adopt (e.g., Baron 1998). However, in these cases the normative conclusions are usually derived from a normative theory (consequentialist leanings in Baron's case) that is adopted for reasons that are independent of the empirical findings. Given that this normative theory is accepted, it seems to follow from the data that certain policies should be adopted; however, the data are not used to inform the question whether the normative theory itself should be accepted in the first place.
It is commonly held that purely descriptive statements do not imply normative ones
(Hume 1969), so we cannot simply conclude from the observation that people tend to
render certain judgments that they ought to render them. However, it also seems odd
to regard moral psychology and normative ethics as two strictly independent bodies of
knowledge. On the one hand, psychologists rely heavily on materials from the philosophical literature. The conceptual repertoire generated in moral philosophy over the
centuries heavily influences the way in which psychologists frame their descriptive
hypotheses and as a consequence also the kind of data they can possibly observe. This
means that psychologists' empirical data are laden with normative theory to a degree
that is not yet well understood (see also the debate in Elqayam and Evans 2011). On the
other hand, even the most sophisticated moral philosophers still generate their normative claims using a human cognitive system of the kind studied by moral psychologists.
In a sense, then, normative claims defended by a moral philosopher can be seen as the
output of a processing system that is legitimately subject to scientific inquiry. Viewed
this way, it is at least conceivable that a better descriptive understanding of how this

kind of system works and what kinds of output it generates could influence our interpretation of these outputs (i.e., normative claims).
In this paper, it is argued that there are conditions under which empirical data on
the contents of laypeople's moral judgments can indeed have a bearing on the strength
of specific normative arguments.2 The claim will not be that these data can tell us directly which normative claims are actually true or false. The potential of empirical data
is instead seen on the level of argumentative discourse in normative ethics. It will be
argued that specific kinds of normative arguments might be attacked on empirical
grounds, discounting their persuasiveness as support for their normative conclusion.
Whether or not the conclusion itself is true or false (given that a meta-ethical position
is adopted according to which this is a sensible question at all) is a different matter.
However, if it is agreed that our best source of knowledge about what is morally required is convincing philosophical argumentation in normative ethics, then data that
are relevant for the evaluation of arguments defended in normative ethics are also relevant for what we have the best reason to believe to be morally required.
According to the present line of argument, the extent to which empirical data on
laypeople's moral intuitions have implications for a given normative claim will depend
on the methodology used by that normative claim's proponent. We therefore begin by
describing some important distinctions in the methodology of normative ethics before
we present the main argument. Afterwards, we will illustrate the argument with a concrete example from the literature and discuss some potential objections to and extensions of the argument.

2 Intuitions in the Methodology of Normative Ethics

How do moral philosophers arrive at their normative claims? Given that normative
conclusions cannot be derived from purely factual premises, a normative element of
some sort must already be included in the premises (e.g., Singer 1973). This normative
element often takes the form of an intuition. Having an intuition amounts to having a
certain attitude towards a specific normative proposition. The attitude is to treat this
proposition as justified without having inferred it from other justified propositions
(e.g., Sinnott-Armstrong 2008). In arguments of moral intuitionists, these intuitive
normative premises refer to matters of moral substance: They contain substantial
claims about what is morally required (e.g., "Torturing innocent children for fun is
morally wrong"), rather than merely claims about more general normative notions that
are not specific to morality (e.g., consistency constraints derived from logical or other
formal systems). In this paper we only deal with the role of substantial moral intuitions
in normative arguments.

2 We thank Cordula Brand, David J. Hall and Jana Samland for valuable comments on an earlier draft of this manuscript.
Many moral philosophers share the assumption that underneath the observable chaotic and contradictory diversity of moral judgments within and across individuals and
societies, there are some basic regularities to be discovered, some moral truths that we
all actually should subscribe to. Unger (1996, 11) calls these assumed basic prescriptions our Basic Moral Values. Under this conception, an important task of moral
philosophy is to solve the epistemic problem of finding and explicating these Values.
However, there is considerable disagreement about how to best approach this epistemic
task and about the role that substantial moral intuitions should play in this process.
Some philosophers are generally skeptical about the epistemic value of substantial moral intuitions and try to reduce their role in their argumentation to a minimum (e.g.,
Hare 1981; Norcross 2008).3 Moral intuitionists, by contrast, are committed to the view
that at least some of their substantial moral intuitions, at least when generated in a
specific way under specific circumstances, provide trustworthy information about the
contents of our Basic Moral Values.
Even among moral intuitionists, there are still important differences in the kinds of
substantial intuitions used and in the functional roles that these intuitions play in their
normative arguments. Some intuitionist philosophers base their moral theory on intuitions about the validity of one or a few abstract moral principles (e.g., "the greatest
happiness of the greatest number", Bentham 1907). These principles are treated as noninferentially justified, and from these principles the philosopher then deduces normative prescriptions for more concrete applications in a top-down fashion. This deductive
procedure has a principled appeal to it, but it can result in prescriptions that most people would consider counterintuitive in the context of some particular cases. Other intuitionists prefer a more bottom-up approach in which they set out from a set of intuitions about what is morally required in some well-defined concrete situations (e.g.,
trolley dilemmas; see Kamm 2007). On the basis of these case-based moral intuitions,
they generate an abstract principle, one that can account for all of the particular judgments, in an inductive fashion. This inductive procedure ensures that the resulting
principle is largely in line with our particular moral intuitions in important concrete
cases, but it seems less principled and it can result in rather complicated and qualified
abstract prescriptions to which most people do not have a strong prima facie intuition.
3 This seems desirable, as the import of substantial moral propositions into the premises of a moral argument runs the risk of begging the question. If substantial moral intuitions are to be avoided, they have to be substituted by intuitions referring to other (preferably less controversial) sources of normativity. Hare (1981), for example, develops a moral theory that attempts to draw its normative premises solely from linguistic intuitions about the logical properties of our moral words (and thus ultimately from intuitions about what the norms of rationality require from us). The present argument does not apply to such attempts as long as the critical linguistic intuitions on which they are based are uncontroversial (which they arguably are, at least compared to subtle case-based intuitions of moral substance).


For reasons that will become apparent, in the present paper we are mainly concerned
with this latter procedure of employing case-based intuitions as a basis for the inductive
inference of abstract normative principles.4

4 For convenience, we treat case-based intuitions and intuitions directed at abstract principles as if they were distinct kinds, although in fact they seem to lie on a continuum of abstractness. Case descriptions can be rather abstract, and abstract principles can be qualified so as to make allowances for certain situational particularities. Intuitions about a substantially qualified abstract principle may not be distinguishable from intuitions about a highly abstract case description.

Intuitionists who make use of this inductive procedure are committed to the assumption that at least a qualified subset of their case-based moral intuitions taps into
our Basic Moral Values. Each of these intuitions constitutes an accessible data point
that operationalizes another facet of these Values. Taken together and synthesized into
a principled whole, they are assumed to provide a valid accessible image of what it is
that morality actually requires from us. Unger (1996, 11) calls this methodology Preservationism and summarizes its essential epistemic assumptions as follows:
At least at first glance, our moral responses to particular cases appear to reflect accurately
our deepest moral commitments, or our Basic Moral Values, from which the intuitive reactions primarily derive [...]. So, on this view, it's only by treating all these various responses as valuable data that we'll learn much of the true nature of these Values [...] (Unger 1996, 11).

Within the Preservationist methodology, individual case-based intuitions are usually
evaluated in the context of other case-based intuitions and of plausible abstract principles that could potentially account for them. There may be some initial intuitions that
get adapted or even completely discarded after careful deliberation about other cases
and principles. However, it remains at the heart of Preservationism that at least some
important case-based intuitions are preserved throughout the process and partly determine the contents of the moral principles that constitute its output.
Unger (1996, 11 f.) contrasts Preservationism with Liberationism, an alternative view
on the value of case-based moral intuitions in the process of normative theory construction. Liberationists believe that case-based intuitions are often influenced by morally irrelevant factors. If all of these intuitions were taken at face value, we would arrive
at a distorted picture of our Basic Moral Values. In Unger's words:
On our contrasting Liberationist view, folks' intuitive moral responses to many specific
cases derive from sources far removed from our Values and, so, they fail to reflect the Values, often even pointing in the opposite direction. So, even as the Preservationist seeks
(almost) always to preserve the appearances promoted by these [case-based] responses, the
Liberationist seeks often to liberate us from such appearances (ibid.).

Such liberation becomes necessary when a case-based intuition conflicts with other
prescriptions that arise from independent sources of normativity which the Liberationist judges to be more trustworthy. For the present purpose, it suffices to characterize
Liberationism negatively in that it denies moral authority to our case-based moral intuitions. Which independent criteria should instead take their place as indicators of our
Basic Moral Values (e.g., rational theory construction; deduction of concrete moral
prescriptions from abstract prima facie principles; etc.) is a different matter that will
not be discussed here.
It thus seems that the different preferences of Preservationists and Liberationists
arise from different epistemic meta-intuitions about the psychological process generating our case-based moral intuitions. Preservationists believe that our case-based intuitions are mainly produced by a process that accurately reflects our Basic Moral Values.5
Therefore, they treat at least some case-based intuitions as indicative of what we have
the best reasons to believe to be morally required. If there is such a thing as misleading
influence from morally irrelevant sources that potentially bias our case-based intuitions
away from our Values, then the Preservationist is confident that she can at least prevent
these sources from affecting her particular intuitions. These epistemic assumptions of
Preservationism and the importance of case-based intuitions in Preservationist theory
construction are illustrated in Fig. 1a.
Liberationists, by contrast, assume that case-based judgments are often critically
causally affected by sources other than our Basic Moral Values, many of which might
be morally irrelevant and operate beyond conscious awareness. Under these assumptions, case-based intuitions no longer reflect our Values. We simply do not know
whether or not a given case-based intuition is a valid indicator of our deepest moral
commitments, and therefore we should refrain from using these appearances for the
induction of a moral theory. Instead, we should rely on some other (to be specified)
indicators that stand in closer relationship to our Basic Moral Values and that can be
assessed more reliably. The epistemic assumptions of Liberationism about the genesis
of case-based intuitions and their insignificance for Liberationist theory construction
are illustrated in Fig. 1b.

5 It is a crucial interdisciplinary question how the cognitive process by which philosophers arrive at their moral intuitions could be further specified. In psychology, the term intuition suggests that it should be some automatic, quick process that operates without conscious control (e.g., Glöckner and Witteman, 2010). This is usually not what intuitionist philosophers have in mind when they talk about moral intuitions, case-based or otherwise. Moral intuitionists usually claim that their intuitions have undergone a great deal of deliberative scrutiny and critical examination, and only those intuitions which survived these tests end up being used for normative theory construction, while many others are screened out, for example when it seems obvious that they result from the psychological influence of morally irrelevant factors (see contributions in Stratton-Lake 2002). Thus, even though the output of this psychological process is called an intuition by philosophers, this does not preclude the process from involving deliberative, critical thought (at least as an important yardstick against which the intuitive moral proposition has to measure up). However, it seems that every intuition that survives these critical tests still has in essence the quality of a spontaneous appearance (in the sense that its actual genesis does not involve a conscious inference from another normative belief), or it would not be called an intuition.


From this different a priori confidence in the validity of case-based intuitions, it follows that the contents of these intuitions have a much greater influence on the normative claims that Preservationists end up defending than on the claims defended by Liberationists. For Preservationists, the function of the abstract moral principle is mainly
to account for as many case-based intuitions as possible. The principle would look different but for the fact that certain (confirming) case-based intuitions have been obtained. For Liberationists, by contrast, other sources of normativity are exploited as
independent indicators of our Basic Moral Values, and so a moral principle can be held
on to even if certain (conflicting) case-based intuitions have been obtained. This crucial
difference makes the Preservationist but not the Liberationist methodology susceptible
to the sort of empirical challenge that we will develop in the next section.
Fig. 1: Illustration of the epistemic assumptions about the genesis of case-based moral
intuitions and the role of these intuitions in moral theory construction according to (a)
Preservationism and (b) Liberationism

3 Normative Implications of Empirical Data on Moral Intuitions

Having given a brief account of the use of substantial moral intuitions in normative
ethics and of the methodological distinction between Preservationists and Liberation-

ists, we now turn to the question what, if anything, an empirical investigation of laypeople's moral judgments can add to a normative debate. Many philosophers are skeptical about the value of such data. Kamm (2007, 5), for instance, makes this very explicit: "I say, consider your case-based judgments, rather than do a survey of everyone's
judgments. This is because I believe that much more is accomplished when one person
considers her judgments and then tries to analyze and justify their grounds than if we
do mere surveys." We do not doubt the merit of Kamm's approach, but we do not agree
that a proper experimental investigation has no additional value. For one, proper experiments are much more powerful tools than mere surveys. They offer a great
deal of precision and can discover subtleties of the moral judgment process that would
not be discernible without them. But in addition to their value for descriptive accounts
of moral judgment, we will argue that empirical data can give us good reason to reject
some normative principles that result from the Preservationist methodology outlined
above.
The epistemic problem of normative ethics can be conceptualized as an inquiry into
the properties of something that is not directly observable. The moral philosopher is
after the state of a latent construct: What is it that is morally required? What are the
contents of our Basic Moral Values? Depending on the philosopher's meta-ethical
commitments, this latent variable can take on different natures. But regardless of the
assumed nature of the target variable, it seems clear that this variable is latent: It cannot
itself be assessed with a straightforward empirical process. Whoever wants to find out
anything about its properties needs to employ some kind of epistemic tool, a principled
method of inquiry.
In the previous section, we have outlined the inductive procedure employed to this
end by Preservationists. Like any other epistemic tool, the Preservationist method
needs to fulfill a number of quality criteria in order to be considered as a legitimate
method of inquiry. Most importantly, the procedure needs to be objective, reliable, and
valid.6 An epistemic procedure is objective to the extent to which its outcome is independent of the investigator employing it. A procedure is reliable to the extent to which
it measures a latent construct with precision regardless of whether or not it actually
taps into the construct it is intended to measure. A procedure is valid to the extent to
which it measures the latent construct it is intended to measure.7 These three criteria
are not independent. The main aim, to have a valid instrument, can only be achieved to
the extent to which the procedure is reliable, and the procedure can only be reliable to
the extent to which it is objective. This implies that an epistemic procedure that yields
different results depending on who carries it out cannot provide a valid measure of any
latent construct, no matter what the target is.

6 For an accessible recent introduction to the principles of psychological testing as an example of the scientific measurement of latent constructs, see Kaplan and Saccuzzo (2013).
7 In the philosophical jargon, the concept of reliability is usually defined in broader terms, including the validity criterion. Thus, whereas in philosophy a reliable epistemic tool is by definition also a valid one (e.g., Sinnott-Armstrong 2008), in psychology a diagnostic test is reliable yet invalid whenever it taps reliably into an unintended construct.

The objectivity of an epistemic procedure thus indicates the upper possible limit of
this procedure's potential validity, for whatever purpose. To what extent a given epistemic procedure is objective, in turn, is a straightforward empirical question of correlating the outcomes produced by two independent assessors using this procedure to
assess the same target (this correlation is called inter-rater reliability in psychology).
This empirically derived statistic has normative implications: A low inter-rater reliability indicates that the procedure should not be used to make inferences about the latent
target variable in the present context. The whole point of epistemic tools is that they
ought to allow for objective conclusions, ones that are to the least possible extent dependent on the idiosyncratic perceptual and inferential processes of the particular person employing the tool. If this requirement is given up, we might just as well resort to
unaided subjective judgment, forgo the opportunity of obtaining epistemic standards
that impartial observers could agree on, and settle disagreement about the state of the
latent variable on other, probably less civilized grounds instead.
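As a minimal illustration of such an inter-rater check (not a description of any particular study's analysis), the following Python sketch correlates the case judgments of two hypothetical assessors; the ratings and the choice of Pearson and Spearman coefficients are assumptions made for the example.

from scipy.stats import pearsonr, spearmanr

# Hypothetical data: two assessors independently rate, for the same ten
# cases, how strongly the agent is obligated to help (1-6 scale).
assessor_a = [6, 5, 2, 4, 6, 1, 3, 5, 2, 4]
assessor_b = [6, 4, 2, 5, 6, 2, 3, 5, 1, 4]

r, p_r = pearsonr(assessor_a, assessor_b)       # linear agreement
rho, p_rho = spearmanr(assessor_a, assessor_b)  # rank-order agreement
print(f"Pearson r = {r:.2f} (p = {p_r:.3f}); Spearman rho = {rho:.2f} (p = {p_rho:.3f})")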
The crucial question is under what circumstances the Preservationist method can be
expected to be objective in this sense. We argue that this can only be expected when at
least the most central case-based intuitions underlying a given Preservationist principle
are intersubjectively shared to a relevant degree. The inductive nature of the Preservationist method implies that the content of the resulting normative principle essentially
depends on the contents of at least some central case-based intuitions. The resulting
principle owes its content to the fact that these case-based intuitions have been obtained by its proponent. Another person having different intuitions toward the same
cases can reasonably be expected to arrive at a different principle if she were to use the
same methodology. This implies that the Preservationist method can only be an objective (and thus potentially valid) epistemic tool if it starts from case-based intuitions that
are intersubjectively shared to a relevant degree. If this requirement is not met, the
Preservationist method will predictably lead to inconsistent conclusions if conducted
by different individuals, even if the method is applied according to exactly the same
rules.
Whether or not a case-based intuition is intersubjectively shared cannot be determined from the armchair. Introspectively, a person can only assess which intuition she
has in response to a given case. But even if she experiences this intuition very clearly
and strongly, there is no guarantee that it is shared by relevant others. It seems she has
to ask them about the intuitions they experience in response to the same cases if she
wants to make sure that her case-based intuitions are intersubjectively shared to a relevant degree.


Psychological studies on moral intuitions of laypeople, if properly conducted, can
provide relevant data in this regard. The paradigmatic empirical study in moral psychology has a similar structure to the case-based thought experiment technique often
used in moral philosophy. At least two parallelized versions of a case description are
presented to the experimental subjects. These versions are identical except for the variation of the target factor, the intuitive moral relevance of which is to be assessed. The
main difference from philosophical thought experiments is the dependent variable: Reliance on introspective intuitions of a single individual is substituted by measurement
and statistical comparison of the intuitions expressed independently by many individuals. In this way, estimates can be made about the prevalence, strength, and robustness
of a case-based moral intuition among representatives of the relevant moral community. If stimulus scenarios from the philosophical literature are employed, such data can
be seen as a test for intersubjectivity of the case-based intuitions published by philosophers, which usually rest solely on subjective introspection.
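A bare-bones sketch of such a statistical comparison is given below; the two groups of ratings and the choice of a Mann-Whitney U test are assumptions made for illustration, not a reconstruction of any specific published analysis.

import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical between-subjects data: obligation ratings (1-6) from two
# groups, each of which read one of two parallel case versions that differ
# only in the target factor (e.g., near vs. far).
ratings_version_a = np.array([6, 5, 6, 5, 4, 6, 5, 6, 5, 5])
ratings_version_b = np.array([5, 5, 6, 4, 5, 6, 5, 5, 6, 4])

u, p = mannwhitneyu(ratings_version_a, ratings_version_b, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")
# A reliable difference between the versions would suggest that the
# manipulated factor influences laypeople's case-based intuitions.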
If such tests for intersubjective agreement are conducted for cases that play a central
role in the construction of a Preservationist normative principle, we argue that the
results of these tests can have normative implications for the evaluation of this principle. If the case-based intuitions are shared by most people, this indicates that the
Preservationist method can be applied objectively in the present context. Independent
assessors will start their inductions from largely the same raw material of case-based
intuitions. This empirical result would indicate that a necessary condition for an objective application of the method is satisfied, and thus that the method can potentially be
valid in the present context. As the percentage of subjects sharing the case-based intuition gets smaller, the degree to which the method is objective in the present context
also decreases. If people are divided over the question from which raw intuitions to set
out on a Preservationist endeavor, it becomes likely that even consistent application of the
method will lead to the induction of contradictory conclusions, depending on who
conducts the process. The result would be a moral stalemate: The method itself does
not provide any criteria to settle the conflict as neither of the parties has made an error
in applying it. In this situation, neither of the contradicting principles can claim to be a
valid representation of what morality actually requires as neither results from an objective method of inquiry.
To sum up the argument: The Preservationist method of moral theory construction
(like any other epistemic tool) can only be a valid instrument for assessing what is morally required to the extent to which it is objective. It is only objective to the degree to
which independent assessors arrive at consistent conclusions if they apply the method
correctly to the same target. The contents of the principles generated by the Preservationist method depend essentially on the contents of at least some central case-based
intuitions. Two independent assessors starting from contradicting intuitions towards
these central cases can be reasonably expected to arrive at contradicting principles,


even if they consistently apply the same Preservationist method. The Preservationist
method can therefore only be expected to be objective (and thus potentially valid) if at
least the central case-based intuitions underlying a given normative claim are intersubjectively shared to a relevant degree. Whether this is the case is an empirical question. It
follows that empirical data on the contents of people's case-based moral intuitions can
undermine Preservationist normative claims by demonstrating that the method that
gave rise to them lacks objectivity (and, by implication, validity) in the present context.

4 An Example: The Moral Relevance of Spatial Distance

In this section, the argument will be illustrated with a concrete example of how empirical data may help shed light on a normative controversy. The controversy is about
whether or not we are justified in feeling more obligated to help needy strangers who
are near us rather than far from us. While Unger (1996) argues that spatial distance is
irrelevant for our moral obligations, Kamm (2007) claims that there may be circumstances in which spatial proximity per se can increase our helping obligations. This
example is well suited because both authors report mutually contradictory substantial
moral intuitions (see Sect. 4.1), both authors put different emphasis on case-based intuitions (see Sect. 4.2), and there is a substantial amount of empirical data on laypeople's
case-based intuitions about the issue (see Sect. 4.3). We will show how these data bear
on the normative controversy according to our main argument (see Sect. 4.4).

4.1 Contradicting Case-based Moral Intuitions

Both Unger (1996) and Kamm (2007) have reported case-based intuitions in line with
their respective claims. Unger denies that spatial distance plays a role in his case-based
intuitions about the extent to which an agent is obligated to help a needy stranger. He
constructs several equalized sets of cases differing only in the spatial distance between
agent and victim. In his Sedan case (Unger 1996, 24 f.), for instance, an agent is driving
on a deserted country road when he sees a person at the roadside suffering from a severely injured leg. He could take this person to the next hospital, saving his leg, but this
would result in costly damage to the valuable leather seating of his vintage sedan. Intuitively, the agent is strongly obligated to incur these costs in order to help the suffering
stranger. Unger contrasts this case with a far version called CB Radios (ibid., 34 f.).
Instead of driving right past the victim, the agent is driving ten miles away. The victim
contacts him via CB radio about his bad condition and his location so that the agent
could easily drive over to him and take him to hospital. The rest is identical to the Sedan
case. Unger's (1996) case-based intuitions towards these two cases are identical: It would


be equally outrageous for the agent not to help, regardless of whether he was near or far.
His case-based intuition thus tells him that spatial distance per se is morally irrelevant.
Kamm (2007) has different case-based intuitions. She does not contest Unger's (1996)
judgments on his cases, but she points out that the absence of distance effects in some
contexts does not imply their absence in all contexts. She constructs another thought
experiment in which, she argues, spatial distance per se makes an intuitive difference.
Her case descriptions are as follows (see also Fig. 2):
Near Alone Case. I am walking past a pond in a foreign country that I am visiting. I
alone see many children drowning in it, and I alone can save one of them. To save the
one, I must put the $ 500 I have in my pocket into a machine that then triggers (via
electric current) rescue machinery that will certainly scoop him out.
Far Alone Case. I alone know that in a distant part of a foreign country that I am visiting, many children are drowning, and I alone can save one of them. To save the one, I
must put the $ 500 I have in my pocket into a machine that then triggers (via electric
current) rescue machinery that will certainly scoop him out (Kamm 2007, 348).
Fig. 2: Illustration of Kamm's (2007) Near Alone Case (drawing by JN)

Kamm (2007) judges the agent in Near Alone to be intuitively more strongly obligated
to help than the agent in Far Alone. As spatial distance is (almost) the only factor that
differs between these two cases, she concludes that spatial distance per se matters in her
case-based moral intuitions.

4.2 Different Importance of Case-based Intuitions in the Moral Arguments

What roles do these diverging case-based intuitions play in the moral arguments advanced by Unger and Kamm? As can be seen from the quotations in Sect. 2, Unger
(1996) sees himself in the Liberationist camp. When conflicts arise between moral
judgments at different levels of abstractness, he would tend to dismiss a case-based
judgment as flawed rather than to adjust a moral principle that suggests itself from the
viewpoint of "our general moral common sense" (ibid., 28). Accordingly, he advances a
general statement about the moral irrelevance of distance, saying that "unlike many
physical forces, the strength of a moral force doesn't diminish with distance. Surely, our
moral common sense tells us that much" (ibid., 33). He does not argue further for this
principle, letting it rest solely on its intuitive appeal. This demonstrates how for him a
strong intuition about the validity of a relatively abstract moral principle constitutes a
source of normativity that is independent of case-based considerations. He then
goes on to construct case-based thought experiments as well (see above), and he also
sees it as a good sign for his principle that it is in line with his case-based intuitions.
However, the case-based intuitions are not critical for the content of the normative
claim he ends up defending.
Kamm (2007), by contrast, places heavy emphasis on her case-based moral judgments. From the way she describes her method of reaching her normative claims, it
becomes clear that she can be classified as Preservationist:
In general, the approach to deriving moral principles that I adopt may be described as follows: Consider as many case-based judgments of yours as prove necessary. Do not ignore
some case-based judgments, assuming they are errors, just because they conflict with simple or intuitively plausible principles that account for some subset of your case-based
judgments. Work on the assumption that a different principle can account for all of the
judgments (ibid., 5).

Kamm believes that her case-based intuitions generally are to be trusted, and therefore
they should trump contradicting prescriptions from (oversimplified) abstract moral
principles in cases of conflict.8 Accordingly, Kamm starts out from case-based intuitions like the ones presented above and induces from these data points a complex abstract principle that can account for her case-based intuitions that distance matters
morally. The principle she ends up proposing (a non-consequentialist principle involving agent-centered prerogatives) thus depends heavily on the contents of her case8

Kamm (2007) points out that it is necessary to additionally analyze this new principle independently of the cases
from which it was derived. This is important in order to justify it as a correct principle, one that has normative
weight, not merely one that makes all of the case judgments cohere (ibid., 5). However, this clearly is a secondary
step in her method. First and foremost, the goal is to arrive at a (relatively complex) moral principle that can
account for as many case-based judgments as possible.

198

Jonas Nagel / Alex Wiegmann

based intuitions. If she had not had the intuition that the near agent in her cases was
more obligated to help than the far agent, she would not have been motivated to construct an abstract principle that can account for this difference.

4.3  Empirical Evidence on Laypeople's Case-based Intuitions

Nagel and Waldmann (2013) have assessed case-based intuitions of hundreds of laypeople on this issue. In a central experiment, they turned Kamm's (2007) Near
Alone/Far Alone thought experiment into a psychological experiment in order to find
out whether laypeople share the intuition that distance per se matters morally. They
presented 849 subjects with one of four different written case vignettes and asked them
to indicate on a 6-point rating scale the degree to which they felt the scenario's agent
was obligated to help ("not at all" [1] to "very strongly" [6]).
In designing their case descriptions, they took into account the fact that, as Kamm (2007)
has already noted, between Near Alone and Far Alone spatial distance is confounded
with directness of visual access, which is likely to affect the salience of the victims' need
to the agent. There is direct visual contact when the agent is near, but not when she is
far. Therefore, any difference in intuitive moral obligation ratings that might be found
between both cases may be caused by differences in salience of need rather than distance per se. Nagel and Waldmann (2013) deconfounded both factors in their experiment. They implemented two background levels of visual access (direct vs. mediated) at
neither of which variations of spatial distance between agent and victim (near vs. far)
affected salience of need. In both direct conditions, the agent saw the victims with her
own eyes. While this was unproblematic when the agent was near, the far agent was
equipped with binoculars that allowed direct visual contact despite large spatial distance. In the mediated conditions, by contrast, there was only mediated visual contact
with the victims. Both agents received the information about the suffering victims via a
video message on their cell phones. While the need for such a mediating informational
mechanism was obvious when the agent was far, in the near condition a high wall between the agent and the victims was mentioned that prevented direct visual access despite spatial proximity.
It turned out that distance affected moral judgments neither under conditions
of direct visual access nor under conditions of mediated visual access. Nagel and
Waldmann (2013) concluded that distance per se does not matter to laypeople between
Near Alone and Far Alone. At the same time, they have found that subjects in the conditions with direct visual access judged the agent to be more obligated to help than
subjects in the conditions with mediated visual access, no matter where the agent was
located. For Nagel and Waldmann (2013), this indicates that intuitive differences between Near Alone and Far Alone can be attributed to differences in the directness of visual access (which probably affects the salience of the victims' need to the agent). If
salience is kept constant at high or low levels, no systematic variability in moral judgment is left to be explained by variations in distance per se.
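To make the logic of this deconfounded 2 × 2 design concrete, the following Python sketch simulates purely hypothetical ratings (the cell means, sample size and seed are invented for illustration and are not Nagel and Waldmann's data) and computes the cell and marginal means that the described pattern of results would produce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 6-point obligation ratings for a 2 x 2 between-subjects design:
# factors are distance (near/far) and visual access (direct/mediated).
# Cell means are invented to mirror the qualitative pattern reported in the text:
# access matters, distance does not.
true_means = {
    ("near", "direct"): 5.0, ("far", "direct"): 5.0,
    ("near", "mediated"): 4.2, ("far", "mediated"): 4.2,
}

n_per_cell = 200
data = {cell: np.clip(np.round(rng.normal(mu, 1.0, n_per_cell)), 1, 6)
        for cell, mu in true_means.items()}

for (distance, access), ratings in data.items():
    print(f"{distance:>4} / {access:<8} mean obligation rating: {ratings.mean():.2f}")

# Marginal comparison: effect of distance vs. effect of visual access
near = np.concatenate([data[("near", "direct")], data[("near", "mediated")]])
far = np.concatenate([data[("far", "direct")], data[("far", "mediated")]])
direct = np.concatenate([data[("near", "direct")], data[("far", "direct")]])
mediated = np.concatenate([data[("near", "mediated")], data[("far", "mediated")]])

print(f"distance effect (near - far):      {near.mean() - far.mean():+.2f}")
print(f"access effect (direct - mediated): {direct.mean() - mediated.mean():+.2f}")
```

In such invented data the access contrast is sizeable while the distance contrast hovers around zero, mirroring the verbal summary above.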

4.4  What Does the Evidence Imply for the Normative Debate?

Assuming that this data demonstrates positively that laypeople treat spatial distance as
morally irrelevant in their case-based intuitions about Near Alone and Far Alone9:
What, if anything, follows from this descriptive conclusion for the normative debate
between Unger (1996) and Kamm (2007)? First, it can be concluded that laypeople's
case-based intuitions are more in line with those of Unger than with those of Kamm.
Taken by itself, this descriptive conclusion does not seem to have any normative implications. After all, from the fact that most people believe a certain moral proposition to
be justified it does not follow that it is actually justified. This conclusion may indicate
the majority opinion on a normative matter, but as such, nothing of normative relevance seems to follow from it.
According to the present argument, this changes when we take the different methodologies employed by Unger (1996) and Kamm (2007) into account. Being a Liberationist, Unger can be rather indifferent to evidence on the intersubjectivity of case-based intuitions, as his normative claim does not strongly depend on such intuitions in
the first place. By contrast, Kamm's Preservationist normative claim, tailored to account for a set of case-based intuitions, seems to be affected by such evidence as it seriously calls into question the objectivity (and, hence, validity) of her method in the present context. The evidence indicates that hardly any of the 849 subjects in the Nagel
and Waldmann (2013) study described above share the experience of having the case-based intuition that distance matters morally; an intuition that plays a central formative
role in Kamm's generation of a non-consequentialist principle accounting for intuitive
distance effects in morality. Accordingly, most of these laypeople can reasonably be
expected to arrive at a different normative principle if they were to carry out Kamm's
Preservationist method, even if they received intensive philosophical training in order
to apply this method properly. The method cannot be objectively applied in this context.10 Therefore, Kamm's non-consequentialist principle explaining when and why we are justified in treating near and far victims differently can be criticized for resulting from an invalid procedure. The principle may or may not be an accurate account of what morality actually requires, but the Preservationist method cannot be used to support it. As long as the principle is not independently justified, there is no reason to accept it.

9 This claim is controversial because the inference from absence of evidence for effects to actual absence of effects is inherently inductive. There is always a range of alternative explanations for null effects, including lack of sensitivity to detect small true effects and the choice of inadequate boundary conditions in the scenario description. Nagel and Waldmann (2013) have taken several measures to enhance confidence in substantial interpretations of observed null effects, including high statistical power to detect small true effects and keeping the experimental materials as close as possible to the case vignettes published in the philosophical literature.
10 Of course, it could be argued that we have a widely shared intuition here (the intuition that distance does not matter morally) and hence that the method can be applied objectively (just probably leading to a different principle than the one proposed by Kamm).
In the present paper, we have so far argued that empirical data on the contents of
laypeople's case-based moral intuitions can undermine the persuasiveness of Preservationist arguments in normative ethics, and we have illustrated this with a concrete example from the literature. We will now turn to the discussion of some strategies a
Preservationist might employ to justify holding on to his or her normative claims even
if the underlying case-based intuition has been shown to lack intersubjectivity (see Sect.
5). Afterwards, we will put the present argument into the context of related critiques of
intuitionist methods (see Sect. 6). Finally, we will close by discussing some potential
extensions of the argument (see Sect. 7).

5  Potential Escape Routes for Preservationists

One way to defend a Preservationist principle resting on case-based intuitions that are
not intersubjectively shared would be to find different normative reasons in support of
this principle (e.g., rational reconstruction). As noted in the introduction, the present
claim is not that principles generated from unshared case-based intuitions are necessarily false. It is just that they cannot be effectively defended with the Preservationist
method.
Another way could be to discount the empirical data by arguing that they were collected under conditions that lead to errors. For example, philosophers case-based intuitions are generated from highly deliberative, reflective reasoning processes in which
several cases are simultaneously considered. By contrast, many of the lay judgments
observed in empirical studies (such as the ones described in Sect. 4) are elicited as responses to single cases that are judged spontaneously in isolation (psychologists call
this a between-subjects design). It could be argued that judgments that are generated under these epistemic conditions are prone to errors and do not reflect our deepest moral
commitments which might only be discernible under careful deliberative scrutiny of
many relevant case-based intuitions in parallel. In this way, all of the judgments that
psychologists collect with between-subjects designs could be dismissed because they do
not tap into what people actually believe to be morally required. To counter this potential objection, psychologists could run their experiments in within-subject designs if
their main goal is to relate their empirical data to a normative debate (though this procedure may produce order effects and have further disadvantages for other research questions in moral psychology; see Nagel and Waldmann 2013). In a within-subject design, all participants receive all the to-be-compared cases in a counterbalanced order.
This procedure thus generates quasi-philosophical epistemic conditions in which subjects concurrently evaluate sets of equalized cases and have the possibility to explicitly
judge them identically or differently. The resulting intuitions would thus be generated
with a procedure to which the specific objection of inadequate epistemic boundary
conditions would no longer apply.11
If evidence obtained under such conditions still indicates that the case-based intuition in question is not intersubjectively shared to a noteworthy degree, the Preservationist could still argue that other specific aspects of the experimental conditions were
inadequate (e.g., the ways in which the questions were formulated, the use of rating
scales, etc.). However, such critiques do not seem self-evident and would have to be
worked out and justified thoroughly before empirical data could be dismissed as
flawed.12
Yet another way to dismiss empirical data would be to claim that laypeople's judgments should generally not be considered because they are relevantly different from the
judgments generated by professional moral philosophers. This move is known as the
"expertise defense" (see Weinberg et al. 2010). It basically states that professional moral
philosophers, in virtue of their education and their practice, have acquired special skills
that allow them to reach intuitions that reliably reflect aspects of what morality actually
requires. Laypeople usually lack these special skills, making their moral judgments
more prone to errors. This rather elitist position is highly controversial. Weinberg et al.
(ibid.) have pointed out that it is unclear what exactly the alleged expertise should consist in. The conditions under which philosophers generate intuitions do not seem to
allow the development of the kind of expertise observed and well understood in more
paradigmatic cases (such as in chess or in medicine). Furthermore, some recent studies
have shown that laypeople and professional philosophers both generate inconsistencies
in their case-based moral judgments (e.g., Schwitzgebel and Cushman 2012; Tobia et al.
2013). Without empirical evidence to the contrary, it should thus not generally be assumed that philosophers' intuitions are superior to laypeople's intuitions (Weinberg et
al. 2010). For these reasons, we believe that it needs to be argued in every single instance in what way the philosopher's intuition should be regarded as superior to a conflicting folk intuition. A general a priori dismissal of data on laypeople's moral judgments would be too easy a way out for Preservationist philosophers whose case-based intuitions are not intersubjectively shared.

11 For the case of spatial distance described in Sect. 4, there is some evidence that a change of the experimental design from between-subjects to within-subject comparisons does not affect the results (Nagel and Waldmann 2013, Experiment 4b). Spatial distance is seen as morally irrelevant in both procedures.
12 Any such rebuttal will have to include a positive account of the kind of intuition that is to be trusted. Given a conception of intuitions as psychological phenomena, it seems highly likely that empirical data on the nature of these phenomena and the processes by which they are generated will also play a crucial role in any such attempt.
Finally, it needs to be stressed that establishing matters of empirical fact is a nontrivial task, especially in abstract domains like cognitive science. Whether the subject of
investigation is moral judgment or any other cognitive task, interpretations of empirical data are usually subject to vigorous debate even within descriptive disciplines. Much
writing on the role of empirical data in moral philosophy (including the present article)
has a somewhat uncritical positivist appeal to it, as if the relevant empirical facts could
be doubtlessly ascertained by plain observation. This is clearly not the case. There are
always many free parameters in the choice of methodology and boundary conditions.
The hypothesis under study often does not mandate the choice of a specific methodological option over another; yet, this choice might crucially shape the outcome of the
study. Furthermore, many normative considerations enter into the interpretative process which turns empirical raw data (e.g., the proportion of marks on a certain location
of a rating scale) into so-called empirical facts (e.g., the proposition that distance does
not matter morally to laypeople). How strongly does a given intuition have to be expressed by an individual so that one might say this individual shares the intuition?
Which proportion of the subjects has to share the intuition so that one might say it is
shared to a relevant degree? How is the relevant moral community defined, and how
can one make sure that the conclusions of the empirical study generalize to this population? If two different empirical methodologies that seem a priori equally adequate for
the task lead to contradictory results, which of these facts is relevant for what purposes? These are all crucial questions which are at the core of any empirical investigation
in moral psychology. Glossing them over in the present context is not to say that these
matters are trivial or unimportant. But it seems they are not more or less important in
the present context than in any other domain investigated by cognitive psychologists. A
serious treatment of this issue is therefore beyond the scope of the present paper. In any
case, again it seems to be up to the philosopher to provide a convincing argument
against every concrete empirical challenge. These challenges cannot be waved aside a
priori without further ado.
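The following sketch illustrates, with invented ratings, how two of these normative parameters, namely the individual threshold for counting a subject as sharing an intuition and the group proportion required to call it shared, can change the resulting "empirical fact" without any change in the raw data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 6-point ratings of how strongly an intuition is felt (invented data).
ratings = rng.integers(1, 7, size=500)

# The same raw data yield different "empirical facts" depending on two normative
# parameters: the individual threshold for counting someone as sharing the
# intuition, and the group proportion required to call it "shared".
for individual_threshold in (4, 5, 6):
    share = np.mean(ratings >= individual_threshold)
    for group_criterion in (0.5, 0.75):
        verdict = "shared" if share >= group_criterion else "not shared"
        print(f"threshold >= {individual_threshold}, criterion {group_criterion:.0%}: "
              f"{share:.0%} of subjects qualify -> intuition {verdict}")
```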
Despite all these deep problems inherent in any empirical investigation of how people reason with and about normative concepts, we are nevertheless convinced that empirical data can pose serious challenges to certain intuitionist moral arguments. We are of
course not the first to raise this more general point. In the following section, we will
relate our present argument to a previous attempt to attack moral intuitionism on empirical grounds.

6  Related Empiricist Arguments Against Intuitionism

The crucial intuitionist assumption that at least some moral intuitions provide a valid
glimpse of what morality actually requires is a main gateway for critics of moral intuitionism. Sinnott-Armstrong (2008), for example, cites empirical data to demonstrate
that case-based intuitions can be causally affected by morally irrelevant factors, and he
argues that such observations undermine the validity of case-based intuitions as normative premises in moral arguments. For example, many psychological studies have
shown that case-based moral judgments can be subject to framing effects: They can be
crucially affected by the order in which the compared cases are presented (e.g., Liao et
al. 2012; Petrinovich and O'Neill 1996; Wiegmann et al. 2012; Wiegmann and Waldmann 2014) or by different verbal framing of one and the same moral situation (e.g.,
Tversky and Kahneman 1981; Kern and Chugh 2009; Petrinovich and O'Neill 1996).
According to Sinnott-Armstrong (2008), such empirical findings raise doubts as to
whether our case-based intuitions are valid indicators of what morality actually requires (see also Norcross 2008). Showing case-based intuitions to be systematically
different across relevantly similar judgment contexts compromises the trustworthiness
of these intuitions as epistemic tools.
Like us, Sinnott-Armstrong (2008) invokes empirical data to attack the intuitionist
method. Of course it is not the data per se that imply the normative conclusion (no
"ought" from an "is"), but a normative assumption about the irrelevance of the factors
shown to influence moral intuitions. His argument is that judgments that are subject to
framing effects cannot be truth-tracking because the truth of a proposition cannot be
affected by the wording used to describe it or by the context in which it is presented. As
even very central intuitions have been empirically shown to be affected by such framing
variables (e.g., the bystander trolley case, Wiegmann and Waldmann 2014, or the loop
case, Liao et al. 2012), even these famous intuitions cannot be indicative of moral
truths.
Our argument goes beyond Sinnott-Armstrong's (2008) claims in that it states that
intuitions can be a shaky ground for Preservationist principles even if they are not subject to framing effects. Even if intuitions can be shown to be stable across relevantly
similar judgment contexts and not easily affected by irrelevant factors, it is possible that
people persistently disagree about the contents of these intuitions. According to our
argument, such stable disagreement would be just as problematic for the Preservationist method as instability of intuitions across similar contexts. The normative commitment that imbues intersubjectivity data with normative relevance is just the minimal
requirement for epistemic tools to be objective. This normative requirement is even
more parsimonious than Sinnott-Armstrong's requirement for moral intuitions to be
consistent across different framings of the same situation.


7  Summary and Potential Extensions of the Present Argument

In the present paper, we have argued that empirical data on the contents of laypeople's
moral intuitions can undermine the justification of some normative principles proposed by moral philosophers. This is the case when the epistemic procedure by which
the philosopher generated and justified the principle involves as a crucial step an inductive inference from a set of case-based intuitions, a methodology that Unger (1996)
called Preservationism because at least some central case-based intuitions are held on
to throughout the process of normative theory construction. These intuitions therefore
partly determine the contents of the resulting principle. Basing the inductive inference
on diverging initial intuitions would lead to a different principle. We argued that this
implication makes the Preservationist method susceptible to empirical challenge. Data
demonstrating lack of intersubjectivity of central case-based intuitions strongly indicates that the procedure cannot be applied objectively in the respective context. By
definition, a procedure that is not objective cannot be valid, for whatever purpose.
Therefore, moral principles induced from widely unshared case-based intuitions in a
Preservationist fashion cannot be argued to provide a valid account of what morality
actually requires.
With our argument, we have shown how individual moral principles that actually
have been generated and defended by means of the Preservationist method, like the
exemplary claim that distance matters morally discussed in Sect. 4, can be challenged
with reference to empirical data. However, our argument already points to circumstances in which empirical data can cast doubt on the Preservationist method of finding
a moral principle or moral theory in general. Let us call those case-based intuitions that
can reasonably be expected to play a crucial role in the formation process of any moral
theory "basic moral intuitions" (e.g., intuitions about whether it is sometimes permissible
to sacrifice one life for the sake of saving more lives). Now, if empirical studies showed
that there are no basic moral intuitions that are shared by a relevant number of people,
it would be reasonable to assume that different moral principles or moral theories will
be proposed by people with different basic moral intuitions, indicating that the Preservationist method cannot be applied objectively in this broader context, namely the
general endeavor of finding a moral principle or theory.
Finally, the present analysis is limited to the role of morally substantial case-based
intuitions. In principle, the argument could also be applied to other intuition-based
sources of normativity in moral arguments. For example, it would be possible to make
a psychological investigation about the question whether laypeople share philosophers'
abstract moral intuitions about the validity of certain prima facie principles. If it turned
out, for instance, that the abstract principle "It is morally wrong to break a promise" is
not found to be intuitively compelling to most people in the relevant moral community, this would constitute a problem for a normative theory built on this abstract moral
intuition. However, critiques of normative theories operating with prima facie principles do not usually contest the intuitive appeal of these principles. The whole point of
prima facie principles seems to be their almost irresistible prima facie plausibility, and
therefore they can be expected to be rather unanimously accepted when probed in isolation. Problems for these accounts usually do not arise before several prima facie principles clash in a particular situation. If a philosopher resolves such conflicts with reference to an intuition as to which principle has priority in this particular case (which is
not uncommon; see the critical discussion of intuitionism in Rawls 1971), this case-based assessment of relative priority will almost certainly be much more controversial
than the intuitions about the plausibility of the isolated principles. It thus seems that it
is at the level of relatively concrete case-based intuitions that moral psychology can
make its most interesting contributions to normative ethics.

References

Baron, J. (1998). Judgment misguided: Intuition and error in public decision making. New York: Oxford University Press.
Bentham, J. (1907). An introduction to the principles of morals and legislation. Oxford: Clarendon Press. (Original work published 1789)
Cushman, F. (2013). Action, outcome, and value: A dual-system framework for morality. Personality and Social Psychology Review 17, 273–292. doi: 10.1177/1088868313495594
Elqayam, S., & Evans, J. S. B. T. (2011). Subtracting "ought" from "is": Descriptivism versus normativism in the study of human thinking. Behavioral and Brain Sciences 34, 233–248. doi: 10.1017/S0140525X1100001X
Glöckner, A., & Witteman, C. (2010). Beyond dual-process models: A categorisation of processes underlying intuitive judgement and decision making. Thinking and Reasoning 16, 1–25. doi: 10.1080/13546780903395748
Greene, J. D. (2013). Moral tribes: Emotion, reason, and the gap between us and them. New York: Penguin Press.
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York: Pantheon Books.
Haidt, J., & Kesebir, S. (2010). Morality. In S. Fiske, D. Gilbert, & G. Lindzey (Eds.), Handbook of Social Psychology, 5th Edition (pp. 797–832). Hoboken, NJ: Wiley.
Hare, R. M. (1981). Moral thinking: Its levels, method, and point. Oxford: Clarendon Press.
Hume, D. (1969). A treatise of human nature. London: Penguin. (Original work published 1739–1740)
Kamm, F. M. (2007). Intricate ethics. Oxford: Oxford University Press.
Kaplan, R. M., & Saccuzzo, D. P. (2013). Psychological testing: Principles, applications, and issues. Belmont, CA: Wadsworth.
Kern, M. C., & Chugh, D. (2009). Bounded ethicality: The perils of loss framing. Psychological Science 20, 378–384. doi: 10.1111/j.1467-9280.2009.02296.x
Liao, S. M., Wiegmann, A., Alexander, J., & Vong, G. (2012). Putting the trolley in order: Experimental philosophy and the loop case. Philosophical Psychology 25, 661–671. doi: 10.1080/09515089.2011.627536
Mikhail, J. (2011). Elements of moral cognition: Rawls' linguistic analogy and the cognitive science of moral and legal judgment. New York: Cambridge University Press.
Nagel, J., & Waldmann, M. R. (2013). Deconfounding distance effects in judgments of moral obligation. Journal of Experimental Psychology: Learning, Memory, & Cognition 39, 237–252. doi: 10.1037/a0028641
Norcross, A. (2008). Off her trolley? Frances Kamm and the metaphysics of morality. Utilitas 20, 65–80. doi: 10.1017/S0953820807002919
Petrinovich, L., & O'Neill, P. (1996). Influence of wording and framing effects on moral intuitions. Ethology and Sociobiology 17, 145–171. doi: 10.1016/0162-3095(96)00041-6
Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.
Schwitzgebel, E., & Cushman, F. (2012). Expertise in moral reasoning? Order effects on moral judgment in professional philosophers and non-philosophers. Mind & Language 27, 135–153. doi: 10.1111/j.1468-0017.2012.01438.x
Singer, P. (1973). The triviality of the debate over "is-ought" and the definition of "moral". American Philosophical Quarterly 10, 51–56. http://www.jstor.org/stable/20009474
Sinnott-Armstrong, W. (2008). Framing moral intuitions. In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 2. The cognitive science of morality (pp. 47–76). Cambridge, MA: MIT Press.
Stratton-Lake, P. (Ed.). (2002). Ethical intuitionism: Re-evaluations. New York: Oxford University Press.
Tobia, K., Buckwalter, W., & Stich, S. (2013). Moral intuitions: Are philosophers experts? Philosophical Psychology 26, 629–638. doi: 10.1080/09515089.2012.696327
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science 211, 453–458. doi: 10.1126/science.7455683
Unger, P. (1996). Living high and letting die: Our illusion of innocence. New York: Oxford University Press.
Waldmann, M. R., Nagel, J., & Wiegmann, A. (2012). Moral Judgment. In K. J. Holyoak, & R. G. Morrison (Eds.), The Oxford Handbook of Thinking and Reasoning (pp. 364–389). New York: Oxford University Press.
Weinberg, J. M., Gonnerman, C., Buckner, C., & Alexander, J. (2010). Are philosophers expert intuiters? Philosophical Psychology 23, 331–355. doi: 10.1080/09515089.2010.490944
Wiegmann, A., Okan, Y., & Nagel, J. (2012). Order effects in moral judgment. Philosophical Psychology 25, 813–836. doi: 10.1080/09515089.2011.631995
Wiegmann, A., & Waldmann, M. R. (2014). Transfer effects between moral dilemmas: A causal model theory. Cognition 131, 28–43. doi: 10.1016/j.cognition.2013.12.004

Can Biological Approaches Explain (Im)Moral Behavior?


Problems and Potentials of Studies Focused on a Genetic
Predisposition of Human Behavior
Stefan Walter

Abstract
Sociologists traditionally assumed that biological factors can be excluded in the pursuit of explanations for moral behavior. In recent years, however, a number of empirical studies have pointed to the influence of biological factors on the capacity for morality. In particular, an often-cited study by Avshalom Caspi et al. (2002a) reports a statistically proven influence of a genetic predisposition on antisocial behavior. This raises the question to what extent moral capacity is genetically pre-determined. This paper discusses some methodological problems occurring in studies in which a genetic predisposition towards moral or immoral behavior is suggested. It is argued that especially the insufficient use of further independent variables gives the impression of compelling evidence that biological factors explain human behavior in general and moral behavior in particular. Referring to these methodological problems and to new molecular genetic knowledge, it is further argued that the research potential of biological concepts could lie less in efforts to identify specific genes associated with a specific behavior than in helping to underpin theoretical cognitive models.

1  To What Extent Is Moral Capacity Genetically Pre-Determined?

In 2009, an Italian court decided in appeal proceedings to reduce a convicted murderer's sentence because of his special genetic predisposition (see Feresin 2009).1

Stefan Walter
Carl von Ossietzky University of Oldenburg
Department of Educational Sciences
stefan.walter@uni-oldenburg.de


The court referred to a behavioral genetic study suggesting an unfavorable impact of a


gene variant on the man's actions. More precisely, the man carried a specific variant of
the MAOA gene suspected of increasing the likelihood of aggressive, antisocial and
criminal behavior under certain environmental conditions. The study upon which the
Italian judges' arguments were principally based was carried out by Avshalom Caspi et
al. and drew global attention when it was published in 2002. Since then it can often be
found in psychological textbooks as a reference example of how a biological factor, in interaction with environmental factors, can affect human behavior (Asendorpf 2007, 122f.;
Asendorpf 2008, 63). In this respect, the study by Caspi et al. is of particular relevance
for the empirical research into human morality because it obviously subjects the general ability to behave morally to the conditions of human genetic makeup. If moral
capacity were indeed to a certain degree genetically pre-determined, this would have
far-reaching consequences, not only in terms of penal law. Pointing to the need to minimize the risk to society, someone might make the political demand to put people who
have an unfavorable gene variant, like the abovementioned murderer, under surveillance as a precautionary measure. In view of advancing possibilities of prenatal diagnosis, parents might also become aware of gene variants that are unfavorable for a child's
moral capacity so that the decision to carry the child to full term might be influenced.
Findings suggesting a certain genetic determinism of human behavior also present a particular challenge for sociologists because sociology traditionally assumed that human
behavior is mostly explainable by factors relating to the social environment. However,
recent research suggests that matters may be more complex than this. Therefore, we
have to ask whether biological factors such as the genetic predisposition discovered by
Caspi et al. can possibly provide better explanations of (im)moral behavior.
Conceptions of a hereditary predisposition towards moral or immoral behavior are not
new to the social sciences (Montagu 1979; Schwind 2011, 104).2 The founding fathers of
sociology already dealt with contemporary biological approaches to human action.3
1

I would like to thank Kurt Mühler and Michael Röhr (Leipzig University), Cordula Brand and Margarita Berg
(University of Tübingen) for their helpful comments and advice as well as Lewis Enim (Frankfurt/Main) for proofreading and Margarita Berg again for translation help.
2
For discussions about the return of biological concepts in criminology and law see Böllinger et al. 2010 or Grün
et al. 2008.
3
A famous sociological view concerning the influence of a genetic predisposition towards morality was pointed
out by Emile Durkheim (1911 [1972]). According to Durkheim, an individual at birth is like a tabula rasa, having
only his individual nature, which is characterized as egoistic and asocial. To be able to cope with life, a newborn
has to add to his biological being a further social and moral being that he receives through education in a society
(ibid., 31). Therefore, morality has to be understood mainly as a result of human social coexistence. Nurture makes
us moral. Nevertheless, it is to some minor extent, in a very general and vague sense, based on a person's predispositions (ibid., 41). The reason that these predispositions have to be vague is that humans always have to adapt to
their complex surroundings and changing environmental conditions. This is possible only if predispositions are
malleable and flexible. Durkheim thereby widely excludes a genetic predestination of human behavior. Such factors appear to have negligible impact on sociological explanations.

In the debate about the usefulness of considering biological factors in explanations of human behavior, Max Weber's (1978) point of view is a methodological one because it takes the explanation of the analyzed issue as crucial. If, according to Weber, profound statistical analysis suggests an influence of biological factors on human behavior, these factors have to be considered as given data for sociological explanations. The task of sociological studies, namely to interpret individuals' actions in terms of their subjective meaning, thus remains unchanged. If the influence of a biological factor is discovered, it has to be integrated into an explanation of the issue as a non-interpretable factor:
It is possible that future research may be able to discover non-interpretable uniformities
underlying what has appeared to be specifically meaningful action, though little has been
accomplished in this direction thus far. Thus, for example, differences in hereditary biological constitution, as of races, would have to be treated by sociology as given data in the
same way as the physiological facts of the need of nutrition or the effect of senescence on
action. This would be the case if, and insofar as, we had statistically conclusive proof of
their influence on sociologically relevant behavior. The recognition of the causal significance of such factors would not in the least alter the specific task of sociological analysis or
of that of the other sciences of action, which is the interpretation of action in terms of its
subjective meaning. The effect would be only to introduce certain non-interpretable data
of the same order as others which are already present, into the complex of subjectively understandable motivation at certain points (Weber 1978, 7f.).

Thus, for Weber, the explanation of the issue (here: moral behavior) has priority. He is
concerned with the question in what way an influencing factor (whether social or biological) can contribute to the explanation of this behavior in a statistically provable
way. Even though modern sociology still assumes that it is the social influences which
explain behavior, a certain change can be detected in recent representations of socialization theory. Here, one now increasingly finds the conception of an interaction between genes and environment in the formation of personality traits (cf. Geulen 2007,
139; Tillmann 2010, 58; Hurrelmann 2014, 447). This theoretical position states that
neither environmental factors alone nor exclusively genetic factors are responsible for
the formation of the personality in the process of socialization. Instead, complex interactions are suspected between the genes of a person and the surrounding environment.
It is assumed that the genes set the frame within which a personality trait can develop
depending on the predominant environmental condition.
With the study by Caspi et al. (2002a), a first statistical proof of an interrelation between a specific gene and a specific environmental influence in forming the behavior of
individuals seems to have been achieved. In the following, I therefore wish to seize on
Max Weber's position and ask how robust the correlation described by Caspi et al. actually is. To do so, the study by Caspi et al. is to be subjected to a detailed methodological criticism. After a fine-grained description of their approach and their results, it will be asked whether the measurements taken of childhood maltreatment and antisocial behavior conform to the quality criteria of measurements, namely reliability and
validity. This can be easily tested insofar as there is comprehensive criminological literature on the recognition of childhood maltreatment and aberrant behavior. Thereafter,
the sample of persons on the basis of which Caspi et al. tested the interaction between gene and environmental factor will be addressed. It will be asked
whether the exclusion of female study members from the investigation is justified.
Then, the question will be considered whether the influence of third variables (i.e.
possible further influencing factors on the antisocial behavior which is to be explained)
was sufficiently controlled. A variety of factors can influence a behavior. Therefore, as a
last point, the relevance of the biological factor in the context of explanations of antisocial behavior is to be assessed.

2  The Reference Study by Caspi et al.

Previous studies have shown that individuals who experienced maltreatment during
childhood might be susceptible to violent and antisocial behavior in their adulthood
(Rutter et al. 1998; Widom 1989). But only a small proportion of maltreated children
really do express such behavior. This is why Caspi et al. (2002a) raise the question
whether the risk of later antisocial behavior might be influenced by the genetic disposition of the maltreated child. They concentrate on effects of variation in the so-called
MAOA gene which is located on the X chromosome. This gene is responsible for encoding the enzyme Monoamine Oxidase (MAO), which regulates the breakdown of several biological amines in the human body and can be differentiated into Monoamine
Oxidase A (MAOA) and Monoamine Oxidase B (MAOB). Caspi et al. concentrate on
MAOA because it regulates the breakdown of the neurotransmitters serotonin, norepinephrine and dopamine that elicit aggressive behavior. The MAOA gene varies from
person to person in a specific sequence called the promoter region. Here the number of
tandem repeats can vary (VNTR). Based on already existing studies (see Caspi et al.
2002b, 1; Caspi et al. 2003, 7), Caspi et al. assume a low MAOA activity when the allele
shows 2, 3 or 5 repeats. A high MAOA activity is assumed when the allele4 has 3.5 or 4
repeats in the promoter.

4 Every human being has approximately the same genes. However, a multitude of genes can vary from person to
person. The specific manifestations of a gene are referred to as its alleles. The specific alleles of a human being are
responsible for differences in the phenotype. This is the term for the outward appearance of a being. Thus, it is not
the genes which cause the differences e.g., in eye color or blood type, but their alleles. The alleles do not act directly, but serve as a kind of text in their cell, which is used for the production of different proteins (enzymes). This
description follows the genetic standard model which is still prevalent. However, the conceptions of what a gene
is have recently been in a state of flux (see Pearson 2006).
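Assuming the grouping of repeat counts described above (2, 3 or 5 repeats counted as low activity, 3.5 or 4 repeats as high activity), the classification can be written as a simple lookup; the function name and the printed examples are illustrative only.

```python
# Grouping of MAOA promoter VNTR repeat counts as described in the text
# (Caspi et al.): 2, 3 or 5 repeats -> low activity; 3.5 or 4 repeats -> high activity.
LOW_ACTIVITY_REPEATS = {2, 3, 5}
HIGH_ACTIVITY_REPEATS = {3.5, 4}

def maoa_activity(repeats: float) -> str:
    """Classify an allele by its number of promoter repeats."""
    if repeats in LOW_ACTIVITY_REPEATS:
        return "low"
    if repeats in HIGH_ACTIVITY_REPEATS:
        return "high"
    raise ValueError(f"repeat count {repeats} not covered by the classification")

for r in (2, 3, 3.5, 4, 5):
    print(f"{r} repeats -> {maoa_activity(r)} MAOA activity")
```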


Human genotyping studies (Brunner et al. 1993) and studies of transgenic mice
(Cases et al. 1995) suggest that low MAOA activity could be associated with aggressive
behavior. Referring to these studies, Caspi et al. assumed a gene-environment interaction (G x E) between MAOA activity and childhood maltreatment, especially in the
case of males. Depending on the variant of MAOA activity, maltreated young boys will
develop antisocial behavior differently. According to Caspi et al., the genotype moderates the influence of the environmental factor childhood maltreatment.

2.1  Data Base

The assumption was tested on a sample of 442 males. All participants were members of
the Dunedin Multidisciplinary Health and Development Study. This is an ongoing
longitudinal study of 1,037 children born between April 1972 and March 1973 in a
hospital in Dunedin/New Zealand.5 The participants have to undergo psychological
and medical examinations at regular intervals. To test their hypothesis, Caspi et al.
draw on data which were gathered up to the 26th year of age of the participants of the
Dunedin longitudinal study (Caspi et al. 2002b, 1). Remarkably, 96 % (N = 980) of the
initial birth cohort still participated in this phase. Caspi et al. exclude the female participants as well as all the males who said in an interview that they belong to the Maori
ethnic group. The justification for this selection will be discussed below.

2.2  Measurement

Antisocial behavior was measured by the following four indicators: 1. conduct disorder
(interview with participants, parents, teachers at the participants' ages of 11, 13, 15, and
18), 2. disposition toward violence (interview with participants aged 26), 3. symptoms
of antisocial personality disorder (questionnaire presented to informants when participants aged 26), 4. violent convictions (analysis of court records when participants aged
26).6 Caspi et al. create a cumulative index by adding the observed antisocial outcomes for each participant.7 This index is used by Caspi et al. as the dependent variable antisocial behavior in the statistical analysis.8

5 The aim of the longitudinal study is to investigate the frequency of health problems and developmental disorders and to detect possible causes and subsequent symptoms (see DMHDRU 2014). A first summary of the results of the Dunedin health study can be found in Silva and Stanton (1996).
6 The description of the measurement of the indicators of antisocial behavior and child maltreatment presented by Caspi et al. in their supplementary material (Caspi et al. 2002b, 2f.) will be outlined and discussed in detail below.
Childhood maltreatment appears as one of the two independent variables in the statistical model and was measured by the following five indicators: 1. mother-child interactions (participant observation by an assessor when participant aged 3), 2. harsh
discipline (interview with parents when participants aged 7 and 9), 3. changing child's
primary caregiver (measurement not specified; every phase up to the participants' age
of 11), 4. child physical abuse before the age of 11 (interview with participants aged 26),
and 5. unwanted sexual contact before the age of 11 (interview with participants aged
26). Counting the number of the observed instances of maltreatment according to the
five indicators for each test subject, Caspi et al. again develop a cumulative index with
three ranges:
1. people who experienced no maltreatment in childhood (no indicator observed,
64 % of the participants),
2. people who experienced probable maltreatment (1 indicator, 28 %),
3. people who experienced severe maltreatment (2 or more indicators, 8 %).
This index is used by Caspi et al. as the independent variable child maltreatment in the
statistical analysis.
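A minimal sketch of how such a cumulative index can be computed under the three ranges just described; the indicator names are shorthand for the five indicators listed above, and the example profile is invented.

```python
# The five maltreatment indicators listed in the text, coded 0 (absent) / 1 (present).
INDICATORS = [
    "mother_child_interaction",
    "harsh_discipline",
    "changes_of_primary_caregiver",
    "physical_abuse_before_11",
    "unwanted_sexual_contact_before_11",
]

def maltreatment_category(profile: dict) -> str:
    """Cumulative index as described in the text: 0 indicators -> no maltreatment,
    1 indicator -> probable maltreatment, 2 or more -> severe maltreatment."""
    count = sum(profile.get(name, 0) for name in INDICATORS)
    if count == 0:
        return "no maltreatment"
    if count == 1:
        return "probable maltreatment"
    return "severe maltreatment"

# Invented example profile for illustration only.
example = {"harsh_discipline": 1, "changes_of_primary_caregiver": 1}
print(maltreatment_category(example))  # -> severe maltreatment
```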
MAOA activity is the second independent variable. Using standard procedures for
genotyping and DNA extraction, the MAOA activity was obtained for 953 members of
the Dunedin study. As already mentioned above, Caspi et al. only used the data of 442
male participants of the study with European origins (Caucasians). Tab. 1 shows the
outcomes of the DNA analysis for these participants.

7 The pooling of a number of individual indicators concerning a variable is called an index. An index is mostly
formed when a theoretical construct (in this case antisocial behavior) has various dimensions (e.g., violence,
criminal behavior, deceit etc.). Caspi et al. form the index by addition of the individual indicators. However, indices can also be formed by multiplication. Furthermore, through weighting, the indicators can be emphasized to
varying extents.
8
In empirical social research, a variable which is assumed to influence another variable is referred to as an independent variable. On the other hand, the variable which is assumed to be influenced by the independent variable is
referred to as a dependent variable. Thus, the hypothesis is formulated that differences in the independent variable
affect the dependent variable in a certain way.


Tab. 1: Outcomes of DNA analysis in the Dunedin sample and in previously published studies9

                                        Number of repeats at MAOA promoter polymorphism
Number (and percent) of alleles in      2          3            3.5         4             5
Dunedin sample males,
n (chromosomes) = 442                   1 (0.2)    149 (33.7)   5 (1.1)     274 (62.0)    13 (2.9)
Caucasian controls,
n (chromosomes) = 1940                  3 (0.2)    658 (33.9)   9 (0.5)     1238 (63.8)   32 (1.6)

Note: A low MAOA activity is assumed when the allele has 2, 3 or 5 repeats. A high MAOA activity is assumed when the allele has 3.5 or 4 repeats.
It can be inferred from Tab. 1 that the considered male participants of the Dunedin
study mainly have a high activity of the MAOA gene. 274 persons, i.e. 62.0 % of the
males taken into account, have 4 repeats (and thus a high MAOA activity according to
Caspi et al.). Persons with 2, 3.5, and 5 repeats are generally rare. In order to show that
the distribution of the number of repeats at the MAOA promoter in the male participants of the Dunedin sample does not differ from that found in other studies, Caspi et
al. added another row with data from a control group in their table S1 (the bottom row
in Tab. 1). This control group is composed of 1940 persons of Caucasian descent and
was gathered from previously published studies. A comparison of the percentages of
the Dunedin study with the control group seems to prove Caspi et al. right. The number of repeats at the MAOA promoter is distributed in a similar way among the male
participants of the Dunedin study as among the participants of other studies. However,
we will come back to this point later.
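The comparison drawn here can be verified by recomputing the percentage distributions from the allele counts in Tab. 1; the following sketch does this for both samples.

```python
# Allele counts per repeat category, taken from Tab. 1.
repeat_categories = [2, 3, 3.5, 4, 5]
dunedin_counts = [1, 149, 5, 274, 13]       # n (chromosomes) = 442
control_counts = [3, 658, 9, 1238, 32]      # n (chromosomes) = 1940

def percentages(counts):
    total = sum(counts)
    return [100 * c / total for c in counts]

for repeats, d, c in zip(repeat_categories, percentages(dunedin_counts),
                         percentages(control_counts)):
    print(f"{repeats:>4} repeats: Dunedin {d:5.1f} %  vs. controls {c:5.1f} %")
```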

2.3  Findings

Applying a moderated regression analysis,10 Caspi et al. find a significant main effect of childhood maltreatment but no main effect of the genotype on antisocial behavior. However, the included multiplicative interaction term of both independent variables reveals a significant interaction between MAOA activity and childhood maltreatment.

9 Tab. 1 conforms to Tab. S1 in Caspi et al. 2002b, 5.
10 Moderated regression analysis is a variant of linear regression analysis. Linear regression analysis is one of the most basic statistical procedures, which has the aim of explaining a dependent variable through one or more independent variables. The distinctive feature of moderated regression analysis is that, apart from the independent variables (which in this context are also referred to as main effects), a multiplicative term formed from the independent variables, also referred to as the interaction term, is included in the equation. With this procedure, the attempt is made to reveal not only the influence of the main effects on the dependent variable, but also the influence of a possible interaction effect between the independent variables. For a more detailed description of moderated regression analysis see, e.g., Baltes-Götz (2009).
That means that only childhood maltreatment directly influences the occurrence of
antisocial behavior: If a male person was maltreated as a child, then he more often
tends to behave antisocially later in life than a boy who was not maltreated. In
contrast, the genetic disposition does not directly affect antisocial behavior. A male
person with low MAOA activity is not distinguished per se from a male with high
MAOA activity regarding antisocial behavior. But the significant interaction term reveals that MAOA activity indirectly influences the occurrence of antisocial behavior,
namely when a person has been maltreated as a child. As hypothesized, the strength of
the effect of childhood maltreatment on antisocial behavior is moderated by the
MAOA genotype. Maltreated boys with low MAOA activity have a higher risk of antisocial behavior than maltreated boys with high MAOA activity. Testing this gene-environment interaction effect for each of the four measures indicating antisocial behavior, Caspi et al. arrived at robust results: "For all four antisocial outcomes, the pattern of findings was consistent with the hypothesis that the association between maltreatment and antisocial behavior is conditional, depending on the child's MAOA
genotype" (Caspi et al. 2002a, 853).
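As an illustration of the kind of moderated regression described above and in footnote 10, the following sketch fits a model containing both main effects and their product to invented data; variable names, coefficients and the sample size are assumptions made for illustration, not a re-analysis of the Dunedin data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400

# Invented predictors: maltreatment index (0, 1, 2) and MAOA activity (1 = low, 0 = high).
maltreatment = rng.integers(0, 3, n)
maoa_low = rng.integers(0, 2, n)

# Simulated outcome with a main effect of maltreatment and a G x E interaction
# (the coefficients are invented for illustration).
antisocial = (0.5 * maltreatment
              + 0.0 * maoa_low
              + 0.6 * maltreatment * maoa_low
              + rng.normal(0, 1, n))

# Moderated regression: intercept, both main effects, and the multiplicative term.
X = np.column_stack([np.ones(n), maltreatment, maoa_low, maltreatment * maoa_low])
coef, *_ = np.linalg.lstsq(X, antisocial, rcond=None)

for name, b in zip(["intercept", "maltreatment", "maoa_low", "maltreatment x maoa_low"], coef):
    print(f"{name:<24} {b:+.2f}")
```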
Now, the question arises whether Caspi et al. provide conclusive proof of the influence of a genetic predisposition towards antisocial and, in a broader sense, immoral
behavior. Criticisms of the study have so far been primarily of a general nature. Critics
accused Caspi et al. of propagating a "genetic fundamentalism" (Schwartz 2005). In addition, the gene-environment-interaction design is criticized in general (Zammit et al.
2010). Another criticism is empirical: some researchers could not replicate the moderating effect on antisocial behavior in their studies (Haberstick et al. 2005; Widom and
Brzustowicz 2006). The latter results raise the question of whether the statistical correlations found by Caspi et al. are perhaps only spurious correlations. Up to now, criticism
of the study has hardly focused on the methodological approach of Caspi et al. Therefore, I want to discuss some methodological problems below.

3  Insufficient Definition of the Constructs Which Are to Be Measured

Child maltreatment and antisocial behavior are theoretical constructs. In order to be able to exchange views with other scientists and to prevent misunderstandings, one
needs to define clearly what is understood by these terms. Caspi et al. do not meet this
basic methodological requirement. Neither in the study (2002a) nor in the supplementary material on the methods (2002b) can a specific definition of antisocial behavior be found. This is problematic because it cannot be assumed that everybody has the
same conception of antisocial behavior. What the authors possibly understand by it can
only be reconstructed through the indicators they use. A closer look at these indicators
shows that mainly pathological dimensions of behavior as well as aggressive and physical violence are meant by this term. However, this is a considerably restricted view.
Why should tax evasion, theft, fare evasion or deceiving a friend not be considered
antisocial behavior as well? The missing definition of this term can easily lead to misunderstandings. In the literature, it is generally assumed that the study only refers to
antisocial behavior in adults (e.g., Asendorpf 2007, 123; Asendorpf 2008, 63). However,
through the inclusion of the indicator conduct disorder (which is assumed and measured for children and adolescents) this is not the case, and it is not specifically stated
anywhere by Caspi et al.
The term childhood maltreatment is not explicitly defined by Caspi et al., either.
However, at the beginning of the paper, one at least finds the statement that maltreated children are those "[…] who experience abuse and, more generally, those exposed to erratic, coercive, and punitive parenting" (Caspi et al. 2002a, 851). Nonetheless, it remains unclear what exactly is to be understood by erratic, coercive, and punitive parenting. Furthermore, there is no stipulation of those years of age which constitute the phase of childhood.11
In summary, it can be said that the study contains no or only insufficient definitions
of what is investigated.

4  Problems of Validity and Reliability in the Study by Caspi et al.

Caspi et al. use four indicators to measure antisocial behavior, and five indicators to
measure child maltreatment. The question arises how reliable and valid these indicators
are. In the following, I will discuss each variable, antisocial behavior as well as child
maltreatment, concerning the indicators used by Caspi and colleagues.

11

Strictly speaking, a definition of childhood maltreatment would first require a definition of childhood. Even
though this is missing, it can be deduced from the indicators used that Caspi et al. understand the phase of life
between age 3 and age 11 as childhood.

4.1  Indicators of Antisocial Behavior

4.1.1  Conduct Disorder

According to the Diagnostic and Statistical Manual of Mental Disorders IV (DSM IV),
conduct disorder is defined as "a repetitive and persistent pattern of behavior in which
the basic rights of others or major age-appropriate societal norms or rules are violated"
(American Psychiatric Association 1994, 85). It will usually be diagnosed only during
childhood and adolescence. Caspi et al. mention that conduct disorder was ascertained
according to the criteria of DSM IV,12 and they derived a lifetime diagnosis for a person when the participant received a diagnosis between the ages of 11 and 18 (Caspi et al.
2002b, 3). At first glance, this indicator seems to be reliable, but a closer look shows
that conduct disorder is one of the most problematic indicators due to insufficient reliability but also due to theoretical considerations.
Let us first address the problems concerning reliability. When Caspi et al. investigate
the influence of maltreatment and the MAOA gene on antisocial behavior, the study
participants are at the age of 26. At this age, conduct disorder usually will no longer be
diagnosed. This is why they use the raw data of previous phases of the study, measured
and documented by different teams of researchers. The numbers of cases thus found
are listed in Tab. 2. In contrast to these previous publications, Caspi et al. do not disclose immediately their number of cases with a lifetime diagnosis for conduct disorder.13 But in a later study by co-authors of Caspi et al., one can find the number of participants with such a lifetime diagnosis (see Moffitt et al. 2006, 138).

12 According to DSM IV, conduct disorder is to be diagnosed when at least three of 15 criteria were present during the past 12 months. These 15 criteria fall into the following four main groups of behavior: aggression to people and animals, destruction of property, deceitfulness or theft, and serious violation of rules (American Psychiatric Association 1994, 90). In addition, at least one criterion must have been present during the past 6 months (ibid.).
13 No figures are given in Caspi et al. (2002a, 2002b). However, by visual impression or by measuring with a ruler, the following numbers of cases (N) of conduct disorder can be approximately reconstructed from Fig. 2A in Caspi et al. 2002a, 852. In the group with low MAOA activity: no maltreatment of N = 108 persons, circa 24 %, i.e. N = 26 subjects with conduct disorder; probable maltreatment of N = 42, circa 35 %, i.e. N = 15; severe maltreatment of N = 13, circa 84 %, i.e. N = 11. In the group with high MAOA activity: no maltreatment of N = 180, circa 24 %, i.e. N = 43; probable maltreatment of N = 79, circa 29 %, i.e. N = 23; severe maltreatment of N = 20, circa 40 %, i.e. N = 8 cases of conduct disorder. In sum, this results in a total number of approximately 126 subjects with conduct disorder. Since some male participants were excluded by Caspi et al., it seems justified to assume that those are the same cases of conduct disorder as in the later study by Moffitt et al. (2006).
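Since the case numbers in the preceding footnote are reconstructed rather than reported, it may help to make the underlying arithmetic explicit. The following minimal sketch simply multiplies each group size by the percentage read off the figure and rounds; the percentages are approximate visual estimates from Fig. 2A, not values published by Caspi et al.

```python
# Recomputation of the approximate case numbers read off Fig. 2A in Caspi et al. (2002a).
# The shares are rough visual estimates, not values reported by the authors.
groups = {
    ("low MAOA activity", "no maltreatment"):        (108, 0.24),
    ("low MAOA activity", "probable maltreatment"):  (42, 0.35),
    ("low MAOA activity", "severe maltreatment"):    (13, 0.84),
    ("high MAOA activity", "no maltreatment"):       (180, 0.24),
    ("high MAOA activity", "probable maltreatment"): (79, 0.29),
    ("high MAOA activity", "severe maltreatment"):   (20, 0.40),
}

total_cases = 0
for (activity, maltreatment), (n, share) in groups.items():
    cases = round(n * share)  # estimated number of conduct-disorder diagnoses in this cell
    total_cases += cases
    print(f"{activity:<18} | {maltreatment:<21} | N = {n:3d} | ~{share:.0%} -> {cases:3d} cases")

print("approximate total:", total_cases)  # about 126 subjects with a lifetime diagnosis
```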


Tab. 2: Conduct Disorder of the Dunedin sample in publications14

Age of participants during examination | Participants with conduct disorder (male / female / total) | Publication
11    | 12 /    / 21   | Anderson et al. 1989, 843
13    | 10 /    / 17   | Frost et al. 1989, 309
15    | 35 / 34 / 69   | McGee et al. 1990, 615
18    | 41 / 10 / 51   | Feehan et al. 1994, 92
11-18 | 154 / 72 / 226 | Moffitt et al. 2006, 138

On comparing the number of cases in the different examinations, it is noticeable that Terrie E. Moffitt (co-author of the 2002 study) and Avshalom Caspi diagnose significantly more cases of conduct disorder in their publications than the other teams of researchers, and this on the basis of the same data. The high numbers of cases of conduct disorder can also be reconstructed in the study by Caspi et al. (2002a). It becomes apparent that in Caspi et al., the diagnosis conduct disorder was made for almost one in three members of the Dunedin sample. As can be inferred from Tab. 2, earlier studies were significantly more careful with this diagnosis.

How can the wondrous increase in cases of conduct disorder in Caspi et al. be explained? The reason lies in an altered method of measurement. According to the DSM IV diagnostic handbook, a test person must show at least three symptoms in order to warrant a diagnosis of conduct disorder (American Psychiatric Association 1994, 90). The presence of these symptoms was determined by questioning the participants during the respective phases of examination. However, apart from the last test stage (participants at age 18), teachers and parents were questioned in addition (ibid., 41). In the previous studies, the results of the questioning of the different sources (participants, parents, teachers) were examined for their consensus in indicating symptoms of conduct disorder. Only when at least two sources agreed was the respective diagnosis made for a participant (see Cohen et al. 1993, 854). This is a very conservative method of measurement, carefully concerned with reliability and validity. The procedure used by Caspi et al. is considerably more lenient.
14 In some of the mentioned publications, a distinction is made between aggressive and non-aggressive conduct disorder. These are accumulated in the table. Furthermore, the number of cases of conduct disorder at age 11 is very likely over-estimated, since Anderson et al. measured conduct disorder together with oppositional disorder (see Anderson et al. 1989, 843).


It is stated: "We counted a symptom as present if there was evidence for it from any source" (Moffitt et al. 2006, 40). This procedure results in an excessive number of diagnoses of conduct disorder. At the same time, it restricts the reliability and validity of the indicator. Possibly, many cases were diagnosed which do not really constitute conduct disorder. Thus, in comparison to previous studies, this indicator is unreliable.
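The methodological difference between the two rules can be made concrete with a small sketch. The data, informant roles and symptom labels below are entirely invented; the sketch only illustrates how counting a symptom as present if any single source reports it yields more diagnoses than requiring agreement of at least two sources, given the DSM-IV threshold of three criteria.

```python
# Invented informant reports: reports[source][symptom] is True if that source reported the symptom.
reports = {
    "participant": {"fighting": True,  "lying": False, "truancy": True,  "cruelty": False},
    "parent":      {"fighting": False, "lying": True,  "truancy": True,  "cruelty": False},
    "teacher":     {"fighting": False, "lying": False, "truancy": False, "cruelty": True},
}
symptoms = ["fighting", "lying", "truancy", "cruelty"]

def count_symptoms(min_sources: int) -> int:
    """Count symptoms reported by at least `min_sources` of the informants."""
    return sum(
        sum(reports[source][symptom] for source in reports) >= min_sources
        for symptom in symptoms
    )

DSM_THRESHOLD = 3  # DSM-IV: at least three criteria present for a conduct-disorder diagnosis

strict = count_symptoms(min_sources=2)   # earlier Dunedin publications: two sources must agree
lenient = count_symptoms(min_sources=1)  # Caspi et al. / Moffitt et al.: any single source suffices

print("two-source rule:", strict, "symptoms ->", "diagnosis" if strict >= DSM_THRESHOLD else "no diagnosis")
print("any-source rule:", lenient, "symptoms ->", "diagnosis" if lenient >= DSM_THRESHOLD else "no diagnosis")
```

With identical raw data, this hypothetical case crosses the diagnostic threshold under the any-source rule but not under the two-source rule, which is exactly the mechanism suspected behind the increase in diagnoses.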
Apart from the low reliability, there is also a theoretical reason which generally speaks against the use of conduct disorder in the context of the hypothesis to be tested by Caspi et al. Psychologists assume that conduct disorder usually occurs only during childhood and adolescence, and because of this, they measure it only for this period (see also Caspi et al. 2002b, 3). And indeed, why should a young person who has been diagnosed with conduct disorder not get rid of it later? Over the course of a lifetime, social integration usually increases. The reasons for the abatement of conduct disorder in the course of a lifetime and for the increasing social integration of the persons concerned can be manifold and shall not be addressed in detail here. However, if conduct disorder usually occurs only during childhood and adolescence and does not simply continue into adulthood, one should assume neither child maltreatment nor the genetic disposition as influencing factors for conduct disorder. The number of tandem repeats in the MAOA promoter (as genetic disposition) and the experience of maltreatment during childhood are to be regarded as constantly present influencing factors. This means that they do not change on the transition from adolescence to adulthood. Therefore, they should have an effect on conduct disorder in adulthood as well. However, in adulthood, conduct disorder usually does not occur. From this, it can be deduced that the occurrence of conduct disorder in adolescence can be attributed neither to the number of tandem repeats in the MAOA promoter nor to experienced child maltreatment, but rather to other influencing factors, presumably factors that change during a person's transition to adulthood. The effects between child maltreatment, genetic disposition and conduct disorder measured by Caspi et al. are therefore probably spurious correlations, caused by the influence of other factors which were not taken into account by Caspi et al. The insufficient control of the influence of (unobserved) third variables will be further discussed below.

4.1.2 Disposition Toward Violence

Caspi et al. took the participants' self-evaluation on a so-called aggression scale (Caspi et al. 2002b, 3) as a second indicator of antisocial behavior. In this way, the test subjects' self-evaluation of a certain disposition toward violence was measured.15

15 Items were, e.g., "When I get angry I am ready to hit someone" and "I admit that I sometimes enjoy hurting someone physically" (Caspi et al. 2002b, 3).


Although self-evaluations of dispositions are common practice, especially in the field of personality psychology, the indicator as it is used by Caspi et al. cannot be described as valid, because Caspi et al. treat the participants' self-evaluation of the disposition toward violence and actual violent behavior as equivalent. This assumption is problematic because one cannot per se infer a certain behavior of a person from the corresponding disposition (here: the disposition toward violence). That a discrepancy can exist between a stated disposition and actual behavior has been a well-known problem in social psychology since the studies by LaPiere (1934). The question under which conditions a disposition actually leads to behavior has since then been one of the main research fields of disposition research.

By now, there are many attempts to solve the disposition-behavior problem. An attempt well known in sociology is, e.g., the Low Cost Hypothesis (e.g., Best and Kroneberg 2012). It states that in a given situation, an actor will only behave according to his disposition if performing this behavior results in no or at most low costs.16 However, many deviant behaviors are accompanied by high behavioral costs due to restrictions (e.g., criminal prosecution, social norms). This is especially so in the study by Caspi et al.: child maltreatment is severely punished upon detection, and antisocial behavior is socially ostracized. Despite a conceivable contrary disposition of a person, it can be deduced from the Low Cost Hypothesis that, simply due to high behavioral costs, dispositions are often not translated into actual behavior. The consideration of other (often situational) influencing factors is necessary in order to predict under which circumstances this will be the case. According to Icek Ajzen's (1991) theory of planned behavior, which is well known in psychology, the subjective norms of a person have to be considered as well. According to this theory, a person orients her own behavior not only to her own disposition or the possible situational behavioral costs, but among other things also to the behavior of the persons important to her. Especially concerning the explanation of aggressive behavior, it seems necessary to know whether a person orients herself to other persons or whether she acts in a group in which aggressive behavior is accepted or at least not problematized. This subjective norm concerning the practice of violence is, according to the theory of planned behavior, a plausible influencing factor. However, Caspi et al. do not take it into consideration as a potential third variable. Precisely because the literature contains good reasons not to equate the question of a disposition toward violence with violent behavior, this indicator seems problematic.

4.1.3 Symptoms of Antisocial Personality Disorder

The measurement of symptoms of an antisocial personality disorder was also done with the help of a questionnaire.

16 This is the case, e.g., with political elections, where electoral behavior reflects the political disposition.


To that end, the study members nominated three people as "someone who knows you well" (Caspi et al. 2002b, 3). Caspi et al. mailed the questionnaire to these informants and asked them to describe the test subject by using seven specific ordinally scaled items.17

At first glance, this approach seems more useful because it is an attempt to capture the participants' observed behavior. Furthermore, questioning informants concerning delicate topics, such as clues to an antisocial personality disorder, seems reasonable (Schwind 2009, 50). However, two points of criticism restrict the validity and reliability of this indicator. On the one hand, it is problematic to leave the selection of the informants to the participants of the study and thereby to inform them about the intended acquisition of information from third parties. Thus, participants may have selected informants who answer unreliably, e.g. due to close social relationships. Since the focus of interest was on the detection of a personality disorder, one should rather ask persons who are personally acquainted with the participant whether they know of such a diagnosis. This survey should be done confidentially, in a written and anonymous form, so as not to distort the results by feelings of taboo, shame or repression (see Diekmann 2010, 446ff.). Secondly, the symptoms inquired about do not necessarily say anything about actual antisocial behavior. This applies especially to the symptoms "has problems controlling anger" and "impulsive, rushes into things without thinking", which do not necessarily have to be associated with antisocial behavior. Furthermore, it remains unclear what is meant by the symptom "good citizen" (reversed). Insofar, doubts concerning the reliability of this indicator are in order.

4.1.4 Violent Convictions

To ascertain convictions for violent crime, court records were searched with the assistance of the Australian and New Zealand police. Contrary to the previous indicators, which have been identified as problematic, the question whether the members of the study had been convicted of a violent crime seems to be a good indicator, because it measures antisocial behavior relatively well and obviously fulfills the requirements of validity and reliability. This is because a conviction for violence requires the verification of actual behavior on the part of the prosecuting authorities. Insofar, it was examined by third parties whether antisocial behavior was present or not. In the light of the general difficulty of getting an overall picture of an individual's personality, it seems reasonable to rely on crime statistics despite well-known problems (see e.g. Kunz 2008, 174).

17 Informants described the study members on seven cardinal symptoms: "has problems controlling anger", "blames others for own problems", "does not show guilt after doing something bad", "impulsive, rushes into things without thinking", "good citizen" (reversed), "does things against the law", and "gets into fights". Response options were "not a problem", "a bit of a problem", and "yes, a problem" (Caspi et al. 2002b, 3).


Summarizing the evaluation of the validity and reliability of the indicators for antisocial behavior, the indicators selected by Caspi et al. cover the diversity of possible dimensions of antisocial behavior only insufficiently. They concentrate mainly on aggressive or violent behavior. And even these few dimensions are captured only insufficiently, due to problems of reliability and validity. In my opinion, only violent convictions seem to be a suitable indicator in this context.

4.2 Indicators for Child Maltreatment

In order to undertake an evaluation of the reliability and validity of the indicators used, some preliminary remarks are needed. This is necessary because the term child maltreatment is, unlike what may be expected, highly contested and far from uniformly applied. In general, modern conceptions distinguish between four dimensions of childhood maltreatment:

"(1) neglect (failure to provide care in accordance with expected societal standards for food, shelter, protection, affection); (2) emotional abuse (verbal abuse, isolation, witnessing violence); (3) physical abuse (nonaccidental bodily injury); (4) sexual abuse (sexual contact, including attempts or threats)" (Wekerle et al. 2006, 2).

Even though there is a certain consensus about the possible dimensions of childhood maltreatment, the manifestations of these dimensions are contested. Often it remains unclear whether a certain behavior already constitutes maltreatment or not. When does speaking loudly to a child turn into verbal abuse? Where does neglect of a child start? In other words: when is the boundary between a behavior that is (still) accepted by society and child maltreatment crossed?

Although very different opinions are possible concerning this question, according to Christine Wekerle et al., one can roughly distinguish between a rather narrow legal and a rather wide social scientific understanding of childhood maltreatment (Wekerle et al. 2006, 10).18 The legal perspective regards the boundary between socially accepted behavior and child maltreatment as crossed when the child's welfare according to existing legislation is threatened or already damaged (see ibid.). The social scientific perspective is based on this legal understanding, but it is considerably more broadly defined and additionally considers social, domestic and individual context factors as possible risk factors of child maltreatment.

18 It shall not be discussed here to what extent one can, like Wekerle et al., assume an understanding of child maltreatment that is generally accepted in the social sciences. The understanding of child maltreatment declared here as social scientific seems to be widely accepted at least among developmental psychologists.


Thus, causes and effects of childhood maltreatment are analyzed in the context of the development and the social environment of the child, by making comparisons with developmental patterns which would normally be expected (ibid.). Deviations from the context factors which are classified as normal can then also be interpreted as child maltreatment.

The choice of perspective on child maltreatment affects the selection of the indicators and thus also the assessment of their reliability and validity. The legal perspective only takes up the perspective of the threatened or already impaired child and does not ask about context factors. The advantage is that one concentrates on legally relevant and thus generally accepted cases of child maltreatment. However, there is the danger that only a proportion of the cases of child maltreatment is included, namely the hard ones which are regulated by law.19 Indicators which follow the legal perspective should still detect actually maltreated persons with a higher reliability. However, a larger number of potential cases of maltreatment is not included.

The social scientific perspective, on the other hand, considers the environment in which a child grows up in addition to the legal perspective. Deviations from an environment which is evaluated as normal are interpreted as child maltreatment as well. The advantage of this procedure is that potential cases of child maltreatment which are not regulated by law, and thus often go undetected, are considered as well. A disadvantage lies in the fact that persons are often erroneously classified as maltreated. Furthermore, environmental factors may wrongly be interpreted as indicators of child maltreatment. Thus, indicators which follow the social scientific perspective can sometimes measure actually detectable maltreatment less reliably, even though more potential cases of maltreatment are included.

When we now investigate the reliability and validity of the indicators used by Caspi et al., it seems reasonable to keep in mind both the legal and the social scientific perspective on the concept of child maltreatment. This is especially important because the study has already been taken into account in the administration of justice.

4.2.1 Mother-Child Interaction

A psychologist classified a mother's affect toward her child at the age of 3 based on his observations during an interview assessment. Thereby, the behavior of the mother was rated using eight different categories indicating rejecting behavior toward her child.20 If two or more categories were ascertained, the mother was classified as rejecting.

This indicator is based on a social scientific understanding. This becomes apparent in the fact that the focus is not on the child but on the parenting behavior of the mother. However, from a legal perspective, this indicator seems hardly valid or reliable.

19 See Lamnek et al. (2013, 7), who discuss narrow and wide definitions of violence in general.
20 The categories were: mother's affect toward the child was consistently negative; harshness toward the child; rough, awkward handling of the child; no effort to help child; unaware or unresponsive to child's needs; indifferent to child's performance; demanding of child's attention; soiled, unkempt appearance of child (Caspi et al. 2002b, 3).


The central problem with this indicator is that a retrospective evaluation of the observed parenting practices of the mother is undertaken. Strictly speaking, parenting practices were surveyed in the early 1970s, and they differ markedly from those common today.21 At the time of the inquiry, however, the observed parenting practices of the mother were quite likely still common (i.e. normal) and accepted. Neither the mother nor the child, and not even the observer, may have perceived the parenting practice as exceptionally offensive, let alone as child maltreatment. Furthermore, no conclusions can be drawn from the observation of the mother-child interaction during the examination about the actual occurrence of child maltreatment in the natural environment of the child. This would be mere speculation, not observation. Therefore, it is not likely that the detection of child maltreatment was the intention of the examination at the time of the survey. Only through Caspi et al.'s retrospective evaluation of then common parenting practices does behavior which was usual at the time turn into a case of maltreatment.

4.2.2 Harsh Discipline

The indicator harsh discipline, for which the parents were asked about their preferred parenting styles, yields a similar picture. Caspi et al. measure harsh discipline by interviewing the parents of the participants aged 7 and 9, using a checklist on which parents indicated whether they engaged in ten disciplinary behaviors such as "smack him" or "hit him with something". Parents scoring in the top decile of the sample-wide distribution were classified as unusually harsh (Caspi et al. 2002b, 2).
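The quoted classification is a purely relative cutoff. A brief sketch of how such a top-decile rule could be applied to invented endorsement counts (how many of the ten behaviors a parent ticked); the data and the decision to treat scores at or above the 90th percentile as unusually harsh are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(0, 11, size=500)  # invented counts of endorsed disciplinary behaviors (0-10)

cutoff = np.percentile(scores, 90)      # sample-wide 90th percentile
unusually_harsh = scores >= cutoff      # top decile of the distribution
print("cutoff:", cutoff, "| parents classified as unusually harsh:", int(unusually_harsh.sum()))
```

Because the scale is coarse and discrete, ties at the cutoff mean that the group actually classified can deviate noticeably from exactly ten percent of the sample, which is one more respect in which the operationalization, rather than the construct itself, determines who counts as maltreated.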
Here, too, the survey of the indicator took place in the 1970s and thus at a time when stricter parenting styles were still common and hardly regarded as delicate. That the parents readily provided information about their parenting styles, and that no anonymous inquiry was chosen as a study design, indicates that neither the initiators of the surveys nor the parents thought of child maltreatment while drafting or answering the questionnaire.22 Instead, the evaluation of a strict parenting style as child maltreatment once again takes place retrospectively, through Caspi et al.
21 For instance, in Germany, ignoring behaviors of mothers toward their children were widespread until the 1960s (Rapp 1961; Lukesch 1976, 88). In contrast, since the 1990s, usual parenting practices have been supposed to show positive feelings and to give great attention and care to the child (Schneewind and Ruppert 1995, 147). However, the most reported change in parenting practices during the last decades is the decrease of the parents' practice of physical punishment of their children (e.g. Büchner and Fuhs 1996, 169; Schütze 2002, 85; Bussmann 2007, 643). Despite certain cultural differences, it can be assumed that similar changes in parenting practices can be observed in other Western societies.
22 From a criminological point of view, this constitutes a case of self-reported delinquency concerning the detection of child maltreatment. For this purpose, an anonymous questioning is generally recommended (Schwind 2009, 42). However, in the case of socially tabooed offenses such as child maltreatment, such inquiries hardly ever yield reliable data (ibid., 44).


A parenting style which used to be considered normal is now declared a deviation. From a legal perspective, however, this indicator is hardly reliable or valid.23

4.2.3 Child's Primary Caregiver

If a child experienced "at least two [...] changes in the person occupying the role of the child's primary caregiver" (Caspi et al. 2002b, 2), Caspi et al. already considered it as maltreated. This indicator, too, is based on the social scientific understanding of childhood maltreatment: it is a context factor of the environment in which a child grows up. Strictly speaking, this indicator only measures the stability of the partnerships of the caregivers. Unstable partnerships of the mother or the father (or other caregivers) are understood as an unfavorable deviation from the normal stable partnership. However, this certainly does not mean that the children were physically maltreated, sexually abused or severely neglected. Therefore, this indicator is neither valid nor reliable under the legal understanding of child maltreatment.24

4.2.4 Physical Abuse

Study members were retrospectively interviewed about physical abuse suffered before the age of 11 and classified as physically abused if they reported multiple episodes of severe physical punishment (e.g., strapping leaving welts; whipping with electric cords) resulting in lasting bruising or injury before age 11 (Caspi et al. 2002b, 2). The retrospective questioning of the participants concerning physical abuse suffered in childhood is a valid indicator both from a legal and from a social scientific perspective. However, the criminological literature points out that physical abuse is one of those violent acts that are committed within the family (Schwind 2009, 47).

23 Even if one takes up a social scientific perspective, doubts concerning the validity of this indicator are in order. Caspi et al. justify the selection of harsh discipline as an indicator for child maltreatment by stating that there is empirical evidence for an influence of harsh discipline on antisocial behavior. In doing so, they refer to a study by Straus et al. (1997). However, this study only dealt with antisocial behavior in childhood, not in adolescence or adulthood. Generally, it is doubtful whether persons who experienced stricter parenting in their childhood are more often prone to antisocial behavior in adulthood. This is all the more true since a strict parenting style was common practice in the population of many Western states (and among them, presumably, also in New Zealand's society) well into the 1970s. It would then have to be assumed that today's older generations (who were still brought up in a strict way) behave antisocially considerably more often than the younger generations (who were brought up in a less strict way). However, there is no empirical evidence for this.
24 Although some studies (e.g. Haas et al. 2004) speak of a change of the primary caregivers during (early) childhood as a risk factor for criminal behavior, it is questionable, even taking a social scientific understanding as a basis, whether this environmental factor should be interpreted as an indicator of child maltreatment. Instead, it seems reasonable to understand unstable partnerships as a separate influencing factor for antisocial behavior.


Thus, based on experience, it constitutes a delicate topic concerning which many respondents answer only unreliably, due to shame, social pressure or distrust of the interviewer. Insofar, problems of reliability are generally to be expected with this indicator.

4.2.5 Unwanted Sexual Contact

The study participants were retrospectively interviewed about unwanted sexual contact before the age of 11. Study members were classified as sexually abused if they reported having their genitals touched, touching another's genitals, or attempted and/or completed sexual intercourse before age 11 (Caspi et al. 2002b, 2). The ascertainment of unwanted sexual contact in childhood can be considered a valid indicator of childhood maltreatment, both from a social scientific and from a legal perspective. However, it is once again the case that a very delicate subject is addressed, so that unreliable answers have to be expected. Generally, the methodological literature recommends an undisclosed survey of informants in the case of tabooed topics (Schwind 2009, 50). For example, teachers, doctors or employees of public authorities could additionally have been questioned about physical or sexual abuse of the participants. Since the authors of the study furthermore had access to court documents concerning the participants, searching those files for documented cases of child maltreatment would certainly have been reasonable.

Summarizing the criticism of the indicators of child maltreatment, it can be said that the majority of the indicators used are based on a social scientific understanding of child maltreatment. They can hardly be understood as valid or reliable indicators under the legal understanding, however. Here, the evaluation of the indicators as reliable or valid depends on the point of view of the study's recipient and remains controversial. It would therefore be especially helpful if studies dealing with a topic as sensitive as child maltreatment clarified their perspective. Generally, the question arises how balanced between the two perspectives the chosen indicators should be in order to be taken into consideration in legal opinions. In my opinion, it seems reasonable for the selection of indicators to be mostly, or at least predominantly, in accord with the legal understanding of childhood maltreatment. However, this is not the case here.

Especially Caspi et al.'s retrospective interpretation of parenting practices previously considered unproblematic as child maltreatment points to the dependence on the definition and the general changeability of this construct.25 What one understands by child maltreatment can differ between cultures, between subgroups or between persons, and it can change over time. This variability in the definability of child maltreatment reveals a dilemma for the underlying concept of a gene-environment interaction.
25 The insufficient definition of child maltreatment was already criticized above; in Caspi et al., however, the definition is more or less revealed by the selection of indicators.


The statistical interaction effect between gene and environment may possibly have been found only because a certain definition of child maltreatment was coincidentally taken as a basis. With an only slightly modified definition of child maltreatment, the statistical effect might no longer be detectable. These difficulties are likely to apply to the concept of gene-environment interaction in general. Whether a gene-environment interaction effect is statistically measured depends on a suitable definition of the interacting environmental variable.26

5 How Justified is the Exclusion of the Female Study Participants?

As already mentioned, female participants of the Dunedin study were excluded from the investigation by Caspi et al.27 Such a selection from a sample needs to be explained, as it might also influence the study results. In this section, I therefore want to discuss the justification for this procedure.

Caspi et al. exclude female participants for the following reason:

"Females, having two copies of the X chromosome, fall into two homozygous groups, high-high (42 % in this sample), low-low (12 %), and a third heterozygous group, low-high (46 %), that cannot be characterized with certainty because it is not possible to determine which of the two alleles is inactivated for each female participant" (Caspi et al. 2002a, 853).

It needs to be pointed out that only Caspi et al. consider this circumstance problematic. In other studies focused on the MAOA gene, this fact is considered insignificant for the determination of MAOA activity. Caspi et al. themselves refer to such studies in order to show that the distribution of the number of repeats at the MAOA promoter in the male participants of the Dunedin sample does not differ from that found in other studies (Caspi et al. 2002b, 5). We have already dealt with these studies; they are presented above as the Caucasian control group in Tab. 1. This control group is composed of data gathered from studies previously published by Sue Z. Sabol and colleagues (Sabol et al. 1998) and by Jürgen Deckert and colleagues (Deckert et al. 1999). Sabol et al. (1998, 275) as well as Deckert et al. (1999, 622) had included female test subjects in their genetic observations. The comparison group which Caspi et al. named in their study does indeed consist only of Caucasians, that is to say, participants who said that they have European ancestors.
26 This is especially true if, as in the present instance, one forms theoretical constructs with various dimensions and indicators, both for the independent environmental variable (childhood maltreatment) and for the dependent variable (antisocial behavior).
27 In this section, I only discuss the exclusion of the females, not the exclusion of the male Maori. Caspi et al. justify the exclusion of this ethnic group as a measure against so-called population stratification (see e.g. Hamer and Sirota 2000, 11). From a social scientist's point of view, this procedure raises a lot of questions and thus cannot be discussed in detail in this article.


But this control group at the same time consists of males and females. In addition, subsequent replication studies also normally included female participants when they determined the allele frequencies (see Widom and Brzustowicz 2006, 687).

The question arises why the researchers responsible for earlier and later studies found it less problematic to genotype female test subjects. The issue is serious insofar as the inclusion of female participants may strongly influence the measurement results. For example, women are much less frequently convicted of violent crime (Schwind 2009, 62). Given a similar distribution of MAOA activity in men and women, which is suggested by the other two studies, the effect of the genetic disposition on antisocial behavior might no longer be detectable. Summarizing the criticism of the exclusion of females in the study by Caspi et al., this procedure seems questionable, and even contradictory, when control groups from other studies with test subjects of both sexes are used to show similarities in the allele frequencies of the MAOA polymorphism.

6 Insufficient Control of Other Influencing Factors

A variety of variables can influence antisocial behavior. The question therefore arises how relevant the interaction effect between child maltreatment and the genetic disposition of the MAOA gene is in the context of explanations of antisocial behavior. I will discuss this question in the first part of this section. That a statistical interaction effect was found is not a sufficient criterion for speaking of an empirically reliable relationship. There is always the risk of merely having found a statistically spurious correlation between the variables investigated, but no causal relationship. Therefore, in the second part of this section, I will discuss the efforts of Caspi et al. to control for the influence of third variables, i.e. possible further influencing factors on antisocial behavior which could have caused the measured statistical interaction effect.

6.1 The Interaction Effect of MAOA Activity and Maltreatment as a Marginal Effect

There are many conceivable influencing factors for antisocial behavior, such as the level of education, subjective norms and values, the level of income, embeddedness in social networks, the size of the place of residence, the previous frequency of delinquencies, alcohol abuse, situational influences, etc. Some of these influencing factors contribute a lot, some only a little, to the explanation of antisocial behavior.28 Whether an influencing factor contributes a lot or a little to the explanation depends, among other things, on how well the occurrence of antisocial behavior can be predicted with this variable. To put it simply: an influencing factor which explains the phenomenon well is one with which the manifestation of the dependent variable can be correctly predicted for many persons.

Not only the suitability of each individual independent variable but also the quality of the statistical model in general can be evaluated. As described above, Caspi et al. calculated a regression function with the variables child maltreatment and MAOA activity, with simultaneous consideration of an interaction effect between child maltreatment and MAOA activity, and thereby estimated the frequency of antisocial behavior in the Dunedin sample.29 The question is how well antisocial behavior can actually be predicted with this regression function. In order to test the suitability of statistical models in general, various quality criteria have been developed in statistics.
A common quality criterion for the regression model calculated by Caspi et al. is the coefficient of determination, also known as R². The coefficient of determination is a standardized measure with a value between 0 and 1: the larger the coefficient of determination, the better one can predict the manifestations of the fact to be explained (in this case, the frequency of antisocial behavior) with the estimated statistical model.30 Applied to the study by Caspi et al., a coefficient of determination of 1 would mean that their regression model can perfectly predict antisocial behavior. With a coefficient of determination of 0, antisocial behavior could not be predicted with their regression model at all.
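For readers who want to see what such a model and its fit statistic look like in practice, here is a minimal sketch. It estimates the moderated regression from footnote 29 on simulated data; the sample size mirrors the 442 male participants implied by the group sizes in footnote 13, but all variables, codings and effect sizes are invented, so the output says nothing about the real Dunedin data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 442  # roughly the number of males implied by the group sizes in footnote 13

# Invented stand-ins: maltreatment (0 none, 1 probable, 2 severe), MAOA activity (0 low, 1 high),
# and a continuous composite score for antisocial behavior.
df = pd.DataFrame({
    "maltreatment": rng.integers(0, 3, n),
    "maoa_high": rng.integers(0, 2, n),
})
df["antisocial"] = (0.4 * df["maltreatment"]
                    - 0.2 * df["maltreatment"] * df["maoa_high"]
                    + rng.normal(0, 1, n))

# antisocial = b0 + b1*maltreatment + b2*maoa_high + b3*(maltreatment x maoa_high) + u
model = smf.ols("antisocial ~ maltreatment * maoa_high", data=df).fit()
print(model.params)  # estimated coefficients, including the interaction term
print("R^2:", round(model.rsquared, 3), "adjusted R^2:", round(model.rsquared_adj, 3))
```

Reporting the last line of such an output is exactly the quality information that, as argued next, is missing from the published study.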
It is a characteristic of the study that this standardly reported quality criterion is not given by Caspi et al. Insofar, it remains unclear how well the independent variables considered by Caspi et al. can predict antisocial behavior.

28 For example, a factor which explains domestic violence well (which can be understood as antisocial behavior in the sense of Caspi et al.) is whether a person herself experienced parental violence as a child. If one knows the manifestation of this factor, one is able to predict with a certain probability whether this person will also become violent in a partnership (e.g. Lamnek et al. 2013, 133).
29 The connection postulated by Caspi et al. can be modelled with the following regression equation:
Antisocial Behavior = β0 + β1 · Maltreatment + β2 · MAOA activity + β3 · (Maltreatment × MAOA activity) + u
Antisocial behavior constitutes the dependent variable which is to be explained. β0 is the constant term of the regression function (point of intersection of the regression line with the y-axis). β1 and β2 are the to-be-estimated regression coefficients of the independent variables maltreatment and MAOA activity. β3 is the to-be-estimated regression coefficient of the interaction effect (as a multiplication term of maltreatment and MAOA activity). u is the disturbance term, i.e. the coincidental, unobserved influences on antisocial behavior.
30 However, it must be noted that with every independent variable which is additionally considered in the model (in this context also called regressors), the contribution of the model to the explanation generally increases. This is the case even if rather irrelevant regressors are included. With a large number of regressors, one might therefore fall back on the corrected coefficient of determination. On the calculation of the coefficient of determination, see e.g. Backhaus et al. (2011, 74–76).
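For reference, the standard textbook definitions behind this criterion (these are the usual formulations, not taken from Caspi et al. or Backhaus et al.) are:

```latex
R^2 \;=\; 1 - \frac{\sum_{i}(y_i - \hat{y}_i)^2}{\sum_{i}(y_i - \bar{y})^2},
\qquad
\bar{R}^2 \;=\; 1 - (1 - R^2)\,\frac{n - 1}{n - k - 1}
```

where the y_i are the observed values, the predicted values carry a hat, the mean a bar, n is the number of observations and k the number of regressors; the corrected (adjusted) version penalizes the mechanical gain in fit that comes from adding further regressors.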


However, simply on theoretical grounds, one may assume that their model does not fit the empirical data from the Dunedin sample very well, i.e. that the coefficient of determination tends towards 0 rather than towards 1. The reason is that with the biological variable (MAOA activity), an independent variable is included in the regression equation which has no direct effect on antisocial behavior and thus, taken by itself, makes no contribution of its own to the prediction of antisocial behavior. Instead, knowledge of the MAOA activity only helps to improve the prediction of antisocial behavior for persons who suffered childhood maltreatment. Strictly speaking, the influence of child maltreatment on antisocial behavior is only slightly specified through this knowledge.

In summary, it can be said that the contribution which knowledge of the genetic disposition makes towards improving the explanation of antisocial behavior must be estimated as minimal. Generally, the effect of the biological factor on antisocial behavior must be assessed as marginal.

6.2 Insufficient Control for Other Influencing Factors

It was already indicated above that the interaction effect between maltreatment and MAOA activity could not be verified in a number of replication studies. Also due to the outlined problems of validity and reliability of the indicators, it seems possible that Caspi et al. actually describe a spurious correlation between maltreatment and MAOA activity. In order to rule out spurious correlations, the influence of relevant third variables must be controlled.

Caspi et al. mention a control for third variables in their supplementary material (Caspi et al. 2002b, 4f.). They refer to empirical studies which suggest that IQ and growing up under favorable socioeconomic conditions influence antisocial behavior. Therefore, Caspi et al. alternately include two further variables in their regression model. First, the IQ of a test subject is added as a covariate. The outcome is: "The interaction effect between MAOA and maltreatment remained statistically significant and of equivalent magnitude after controlling for IQ" (ibid., 4).

The second variable, reflecting the socioeconomic conditions in which a test subject has grown up, is denominated as social class (ibid., 4). The measurement of this variable is described as follows: "The childhood social class variable used in our analyses is the average of the highest social class level of either parent, assessed repeatedly at the study members' birth and ages 3, 5, 7, 9, 11, 13, and 15" (ibid., 4). The meaning of social class hence remains vague. The addition of this variable to the regression model as a covariate yields the same result: the interaction effect between MAOA and maltreatment "remained statistically significant and of equivalent magnitude after controlling for childhood social class origins" (ibid., 5). Is the equivalent magnitude of the G x E interaction effect thereby statistically proven?


This method of controlling for the influence of other explanatory factors seems problematic. First, the meaning of the social class variable is not clear, and because of this, one does not know what exactly is controlled for. Secondly, the statistical coefficients which are necessary for an appropriate interpretation of the control for third variables are presented only insufficiently. Thirdly, as already indicated above, there is a variety of third variables (e.g., subjective norms and values, embeddedness in social networks, the size of the place of residence, the previous frequency of delinquencies, alcohol abuse, situational influences) which were not controlled for by Caspi et al. but which may be more relevant for the explanation of antisocial behavior than IQ or socioeconomic status. Insofar, the control for third variables that was performed seems insufficient. In general, the question arises why Caspi et al. do not use a multivariate regression analysis including a set of other potential explanatory factors for antisocial behavior.31
Obviously, the authors gathered a number of additional variables which might have functioned not only as control variables for the interaction effect but also as possible influencing factors for antisocial behavior. If the aim of empirical social research is to obtain explanations of a fact that are as robust and thorough as possible, one could have tested whether including IQ or socioeconomic status improves the explanation of antisocial behavior. The authors of the study passed on that opportunity.

In summary, it can be said that the reported control for other influencing factors is insufficient. The influence of a variety of relevant third variables is not controlled for. Furthermore, no attempt is made to improve the explanatory model, which presumably does not fit the empirical data very well, by including further relevant influencing factors.
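What such an extended control could look like is easy to sketch. The example below again uses invented data and invented stand-in variables (including one, network_embeddedness, standing for the kind of uncontrolled factor mentioned above); it merely shows the mechanics of entering further covariates next to the interaction term and comparing the coefficient of interest and the model fit across specifications, not a re-analysis of the actual study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 442
df = pd.DataFrame({
    "maltreatment": rng.integers(0, 3, n),
    "maoa_high": rng.integers(0, 2, n),
    "iq": rng.normal(100, 15, n),
    "social_class": rng.integers(1, 7, n),        # invented 6-level scale
    "network_embeddedness": rng.normal(0, 1, n),  # stand-in for an uncontrolled third variable
})
df["antisocial"] = (0.4 * df["maltreatment"]
                    - 0.2 * df["maltreatment"] * df["maoa_high"]
                    - 0.01 * df["iq"]
                    - 0.3 * df["network_embeddedness"]
                    + rng.normal(0, 1, n))

base = smf.ols("antisocial ~ maltreatment * maoa_high", data=df).fit()
full = smf.ols("antisocial ~ maltreatment * maoa_high + iq + social_class + network_embeddedness",
               data=df).fit()

# Does the interaction coefficient stay of 'equivalent magnitude', and does the fit improve?
print("interaction w/o covariates:", round(base.params["maltreatment:maoa_high"], 3))
print("interaction with covariates:", round(full.params["maltreatment:maoa_high"], 3))
print("adjusted R^2:", round(base.rsquared_adj, 3), "->", round(full.rsquared_adj, 3))
```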

7 Summary and Outlook

Let us summarize the extensive criticism of the study by Caspi et al. (2002a). The following four methodological deficits can be named:

- insufficient definition of the variables used
- low validity and reliability of indicators
- selective sampling of test subjects
- insufficient control of third variables

31 It might be argued that another study by Guang Guo and colleagues uncovered evidence that the MAOA gene (among two other specific gene variations) has a significant impact on violence by using multivariate analysis (Guo et al. 2008). However, the study by Guo et al. suffers from the same methodological problems discussed above. Guo and colleagues also use a selected sample of people, excluding all females (ibid., 549), and take measures against population stratification (ibid., 553f.). A specific problem of this study is that the definition of MAOA activity by Guo et al. differs from that applied by Caspi et al. and other researchers. They use only the very rare allele with 2 repeats as the independent variable in their analysis (see ibid., 554). In the sample of 1,126 males, only 11 people (0.98 % of the participants) carry this specific VNTR in the MAOA promoter region. The validity and reliability of this modified variable seem problematic because other researchers found the same low MAOA activity for the longer alleles with 3 and 5 repeats as for the allele with 2 repeats.

Due to these methodological flaws, the study by Caspi et al. cannot be regarded as sound evidence for the influence of a genetic disposition on human behavior. Moreover, I advocate the thesis that such methodological insufficiencies can generally be found in studies which postulate an influence of a genetic disposition on human behavior. Apart from the methodological and statistical deficits, doubts also arise concerning general concepts and practices of behavioral genetics. This applies especially to the application of the gene-environment interaction concept.

It was shown that the statistical interaction effect between MAOA activity and childhood maltreatment is only a marginal statistical effect in the context of the explanation of antisocial behavior: it merely specifies the influence of childhood maltreatment on antisocial behavior, whereas antisocial behavior is determined by a large number of influencing factors. Generally, in the case of multifactorially influenced facts, the amount of information gained by the application of the G x E interaction concept is estimated as rather limited (Zammit et al. 2010, 66).32

Furthermore, when using the gene x environment interaction concept, the following has to be considered: as was shown using the example of the variable childhood maltreatment, the detection of a statistical G x E interaction effect depends on the definition of the interacting environmental variable. In the study by Caspi et al., this is problematic insofar as the genetic disposition does not itself have a direct effect on antisocial behavior. If the definition of the environmental variable, and thus the selection of the indicators, does not fit, it might no longer be possible to detect a statistical effect of the genetic disposition. This is especially true if one draws on a changeable theoretical construct as the interacting environmental factor, such as childhood maltreatment in the present case.33
32 According to Zammit et al., knowledge gains about underlying biological mechanisms can be expected if the interaction effect found is a qualitative interaction, meaning that different alleles produce exactly opposite effects (e.g., if the allele for low MAOA activity increased the influence of maltreatment while the allele for high MAOA activity weakened this influence; however, only a weakening effect of the allele for high MAOA activity was found, not the opposite one). In general, Zammit et al. assume that qualitative interactions between genes and environmental variables hardly ever occur (Zammit et al. 2010, 66).
33 The problems mentioned here apply generally to the explanation of social constructs and thus also concern the dependent variable which is to be explained. A statistical proof that specific alleles of one or more genes are responsible for differences in theoretical constructs, such as antisocial, criminal or moral behavior, but also intelligence etc., always also depends on the underlying definition of the variable which is to be explained. It should always be kept in mind that this definition is historically and culturally changeable.


Referring to the new molecular genetic knowledge based on the Human Genome Project, completed in 2003, as well as follow-up studies, it seems ever more improbable that specific genes influencing (im)moral or any other behavior will be discovered.34 This can be expected because the knowledge and insights about what a gene is have changed in recent years (see Pearson 2006). Meanwhile, molecular geneticists presume a very complex gene expression in which, apart from the chemical structure of a gene, the spatial and temporal expression patterns also seem important for the development of an individual (Keller 2001, 8). Today, the assumption that a genotype strictly predicts a phenotype applies only to the small number of monogenic diseases (Propping and Nöthen 2003, 183). Due to this change in the understanding of genes, but also due to the methodological problems and difficulties presented here, the research potential of biologically oriented approaches may lie less in efforts to identify specific genes or specific areas of the brain associated with a specific behavior.35 Theories of action in the social sciences (e.g., the above-quoted theory of planned behavior by Ajzen, but also a number of other approaches) often postulate assumptions about cognitive processes. Although these are hypothetical, they might be empirically tested or corrected with the help of neuroscience. In addition, there are many definitions of terms that concern moral cognitive processes but are not easily differentiated (such as internalization, values, virtues, preferences, etc.). Neurobiological investigations might examine and modify such definitions and offer new approaches and theories.

What role could philosophy play here? In my opinion, philosophy is well advised to incorporate and to reflect on insights from the empirical sciences concerning moral capacity or human behavior in general, and to work out their implications for ethical and moral issues. But philosophy should not content itself with that.

If philosophy opens itself to the results of empirical research, it should also, and especially, turn to the empirical research methods applied and adopt them. In order to be able to evaluate an empirical finding concerning the causes of human morality, one needs to know how this empirical finding was achieved. In order to be able to assess the scope, validity and problems of empirical studies on moral action, (statistical-empirical) methodological competence is required. If philosophy refrains from this and only takes note of the results of the empirical studies, it deprives itself of a large field of criticism. Therefore, philosophy should reply to the advance of the empirical sciences into the field of morality with an advance of its own into the field of the empirical sciences, by extending its methodological competence. Philosophy could thus become a critical and indispensable companion of the empirical sciences.

34 Results of the Human Genome Project are discussed in Honnefelder and Propping (2001) and Honnefelder et al. (2003). For a historical review of the concept of inheritance, see Rheinberger and Müller-Wille (2009).
35 An analogous argument applies to neurobiological research which associates specific areas of the brain with specific behavior; only the biological object of investigation is substituted. Instead of a certain DNA section, certain areas of the brain are associated with behavior in these studies. However, as soon as one attempts to extrapolate from differences in the associated areas of the brain to behavioral differences, the same difficulties of furnishing statistical proof and the same methodological deficits occur.


References

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes 50(2), 179–211. doi: 10.1016/0749-5978(91)90020-T.
American Psychiatric Association (1994). Diagnostic and Statistical Manual of Mental Disorders. Fourth Edition. Washington, DC: American Psychiatric Association.
Anderson, J., Williams, S., & McGee, R. (1989). Cognitive and Social Correlates of DSM-III Disorders in Preadolescent Children. Journal of the American Academy of Child & Adolescent Psychiatry 28(6), 842–846.
Asendorpf, J. (2007). Interaktion und Kovariation von Genom und Umwelt. In M. Hasselhorn & W. Schneider (Eds.), Handbuch der Entwicklungspsychologie (pp. 119–128). Göttingen, Bern, Wien, Paris, Oxford, Prag, Toronto, Cambridge, Amsterdam, Kopenhagen: Hogrefe.
Asendorpf, J. (2008). Evolutionspsychologie und Genetik der Entwicklung. In R. Oerter & L. Montada (Eds.), Entwicklungspsychologie (pp. 49–66). Weinheim, Basel: Beltz.
Backhaus, K., Erichson, B., Plinke, W., & Weiber, R. (2011). Multivariate Analysemethoden. Eine anwendungsorientierte Einführung. Heidelberg, Dordrecht, London, New York: Springer. doi: 10.1007/978-3-642-16491-0.
Baltes-Götz, B. (2009). Moderatoranalyse per multipler Regression mit SPSS. Trier: Universitäts-Rechenzentrum Trier.
Best, H., & Kroneberg, C. (2012). Die Low-Cost-Hypothese: Theoretische Grundlagen und empirische Implikationen. Kölner Zeitschrift für Soziologie und Sozialpsychologie 64(3), 535–561. doi: 10.1007/s11577-012-0174-5.
Böllinger, L., Jasch, M., Krasmann, S., Pilgram, A., Prittwitz, C., Reinke, H., & Rzepka, D. (Eds.) (2010). Gefährliche Menschenbilder: Biowissenschaften, Gesellschaft und Kriminalität. Baden-Baden: Nomos.
Brunner, H. G., Nelen, M., Breakefield, X. O., Ropers, H. H., & van Oost, B. A. (1993). Abnormal Behavior Associated with a Point Mutation in the Structural Gene for Monoamine Oxidase A. Science 262, 578–580.
Büchner, P., & Fuhs, B. (1996). Der Lebensort Familie. Alltagsprobleme und Beziehungsmuster. In P. Büchner, B. Fuhs & H.-H. Krüger (Eds.), Vom Teddybär zum ersten Kuss. Wege aus der Kindheit in Ost- und Westdeutschland (pp. 159–200). Opladen: Leske und Budrich.
Bussmann, K.-D. (2007). Gewalt in der Familie. In J. Ecarius (Ed.), Handbuch Familie (pp. 637–652). Wiesbaden: VS Verlag für Sozialwissenschaften.
Cases, O., Seif, I., Grimsby, J., Gaspar, P., Chen, K., Pournin, S., Müller, U., Aguet, M., Babinet, C., Shih, J. C., & De Maeyer, E. (1995). Aggressive Behavior and Altered Amounts of Brain Serotonin and Norepinephrine in Mice Lacking MAOA. Science 268, 1763–1766. doi: 10.1126/science.7792602.
Caspi, A., McClay, J., Moffitt, T. E., Mill, J., Martin, J., Craig, I. W., Taylor, A., & Poulton, R. (2002a). Role of Genotype in the Cycle of Violence in Maltreated Children. Science 297, 851–854. doi: 10.1126/science.1072290.
Caspi, A., McClay, J., Moffitt, T. E., Mill, J., Martin, J., Craig, I. W., Taylor, A., & Poulton, R. (2002b). Description of Methods and Measurements Used in the Dunedin Multidisciplinary Health and Development Study (Supplementary Material to Caspi et al. 2002a). http://www.sciencemag.org/content/suppl/2002/08/01/297.5582.851.DC1/CaspiSuppl.pdf. Accessed 30 January 2014.
Caspi, A., Sugden, K., Moffitt, T. E., Taylor, A., Craig, I. W., Harrington, H. L., McClay, J., Mill, J., Martin, J., Braithwaite, A., & Poulton, R. (2003). Supplementary Material: Material and Methods, 5HTT X Life Stress (Supplementary Material to Caspi, A. et al., Influence of Life Stress on Depression: Moderation by a Polymorphism in the 5-HTT Gene. Science 301, 386–389. doi: 10.1126/science.1083968). http://www.sciencemag.org/content/suppl/2003/07/16/301.5631.386.DC1/SOM.Caspi.rev.pdf. Accessed 30 January 2014.
Deckert, J., Catalano, M., Syagailo, Y. V., Bosi, M., Okladnova, O., Di Bella, D., Nöthen, M. M., Maffei, P., Franke, P., Fritze, J., Maier, W., Propping, P., Beckmann, H., Bellodi, L., & Lesch, K.-P. (1999). Excess of High Activity Monoamine Oxidase A Gene Promoter Alleles in Female Patients with Panic Disorder. Human Molecular Genetics 8(4), 621–624. doi: 10.1093/hmg/8.4.621.
Diekmann, A. (2010). Empirische Sozialforschung: Grundlagen, Methoden, Anwendungen. Reinbek bei Hamburg: Rowohlt.
DMHDRU (2014). The Dunedin Multidisciplinary Health & Development Study. Dunedin Multidisciplinary Health & Development Research Unit, University of Otago. http://dunedinstudy.otago.ac.nz/studies/assessment-phases. Accessed 18 September 2014.
Durkheim, E. (1972). Erziehung, ihre Natur und ihre Rolle. In E. Durkheim, Erziehung und Soziologie (pp. 20–49). Düsseldorf: Schwann.
Feehan, M., McGee, R., Williams, S. M., & Nada-Raja, S. (1994). DSM-III-R disorders in New Zealand 18-year-olds. Australian and New Zealand Journal of Psychiatry 28, 87–99.
Feresin, E. (2009). Lighter sentence for murderer with bad genes. http://www.nature.com/news/2009/091030/full/news.2009.1050.html. Accessed 27 January 2014.
Frost, L. A., Moffitt, T. E., & McGee, R. (1989). Neuropsychological correlates of psychopathology in an unselected cohort of young adolescents. Journal of Abnormal Psychology 98, 307–313.
Geulen, D. (2007). Sozialisation. In H. Joas (Ed.), Lehrbuch der Soziologie (pp. 137–158). Frankfurt am Main, New York: Campus Verlag.
Grün, K.-J., Friedmann, M., & Roth, G. (Eds.) (2008). Entmoralisierung des Rechts: Maßstäbe der Hirnforschung für das Strafrecht. Göttingen: Vandenhoeck & Ruprecht.
Guo, G., Roettger, M. E., & Cai, T. (2008). The Integration of Genetic Propensities into Social-Control Models of Delinquency and Violence among Male Youths. American Sociological Review 73(4), 543–568. doi: 10.1177/000312240807300402.
Haas, H., Farrington, D. P., Killias, M., & Sattar, G. (2004). The Impact of Different Family Configurations on Delinquency. British Journal of Criminology 44(4), 520–532. doi: 10.1093/bjc/azh023.
Haberstick, B. C., Lessem, J. M., Hopfer, C. J., Smolen, A., Ehringer, M. A., Timberlake, D., & Hewitt, J. K. (2005). Monoamine Oxidase A (MAOA) and Antisocial Behaviors in the Presence of Childhood and Adolescent Maltreatment. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics 135(1), 59–64.
Hamer, D., & Sirota, L. (2000). Beware the chopsticks gene. Molecular Psychiatry 5, 11–13.
Honnefelder, L., & Propping, P. (Eds.) (2001). Was wissen wir, wenn wir das menschliche Genom kennen? Köln: DuMont.
Honnefelder, L., Mieth, D., Propping, P., Siep, L., & Wiesemann, C. (Eds.) (2003). Das genetische Wissen und die Zukunft des Menschen. Berlin, New York: de Gruyter.
Hurrelmann, K. (2014). Sozialisation. In G. Endruweit, G. Trommsdorff & N. Burzan (Eds.), Wörterbuch der Soziologie (pp. 444–451). Konstanz, München: UVK/Lucius.
Keller, E. F. (2001). Das Jahrhundert des Gens. Frankfurt am Main, New York: Campus.
Kunz, K.-L. (2008). Kriminologie: eine Grundlegung. Bern, Stuttgart, Wien: Haupt.
Lamnek, S., Luedtke, J., Ottermann, R., & Vogl, S. (2013). Tatort Familie. Häusliche Gewalt im gesellschaftlichen Kontext. Wiesbaden: Springer VS, Verlag für Sozialwissenschaften.
LaPiere, R. T. (1934). Attitudes versus actions. Social Forces 13(2), 230–237. doi: 10.2307/2570339.
Lukesch, H. (1976). Elterliche Erziehungsstile. Psychologische und soziologische Bedingungen. Stuttgart, Berlin, Köln, Mainz: Kohlhammer.
McGee, R., Feehan, M., Williams, S., Partridge, F., Silva, P. A., & Kelly, J. (1990). DSM-III Disorders in a large sample of adolescents. Journal of the American Academy of Child and Adolescent Psychiatry 29(4), 611–619.
Moffitt, T. E., Caspi, A., Rutter, M., & Silva, P. A. (2006). Sex Differences in Antisocial Behaviour: Conduct Disorder, Delinquency, and Violence in the Dunedin Longitudinal Study. Cambridge: Cambridge University Press.
Montagu, M. F. A. (1979). Das Verbrechen unter dem Aspekt der Biologie. In F. Sack & R. König (Eds.), Kriminalsoziologie (pp. 226–243). Wiesbaden: Akademische Verlagsgesellschaft.
Pearson, H. (2006). Genetics: What is a Gene? Nature 441, 398–401. doi: 10.1038/441398a. http://www.nature.com/nature/journal/v441/n7092/full/441398a.html. Accessed 02 October 2014.
Propping, P., & Nöthen, M. M. (2003). Wozu Forschung mit genetischen Daten und Informationen? In L. Honnefelder, D. Mieth, P. Propping, L. Siep, & C. Wiesemann (Eds.), Das genetische Wissen und die Zukunft des Menschen (pp. 177–185). Berlin, New York: de Gruyter.
Rapp, D. W. (1961). Childrearing Attitudes of Mothers in Germany and the United States. Child Development 32(4), 669–678. doi: 10.2307/1126554.
Rheinberger, H.-J., & Müller-Wille, S. (2009). Vererbung. Geschichte und Kultur eines biologischen Konzepts. Frankfurt am Main: Fischer-Taschenbuch.
Rutter, M., Giller, H., & Hagell, A. (1998). Antisocial Behavior by Young People. Cambridge:
Cambridge University Press.
Sabol, S. Z., Hu, S., & Hamer, D. (1998). A Functional Polymorphism in the Monoamine Oxidase A Gene Promoter. Human Genetics 103(3), 273279.
Schneewind, K. W., & Ruppert, S. (1995). Familien gestern und heute: ein Generationenvergleich
ber 16 Jahre. Mnchen: Quintessenz.
Schtze, Y. (2002). Zur Vernderung im Eltern-Kind-Verhltnis seit der Nachkriegszeit. In R.
Nave-Herz (Ed.), Kontinuitt und Wandel der Familie in Deutschland. Eine zeitgeschichtliche Analyse (pp. 7198). Stuttgart: Lucius und Lucius.
Schwartz, J. (2005). How much can genetics tell us about the causes of crime and violence? Criminal Justice Matters 62(1), 2042. doi,10.1080/09627250508553094.

236

Stefan Walter

Schwind, H.-D. (2011). Kriminologie, Eine praxisorientierte Einfhrung mit Beispielen. Heidelberg, Mnchen, Landsberg, Frechen, Hamburg: Kriminalistik.
Silva, P. A., & Stanton, W. R. (1996). From Child to Adult, The Dunedin Multidisciplinary Health
and Development Study. Oxford: Oxford University Press.
Straus, M. A., Sugarman, D. B., & Giles-Sims, J. (1997). Spanking by Parents and Subsequent
Antisocial Behavior of Children. Archives of Pediatrics & Adolescent Medicine 151(8), 761
767. doi:10.1001/archpedi.1997.02170450011002.
Tillmann, K.-J. (2010). Sozialisationstheorien, Eine Einfhrung in den Zusammenhang von Gesellschaft, Institution und Subjektwerdung. Reinbek bei Hamburg: Rowohlt-Taschenbuch.
Wekerle, C., Miller, A. C., Wolfe, D. A., & Spindel, C. B. (2006). Childhood maltreatment. Cambridge (Mass.), Gttingen, Ashland (OH), Toronto, Ontario, Bern: Hogrefe.
Weber, M. (1978). Economy and Society, An outline of interpretative sociology. Edited by G. Roth
& C. Wittich. Berkeley, Los Angeles, London: University of California Press. German edition: Weber, M. (1956). Wirtschaft und Gesellschaft, Grundri der verstehenden Soziologie.
Tbingen: Mohr.
Widom, C. S. (1989). The Cycle of Violence. Science 244, 160166. doi,10.1126/science.2704995.
Widom, C. S., & Brzustowicz, L. M. (2006). MAOA and the Cycle of Violence, Childhood
Abuse and Neglect, MAOA Genotype, and Risk for Violent and Antisocial Behavior. Biological Psychiatry 60(7), 684689. doi,10.1016/j.biopsych.2006.03.039.
Zammit, S., Owen, M. J., & Lewis, G. (2010). Misconceptions about Gene-Environment Interactions in Psychiatry. Evidence-Based Mental Health 13(3), 6568. doi,10.1136/ebmh1056.

Part III
Reassessment of Established Terminology
in Modern Debates

Aristotle's Moral Philosophy and Moral Psychology

A Basic Terminology
Friedo Ricken
Munich School of Philosophy
email: friedo.ricken@hfph.de

Abstract
Cicero's De officiis and Aristotle's Nicomachean Ethics have influenced the history of moral philosophy like no other works. In contrast to Cicero, Aristotle developed a differentiated terminology of moral psychology, which was later extended most notably by Thomas Aquinas. Therefore, it is justified to take the terms and theses of the Nicomachean Ethics as a starting point in a discussion about the relation between psychology and ethics.

Morals, Ethics, Metaethics

What is the subject of the work which has been passed down to us under the title of
Nicomachean Ethics (NE)? What is ethics? In Metaphysics (Met.) VI 1, Aristotle
distinguishes three forms of thought. "All thought (dianoia) is either practical or productive (poietike) or theoretical"¹ (1025b25). As an example of theoretical thought,
Aristotle mentions physics, i.e. the study of substances which have the principle of their
rest and movement present in themselves. Productive and practical thought are distinguished from this form of thought by the fact that the principle of movement here is in
the thinkers themselves (1025b18-24). The carpenter asks herself what she needs to do
in order to produce a bed from a number of planks; practical thought asks which decision is the right one in the given situation. Therefore, productive and practical thought
is distinguished from theoretical thought by the question to which it seeks an answer.
Theoretical thought asks "What is the case?"; productive and practical thought asks (in a broad sense) "What shall I do?" The science of the soul is a theoretical science; insofar as that part of the soul is concerned which is necessarily linked with matter, it belongs to physics (1026a5f.).

¹ The translations by W. D. Ross (Nicomachean Ethics and Metaphysics) and J. A. Smith (De anima) were used in this chapter.
What is ethics? How does the concept of ethics relate to the broad concept of practical thought? So far, we have only learned that the aim of practical thought is the decision
(1025b24). How does Aristotle describe the subject of his treatise in the first chapter of
the Nicomachean Ethics? Ethics is an inquiry (1094a1) or a science (1094a26). It speaks
about the actions that occur in life; they are its subject and the starting point for the
conclusions it draws (1095a3f.). Anyone who is to listen intelligently to lectures about
ethics needs to have received a good education: she must have been brought up in
good habits (ethos) (1095b4). In this sense, ethics is a lecture about actually experienced morality, mores, morals.
But what is the aim of this lecture? How does the practical thought which is pursued
methodically or as a science, i.e. ethics, differ from the practical thought in everyday
life? "[E]very science which is ratiocinative or at all involves reasoning deals with causes and principles" (Met. 1025b6f.). Every action aims at some good; this good is the
final cause of the action. The cause for which practical thought pursued as a science
asks is the final goal of any action, "the highest of all goods achievable by action" (NE 1095a16f.). Everyday morality provides different answers to this question; it is contested which way of life is the best: a life which aims at pleasure, or a life which aims at
honor, or a life which aims at science? Ethics has to decide which answer is the right
one. This is of great importance for everyday morality because if we know the final goal
of an action, we will, "like archers who have a mark to aim at, be more likely to hit upon what is right" (1094a23f.). Therefore, the aim of ethics is not knowledge but action
(1095a5f.).
Practical thought can only be pursued as a science if the meaning of the words with
which it works is clarified. In Plato's Meno, Socrates explains that he could only answer
the question whether virtue is teachable if he knew what virtue is (71a6f.). Ethics presupposes that the language of morals has been clarified, and this is one of the tasks of
metaethics.

Happiness

The language of morals has a name for the highest good achievable by action: happiness. It also gives an explanation of what is to be understood as happiness: the good life,
in the double meaning of acting well and doing well. For his definition, Aristotle takes
the different types of acting and producing as a starting point. What is their respective
good? That for whose sake everything else is done; this is e.g. health in medicine, victory in strategy, a house in architecture. In a formal definition, it is the goal for whose sake everything else is done. Through the more specific definition of the concept of
goal, Aristotle gains two formal criteria which the highest good has to meet. A distinction needs to be drawn between (a) goals which are themselves means to a further goal, e.g.
tools; (b) goals which we pursue for their own sake but which are only one component
of the good life, e.g. pleasure or honor; (c) the goal which comprises everything we aim
at for its own sake. This is happiness. It is the perfect goal in the sense that it is aimed at
exclusively for its own sake; it is neither a means to another goal nor a component of a
more comprehensive goal. This is the first formal criterion which a substantive definition of happiness has to meet. The second criterion is self-sufficiency. It demands a good which satisfies all aspirations and needs of man; a good which is sufficient to make human life worth living; a good whose possession eliminates every lack. However, happiness does not only have to satisfy the demand for self-sufficiency for the individual; it has a social dimension. "[B]y self-sufficient we do not mean that which is sufficient for a man by himself, for one who lives a solitary life, but also for parents, children, wife, and in general for his friends and fellow citizens, since man is born for citizenship" (1097b8-11).
Which substantive definition of the highest good meets these formal criteria? The
language of morals says that happiness is the good life, but what is the good life? Aristotle replies by means of analogies. He takes the different professions and arts as a starting point. The good of an artist as artist consists in his work; this is the statue for the
sculptor and flute playing for the flute player. The same applies to an organ; the good of
the eye is seeing and the good of the foot is walking. What is the good of the human
being as human being? Human beings are living things; the work of the living is life; to
live is the being of the living (De anima 415b13). But there are different forms of life,
and we ask for the form specific to humans. Humans share the life of nutrition and
growth with everything that lives and the life of perception with animals. What remains
is the life of rationality, and it needs to be distinguished between the capability and the
activity, which is life in the more proper sense. Therefore, the life or being and thus the
function of humans is an activity of the soul in accordance with rationality. However,
the value of an activity does not only depend on the capability by which it is performed;
it also depends on the qualification of the person who performs it. Thus, a distinction is
to be made between the playing of a flutist and the playing of a good flutist. Both play
the flute, but the good flutist is characterized by playing well, and the good playing has
a greater value. "[H]uman good turns out to be activity of soul in accordance with virtue (aretē), and if there are more than one virtue, in accordance with the best and most complete" (1098a16-18).


Virtue

The Greek word aretē, unlike the English word virtue with which it is translated, is not restricted to moral usage. "[E]very aretē both brings into good condition the thing of which it is the aretē and makes the work of that thing be done well; e.g. the aretē of the eye makes both the eye and its work good; for it is by the aretē of the eye that we see well" (1106a15-19). Happiness is an activity of humans in accordance with aretē; therefore, in order to define the term happiness more closely, we have to clarify the term aretē, namely the term human aretē, i.e. virtue. "By human aretē we mean not that of the body but that of the soul; and happiness also we call an activity of soul" (1102a15-17). Therefore, ethicists have to deal with the soul, but, as Aristotle emphasizes, only to
the extent which is necessary for their questions. He refers to his exoteric works, i.e. the
works which are not intended for use at school but for a wider public; according to
Aristotle, that which is stated there would suffice for the purpose in hand.
A distinction is to be made between an irrational component (alogon) and a rational
component (logon echon). Whether these components are separated from one another
in the same way as the parts of the body, or whether it is only a terminological distinction, just as we may refer to the same line as convex or concave, is of no importance
for the present study. The cause of nutrition and growth belongs to the irrational component; its aretē is not a human aretē because this part is most active in sleep, while a
good and a bad person are least distinguishable in sleep.
"There seems to be also another irrational element in the soul, one which in a sense, however, shares in a rational principle" (1102b13f.). Aristotle takes the phenomena of continence (enkrateia) and incontinence (akrasia) as a starting point. In both
cases, there is a conflict between reflection (logismos) and desire. The incontinent person knows that what he does is bad, and he does it anyway due to desire; the continent
person, on the other hand, knows that desires are bad and does not obey them due to
reason (cf. 1145b12-14). In the incontinent and in the continent person, we praise the
rational part of the soul because it urges them towards the morally right. "[B]ut there is found in them also another element naturally opposed to the rational principle, which fights against and resists that principle [...] Now even this seems to have a share in a rational principle, as we said; at any rate in the continent man it obeys the rational principle" (1102b16-27).
Because this irrational part of the soul partakes in reason, we can also count it
among the rational part of the soul. In that case, we have to differentiate the rational
component of the soul into the part which is rational in the proper sense and the part
which can listen to reason, and this is the element with appetites (orektikon). The virtues are differentiated accordingly. Virtues are praiseworthy attitudes (hexis). We call
the virtues of the rational component of the soul which is rational in itself dianoetic or intellectual virtues, i.e. virtues of thought or rational virtues, and the virtues of the element with appetites ethical or moral virtues, i.e. virtues of character.

Ethical Virtue

"[I]ntellectual virtue in the main owes both its birth and its growth to teaching (for which reason it requires experience and time), while moral virtue comes about as a result of habit, whence also its name (ēthikē) is one that is formed by a slight variation from the word ethos (habit)" (1103a15-18). From this, Aristotle concludes: character
traits (virtue and vice) do not exist by nature, but are acquired. That which exists by
nature cannot be changed by habituation. By nature, a stone moves downwards, and
this would not change even if it were thrown upwards countless times. The plasticity of
the desiring element is natural, but the attitudes it takes up are not natural but a result
of habituation. Aristotle sees an analogy between the acquisition of character traits and
the acquisition of the arts (technē). "For the things we have to learn before we can do them, we learn by doing them, e.g. men become builders by building and lyre-players by playing the lyre; so too we become just by doing just acts, temperate by doing temperate acts, brave by doing brave acts" (1103a32-b2). The quality of the person who performs an activity results from the quality of the activity; "men will be good or bad builders as a result of building well or badly [...]; by doing the acts that we do in our transactions with other men we become just or unjust" (1103b10-16).
Aristotle refers to the aim of legislation and the need of a teacher in the arts. The aim
of legislation is to turn citizens into good citizens by habituation; only a person who
apprentices to a master can learn a craft or an art appropriately and become a good
builder or a good lyre player. The activity gets its quality through the norm, and the
quality of the acting person results from the quality of the activity.
What is ethical virtue? Aristotle assumes that there are three things in the desiring
part of the soul: passions (pathos), potentialities (dynamis) and attitudes (hexis). Virtues are neither passions nor potentialities; therefore, they are attitudes. Examples of
passions include appetite, anger, fear, envy, love, hatred; potentialities are that due to
which we can suffer a passion; attitudes are that due to which we act in a good or in a
bad way towards the passions. For example, we act in a good way towards anger if it is
appropriate to the situation it concerns; we act in a bad way if it is too intense or too
weak. Passions are not virtues or vices because we are not praised or criticized merely due to passions as such; we are not praised and criticized for the fact that we are angry but for the way we are angry; by contrast, virtue and vice are subject to praise and criticism. In contrast to virtues, passions are not based on a decision. Passions are processes; virtues and vices are states. Virtues and vices are not potentialities because we are neither praised nor criticized for the fact that we can suffer passions; praise and criticism refer to how we react affectively. Potentialities are natural; virtue and vice result
from repeated actions.
Ethical virtue is the attitude due to which we act in a right way towards the passions.
"For instance, both fear and confidence and appetite and anger and pity and in general pleasure and pain may be felt both too much and too little, and in both cases not well; but to feel them at the right times, with reference to the right objects, towards the right people, with the right motive, and in the right way, is what is both intermediate and best, and this is characteristic of virtue" (1106b18-23).

Desire and Passion

The Greek word pathos is derived from the verb paschein, "to suffer"; a pathos in the
broadest sense is a suffering, something that is expressed in the linguistic voice of the
passive instead of the active. In our context, it is a suffering of the soul, and sensory
perception is included in this meaning (De anima 403a7; 416b33). A further limitation
occurs if we only consider that which the desiring element suffers. Aristotle lists these forms of suffering in NE II 4, where he inquires into the concept of ethical virtue. "By passions (pathos) I mean [desire], anger, fear, confidence, envy, joy, friendly feeling, hatred [...] and in general the feelings that are accompanied by pleasure or pain" (1105b21-23). Here, desire (epithymia) is counted among the passions. The phenomena listed are
passive in the sense that they are not based on a decision which constitutes an action.
From this concept, we have to distinguish the concept of passion in the strictest or
proper sense, which does not include desire.
Animals are distinguished from plants by the faculty of sensation (aisthēsis). The elementary sense which all living things have and with which they perceive their food is
the sense of touch. A being that has sensation experiences pleasure (hēdonē) and pain and therefore has desire, "for desire (epithymia) is just appetite (orexis) of what is pleasant (hēdy)" (De anima 414b6). Desire is an exclusively sensuous appetition; by contrast,
passion in a certain sense listens to reason, as Aristotle shows using the example of
anger (thymos). It behaves like an overeager servant who is given an order by his master
and does not wait until the master has finished speaking but runs out after the first few
words and then acts out the order in a wrong way; or like dogs who bark as soon as
they hear a sound, without looking to see if it is a friend. Anger perceives an insult or a
slight but it does not wait for the order of reason: anger, reasoning as it were that anything like this must be fought against, boils up straightway. On the other hand, desire
(epithymia) does not conclude that it must do something; desire, if argument or perception merely says that an object is pleasant, springs to the enjoyment of it. Therefore
anger obeys the argument in a sense, but desire does not (1149a35-b2).
Which discipline deals with the passions in the proper sense? In his work On the
Soul, Aristotle lists various definitions of anger (orgē). The definition of the dialectician is "appetite for returning pain for pain", and that of the physicist (physikos) is "a boiling of the blood or warm substance surrounding the heart". "[One] assigns the material conditions, [the other] the form or formulable essence" (403a30-b2). Which of the two is
responsible for ethics? Which of the two does the ethicist address when asking for the
definition of passions? Aristotle analyzes the passions in the second book of his Rhetoric, the doctrine of the art of persuasion. Rhetoric is an offshoot of dialectic and also of
ethical studies. Ethical studies may fairly be called political (1356a25-27). The politician must be capable of persuading, and one of the means to achieving this is influencing the passions of the listeners. Dialectic is a method of argumentation. Aristotle distinguishes between a proof (apodeixis) and a dialectic syllogism. A syllogism is a proof
if its premises are true and primary, i.e. not deduced statements. On the other hand, a
dialectic syllogism draws conclusions from accepted opinions (endoxa). "[T]hose opinions are 'generally accepted' which are accepted by every one or by the majority or by the philosophers, i.e. by all, or by the majority, or by the most notable and illustrious of them" (Topics 100b21-23).
The passions, according to the general definition in the Rhetoric, are something that
causes a change in humans which affects their judgement and which is accompanied by
pleasure and pain (cf. 1378a19-21). The Rhetoric defines anger (orgē) as "an impulse, accompanied by pain, to a conspicuous (phainesthai) revenge for a conspicuous slight directed without justification towards what concerns oneself or towards what concerns one's friends" (1378a30-32). Pain is caused by the opinion of having been neglected or disparaged; pleasure is caused by the idea of revenge. In contrast to desire (epithymia)
which is based exclusively on a sensory impression, anger is a propositional attitude. The
angry person is convinced that she experienced contempt, neglect or disparagement; that
these constitute an injustice towards her; that that which she wants to inflict on the other
person is a punishment. Whether these propositions, which are assumed to be true, are
indeed true remains open; not the passion, but only reason can decide on that question.
Reason has the opportunity to calm the passion by disproving the convictions on which it
rests. Aristotle designates growing angry as the opposite of growing calm and anger as the
opposite of calmness. "Now we get angry with those who slight us; and since slighting is a voluntary act, it is plain that we feel calm towards those who do nothing of the kind, or who do or seem to do it involuntarily" (1380a9-12).


Action and Decision

"All thought (dianoia) is either practical or productive (poietike) or theoretical" (Met. 1025b25). Practical and productive thought have in common that the principle of
movement is in the thinker; but how do they differ? "Intellect itself, however, moves nothing, but only the intellect which aims at an end and is practical; for this rules the productive intellect, as well" (NE 1139a35-b1). A person who produces something produces it in order to use it; in that sense, the product of the productive process is a
means to a superior end. By contrast, action refers to the end, namely good action
(eupraxia 1139b3) or happiness. Producing is about a certain, limited good, e.g. a house
or health; acting is about the good life in general (1140a28). Production skills (technē)
yield a good product; acting is the being of the acting person herself (cf. 1105a26-31); it
is the acting person herself who becomes good or bad through her actions. Good action
is to be understood in a comprehensive sense; it is the action of a human being insofar
as by nature, she is a being that lives in a society with other human beings (cf. 1097b8-11). The greatest of ethical virtues, justice, determines the relation to other humans; it is "another's good" (1130a3); "we call those acts just that tend to produce and preserve happiness and its components for the political society" (1129b17-19).
The origin of action (in the sense of the origin of movement) is decision, and the origin of decision is appetite and reasoning directed at an end. Appetite is determined by character. "This is why decision cannot exist either without reason (nous) and intellect (dianoia) or without a moral state" (1139a33f.). The decision can only be good if "the reason [is] true and the [appetite] right" (1139a24). Let us consider these two factors of the decision in detail.
In the third book of the Nicomachean Ethics, Aristotle initially describes the scope of
deliberation (bouleuesthai). We only deliberate about things which we can do ourselves. Here,
in turn, we do not deliberate when there are fixed rules, e.g. for spelling or grammar;
rather, the physician deliberates which therapy she should apply; the banker deliberates
which shares she should buy or sell; the helmsperson deliberates how she should act in
a given weather situation. It is not the ends that are the subject of deliberation but the means;
the physician does not deliberate whether she should cure. Rather, deliberation assumes a given aim and traces the causal chain until it reaches that which the acting
person can do herself in order to initiate this causal process. Decision is "deliberate desire of things in our own power" (1113a10f.). While the subject of decision is the
means, the wish refers to the end. One wishes that which one considers good or which
appears good, and this is determined by the ethical attitude. Aristotle uses a comparison. Sensations of taste and temperature are different in a diseased person and a
healthy person; a thing that is sweet to a healthy person is bitter to a diseased person; a
diseased person feels cold while exposed to a temperature that is pleasantly warm to a healthy person. How can the question be decided whether the dish is actually sweet or
bitter? The judgement of the healthy person is decisive. An analogous case can be made
for the wish. The character decides whether that which somebody considers to be good
is really good or merely an apparent good. "For each state of character has its own ideas of the noble and the pleasant, and perhaps the good man differs from others most by seeing the truth in each class of things, being as it were the norm and measure of them" (1113a31-33).

Moral Insight

Is deliberation as presented by Aristotle in NE III 5 a practical or rather a technical deliberation? The technē has areas where it needs to balance prospects of success and
risks, but in contrast to practical deliberation, it is always about a specific good; the
physician deliberates how to cure her patients and the helmsperson deliberates how to
bring passengers and crew safely into port. By contrast, acting is about the good life as a
whole; here, goods which come into conflict with one another have to be balanced, and
questions of justice deal with the conflict of what is good for me and what is good for
the others. Therefore, the sixth book distinguishes between producing and acting and it
matches them with two different dianoetic virtues: production skills (technē) and moral insight or practical wisdom (phronēsis). When we ask what the right practical deliberation is, we are referred to a virtue, and when we then ask what this virtue consists in,
we are answered that we should consider those people who have this virtue. "[P]ractical wisdom is a right rule about such matters" (1144b27f.). Regarding practical wisdom, "we shall get at the truth by considering who are the persons we credit with it" (1140a24f.). "[W]e think Pericles and men like him have practical wisdom, viz. because they can see what is good for themselves and what is good for men in general" (1140b8-10). For Aristotle, moral insight is a form of intellectual seeing. The good person is
characterized by "seeing the truth in each class of things" (1113a32f.). Practical wisdom is the virtue of the "eye of the soul" (1144a30), but the soul can only see correctly if it
has ethical virtue. Aristotle describes practical deliberation as a syllogism. '[S]ince the
end, i.e. what is best, is of such and such a nature' (1144a32f.), I have to do such and
such a thing. However, only the good person recognizes this end; wickedness causes us
to be deceived about the end of the action. What we see as happiness is determined by
our character. The end for the sake of which we do something is the starting point of
practical deliberation. "[B]ut the man who has been ruined by pleasure or pain forthwith fails to see any such originating cause, to see that for the sake of this or because of this he ought to choose and do whatever he chooses and does; for vice is destructive of the originating cause of action" (1140b17-20).


However, the primary subject of moral insight is not the first terms but the individual case, the conclusion of the practical deliberation, that which needs to be done here and now (1142a24f.). That permits us to understand why Aristotle uses the metaphors of seeing: "It is the senses which give us the most authoritative knowledge of particulars" (Met. 981b11). The starting point of moral insight is the individual case. "For the fact is the starting-point, and if this is sufficiently plain to him, he will not at the start need the reason as well" (NE 1095b6f.).

The New Synthesis in Moral Psychology versus Aristotelianism.
Content and Consequences
Kristján Kristjánsson
Jubilee Centre for Character and Virtues
School of Education, University of Birmingham, U.K.
email: k.kristjansson@bham.ac.uk

Abstract
The aim of this chapter is to explore the social consequences of recent developments in
moral psychology aimed at psychologizing morality: developments that Jonathan Haidt
terms "the new synthesis" (NS). As a prelude, I diagnose what in the content of the NS
undergirds those consequences and how it differs from the Aristotelian alternatives
with which it is commonly contrasted. More specifically, I explore the NS's take on
moral ontology, moral motivation, moral ecology and moral domains. In all cases, I
deem the response offered by the NS to radical rationalism hyperbolic and argue that
Aristotelianism provides a more plausible, if more moderate, alternative. In the final
section, I address the putative social consequences of the NS, both general consequences for public conceptions of the moral life and more specific consequences for moral
education at school. In both cases, I argue that the consequences of adopting the NS
position range from the unfortunate to the outright pernicious.

Introduction

This chapter owes its inception to a request for an exploration of the social consequences of recent developments in moral psychology: developments that, to varying
degrees, aim at psychologizing morality. The developments that have been gaining
currency in recent years are sweeping in compass; they have aptly, if ambitiously, been
named "the new synthesis in moral psychology" (Haidt 2007) and incorporate diverse philosophical and psychological insights. My aim, in a nutshell, is to critique this new synthesis from an Aristotelian perspective and to suggest Aristotelian alternatives.
Many of the concerns that I raise are not entirely new; they have been broached before
by Aristotelians. What remains is to synthesize and systematize those concerns in relation to the new synthesis: a task that I begin in this chapter although more work will
clearly be required to complete it.
I need to extend somewhat the remit of exploring putative social consequences. The
reason is that any such consequences must be understood as ramifications of the content of the new synthesis. Thus, although the chapter culminates in a section (Sect. 5)
which argues that the social consequences of the new synthesis range from the unfortunate to the outright pernicious, it behooves me to diagnose what, precisely, in the
content of the new synthesis undergirds and motivates those consequences, and how
it differs from the Aristotelian alternatives. I do so in Sect. 2 (a general discussion),
Sect. 3 (on moral ontology and moral motivation) and Sect. 4 (on moral ecology and
moral domains).
It remains to explain what I mean by "social consequences". If taken at face value as referring to consequences for, say, public policy and lay moral conceptions (or even for the level of general well-being in society), someone might question whether scholarly ideas in philosophy ever have salient consequences of that sort. The obvious counter-example of Marxism would not necessarily cut ice with sceptics, for they could argue that the Marxism which had such a profound influence on 20th century social history was not really Marxism qua philosophical theory (dialectical materialism) but rather Marxism qua economic-cum-political theory. After all, few common people were swayed towards revolutionary activity by reading Engels's puerile ruminations on
the nature of matter and consciousness. I return to the issue of the possible social ramifications of philosophical positions in Sect. 5; let it suffice to say here that I do, indeed,
believe that the new synthesis does have the potential to enact significant changes in
the public consciousness and that I find those changes disquieting. A significant portion of Sect. 5 will, however, be taken up by a discussion of consequences in a narrower
sense: consequences for the content of moral education at school. There, at least, few
would question the potential causal and logical links between scholarly ideas and practical consequences (for the sort of education that young people are offered in the classroom). Without getting unduly ahead of my argument, I can announce here that I also
deem those more specific social consequences disconcerting.


The New Synthesis and the New Academic Ecumenism

When recent developments in moral psychology are mentioned, few will doubt what
is being targeted. Jonathan Haidt calls it, as already noted, the "new synthesis" (hereafter referred to as the NS), and he has become its most vociferous spokesman. This synthesis, which has attracted a substantial following in recent years, is not only one of ethics
and science but also of different strands within psychology (personality, social and cultural psychology), or more precisely of sub-strands that are driven by anti-rationalist and evolutionary considerations, aligning them in a united front. Haidt (2013, 12) frankly admits that the scales fell from his eyes upon reading Dawkins's work on The selfish gene; suffice to say here that Charles Darwin might well be called the great-grandfather of the NS. Its more immediate ancestry can be traced, however, to the "affective revolution" of the 1980s and the concomitant cognitive dual-process theories, according to which our minds comprise both "an ancient and largely automatic affective system and a newer, weaker and slower reasoning system" (Haidt 2007, 998). In short,
Haidt and his colleagues tease out the theoretical and empirical implications of this
dualism and apply it to ethics.
Historically speaking, the NS has already revitalized the field of moral psychology and renewed its confidence, badly shaken after the demise of Kohlberg's priestly status in the 1980s. It is almost an understatement to say that the new approach has "an ambitious and diverse research agenda" (Haste 2013, 319). It offers a panoramic, all-embracing view of the field and already promises (or threatens, depending on one's outlook) to exert the sort of hegemony that characterized Kohlberg's halcyon days. It has its high priests in the psychologist Haidt and the philosophers Jesse Prinz and John Doris, along with a group of attending courtiers. This gathering is nowhere better seen than in the 2010 O.U.P. Moral psychology handbook, which showcases a rich tapestry of research findings and theoretical constructs, broadly falling within the rubric of the NS and bound to stir controversy among non-converts. I do not doubt Doris's aspiration to balanced reporting on controversial issues; however, as the handbook editor he also declares his disinclination to pursue the fool's errand of a pretense of impartiality (2010, 2). What is most conspicuously on display in this handbook is, therefore, a series of contributions by the usual suspects, between whom the dots can easily be joined to give an outline of the hard core of the NS.
I focus in this chapter on four main aspects of the hard core, having to do with (a)
moral ontology (driven by Prinz), (b) moral motivation (driven by Haidt), (c) moral
ecology or contextuality (driven by Doris) and (d) moral domains or foundations
(driven again by Haidt). On offer in (a) is hard sentimentalism on emotion as the donor
rather than the recorder of moral value; in (b) social intuitionism on moral motivation
as innate, emotion-driven and automatic (albeit socially conditioned), with a reduced role for reason; in (c) radical moral situationism which rejects the existence of global
traits of character; and in (d) a theory of modular moral domains, each with its own
emotional foundation, that operate more or less independently of one another.
The natural opponent of the NS would seem to be rationalism in all its guises, as represented inter alia by Plato and Kant and by extension Kohlberg (to the degree that he
adopted their rationalist assumptions). Hard sentimentalism is the diametrical opposite
of Platonic or Kantian hard rationalism on the nature of morality; social intuitionism
contravenes Kohlberg's rationalist approach to moral motivation; radical moral situationism goes against the grain of the idea of a domain-independent disposition to adjudicate moral issues; and the theory of modular moral domains is incompatible with the idea that moral reasoning is all of a piece. Notice an oddity here, however: The opponent
towards which many of the chief advocates of the NS (for instance, Doris and Prinz)
tend to direct their criticisms is not Kant or Kohlberg but rather Aristotle or, more
specifically, reincarnations of Aristotle in contemporary forms of virtue ethics. Haidt
constitutes an exception, with a considerably more conciliatory approach towards Aristotelianism, but that is partly because he misinterprets Aristotle's ideas on natural virtue and habit, as we see in Sect. 3. Paradoxically, one could say that the NS owes some
of its popularity to the recent Aristotelian renaissance in moral philosophy, psychology
and education by offering an antidote to it.
I call this stark contrast drawn between Aristotelianism and the NS odd because, at
first sight at least, the latter seems to share some essential assumptions with the former:
ideas about the emotional nature of moral selfhood, and the antipathy to anything indiscriminately absolute, rationalist and global in the moral domain. (I elicit some of
those shared assumptions in Sect. 3 and 4.) One could even argue that the two regions
penetrate one another so thoroughly that neither can helpfully be set against the other
as its anti-thesis. All of this makes one think, more generally, about the development of
academic ideas. Marx's dialectics painted a simple, parsimonious picture: A thesis
gradually leads to an anti-thesis, and after the two clash, a new synthesis is formed.
What we see in academia, in contrast, is often this: A thesis generates an alternative
position. Soon, however, complaints start to be made that the alternative position does
not go far enough, and a more radical alternative, first belonging to the lunatic fringe but soon going mainstream, is suggested. We saw this, for instance, in the way that Mischel's (1968) fairly modest situationist alternative to globalism became superseded
by radical situationists (Doris 2002; Harman 2003), first written off as mischievous
hyperbolists but swiftly acquiring academic respectability. Proponents of the new radical alternative typically aspire to underline their divorce from the parent stock, so that
rather than running in harmonious adjustments to one another, the modest and radical
alternatives transform themselves into absolute antagonism. The radical alternative
commonly adopts conceptualizations from the original thesis (notice e.g. in Sect. 3 how
Haidt actually shares Kantian conceptions about the nature of intuition and reason) but then offers a subversive take on them. The moderate alternative, however, becomes the
true whipping boy, attacked from both sides.
We see this trajectory clearly in the way in which the criticism of rationalism, expressed in Anscombe's (1958) retrieval of Aristotle-inspired virtue ethics, has now developed into a much more radical form of anti-rationalism encapsulated in the NS. "Synthesis" turns out to be something of a misnomer, therefore; it would more felicitously be called the new anti-thesis (of rationalism), with Aristotelianism being the
true synthesis or hybrid. Indeed, this is what I argue in what follows. I present Aristotelianism not as an unstable halfway house but rather as the ideal golden mean between
Kantian rationalism and the radical alternative on offer in the NS. In my view, the pendulum has thus swung too far in the direction away from reason. Because the NS cannot be understood but as an extension of Aristotelian insights, its proponents are doing
themselves a disservice by using Aristotle as a foil. They could also be accused of cutting away the historic branch on which they sit, since their radical opposition to anything rationalist only makes sense against the backdrop of more modest objections
yielded by the Aristotelian retrieval of cognitivism and virtue ethics during the affective revolution half a century ago. The tendency of the NS to monopolize the field of
moral psychology (witness the 2010 handbook) is irksome, and its tendency to dichotomize responses to hard rationalism drives an unnecessary wedge between potential
argumentative partners.
All that said, the NS must be applauded, at a more general methodological level, for
its efforts at healing the traditional schism between moral philosophy and social science. Some philosophers, whose disciplinary austerity is unbending, may resist any
input from psychology. However, naturalists of any ilk, including Aristotelian naturalists, cannot but celebrate the academic ecumenism (the unprecedented interdisciplinarity) to which the NS has contributed, turning the study of morality into what Haidt (2013) calls one of the most active convergence zones in the academy. This is, after all, precisely what Anscombe's clarion call in her 1958 paper was about, when she advised philosophers to keep quiet until a burgeoning field of philosophical psychology
had begun to produce the goods. The lack of interest that moral philosophers have
traditionally displayed in empirical evidence leads, at worst, to conclusions that are
irreducibly relative or hopelessly trivial. Conversely, the lack of interest that some social
scientists exhibit in conceptual and theoretical work fosters deceptions and logical errors. It is wise, therefore, to remain equally skeptical of philosophical armchair psychology and of a conceptually sloppy and theoretically under-developed moral psychology (see further in Flanagan 1991; Kristjánsson 2010a). Fortunately, the fields of
moral philosophy and moral psychology are now ripe with exercises in experimental
ethics (see e.g. Appiah 2008), and the NS is to thank for quite a few of those bridging
efforts.


Let us not pretend, however, that the marriage of moral philosophy and social science is made in heaven. Although not quite a shotgun marriage, it is better described as
one of convenience rather than of an effortless unitary purpose. Various hindrances
may block useful cross-fertilizations. First, philosophers and social scientists have their
different ways of analyzing concepts. Philosophers, concerned as they are with logical
rigor and economy, have a nasty habit of superimposing their preferred characterization on a given concept in order to secure its "true grammar", thus running the risk of trivializing the content of their analysis by making it unrecognizable to ordinary language users. In contrast, social scientists, while allegedly concerned with faithfulness to the usage of the many, often seem to conduct their inquiries in a conceptual vacuum (where language has gone on an extended holiday) or to justify their chosen terminology with a brief argumentum-ad-verecundiam (argument from authority) nod to a respected authority in the field or to a standard dictionary (for an extended discussion of such errors, see Gulliford et al. 2013). Second, many social scientists continue to be held in thrall to an outdated Humean bifurcation of facts and values, conflating it with another and more reasonable Humean distinction between descriptions and prescriptions, which threatens to make their understanding of the language of morality vacuous. In other words, many social scientists seem to think that the value judgment "Honesty is morally good" has more in common with the prescription "Act honestly!" than with the factual judgment "Birds need wings to fly", and that by granting the value judgment potential truth status, they will turn themselves from scientists into moralists. Even those who refuse to bow to the prohibition on value judgments have a hard time understanding the idea of intrinsic value, fundamental to an Aristotelian outlook on moral virtue, and tend to see moral values as, in principle, instrumental to some other values of a non-moral kind, for example subjective happiness (see the critique in Kristjánsson 2013). Third, despite the vocal call for symbiosis between philosophy and psychology, echoed by the NS, various unresolved power issues remain on who should call the shots in the co-operative enterprise and whether the desired outcome is better described as one of moralised psychology or psychologised morality (cf. Kristjánsson 2010a, Chap. 3).
I cannot help observing at this juncture that, while openly promoting the blurring of boundaries between moral philosophy and moral psychology (and himself declaring a preference for reading books rather than running experiments), Haidt comes dangerously close in a number of places to suggesting that psychology should simply replace ethics. He praises the sociobiologist Wilson for proposing that ethics be removed, albeit temporarily, from the hands of philosophers and biologicized (Haidt 2013). More explicitly, in an interview, he remarks that while philosophers are "certainly licensed to help us think about what we ought to do [...] what we actually do is the domain of psychologists" (Warburton and Haidt 2012). And when he opines that research on morality beyond harm and fairness is in its infancy (Haidt 2007, 1001), he seems to
dismiss as irrelevant all the work done already by virtue ethicists. Even if "research" is here meant to denote empirical research, there is an off-putting arrogance implicit in this remark towards the abundant empirical work carried out by non-NS researchers in moral psychology. It is as if, outside the charmed circle of the NS, there were a No Man's Land devoid of any worthwhile research on morality.
Let me not be understood here as wanting to detract from the merits of the new
ecumenism between philosophy and psychology. I have simply sounded a few warning
signals, indicating that the two may not make wholly harmonious bedfellows, and that
the line between co-operating and conquering can be thin. One more issue regarding
boundary crossings should be mentioned here which bears directly on the remit
of this paper: social consequences of the NS. Although increased optimism now reigns
in academic circles on how disciplinary boundaries can be confused, collapsed and
crossed, a serious gap seems to persist between the fields of academia, on the one hand,
and public social policy on the other. Academic findings are often formulated in a language that lacks the required taxonomic bite (of biting into existing lay vocabularies)
to cut ice with politicians and policy makers (see e.g. Walker et al. 2014).
There is some reason to believe, however, that the NS is better equipped than many
academic approaches for bridging the gap between research and social reality: enacting
real changes in the way people think about moral issues. I say this because many of the
chief representatives of the NS, in particular Haidt himself, seem to have a way of
couching their findings in a language that appeals to, and makes them the darlings of,
the media. Whether we like it or not, media pundits are the true knowledge brokers in
today's world, on whose shoulders the practical destiny of academic findings often
rests. Haidt prides himself on his snappy sound bites and pithy metaphors: of dogs and
tails, elephants and riders (Warburton and Haidt 2012). I am not sure that without his
light-hearted, but cleverly calculated, belletrisms, the NS would have attracted the same
public attention that it seems to enjoy today.

Moral Ontology and Moral Motivation

I shall be relatively brief in my exploration of the first aspect of the NS, hard sentimentalism, as I have critiqued it elsewhere (Kristjánsson 2010b). A few reminders are in
order, however. The debate about the ontology of morality between rationalists and
sentimentalists is a philosophical debate about the origin and nature of moral truths,
and the epistemological role of emotion in tracking or creating such truths. The main
NS contestant in this debate is thus, fittingly, a philosopher: Jesse Prinz.
Hard rationalists such as Plato, Kant and Kohlberg famously believe that moral facts
exist independently of our emotions, and that those facts can be tracked by human reason. Not only do all moral facts exist independently of our emotions, on this view,
but, more uncompromisingly, emotions hinder rather than help reason's quest for those
facts and may even detract from their moral value. Soft rationalists, in contrast, distinguish themselves from their hard counterparts in believing that not only proper actions
but also proper reactions are conducive to moral functioning. A distinctive feature of
the canonical soft rationalist model, namely Aristotle's virtue theory, is thus the assumption that emotional reactions may constitute virtues. Emotions are central to who we are, and they can, no less than actions, have an ideal best condition when they are felt "at the right times, about the right things, towards the right people, for the right end and in the right way" (Aristotle 1985, 44 [1106b17-35]). Emotions are felt in this proper way when they have been infused with reason, not in the sense of being policed by
reason, but in the sense of being united with it. This remark may indicate that soft rationalists differ from hard rationalists only in so far as the former consider emotions to
be an indispensable under-laborer of, rather than an intruder into, reason. But things
are a bit more complicated. Just as anthropologists' theories about the cultures in which
they live are part of those very cultures and may influence them in various ways, so the
fact that we have emotions becomes partly constitutive of our moral wellbeing. As the
soft rationalist David Pugmire explains, not every kind of moral value we rightly attribute to states of affairs can be separated from the powers that those states have to affect
us emotionally: "Sometimes the significance we give things lies precisely in how they move us, in what they can evoke in us." Emotions thus have not only an exploratory but also a constructivist role to play in moral evaluation (Pugmire 2005, 17-18). For
soft rationalists, no neat distinction can thus be drawn between our rational and our
sensuous natures.
The advantage of the soft rationalist position seems to be that it can at once account
for the insights and empirical findings which gave rise to what Haidt calls the "affective revolution" in cognitive science and make sense of the assumption, which underlies
most theories of emotion regulation and education, that emotions are essentially corrigible: They can be judged morally appropriate or inappropriate, and they are open to
correction and coaching. Emotions are not infallible as a normative guide, therefore.
What feels right here and now cannot simply be assumed to be truly right (see further
in Kristjánsson 2010b).
Sentimentalists do not rest content with this intermediary position and its slackening of the rationalist monopoly, however. They want to go the whole hog away from
rationalism and understand emotions as the essential, sole donors of moral value. So-called soft or neo-sentimentalists think that this can be done without compromising the corrigibility assumption, and they have devised ingenious, if somewhat far-fetched, ways in which to explain how and when emotion gets things wrong (see e.g. D'Arms and Jacobson 2000). For Jesse Prinz, however, even soft sentimentalism is but a
miserable subterfuge. Moral rightness and wrongness can be defined exclusively and
incorrigibly in terms of sentiments that are the constituents of moral judgments. Thus,
a form of conduct is truly wrong for someone if that person has a sentiment of disapprobation toward it (2007, 138). If two people disagree on moral issues, there is no fact
of the matter to decide who is right: that is, unless they share the same grounding
norms and one of them manages to persuade the other that he or she has misapplied
those values in the particular case through some oversight (2007, 120, 125). There is an
unbounded number of possible personal moralities (2007, 288), and there is no objective moral criterion – no universal Humean sentiment – which can adjudicate whether
practices such as cannibalism, incest, bestiality, infanticide or gladiator sports are morally right or wrong.
Hard sentimentalism of the Jesse-Prinz kind is an extremely radical thesis which, by
maintaining that emotions are morally self-justifying and essentially incorrigible, alienates the great majority of moral philosophers and moral educators. The practical implications of this thesis are worrisome, as I explain in Sect. 5. Yet I do not propose to offer
here a refutation of hard sentimentalism. After all, despite being a minority view and
despite any potential unfortunate implications, it could still be true. For present purposes, the relevant question to ask is rather: Why does the NS require such a radical
view when there are other alternatives, such as Aristotles soft rationalism, that seem to
be able to make full sense of the emotional nature of moral selfhood?
Let us now turn from ontology to psychology, more specifically from philosophical
anti-rationalism to anti-rationalism about moral motivation, which means moving
from Prinz to Haidt. There is a slight problem here in that it is not always clear (to
readers or even perhaps to Haidt himself) if his musings are about ontology or psychology. He sometimes defines the rationalism he attacks as a thesis about moral
truth/knowledge (see e.g. Haidt 2001, 814; 2013), but his actual arguments are directed
against a thesis about the psychological origins of moral judgments. However, those
two theses do not necessarily coincide. Someone could thus argue that although reason
may grasp moral truths, we are for some (epistemological or psychological) reason
barred from being motivated to act on them. I think we can safely assume that Haidt
subscribes to something like Prinz's ontological position, or at least that of soft sentimentalism (about emotion as the essential, albeit corrigible, source of moral value),
although he does not say so clearly (cf. Musschenga 2008); in any case, let us focus here
on Haidt's moral psychology.
Haidt's general position can best be summed up by saying that he considers recent empirical work in moral psychology to have left the rationalist (Kantian-cum-Kohlbergian) creed about moral motivation – that moral judgment is generally or at least ideally caused by moral reasoning – tattered and torn. Objective research, freed from the trammels of customary beliefs, shows that, when presented for example with a test case of consensual sibling sex, "intuition comes first, strategic reasoning second" (Haidt 2013, 286). Most of us find the idea of such sex appalling; only after our implicit
snap intuition has been pumped accordingly do we start to look high and low for justifications to confirm it. Haidt's common refrain here is that the emotional dog wags its
rational tail, rather than vice versa; the affective system has primacy, time-wise,
strength-wise and development-wise, over the reasoning one (Haidt 2001). Moreover,
the implicit affective reactions are usually good predictors of moral judgments and
behaviors, which is more than can be said about explicit moral reasons (Haidt 2007).
The affective system is essentially innate – we are all born equipped with an intuitive ethics – although culture later modifies its content (Haidt and Joseph 2004).
This is a pretty clear general motivational theory. The devil lies, however, as always
in the details, and we need to look more closely at the intuition–reason dichotomy
underlying it. Haidt defines moral intuition as the sudden (quick, effortless, automatic,
non-reflective, non-reason-infused) appearance in consciousness of an affectively valenced moral judgment: a gut feeling of approval or disapproval (2001, 818). He further
hypothesizes that such intuitions come about as evolutionary adaptations, built into
regions of the brain and body. Yet, they need shaping from a particular culture, which
enhances the further development of some of those intuitions but suppresses others,
through immersion in custom complexes and peer socialization. Intuition is thus both
innate and enculturated (2001, 826–827). For philosophically minded readers, intuition is, in this model, on a par with a Humean sentiment, not the sort of intuition qua
self-evident cognition of a rational faculty that one may recognize from typical moral
intuitionist theories such as those of G. E. Moore or W. D. Ross.
What about reasoning, then? Haidt (2001) unflinchingly understands reasoning in a
Kantian sense as a conscious, intentional and controllable mental activity of ratiocination and reflection, whereby one evaluates arguments (in this case, moral) and reaches
a conclusion/decision (2001, 818). He is, however, radically un-Kantian in his views on
its scope and power. Moral reasoning is, as evidenced by empirical studies, rarely the
direct cause of moral judgement (2001, 815) or the motivational wellspring of behavior. Rather it presents us with slow ex post facto rationalizations that help us confirm
our snap intuitive judgments and explain them, both to ourselves and others. We illusorily think, however, that our reasons have caused our judgment and that those reasons will then cause our interlocutors to change their minds (2001, 823). About the
only people who may be right about the tail's ability to wag the dog are philosophers
who have been trained and socialized to follow the edicts of reason (2001, 829). Not
only is reason in most cases not elicited at all in the process of passing moral judgment,
but when it is called upon for counsel it tends to deliver biased arguments. Two major
classes of motives have been shown to bias moral reasoning: relatedness motives, where
we give proportionally unfair advantage to the views of significant others, and coherence motives, where we prioritize the internal harmony of our moral beliefs over objective validity (2001, 821).
A key word in Haidt's account of moral motivation is "rarely": reason rarely plays a role there. Haidt is not entirely consistent, however, in his use of this qualification. One could even speak of two Haidts here, one as the uncompromising Haidt of public-media fame and the other as Haidt the cautious academic. When in his combative
mood, Haidt seems to relegate moral reasoning to the status of an epiphenomenon (see
Saltzstein and Kasachkoff 2004, 274) or convenient self-deception (Warburton and
Haidt 2012), and he then judges the rational tail of the emotional dog only worth
studying because dogs use their tails so frequently for communication (Haidt 2001,
825). When being more circumspect, however, Haidt says that the primacy of intuition
over reason simply means that reason is generally no match for intuition (2012b),
and he is willing to consider a number of ways in which reason may override immediately intuitive responses (2007, 999). Nevertheless, it is clear that, on either account,
Haidt affords a radically restricted role to moral reasoning and either ignores or rules
out of court cases where the existence of moral intuitions seems to be unthinkable except as a product of rational discussion and deliberation (see e.g. examples in Musschenga 2008, and Saltzstein and Kasachkoff 2004, of complicated and technical moral
quandaries where the idea of innate moral intuitions seems outlandish).
Again, however, my present purpose is not to argue directly against the NS, but rather to ask why, in carefully designing an alternative to rationalist models (Haidt
2001, 814), Haidt and his colleagues always opt for the most radical and least parsimonious anti-thesis. Consider Haidt's contention that moral action covaries with moral
emotion more than with moral reasoning (2001, 823). This contention could be seen
to be robbed of some of its thunder by its similarity, in essential thrust, to Aristotelianism. After all, on an Aristotelian account, what we feel says (typically) more about who
we really are deep down than what we do (Kristjánsson 2010a). Yet, while Aristotle
would seem to be able to account for the same empirical data as Haidt does, his take on
it is significantly different, as Railton, for one, has recently pointed out (2013). Aristotle
does not conceive of emotion-driven intuition as essentially arational. Rather, intuition
acts as the autopilot or autofocus culmination of the process of rational deliberation
that trained and experienced agents have already gone through: those who have learnt
to see effortlessly what is prescribed by reason (Aristotle 1985, 73 [1115b12–13]).
Virtuous persons do not need to deliberate each time round on what virtue requires, as
this has become second nature to them, although they will be able to justify their emotions and actions retrospectively, if required. Virtues are precisely dispositions to respond aptly to situations in quick and domain-specific but reason-infused ways. The
fact that we can tie our shoelaces or drive to work without any apparent reflection or
even conscious thought does not mean that those behaviors are innate or have developed automatically. More generally, the fact that our affective system is automatic and
fast does not mean that it offers only crude, snap, arational evaluations (cf. Snow 2010,
Chap. 2). For instance, the intuition that sibling sex is morally pernicious may already
incorporate profound implicit assessments of human liabilities to regret and pain and
of what sort of lives tend to make us flourish or flounder.
The problem with the expressed NS position here is that it is cluttered with baggage
from the very sort of radical rationalism that it aims to undermine, in particular on
what constitutes moral reasoning. From an Aristotelian perspective, it offers a strange
mixture of insight and error. Instead of the idea, in Kant, of moral judgment as neither
innate nor automatic, the NS proposes that moral judgment is both, rather than considering the intermediate possibility that it could be automatic (and implicitly rational)
without being innate. If what really troubles Haidt is the lack of innateness in Aristotle,
there are other versions of nativism around that satisfy Ockham's razor better than the NS does, for instance Marc Hauser's (2006) theory of universal moral grammar: a theory which still stops short of the implausible idea of specific moral judgments, for instance about sibling sex, as innate (i.e., "structured in advance of experience", which is Haidt's own definition: 2013, 290).
The above remarks may seem to circumvent the concessions that Haidt is ready to
make to Aristotle. Thus, although Haidt the provocateur suggests that we abandon the
ancient Greeks' worship of reason wholesale (2001, 822), Haidt the conciliator
seems to want to co-opt Aristotle to his camp. He understands and appreciates the idea
of the automaticity of virtue in Aristotle, although he thinks Aristotle is wrong in believing that this automaticity is derived exclusively from environmental cues. He likes the idea that virtue qua second nature is but a refinement of our basic nature, an alteration of our automatic responses; and he absolutely loves Aristotle's emphasis
on habit. In general, Haidt considers virtue ethics the moral theory that best accords
with recent findings in moral psychology (Haidt and Joseph 2004, 6162; Haidt 2012b;
2012c).
Unfortunately, Haidt's sporadic Aristotelianism is largely based on misunderstandings. He seems, for instance, to labor under the common illusion that natural virtue in
Aristotle is a primitive stage of virtue with which we are born and which we later refine.
Natural virtue is anything but that in Aristotle. It is actually a somewhat infelicitous
name for an advanced stage of virtue above that of both incontinence and continence
but below full virtue (see Curzer 2012, 305307). There is, indeed, not a hint of the idea
of any innate natural virtue in Aristotle. For while we are adapted by nature to receive
virtue through being endowed with its raw materials, virtue does not arise in us […] by nature (Aristotle 1985, 33 [1103a23–26]), and we are born neither good nor bad.
Furthermore, regarding Haidt's declared love of Aristotle's habit, he fails to notice
that when writers on Aristotle mention habit, that term is actually used as an (unhelpful) rendering of the Greek notion of hexis. A hexis is a complex dispositional state of
character, incorporating emotion and reason as well as action; it is not a spontaneous
knee-jerk reaction. Haidt's Aristotle thus constitutes a lean counterpart, if not a caricature, of the real Aristotle.
Moral Ecology and Moral Domains

As in the case of moral ontology, I shall be relatively brief on the moral ecology aspect
of the NS, in order not to repeat previous writings (Kristjánsson 2010a, Chap. 6; Kristjánsson 2013, Chap. 6). By moral ecology I am referring to the contextuality of moral
judgment and moral behavior: whether those transcend situational contexts, such that
there is a single ecology of morality, or whether they are somehow situation-specific.
The NS line on this issue, as represented by Doris (2002), is uncompromising: Empirical research (such as the famous Milgram experiments) shows that no global and robust (stable, consistent, situation-crossing) moral traits exist, at least in the vast majority of ordinary moral agents.
The commonly identified opponent of this view is Aristotelian virtue ethics. So far is
it, however, from the truth that Aristotelianism fits the bill of radical globalism (in Doris's pejorative sense) that almost exactly the opposite seems to be the case. First, Aristotelianism upholds the consistency of hexeis, not of behavioral habits. Hexeis are, as
already mentioned, complex states of character, incorporating specific reasons and
emotions that may, or may not, lead to specific actions (see further in Kristjánsson
2007). Behavioral inconsistency in experiments does not prove the non-existence of a
hexis as long as the relevant reason-responsive emotional script is elicited (there could
be good reasons why the agent does not act upon that script in a particular case); nor
does behavioral consistency prove the existence of a hexis (if the behavior is not motivated by the right reasons and accompanied by the right emotion). Second, Aristotle
never claimed that hexeis are global and robust except in a minority of people at any
given time (fully virtuous moral agents), in which case the results of the situational
experiments should not surprise us (Kristjánsson 2010a, Chap. 6). Third, the moral
appropriateness of a hexis has relativity to individual circumstance (e.g. personal constitution, developmental stage) built into it, which makes it impossible to generalize
about what would instantiate a virtuous hexis for all respondents in a given situation or
experiment (Kristjánsson 2013, Chap. 6). Fourth, the very term situation is used in a
bloated sense in the situationist literature, about anything from the situation of witnessing the sudden dropping of papers in front of a phone booth in a shopping mall, to
being a citizen in Nazi Germany. Aristotelian situations – situations relevant to the assessment of Aristotelian virtue ethics – are, however, virtue-calibrated: namely, situations that correspond to the universal spheres of human existence where Aristotelian virtues are allegedly played out, for instance situations which call for pain at someone else's undeserved bad fortune (compassion-eliciting situations). Situations that are
much broader and more complex than this (such as being a citizen in Nazi Germany)
or situations presenting people with extraordinary circumstances that they have never
encountered before (such as the Milgram experiments) are much more likely to catch
even relatively virtuous people off guard (Kristjánsson 2013, Chap. 6).
A hallmark of Aristotelian virtue ethics is its focus on situational appreciation, imperfectly captured by any universal moral precepts. It is, therefore, ironical once again
that NS advocates have identified their enemy in a position that is much closer to theirs
than are forms of radical rationalism/universalism about moral judgment and behavior which would have been their appropriate foils. In addition to the misidentification
of the enemy and the inflation of an Aristotelian non-issue, I am concerned about the
practical repercussions of situationism, but I shelve those concerns until Sect. 5.
The final aspect of the NS to be considered here is its theory of moral domains,
where Haidt is again the chief crusader. The roots of this theory lie in cultural psychology – which Haidt studied in graduate school – on how people's self-concepts and
moral scripts vary across cultures, for instance between India or Japan and the U.S.A.
Synthesizing findings from cross-cultural research, Haidt (2007) gradually developed
the theory that there is no such thing as morality; rather there are different and independent domains of moral judgment, each with its own specific psychological foundation and separate evolutionary origin. The five domains/foundations identified originally were care/harm, fairness/cheating, loyalty/betrayal, authority/subversion and
sanctity or purity/degradation, but like Gardner's intelligences, those domains seem
fluid and tend to proliferate; recently, liberty/oppression has been added as the sixth
domain (Graham et al. 2011; Haidt 2012a). The explanatory power of the domain theory lies in its ability to make sense of particular cultures or sub-cultures foregrounding
some domain(s) at the expense of others. The Indian respect for cow dung (as coming
from the revered cow) will, for example, remain weird for WEIRDs (the Western, Educated, Industrialized, Rich and Democratic) unless they understand that Indians prioritize purity as a facet of morality in a way that appears alien to those of us in the West
who understand morality exclusively in terms of care and fairness (Haidt 2013).
The moral domain aspect of the NS might not have attracted such fervent media attention if not for the political implications that Haidt (e.g. 2012a) and his colleagues
have drawn from it, in particular with reference to U.S. party politics. Thus, the real
difference between Democratic and Republican voters can now be understood in terms of how
the former rest their conception of morality primarily on considerations of harm and
fairness, whereas the latter give a comparatively equal worth to all the five foundations.
(The new sixth foundation, liberty, then serves to distinguish between social conservatives and libertarians within Republican ranks; see Graham et al. 2011). This model also
helps explain why nearly all moral psychologists are politically liberal; it is presumably
because they happen to have been personally drawn towards the subject in virtue of
their innate predisposition for care and fairness (Haidt 2013).
The moral domains theory is the latest in a line of various paradigms in cultural psychology that try to explain moral differences in terms of deep underlying cognitive
structures, varying between cultures. Moreover, it is susceptible to many of the same
misgivings (see e.g. Kristjánsson 2010a, Chap. 8). For one thing, such a theory has a
hard time explaining the experience of successfully integrated biculturals who seem to
be able to travel effortlessly between moral cultures and reconcile apparent differences
between them. For another thing, as Helen Haste (2013) has pointed out, Haidt's version may be relying too heavily on parochial data from U.S. politics, where dividing
lines between political parties are drawn in ways that are alien to the rest of the world.
There may also be problems in squaring the moral domains theory with the social intuitionist thesis about moral motivation (recall Sect. 3), at least the intuitionist part of it.
What evolutionary forces can thus explain the clear geographical distribution of political affiliations in the U.S.A.? Why are people in Alabama more innately disposed towards purity than those living in California?
Proponents of the NS take its moral domain aspect to deal a cruel blow to Aristotelian cosmopolitanism. Taking into account Aristotle's aforementioned concessions to
social and personal variance, I am not sure the NS does anything of that sort. Aristotelians will also strike back and ask how the NS can here make sense of phronesis (practical wisdom) as a bridge-builder between different moral domains, the evidence for the
existence of which is not only theoretical and Aristotle-specific but also empirical
(Kristjánsson 2010a, Chap. 8). In posing that question, a potential bridge is suggested
between the content of the NS and its consequences, to which I now turn.

Implications and Consequences of the New Synthesis

I have used up considerable space analyzing the content of the NS and comparing it to
available Aristotelian alternatives. I have tried to show that funeral orations over the
corpse of Aristotle, conducted by NS spokespeople, are untimely not only because
Aristotelianism is in fact thriving as never before in moral philosophy and moral education but because the NS and Aristotelianism should ideally be seen as partners in the
common cause of rebutting radical forms of philosophical and psychological rationalism. When inquiring about the potential shift of practical consequences wrought by the
NS, this theoretical backdrop provides the necessary starting point. In this section, I
explore first the potential ramifications of the NS for moral education before turning to
more general social consequences.
What would a shift towards the moral ontology of hard sentimentalism mean for
moral education? Recall that according to the ontological aspect of the NS, moral
judgments are self-justifying: "If I make a genuine judgment that X is wrong, then my judgment is warranted because wrong refers to that towards which I have such a sentiment" (Prinz 2007, 236). Where does this leave the epistemological assumption underlying
current emotion-education practice (for instance, within programs of character education and social and emotional learning) which states that emotions are essentially
corrigible: that they can be judged morally appropriate or inappropriate and are open
to correction and coaching? Interestingly, Prinz thinks that hard sentimentalism actually allows for emotional correction and reform. Sentiments, and hence moral judgments, can be assessed with respect to consistency, coherence with facts, stability, ease
of implementation, welfare, well-being and universality and such assessments may
lead us to the conclusion that one sentimental norm is better than another (2007,
Chap. 8). Importantly, however, these standards are, in Prinzs view, not moral standards but extra-moral ones (2007, 292). They are, more precisely, standards of pragmatic
convenience pale shadows of standard moral criteria, as Prinz readily admits (2007,
303), but still useful as the only standards we have for adjudicating between differing
sentiments.
We thus see how radically we would need to revise the epistemological assumption
for it to cohere with Prinz's hard sentimentalism. There would be no way to judge or correct students' emotional dispositions from a moral perspective. We would only end
up with suggestions about how their emotions could be made more useful for themselves and others in an extra-moral sense. In that case, however, all that is distinct
about the recently burgeoning practice of emotion education would, I submit, be lost,
as that practice has essentially developed as a sub-branch of moral education. Prinz's
hard sentimentalism would rob current emotion education of its very point; and I, for
one, find that an unacceptable implication (Kristjánsson 2010b). In contrast, when
emptied into the time-revered bottles of Aristotelian soft rationalism, emotion education not only satisfies the epistemological assumption but also respects adequately the
constitutive role that emotions play in our moral lives.
What then, next, about the motivational aspect of the NS which foregrounds emotional motivation but relegates reason to the back seat? If we focus on early-years moral
education, there is not much to choose between the advice that Haidt gives and that
which a typical Aristotelian moral educator would offer. When Haidt talks about (a)
the limits of a direct teaching route to moral education, (b) the importance of linking
up intuitions and virtues, already learnt, with skills that one wants to encourage
through practice, repetition and embodiment and (c) the power of role modeling that
immerses children in environments rich in stories, interpreted by moral exemplars
(older children and adults) via emotion (Haidt 2001, 828; Haidt and Joseph 2004, 65),
those seem like leaves taken out of Aristotle's book. For Aristotle, early-years moral
education is, after all, mostly about sensitization to proper emotions (although Aristotle
does not consider those emotions innate in Haidt's sense).
It is, however, after the early period of emotional habituation comes to an end in the
Aristotelian model that its differences with Haidt's social intuitionism come to the fore.
Haidt (2012b) remains adamantly pessimistic about people actually learning to change
their moral minds as a result of rational discussion alone; at best, what the educational-cum-political ideal of deliberative democracy can aim for is to calm the passions
and induce tolerance of views that we will never come to share. In contrast, Aristotle
insists that in order to take the step from habituated virtue to full virtue, we must learn
to choose the right actions and emotions after having submitted them to the arbitration
of our own budding phronesis (1985, 40 [1105a30–34]). On this view, truly virtuous
persons do not only perform the right actions, but they perform them for the right
reasons and from the right motives: knowing them, taking intrinsic pleasure in them
and deciding that they are worthwhile. Otherwise, their correctly going through the
motions of doing the right things does not really have any moral worth. I say more
about phronesis below; let it suffice to remark here that as sensitive as Haidt usually is
to empirical evidence, he seems to have systematically avoided empirical findings that
point to people's ability to develop critical faculties of moral decision making and to overcome biases of deliberative reasoning (see e.g. Musschenga 2008; Kristjánsson
2013, Chap. 7). Haidt's social intuitionism may help explain the rarity of radical moral
self-change, but it is also pretty powerless in explaining the nature of such self-change
when it actually does occur (Kristjánsson 2010a, Chap. 10). Haidt's ideas about the
reduced role of reason make mischief not only in Kohlbergian rationalist models of
moral education but also in more moderate Aristotle-inspired ones, which actually rule
the roost in today's moral education. In default of empirical evidence showing the nonfeasibility of phronesis-cultivating and phronesis-mediated moral education, I suggest
we treat the educational implications of social intuitionism with a healthy dose of skepticism.
Turning next to the moral ecology of situationism, Doris (2002) has famously argued
that situationism does not imply the abandonment of the time-honored ideal of moral
education as the cultivation of pro-social traits. The traits that can and may be promoted are simply not global traits – for instance, general moral virtues on an Aristotelian understanding – but rather local (more domain-specific) traits. Furthermore, moral
education should focus on situation modification: helping kids acquire a knack for
staying away from the sort of situations in which they can expect their local traits to be
tested beyond breaking point. Against the background of moral situationism, various
other practical tips do seem to make sense, such as trying gradually to deprovincialize
local virtues and make them more expansive (Chen 2010); teaching children to be active in producing new situations themselves rather than being at the mercy of already
existing ones; and designing environments beforehand such that they are more conducive to eliciting positive local traits (Alfano 2013).
Apart from the apparent paradox of trying to inculcate a global trait of situation
modification, against the background of a theory that only accepts domain-specific
traits, Aristotelian moral educators will take no serious exception to this practical advice. The division of opinion runs much deeper; it concerns the very evidence that situationists
derive from experiments to undergird their theory, and the characterization of
an Aristotelian hexis, which Aristotelians will contend the situationists simply do
not understand (recall Sect. 4). Most seriously, perhaps, situationists foreground evidence from social psychology while systematically disregarding evidence from personality psychology: evidence which seems to confirm the existence of stable traits of personality – witness the famous Big-Five traits (Kristjánsson 2013, Chap. 3). The problem for situationism is that if stable global traits of personality exist, then it would be mysterious why global traits of moral character – best understood as a sub-set of the former –
do not exist also. If those do exist, however, then the pessimism about the cultivation
of moral traits of character not only appears implausible and unfortunate but deeply
pernicious for the practice of moral education.
In this overview of implications for moral education, a few words need to be said, finally, about the moral domains theory. It is here that the elision of phronesis becomes
most conspicuous; not because phronesis happens to occupy pride of place in Aristotelian moral and educational theory, but because something like phronesis has been
shown to exist in successfully adjusted biculturals, who succeed in making the most of
their varied background by synergistically integrating insights at a higher level of cognitive synthesis (Kristjánsson 2010a, Chap. 8). The avoidance of that evidence, and the resulting skepticism about an individual's faculty to mediate moral outlooks (as opposed to merely respecting difference), stands out in Haidt's (2012a) moral foundations
theory. In Aristotelian theory, by contrast, phronesis plays a vital role in overseeing,
integrating and adjudicating the whole virtue repertoire. However, phronesis does not
only act as an adjudicator in cases of virtue conflicts. It is precisely because each individual moral virtue is concerned with the identification of ethically salient features of
situations, and with the making of good choices in the moral sphere, that it cannot exist
without practical wisdom. Phronesis is thus a constitutive part of every virtue in addition to lending the whole repertoire its overarching unity.
Contrast this with the view of Haidt who takes over the idea from Kohlberg – his alleged arch enemy – of virtues as a motley, incoherent bag, or what Haidt (2012c) calls a "buffet", without any overarching meta-skill. As Flanagan and Williams (2010) argue
convincingly, however, a modular theory of Haidt's kind without any central processing system is neither theoretically adequate nor empirically well grounded. From
the very evolutionary perspective that Haidt represents, it seems bizarre to ignore the
evolutionary advantage that best sets humans apart from other animals: namely, the
enormous plasticity we possess to make and remake ourselves in ever-new ways by
using our heads (Flanagan and Williams 2010, 446) – by going meta. Moral education
which ignores this advantage can simply not be deemed adequate. All in all, then, I
consider the possible social consequences of adopting the NS approach to moral education bleak and detrimental to what has traditionally been the very point of such education – producing emotionally sensitive moral reasoners, capable of higher-order reconciliations
and adjudications between conflicting points of view (cf. Musschenga 2013,
on the missing ideal of compromise in moral foundations theory).
A final note is in order about more general social consequences. I mentioned at the
outset possible misgivings about the power of any philosophical ideas to enact changes
in the way ordinary people understand and conduct their lives. Take, as a case in point,
Strawson's (1962) famous argument to the effect that even if hard determinism were
proven to be true, this would not stop ordinary people from experiencing and acting on
reactive attitudes, such as blame and shame; we could simply not imagine a world in
which people did not make a distinction between the true evildoer and the person who
commits evil acts because of, say, the uncontrollable effects of a brain tumor. I am not
sure if Strawson's skepticism holds water, however. In a famous court case in 1924, for example, the hard determinist lawyer Clarence Darrow managed to persuade the presiding judge to save two murderers from the death penalty because they were simply, as he argued, the unfortunate victims of their heredity and environment (Linder 1997). I
see no particular reason why a decisive empirical proof of determinism could not, more
generally, overthrow a reactive, blame-assigning world-view. In default of any such
proof, however, hard determinists had better temper their claims for the greater public
good. Notably, Daniel Dennett – himself a declared determinist, albeit a soft (compatibilist) one – agrees. In his latest book he takes those neuroscientists, psychologists and
philosophers to task who pronounce the truth of hard determinism, because of the
dire consequences that their pronouncements could have for public morality. Those
academics need, in Dennett's view, to think through the presuppositions and implications of their public pronouncements with the same care demanded of people who
preach about impending asteroid strikes (2013, 358).
I shall not contend here that the social consequences of adopting the NS framework
are as dire as those of embracing hard determinism. However, a framework that
makes moral judgments self-justifying, relegates reason to a motivational handmaid,
only acknowledges the existence of domain-specific moral traits, considers distinct
moral foundations irreconcilable with rational arguments, and has the potentially debilitating ramifications for moral education delineated above, needs to be administered
with considerable care and modesty if it is to avoid switching on Dennett's reasonable
warning lights about the public good.
Haidt and his colleagues seem to think that the most significant social consequence
of the NS is the exclusively positive one of a warrant for greater tolerance (of opposing
sentiments and moral points of view), ideological calmness and mutual respect (Haidt
and Joseph 2004, 66; Haidt 2012b), counteracting the bitterness, futility and self-righteousness of most current moral and political arguments (Haidt 2001, 823). I am
not sure, however, that a spirit of indiscriminate tolerance is necessarily uplifting and
enabling rather than cringing and disabling, especially if it involves tolerance built on
the idea of the opaque otherness of opposing points of view. I would go as far as stating
that tolerance is something of an over-prized commodity in today's world and that the ease with which this word now falls from the lips of academics is inimical to the ideal of respect for all persons – a truly good thing – by conflating it with the idea of respect for the content of all the ideas that those people may hold, for instance in the moral sphere – a truly bad thing.
To conclude this journey, then, the NS has clearly given tonic to the troops of opponents of radical (hard) forms of rationalism. It has misdirected its energy, however, in
combating soft Aristotelian rationalism. In their quest for "something so simple, so beautiful" (Haidt 2013, 292), the advocates of the NS have forgotten that truth in philosophy and psychology is often complex, capacious and messy. They have dug out
their moats further than is necessary or defensible. Aristotelians should be grateful to
the NS for sparking greater attention to issues that have long been close to their own
hearts. There is no good reason, however, for them to throw in the academic towel just
yet.

References

Alfano, M. (2013). Identifying and defending the hard core of virtue ethics. Journal of Philosophical Research 38(1), 233–260.
Anscombe, G. E. M. (1958). Modern moral philosophy. Philosophy 33(1), 1–19.
Appiah, K. A. (2008). Experiments in ethics. Cambridge, MA: Harvard University Press.
Aristotle (1985). Nicomachean ethics. (trans: Irwin, T.) Indianapolis: Hackett Publishing.
Chen, Y.-L. (2010). A philosophical examination of character education with special reference to
a debate about character between situationism and virtue ethics. Unpublished PhD thesis.
London: Institute of Education, University of London.
Curzer, H. J. (2012). Aristotle and the virtues. Oxford: Oxford University Press.
D'Arms, J., & Jacobson, D. (2000). Sentiment and value. Ethics 110(4), 722–748.
Dennett, D. (2013). Intuition pumps and other tools for thinking. New York: W. W. Norton &
Co.
Doris, J. (2002). Lack of character: Personality and moral behavior. Cambridge: Cambridge University Press.
Doris, J., & The Moral Psychology Research Team (Eds.) (2010). The moral psychology handbook. Oxford: Oxford University Press.
Flanagan, O. (1991). Varieties of moral personality: Ethics and psychological realism. Cambridge,
MA: Harvard University Press.
Flanagan, O., & Williams, R. A. (2010). What does the modularity of morals have to do with
ethics? Four moral sprouts plus or minus a few. Topics in Cognitive Science 2(3), 430–453.
Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011). Mapping the moral
domain. Journal of Personality and Social Psychology 101(2), 366–385.
Gulliford, L., Morgan, B., & Kristjánsson, K. (2013). Some recent work on the concept of gratitude in philosophy and psychology. Journal of Value Inquiry 47(3), 285–317.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108(4), 814–834.
Haidt, J. (2007). The new synthesis in moral psychology. Science 316 (18 May), 998–1002.
Haidt, J. (2012a). The righteous mind: Why good people are divided by politics and religion. New
York: Pantheon.
Haidt, J. (2012b). Reasons matter (when intuitions don't object). http://opinionator.blogs.nytimes.com/2012/10/07/reasons-matter-when-intuitions-dont-object/?_r=0. Accessed 1 September 2013.
Haidt, J. (2012c). Out-take from The righteous mind: Virtue ethics. http://righteousmind.com/wp-content/uploads/2012/08/Righteous-Mind-outtake.virtue-ethics.pdf. Accessed 1 September 2013.
Haidt, J. (2013). Moral psychology for the twenty-first century. Journal of Moral Education 42(3), 281–297.
Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus 133(4), 55–66.
Harman, G. (2003). No character or personality. Business Ethics Quarterly 13(1), 87–94.
Haste, H. (2013). Deconstructing the elephant and the flag in the lavatory: Promises and problems of moral foundations research. Journal of Moral Education 42(3), 316–329.
Hauser, M. (2006). Moral minds: How nature designed our universal sense of right and wrong.
New York: HarperCollins.
Kristjánsson, K. (2007). Aristotle, emotions, and education. Aldershot: Ashgate.
Kristjánsson, K. (2010a). The self and its emotions. Cambridge: Cambridge University Press.
Kristjánsson, K. (2010b). Emotion education without ontological commitment? Studies in Philosophy and Education 29(3), 259–274.
Kristjánsson, K. (2013). Virtues and vices in positive psychology: A philosophical critique. Cambridge: Cambridge University Press.
Linder, D. O. (1997). The Leopold and Loeb trial: A brief account. http://law2.umkc.edu/faculty/projects/ftrials/leoploeb/accountoftrial.html. Accessed 21 June 2014.
Mischel, W. (1968). Personality and assessment. New York: Wiley.
Musschenga, A. W. (2008). Moral judgement and moral reasoning: A critique of Jonathan Haidt.
In M. Düwell, C. Rehmann-Sutter & D. Mieth (Eds.), The contingent nature of life: Bioethics and the limits of human existence (pp. 131–146). Dordrecht: Springer.
Musschenga, A. W. (2013). The promises of moral foundations theory. Journal of Moral Education 42(3), 330–345.
Prinz, J. J. (2007). The emotional construction of morals. Oxford: Oxford University Press.
Pugmire, D. (2005). Sound sentiments: Integrity in the emotions. Oxford: Oxford University
Press.
Railton, P. (2013). The affective dog and its rational tale. http://www.law.yale.edu/documents/
pdf/Intellectual_Life/LTW-Railton.pdf. Accessed 1 September 2013.
Saltzstein, H. D., & Kasachkoff, T. (2004). Haidt's moral intuitionist theory: A psychological and philosophical critique. Review of General Psychology 8(4), 273–282.
Snow, N. E. (2010). Virtue as social intelligence: An empirically grounded theory. London: Routledge.
Strawson, P. F. (1962). Freedom and resentment. Proceedings of the British Academy 48(1), 1–25.
Walker, D. I., Roberts, M. P., & Kristjánsson, K. (2014). Towards a new era of character education in theory and in practice. Educational Review. doi: 10.1080/00131911.2013.827631.
Warburton, N., & Haidt, J. (2012). Jonathan Haidt on moral psychology. http://www.socialsciencespace.com/2012/10/jonathan-haidt-on-moral-psychology/. Accessed 1 September 2013.

Ethos, Eidos, Habitus


A Social Theoretical Contribution to Morality and Ethics.
Nathan Emmerich

Abstract
This essay sets out a practice theory perspective on morality and ethics within a Bourdieuan frame. The terms ethos and eidos are developed as field level accounts of morality – the normative character or structure of a society or culture – and ethics – or, rather, the collective socio-logic of ethical thinking. I then discuss the idea that, consistent with Bourdieu's social theory, social structures such as ethos and eidos are ontologically complicit with the systems of dispositions constitutive of habitus. Following my discussion of this idea – that the structures of habitus (systems of dispositions) stand in a homologous relationship with the structures of the social fields within which they were developed – I turn to some recent research in moral psychology. I attempt to show that
the view I have outlined can assist us in understanding the picture of morality and ethics emerging from this scholarship.

Introduction

The intellectual zeitgeist of contemporary research into morality and ethics is to connect philosophical and empirical enquiries. Thus far the latter has been largely a matter
of psychological analysis, albeit with side orders of cognitive science and experimental
philosophy (X-Phi). However, these discourses remain largely disconnected from sociology and anthropology, both disciplines that have also turned their attention to morality and ethics in recent times (Fassin 2012; Fassin and Lézé 2013; Hitlin and Vaisey
2010). Whilst some research in these domains draws directly on moral philosophy,
particularly virtue ethics (Laidlaw 2013),1 these disciplines have their own theoretical perspectives not only on morality/ethics (Abend 2010) but also more generally. Consistent with my earlier work (Emmerich 2013; Emmerich 2014), this essay sketches a
Bourdieuan perspective on morality/ethics. I then discuss some recent research into
moral psychology and consider if the proffered Bourdieuan framework can assist our
understanding.
As such, what I present should be considered a social theoretical and praxeological
perspective grounded in ethos, eidos and habitus. The latter term, sometimes rendered
as hexis, can be found throughout the history of (Aristotelian) moral philosophy. However, in the account I adopt, habitus is a socio-analytic concept and has relevance to all
facets of social character and practice. The meaning and implications of Bourdieu's habitus are discussed in more detail below; suffice it to note that I am not alone in considering it important to a social theory of morality (Ignatow 2009). In contrast, the
terms ethos and, particularly, eidos are unusual. Even when the term ethos appears it
tends to retain its common meaning (usage) and remains relatively untheorized (Wolff
1998). The terms are an attempt to theorize the social structures of morality at the level
of the field (Thompson 2012). Thus, in what follows, morality is to be understood as
ethos: the normative dimensions of a social or cultural field.
As is consistent with other sociological and anthropological enquiries in this area
(Edel and Edel 2000, 8–10; Kleinman 1995, 45), this idea of morality-as-ethos is a much
broader conception than that found in mainstream moral philosophy, particularly
modern moral philosophy. It includes many things, such as matters of taste and etiquette, which are usually eliminated from philosophical analysis. However, this does
not necessitate that matters of taste and etiquette are taken to be ethical concerns. Rather, my use of the term ethics refers to a particular cultural domain and a more restricted set of concerns.2 However, it would be imprecise to parallel morality-as-ethos
with ethics-as-eidos without, at least, acknowledging that both taste and etiquette have
their own, possibly related, eidos. The term eidos names the cultural logic of a phenomenon: particular modes, ways or practices of thought that are rooted in, and representative of, the broader ethos.3 Thus, whilst eidos is a specific aspect of ethos, we can
1 That said, it is worth noting that anthropological work inspired by virtue ethics often draws on Foucaultian, as well as Aristotelian, perspectives. It cannot, therefore, be considered as uncritically adopting a moral philosophy as, in the process of turning it to its own ends, such philosophy takes on the character of social theory. Clearly this is something that virtue ethics is more amenable to in a way that is simply not the case for analytic or modern moral philosophy.
2 Given the methodological commitments of sociology, the precise content of ethics cannot be considered as a philosophical given but as a social construct, relative to particular times and places. What are today considered matters of taste and etiquette may have been matters of ethics in the past.
3 Thus, just as ethics is a specific aspect of morality, eidos is a particular aspect of ethos, a particular domain within the normative landscape of a culture. Even if there is a great deal of affinity between them, to the degree that in some socio-historical locations they are conceptually indistinguishable, we can distinguish between the modes of thought we might call ethics and those we might call etiquette.
focus on the (or a) eidos of ethics as a particular domain within the normative landscape
of a culture.
The view that ethics – or, better, particular practice(s) of ethics – have an eidos should
be taken as echoing the view that practices have a logic. For example, Mol (2008) has
analyzed the logic of care, contrasting it with the logic of choice or autonomy. Both
logics are operative within healthcare and exist in tension. Such uses of the term logic
run counter to the idea of a single, universal rationality and instead seek to capture the
diverse ways in which social life can be ordered. Thus the logic of practice cannot be
considered a unitary subject of social analysis (Lynch 2001, 165) either in the sense
that all social practices exhibit a basic or fundamental logic in virtue of being practices
or in the sense that the logic(s) of a specific practice or context necessarily exhibit an
internal consistency.4 Instead, we should consider the idea of a socio-logic as referring
to the way practice(s) exhibit a consistency of style, one that reflects the cultural context(s) in which they take place. The suggestion is that the field structures practice. The
ethos of a field is its normative or moral structure whilst the eidos of a field is the operative socio-logical characteristic of its ethics or, better, the practice(s) – including reflective practices – of ethics that take place within it.
If we take contemporary psychological research into morality and ethics as presenting a prima facie challenge to standard philosophical accounts, then my purpose here
might be considered as an attempt to mediate between the two. In considering the socio-logic or eidos of ethics, I seek to relativize ethical reason, and not just moral affect
(emotion) or intuition, to particular socio-cultural realities embodied in habitus. In
suggesting that both our affective and intuitive moral responses as well as our cognitive
and reflective ethical reasons are the product of our social (psychological) development,
I am developing the view that morality (intuition) and ethics (reason) should not be
dichotomized, at least not to the degree suggested by dual process theories proffered
by contemporary moral psychology. A broader sociological or collective understanding of morality and ethics reveals that they are interlinked and have the potential to
stand in the kind of critical and reflexive relationship that will facilitate change and
development in each of these domains.
This runs counter to the largely a-temporal focus on specific examples of individual
ethical decision-making we find in moral psychology. To a degree, this focus is a function of psychology's methodology, a consequence of disciplinary norms and experimental design. However, it suggests that moral psychology is not as social as it might
be.5 The challenge presented by moral psychology to moral philosophy is a function of
4 As Bourdieu puts it, the logic of practice is not that of the logician and can only function by taking all sorts of liberties with logical logic (Bourdieu 1992a, 86, 267).
5 I am not alone in making this complaint; see Anon. Editorial (2014, 149).
the way it demonstrates that we are not the rational subjects that philosophical moral
theory takes us to be.6 However, by taking this demonstration as a challenge we implicitly assume that moral beings ought to be rational in an asocial or transcendental sense.7 A truly social view, however, should not be concerned by the fact that human
morality cannot transcend the historical, social and cultural conditions of its own production. Such a view should not be taken to suggest that we cannot move beyond our
current habits of thought and action to creatively remake some aspect of ourselves and
our world (Johnson 2014, xii). Rather it suggests we can only do so on the basis of our
current habits of thought and action and that this is "the only reasonable notion of transcendence available to human [beings]" (ibid., xii). It turns out moral psychology
provides us with an opportunity to rethink or retheorize morality/ethics as part of social reality rather than as a transcendental ideal.
Albeit in a small way, these are the ends I pursue in this essay. First I discuss the
terms ethos and eidos in more detail and make use of Bourdieu's concept of habitus to
suggest that these socio-logical structures become lodged within individuals as dispositions. On the basis of this picture, I consider some recent research in moral psychology
and whether the findings can be comprehended in the terms I have outlined.

Ethos and Eidos

As noted, whilst the term ethos is reasonably commonplace it has been subject to "surprisingly little [philosophical] analysis" (Wolff 1998, 105). This may well be a function
of the term's range and flexibility;8 etymologically or otherwise, it is connected to a
diversity of other concepts, many of which are directly relevant to the present inquiry.9
In everyday language we can consider the characteristic ethos (spirit, morality, ethic,
attitudes, worldviews) of particular companies, schools, universities, nation states, social identities, or modes of economic organization (e.g. capitalism). All of these examples can be considered as social fields. The term can even be applied to individuals. As
such, an ethos is a durable and, in the sense that attaches to Hegel's Sittlichkeit (Gregg

6 Of course the challenge that psychology poses to the rational subject is not restricted to moral psychology. See, for example, Kahneman (2011).
7 For an extended argument against both the idea of and the need for morality to be rationally transcendent and its corollary, moral absolutism/fundamentalism, see Johnson (2014).
8 In terms of Bourdieu's theory of practice, flexibility is a bona fide virtue. Whilst my analysis is almost entirely theoretical, Bourdieu's social theory is conceived for the purpose of socio-analysis. As such, the theoretical or conceptual meaning of particular terms remains incomplete until it becomes operationalized in the context of a particular empirical research project.
9 Including morality, ethics (and ethic), character, custom, tradition, habit (habitus and hexis), style, attitude, spirit (Geist) and worldview (and lifeworld).
2003, 102), normatively thick facet of social and cultural life. An ethos can be considered a common-morality (Kukla 2014a) and one that is not only realized10 in habitus
but in the structures of society and the perspectives of culture. As Geertz (1957) suggests:
[T]he moral (and aesthetic) aspects of a given culture, the evaluative elements, have commonly been summed up in the term ethos […]. A people's ethos is the tone, character, and
quality of their life, its moral and aesthetic style and mood; it is the underlying attitude toward themselves and their world that life reflects (ibid., 421).

Geertz differentiates ethos from world view or "the cognitive, existential aspects"
(ibid., 421) of a given culture, a distinction that reflects the way in which eidos should
be considered an aspect of ethos. My use of eidos is inspired by Bateson (1958), who
conceived of it as a way to capture what he had previously considered as the particular
logic of a culture.11 Bateson drew a distinction between the affective and the cognitive,
suggesting the eidos of a culture is "an expression of the standardized cognitive aspects of the individuals, while the ethos is the corresponding expression of their standardized affective aspects" (ibid., 33).12 Nevertheless, and as suggested by Bateson, we should
consider eidos an aspect of ethos and, therefore, position the affective and cognitive
domains as interrelated phenomena. As such, they exhibit a continuity and so we might
speak of ethos-eidos.
Bourdieu also makes use of these terms suggesting that we can consider eidos as a "system of logical schemes" and ethos as a "system of practical, axiological schemes" and he further suggests that "[t]he practical principles of classification which constitute the habitus are inseparably logical and axiological, theoretical and practical" (Bourdieu
1993, 86). Such a view reflects the continuity of these phenomena as it implies any eidos (logic or socio-logic) always reflects a particular ethos (or axio-logic). Whilst the
logical schemes of applied or practical ethics are an attempt to transcend the axiological
nature of culture, common-morality or ethos, the attempt can never be fully successful.
Even here, at the apotheosis of ethical rationality, we cannot escape ethos as the very
constitution of disciplinary practices implies its existence. As Daston and Galison suggest, [i]t is perhaps conceivable that an epistemology without an ethos may exist, but we have yet to encounter one (2007, 40).13

10 In Bourdieuan parlance there is a particular meaning attached to the term realized, one that implies certain phenomena should be understood as socially constructed; as rendered real through the particular practices that surround them. It is a meaning that can be expressed with a hyphen: real-ized.
11 One should point out that Bateson's use of the term eidos is only tenuously related to Aristotle's use of the term and has next to nothing in common with the meaning it has in Husserlian phenomenology.
12 In the hands of other authors inspired by Bateson, eidos is said to be the predominant character of the whole stock of ideas available in a [culture,] society or group (Madge 1964, 13) and a culture's appearance, its phenomena, all that about it which can be described explicitly (Kroeber 1963, 101–102). It is worth noting that not all appearances can be made fully explicit and that any explicit description will rely on the existence of an ethos in common if it is to generate a mutual understanding.
Due to the constraints of length, our discussion of ethos and eidos must, at this
point, be curtailed. However, and by way of summation, it is worth recalling that eidos
is to be understood as part of ethos and, as such, any attempt at specification is an attempt to more clearly demonstrate the connection between, on the one hand, our values and, in this case, our embodied morality and, on the other, our reasoned and reflective judgments, or our codified or codifiable ethics.
Thus, the concept of eidos illuminates Wolff's conception of ethos as having three levels, these being values, principles, and practice (1998, 105).14 Whilst some values can be made explicit (at least to a degree), others remain tacit and, therefore, cannot be considered as subject to the type of reflection and questioning that can cast doubt on explicitly held values (Wolff 1998, 159–160 fn.11), something fully consistent with a practice theory perspective. This is also consistent with Kukla's (2014a) analysis of our
common morality. Whilst I consider her notion of common morality to be consistent
with the idea of an ethos, we can, regardless, agree with the general point that common
morality does not consist of a set of shared propositional judgments (2014b, 102). A
proper understanding of common morality cannot be limited to codified rules or explicitly stated values; it must also consist of embodied principles that determine the correct application of a rule (Taylor 1993) and the tacit values that underpin practice. Furthermore, a set of shared propositional judgments or a codified ethics is an expression of the common morality; they are embedded in a particular ethos. Our understanding of this expression is facilitated by the notion of eidos, as the standardized or cognitive aspects of a cultural or sub-cultural group, understood as the socio-logic of its ethics. Eidos refers to the characteristic modes of thought of a particular group or, more accurately, of an ethos.

13 Whilst Daston and Galison are discussing the nature of objectivity as a value that is variously operationalized within concrete scientific practices (disciplines, research programs and, in particular, historical time frames), their arguments can, I think, be extended to the nature of objectivity as embedded in applied ethical discourses. Furthermore, we might note Hämäläinen's suggestion that the difference between theoretical and anti-theoretical perspectives in moral philosophy is predominantly a matter of spirit or what I call ethos rather than, simply, what one sees as the force of the better argument (2009, 545–546; see also Kolodny 1996). Finally, we might consider the conflict regarding the concept of dignity in applied ethical discourses (Schüklenk and Pacholczyk 2010). The conflicting perspectives on whether or not this term has utility for applied ethics are not purely a matter of reasoned or logical disagreement, but a matter of eidos and, therefore, ethos. They are a matter of the differing axiological perspectives that underpin varying conceptions of morality, ethics and, one might add, human being.
14 In fact, Wolff (1998) considers this an idealization, suggesting that we should not consider them to be fully delineated. Furthermore, we might be well advised to reject the suggestion of hierarchy implied by the term levels and instead think of different, mutually supportive, aspects of human life.

Habitus

Consistent with my other work (Emmerich 2013; 2014), the terms ethos and eidos are
presented as part of a sociological and anthropological theoretical framework, as a contribution to Bourdieu's social theory of practice. As such, whilst some Aristotelian philosophers may feel the term habitus (or hexis)15 offers them a sense of familiarity, it is
important to recognize the limitations of discerning a direct lineage, not least because
Bourdieu had little direct interest in either moral theory or moral practice. Thus, whilst
some definitions of the term can be considered as having an Aristotelian echo, others
can appear somewhat alien. For Bourdieu, habitus refers to:
[A] system of dispositions, that is of permanent manners of being, seeing, acting, and
thinking. Or a system of long-lasting (rather than permanent) schemes or schemata or
structures of perception, conception and action (Bourdieu 2002, 27–28).
[S]ystems of durable, transposable dispositions, structured structures predisposed to function as structuring structures, that is, as principles which generate and organize practices
and representations that can be objectively adapted to their outcomes without presupposing a conscious aiming at ends or an express mastery of the operations necessary in order
to attain them (Bourdieu 1977, 72).

The idea of habitus is used to conceptualize the way in which individuals, and therefore
their practices, are socially structured. Via the collective enterprise of inculcation, our
dispositions become acclimatized, adjusted and attuned to the sociological environment, a fact that affects all aspects of our being. Habitus is not defined substantively but
relationally, alongside and intertwined with Bourdieus conception of the social field.
As suggested by the echo between habitus and habitat, the relationship between habitus
and field or, rather, the structures of habitus (disposition) and field (position) is one of
ontological complicity (Bourdieu and Wacquant 1992, 20; Bourdieu 1981, 306; Bourdieu
1977, 77).16 The structural facets of habitus, systems of dispositions, should be considered as homologous counterparts of the structural aspects of the field and our positions
within them. As such, habitus is a theory of social and cultural reproduction. It can be
used to show how processes of socialization, informal education and tacit pedagogy
(Bourdieu and Passeron 2000, 47) promote the internalization of social reality.

15 Aristotle's hexis has commonly been translated as habitus (Carlisle 2013, 33). Interestingly, Bourdieu also makes use of the term hexis but restricts it to the particular aspect of habitus that involves the comportment of the body. Thus, our ways of walking, ways that vary across gender, class, and national context, are aspects of our bodily hexis, a corporeal phenomenon that is, nevertheless, fundamentally social and therefore an aspect of Bourdieu's conception of habitus.
16 Bourdieu's own statement of the ontological complicity of habitus and habitat has been criticized as being exaggerated. Whilst I would suggest that is merely a matter of correctly understanding the nature of the relationship between habitus and field, Sayer's comments are instructive (2010, 111).

Furthermore, deploying the term enculturation,17 I have argued it can also be used to
understand formal medical ethics education and the workings of formal or explicit
pedagogy (Emmerich 2014).18 Taken together, socialization and enculturation constitute what Bourdieu calls the collective enterprise of inculcation (ibid.), the primary
function of which is the reproduction of (social and cultural) practices. Furthermore,
such practices not only emerge from habitus (systems of dispositions), but from its
interaction with the field. As such, practice functions to maintain existing dispositions,
as well as to reproduce the social structures of the field, the existing social and cultural
reality. As it is implicated in our phenomenological social perceptions (our understanding of what is, and is not, objective) and our modes of thought, the habitus is
implicated in the production and reproduction of subjectivity itself.
Meisenhelder has called Bourdieu's habitus a subtle and contemporary version of the idea of social character, albeit one that is, in his own words, decentered, and offers a sociological conception of subjectivity (Meisenhelder 2006, 64–65). Thus, where Aristotle's notion of character and habitus is almost singularly concerned with morality and ethics, Bourdieu's has a broader focus. Certainly there are some points of
contact between the Aristotelian and the Bourdieuan conception of habitus, particularly if one takes a broader view of the notion of habit than the one Bourdieu dismisses as
repetitive, mechanical [and] automatic (1993, 87).
To return to the topic at hand (morality), only rarely did Bourdieu attend to practices of ethics directly.19 It is for this reason that I have, both here and elsewhere, developed the field-level concepts of ethos and eidos. Coupled with habitus we have the
beginnings of a social or sociological theory of morality and ethics in a broadly Bourdieuan frame.20

17 My use of the term enculturation complements Toulmin's, who considers it to be an apprenticeship-style learning process by which certain explanatory skills […] [such as] the repertory of intellectual techniques, procedures, skills, methods of representation are transferred. The relevant skills are those employed in giving explanations of events and phenomena within the scope of the science concerned. In this case the science or mode of thought we are concerned with is ethics (Toulmin 1972, 159).
18 Whilst the distinction between formal (explicit) and informal (implicit or tacit) pedagogy might be considered in terms of the formally organized classroom and the informal learning that takes place during an apprenticeship, the suggestion that pedagogy can be tacit or explicit suggests something deeper. An example might be the way children learn to talk simply through exposure to language. They learn the meaning of words and they develop the ability to speak in a way that makes grammatical sense without, for the most part, any explicit instruction on how to do so. In contrast, children cannot learn to write in the same way. However, such explicit pedagogy also carries tacit lessons. As part of learning to write, children will learn fine motor skills, how to sit at a desk, and the difference between the kind of expressions appropriate to written language and those appropriate to speech.
19 An exception is Towards a Policy of Morality in Politics (Bourdieu 1992b), reprinted as A Paradoxical Foundation of Ethics (Bourdieu 1998, Chap. 7). However, the relevance of this work to the present argument is limited.
20 Some might consider this Aristotle's starting point. However, it is patently obvious that he was not a sociologist and the roots of the discipline cannot simply be traced to his thought. Certainly neo-Aristotelian moral philosophy has more in common with sociology and social theory than does modern moral philosophy. Nevertheless, the difference remains and can, perhaps, be traced to the respective purposes of moral philosophy and social theory. The former tries to determine what morality ought to be, and does so for philosophical, and not necessarily practical, purposes. The latter attempts to theorize what we might take morality to be in reality or in fact. Here the purpose in so doing is to understand social reality as it is and to facilitate the conduct of empirical research. Moral philosophy cannot fulfill this purpose, as what morality is in practice may not be what philosophers think it ought to be (indeed, if it was, would one need moral philosophers?) and a social theory of morality must not take moral philosophy as a given but consider it as part of social reality, theorize it as a social field and a practice.

Furthermore, the perspective I am developing is consistent with Bourdieu's suggestion that [t]he strength of the ethos is that it is a morality made flesh (1993, 86) and Kukla's suggestion that common morality is embodied in habitus
(2014a, 85 fn.25). It also accords with the way in which habitus operates as an unconscious driving force of practice as well as the role intuition is said to play in our moral
lives. It also accords with the suggestion, recently presented by Zahle (2013; 2014), that
we are able to perceive normative states directly. Furthermore, it is of a piece with
Bourdieu's suggestion that the practice of giving reasons moves one away from embodied morality or ethos and into the domain of eidos:
[S]imply by asking questions, interrogating, one forces people to move from ethos to ethic;
in inviting a judgement on constituted, verbalized norms, one assumes that this shift has
been made (Bourdieu 1993, 86).

We might add that the assumption appears to be that not only is this shift made but
that it can be made unproblematically, given that many of those who comment on the
notion of ethos suggest that not only is it a non-propositional form of knowledge but
that it is, in principle, resistant to being fully expressed in a propositional manner
(Kukla 2014a; 2014b; Wolff 1998).
The idea of habitus can be attached to various, reasonably well-delineated social
roles such as police constable, teacher, or healthcare professional. We might think in
terms of, for example, the medical habitus of doctors (Luke 2003; Sinclair 1997) and the
way in which this constructs their subjectivity. Consider, for example, the idea of the
(bio)medical gaze or, indeed, the idea of the (bio)medical ethical gaze or the characteristic ethos of medicine. Whilst the idea of the medical gaze has been, since Foucault's
(2003) analysis, commonplace, the suggestion here is that it can be expanded to include
all aspects of medical practice, including its ethics. As such, habitus can be considered
the locus of our moral embodiment. Social structures such as ethos-eidos have homologous dispositions of habitus. Thus, the practices of social life include not only the
normative know-how required to negotiate everyday society but the giving and receiving, or exchange, of ethical reasons and reasoning. As such, ethics is a practice, one that
involves the presentation and representation of our embodied moral knowledge in
propositional form. This shift cannot be accomplished unproblematically precisely
because, in Polanyi's famous phrase: we know more than we can tell (Polanyi
2009, 4).

If we reflect on specific examples, then the nature of habitus can be more fully revealed. Consider the idea of customer service, a practice that can be found across the retail sector and beyond. From a Bourdieuan perspective, this practice is a function of habitus, of a particular system of dispositions. Given the practice appears in a variety of retail and non-retail contexts (or fields), it would seem likely that the dispositions, or habitus, that underpin the practice of customer service are highly transposable. The ethos of customer service acquired in one context, a call center say, is easily transferred and adjusted to another arena.21 Such thinking provides clarity to the sociological nature of Bourdieu's concept. The idea of a customer service habitus appeals to a sui generis notion of customer service that can be operationalized in a variety of ways so as to include or exclude various different practices depending on the particular interest motivating the project at hand. This is of a piece with what is perhaps the most commonly investigated form of habitus: that of different social classes. The idea of a working or middle class habitus and, for that matter, ethos has proved sociologically powerful. Whilst discussing this phenomenon may, to some, seem otiose (in this context, implying the existence of a working class ethos is to imply the existence of a working class morality), the point is well made.22 Reflecting on what is meant by a middle class morality (a set of values that conditions the way we perceive, feel, and think about a large variety of phenomena, a set of values that both describes and prescribes what it is to be middle-class), it becomes clear what is being suggested when we equate morality with ethos. It also makes clear the broader relevance of the notion of eidos beyond simply ethics. There are particular modes of thought that one can consider characteristic of the middle and working class. One can connect this to what Bourdieu calls love of fate (amor fati), the idea that our expectations are adjusted to our unconscious perceptions of the objective social conditions in which we exist. Classed differences in ethos result in classed differences when it comes to the articulation of one's ambitions, expectations and the possibilities presented by the future.
21 Of course it helps that almost everybody has knowledge and experience of retail in so far as they will have been customers and on the receiving end of customer service. It may be less easy to transpose a customer service habitus to unfamiliar arenas, e.g. that appropriate to the corporate banking sector, say, or to waiting tables in the UK or the USA, in a fast food restaurant or Michelin starred establishment.
22 Those who find that the discussion of class obscures rather than facilitates their understanding at this point could first reflect on the related notion of bourgeois morality and second consider gender instead. There is a normative dimension to gender and the behaviors associated with gender, suggesting that masculinity and femininity differ in terms of their ethos or morality. Furthermore, the idea that there is a double standard when it comes to sex suggests the ethos of gender contributes to differing ethical analysis of what is and is not appropriate for men and women. Of course none of this is to suggest that the ethos and eidos that forms the basis of our understanding of gender is justified or that we should not seek to change it. Indeed, seen in the light of alternative modes of thought, such as that underpinning feminism and the ideas of political equality, obviously we should.

To return to the concern at hand, morality-as-ethos, the related eidos that informs ethical thought, and habitus-as-social-character, we might recognize the way in which the account I have given suggests these concepts should be understood in a collective,
social or sociological manner rather than simply in relation to individuals. The purpose
of taking up these tools is to promote a wider appreciation of morality as a social and
cultural phenomenon. In reflecting on the ethos of medicine, the middle classes or any
other social field, we should not think in terms of clearly delineable phenomena that
are fully external to our investigation and analysis. At minimum, they are a function of
the scale of our analytic focus. For example, we might consider the middle class ethos of
the UK or of Bristol, a city in the UK, or the ethos of the NHS (National Health Service)
or of a particular NHS hospital. Indeed, there is little need to consider ethos to have a
real existence over and above any analysis or the way in which those located within
these fields interact and thereby maintain its existence. If considered as social character,
habitus cannot be understood as a complete theory of the individual or the self as it
does not encompass the entirety of what we might consider as falling under these
terms. It does not include personality, for example. Nevertheless, it expands our understanding of the self, and subjectivity, by suggesting that we think in broader sociological
terms. It remains the case, however, that habitus does not entirely repudiate the insights of other perspectives or purport to give a completely self-sufficient account and
thereby stand alone.
Similarly, we might take Bourdieu's habitus to suggest that an individualistic view of
ethics and ethical reason is limited, as it suggests our reason must be seen in terms of its
relationship to the field, to eidos and therefore ethos. Those practices we term ethical
reasoning are socially structured by a particular eidos, the logic of a culture and its
ethos, as these structures are embodied in the dispositions of habitus. Habitus and the
structures of the field (in this case, ethos-eidos) are ontologically complicit. Thus, the
giving of ethical reasons, the justification of moral judgments, can be treated sociologically. In the first instance, the ethical judgments of individuals can often be considered
stable; vegetarians, for example, do not consider their commitments anew each time
they peruse the menu. Furthermore, the reasons vegetarians might give for their dietary
choices will likely be part of a wider debate about the ethics of animal husbandry and
meat eating. Our ethics are not idiosyncratic; they are not merely our own but rather
are part of the wider discourse about right and wrong, good and bad.
All this can be understood through the lens of habitus, as Bourdieu suggests:
[T]he history of the individual is never anything other than a certain specification of the
collective history of his group or class, each individual system of dispositions may be seen
as a structural variant of all the other group or class habitus, expressing the difference between trajectories and positions […]. Personal style […] is never more than a deviation in relation to the style of a period or class so that it relates back to the common style not only by its conformity […]. But also by the difference which makes the whole manner23 (Bourdieu 1977, 86).

However we consider it, whether embodied in habitus or as a sociological construct, ethos is not a homogenous account of morality. Just as the Christian ethos can vary, so
can the moral habitus of Christians. Nevertheless, there are limits to this variation and
there must, in the final analysis, be some continuity in this variation, the kind of continuity Wittgenstein called family resemblance (Wittgenstein 2009, 67). Similarly,
whilst we might expect to find a certain degree of logical inconsistency in the ethics of
a society and of individuals, we should expect to find a unity of style. From a Bourdieuan perspective, we should not, as an academic habitus inclines us to do, assume
that the logic of things will conform to the things of logic (Bourdieu and Wacquant
What is of concern to the Bourdieuan sociologist are the logics (and axio-logics) of practice as it is these that inform the ways in which morality and ethics are
actually practiced.

Social Theory Contra Contemporary Moral Psychology

Whilst my development of these conceptual and social theoretical tools is primarily motivated by a desire to contribute to sociological research into morality and ethics,
they can, I think, provide a useful perspective on some recent research in moral psychology and X-Phi. In so doing, we can modulate some of the more provocative and
troubling conclusions such work suggests. In the first instance, we might consider the
perspective that we have two modes or systems of thought (Kahneman 2011), a view
that has wide application but has been particularly fruitful in the formulation of dual
process theories in moral psychology (Haidt 2012; Cushman et al. 2010). There is a
prima facie affinity between, on the one side, ethos, morality, fast thinking, emotion
and intuition and, on the other, eidos, ethics, slow thinking and reason, reflection or
cognition. However, the embodiment of ethos (and therefore eidos) in habitus and the
fact that habitus is history incarnated suggests that we should see moral intuition and
ethical reflection as conceptually and developmentally interconnected.

23 This relates to another perspective of Bourdieu's. Any orthodoxy can be contrasted with its heterodox counterpart(s); nevertheless both will exhibit an underlying doxastic unity (Bourdieu 1977, 168). Furthermore, we can find examples where contrasting opinions exist within the same field. If we focus at a macro level, the same ethos-eidos can be seen as producing different substantive opinions, whereas a micro level focus might suggest that they differ. Clearly, what we might take to be orthodox or heterodox is given relative to a particular field and, therefore, to the particular focal depth of our lenses.

Whilst we can differentiate between socialization and enculturation, and thereby focus on the reproduction of ethos or eidos, the pedagogically informal or the pedagogically formal, the tacit or the explicit, they are to be understood as co-productive and unified within what
Bourdieu calls a collective enterprise of inculcation (Emmerich 2014). Just as the reflective practices involved in coaching can (re)condition the embodied practice of playing a
sport (Noble and Watkins 2003), the reflective practice of ethics can be understood as,
at minimum, having the potential to (re)condition our embodied moral intuitions.24
We can consider this to be an autopoetic or reflexive reading of the relationship between morality (ethos) and ethics (eidos) as embodied in habitus.
Of course any reflective practice of ethics will proceed on the basis of our existing
moral intuitions. Nevertheless, eidos does not merely influence ethos but, over time,
can aim at its reconstruction. Furthermore, whilst we can distinguish between the (interrelated) histories of both ethos and eidos, moral intuition and (forms of) ethical
reflection, they are not in fact distinct but phylogenetically and ontogenetically interconnected. There is a collective, as well as individual, history to morality and ethics,
ethos and eidos. Furthermore, the moral and ethical development of individuals can be
considered as continuing across the life-course.25 This view runs counter to the methodologies of moral psychology and X-Phi, which not only tend to neglect the developmental history of the individual in the fullest sense but also see ethical development as a
fundamentally individualist or intrapersonal, rather than intersubjective and interpersonal, phenomenon. Thus, whilst the development of intuitive moral judgments is often understood as socio-historically produced, this is not extended to the development
of ethical reasoning, understood as both a mode of thought and as a range of substantive judgments and perspectives. In addition, the role of ethical reason is seen as being
restricted to the justification of particular judgments by individuals. However, ethical
justifications are more often native to particular cultures, sub-cultures or communities.
If the theoretical, methodological and substantive dimensions of ethical reasoning are
understood as having a history, and a socio-cultural shape, then ethics and ethical reasoning can be considered a collective phenomenon, with the specific judgments of individuals being formed within the particular conditions of their (socio-cultural) existence. Such thinking can be articulated in relation to individuals as well as to sociocultural life and specific moral and ethical cultures (fields), such as medicine or, indeed,
the discipline of applied ethics. Bearing this in mind, let us briefly consider some specific aspects of psychological research into morality and ethics.
24 Although he adopts a Foucaultian perspective, and is more concerned with the emotional dimension of our lives as moral beings, Zigon (2008) also considers ethical reflection to involve the reflexive or autopoetic reconditioning of our moral dispositions.
25 I take a participationist view on development that understands it in terms of changes to practice rather than individuals (Sfard 2010, 80).

The identification and experimental (re)production of moral dumbfounding (Haidt et al. 2000) has been particularly influential for the development of moral psychology. The effect is produced by offering research participants carefully crafted vignettes that, at first blush, invite and perhaps even provoke negative moral evaluations.
Examples include: cooking and eating a family pet; having sex with a store bought
chicken before cooking and eating it; and a case of brother/sister incest (Haidt 2001,
817). The research participant is asked to reflectively justify their initial intuition. The
vignettes are designed in such a way as to counter standard justifications for the immorality of the activity based on, in the case of incest, damage to the sibling relationship,
risk of psychological harm or the conception of a genetically abnormal child. Participants are said to be morally dumbfounded when they cannot justify their initial moral
intuition but, nevertheless, wish to maintain the immorality of the suggested activities
or only reluctantly acknowledge that the described activities are morally permissible.
In the view articulated above, our initial moral intuitions are functions of the ethos
embodied in habitus whilst our ethical justifications are related to the specific aspect of
ethos I have called eidos. If we recall Bourdieu's suggestion, cited above, that simply by
asking questions, interrogating, one forces people to move from ethos to ethic and also
consider the subsequently made statement that one forgets that people may prove
incapable of responding to ethical problems whilst being quite capable of responding in
practice to situations raising the corresponding questions (1993, 86), then whilst we
may not be able to justify an absolute, or non-contingent, prohibition on eating our
pets, the ethos of pet ownership would be compromised if it did not produce the moral
intuition that doing so is wrong. Similarly, whilst we can construct thought experiments that guarantee no negative consequences will result from a particular case of
sibling incest, no such guarantee can be given in practice. Thus, it seems important to
the maintenance of social order and cultural norms that we react negatively to the suggestion of incest. The moral ethos cannot be reconstructed in the light of outr ethical
contingencies as to do so would defeat the social and cultural purpose of its existence.
The logic of everyday ethical reasoning obeys the logic of practice, and not that of the
logician (Bourdieu 1992a, 87, 267).
This view connects to another aspect or domain of contemporary research in moral
psychology and X-Phi, what Abend calls trolleyology (2013, 161), the seemingly
endless iteration, reiteration, analysis, reanalysis and now experimental (re)analysis of
trolley dilemmas. Such cases also require the reader to assume knowledge that could
not, in practice, be guaranteed. The fat man may not dislodge the trolley and those on
the track could well have a means of escape. However, what is interesting about these
examples is the way in which they model the eidos and (anti-ethos) ethos of applied
ethics.26

26 This turn of phrase, the anti-ethos ethos, is something I have taken from Anderson's analysis of The Way We Argue Now (Anderson 2005, 178). This quasi-paradoxical term expresses the way certain cultures or social fields can be structured around a kind of value neutrality. Whilst Anderson focuses on Habermasian procedural politics (2005, Chap. 7), Daston and Galison's (2007) work on scientific objectivity, briefly discussed above, can be considered in the same vein. As both of these texts suggest, a culture of value neutrality is not a culture without values but, rather, a culture which values neutrality and therefore values things like objectivity and procedure. We might also consider Gellner's comments on Rationality as a Way of Life (1992, Chap. 7 & 8).

Thus, the average participant in trolleyology research is being asked to provide an ethical response to the scenario presented. However, the scenario presented cannot
be considered as being in accordance with their embodied moral and ethical sense,
where uncertainty is the rule. Instead, it accords with the normative structure and logic
(eidos) of an academic model. There is a dissonance between the determinative structure and thin nature of such thought experiments and the indeterminacy of everyday
life experienced as morally thick. In this, we find an echo of the dissonance identified
by Gilligan (1993) when she studied the moral responses of women to the classic abstract case of Kohlbergian moral psychology, the Heinz dilemma,27 and the reality of
deciding to have an abortion. Such thought experiments deny an important facet of
moral experience and practice: the thickness of cultural life, its indeterminacy and the
ongoing possibility of intersubjective negotiation.

27 The Heinz dilemma can be summarized as follows: A dying woman and her husband cannot afford the particular treatment that will save her life. It is available from a local druggist who is making a large profit. The couple can raise a large sum of money, enough so that the druggist will still make a profit, but when offered this sum the druggist refuses to sell his product at a discount. Should the man break into the drug store and steal the treatment or let his wife die? (Kohlberg 1981)
Whilst it is unproblematic for moral psychology and X-Phi to adopt the perspective
of applied ethics as a methodological device, it is, however, problematic if this subsequently becomes a normative prescription, even if this prescription remains relatively
implicit. This is something Bourdieu would consider as slip[ping] from the model of
reality to the reality of the model (Bourdieu 1977, 29) and characteristic of what he
calls the scholastic fallacy (2000, Chap. 2). In the absence of a properly articulated social
theory of morality and ethics (of the kind sketched above), social scientific research
cannot engage in normative prescription and nor can it properly engage in critique.
Precisely because ordinary people are not moral philosophers or applied ethicists, they
cannot be expected to obey the norms of these disciplines, even if these disciplines purport to set down the norms of morality and ethics. The misguided nature of this expectation can be given alternative expression: precisely because ordinary people do not
occupy the academic field of moral philosophy or applied ethics, they do not embody
(and nor are they subject to) its norms. They cannot be expected to express their ethical
views according to the anti-ethos ethos of scholastic reason.
Interestingly, these ideas can be used to make sense of another series of somewhat
playful, lighthearted but nevertheless insightful studies conducted by Schwitzgebel and
his interlocutors (Schwitzgebel 2009 & 2013; Schwitzgebel and Cushman 2012;
Schwitzgebel and Rust 2009, 2010 & Forthcoming; Rust and Schwitzgebel 2013). This
research purports to demonstrate that academic ethicists are no more (and possibly
less) moral, which is to say pro-social, than other, social scientifically comparable,
groups. Seminar rooms used by ethicists attending a conference are no more or less
untidy than those used by other philosophers; they seem to be as courteous as other philosophers; and they are no less likely to avoid paying a conference fee; books on
ethics were more likely to go missing from the library; ethicists do not appear to vote
more often than others; and nor are they more inclined to respond to email inquiries
than any other academics. Most damning is the fact that, as those conducting this research put it, [p]rofessional ethicists behave no morally better, on average, than do
other professors (Schwitzgebel and Rust Forthcoming). This claim rests on the finding
that whilst ethicists held more stringent views on meat eating, giving to charity and the
donation of blood and organs, these attitudes were not unequivocally reflected in ethicists' behavior (ibid.). Furthermore, it seems that professional philosophers are as
susceptible as others to priming bias in the formation of moral judgments (Schwitzgebel and Cushman 2012). It is no surprise then to find Schwitzgebel wondering, perhaps rhetorically, if teaching (applied) ethics is even a good idea (Schwitzgebel 2013).28
The view I have argued for does not find these results overly surprising. The anti-ethos nature of applied ethics is such that it would be difficult to embody the conclusions produced by this discipline.29 Certainly ethicists have no problem embodying the
ethos-eidos of their discipline but the morality of the discipline has little to do with the
substantive problems addressed. As in the case of medical ethics, the problems addressed are more likely to be embedded in other practices and, therefore, sub-cultures.
Furthermore, we are plural actors (Lahire 2010) capable of traversing different fields
and adjusting our behavior accordingly. Thus, the particular set of dispositions (habitus) invoked when we address the informal ethical questions that arise in the course of
our everyday lives differs from those invoked when we address the formal ethical
questions as professionals. This does not mean our practices are entirely determined by
context but, rather, they are sensitive to context; we behave differently when we visit our grandmothers as compared to when we visit a friend's grandmother.

28 It is worth pointing out that Schwitzgebel considers the current literature on the effects of teaching ethics to be limited and flawed to the extent that it is difficult to draw any conclusions (2013). The ease with which this view is presented is, I would suggest, indicative of a failure to grasp the complexities of conducting social scientific research into morality and ethics that is methodologically informed and theoretically sophisticated by social theoretical, as opposed to applied philosophical, standards. From the disciplinary perspective of sociology and anthropology, moral psychology, applied ethics, X-Phi and various other social scientific projects conducted in a piecemeal fashion can also appear limited and flawed to the extent that it is difficult to draw any conclusions. Such is the challenge of interdisciplinary scholarship both in general and in this case in particular.
29 This is not to suggest that we should abandon the academic discipline of applied or practical ethics but, instead, try to understand it in relation to, and as a part of, our broader moral culture. In short, we need a fully sociological view of applied ethics as an intellectual practice and social field.

Furthermore, whilst it is unlikely that the reflective practices (eidos) of everyday life will operate to the complete exclusion of the reflective practices of professional or applied ethics, medical doctors and academic ethicists are both, for the most part, everyday moral agents whose non-professional habitus embodies a moral ethos not unlike any other
everyday moral agent.
Finally, we might offer the following thoughts in regards to what has come to be
called the Knobe effect (Knobe 2003). This is where the ascription of intentionality
(and therefore moral responsibility) is greater when the consequences of a known side-effect are negative as compared to cases where they are positive. Thus, despite the fact
that financial gain is the stated and sole motivation in both cases, a chairman who pursues a policy that will negatively impact the environment is considered to have intentionally harmed the environment, whilst another chairman who pursues a policy that
will positively impact the environment is not considered to have intentionally helped
the environment. The view I have presented goes some way to explaining Knobe's findings as it suggests our ethos or common-morality is such that greater moral weight is attached to doing harm or risk of harm than to doing good or risking doing good, a fact borne out by the oddity of the phrase the risk of doing good. We might also consider the influence of both Mill's harm principle, and the related idea of the precautionary principle, and contrast them with what has been called the pro-actionary principle
(Fuller 2012; Fuller and Lipinska 2014). I would suggest that the relative status of these
principles suggests the degree to which our ethos (and eidos) differentially weights
(positive) prescription and (negative) proscription.30 Mill's liberalism is an essential
aspect of our ethos and, therefore, the way we think about ethico-political issues. Our
ascription of intentionality will similarly vary as intentionality is not simply a factual
statement but one that ascribes, which is to say evaluates, the (moral) responsibility the
individual has with regard to their actions. The attribution of intent is not a matter of
mere fact but of value.31

30 Consider further the Hippocratic dictum to first do no harm and the degree to which Kantian moral theory directs our focus towards acts that are impermissible rather than those that are mandatory.
31 It is highly pertinent to note that this finding rediscovers a basic methodological principle of non-positivist, or theoretically sophisticated, social science: facts and values are entangled. It is also highly problematic for the methodological maintenance of the related distinction between is and ought.
As such, the everyday meaning of the term intent is expanded to include morally
responsible for; thus, when we distinguish between behaviors that are performed
intentionally and those that are performed unintentionally (Pettit and Knobe 2009),
we are also distinguishing behaviors that are being performed in a morally responsible
manner and those that are being performed in a morally irresponsible manner. We are
prepared to judge the intentionality of others in the light of whether or not they are
acting in accordance with the collective moral ethos. Thus, pace Sripada and Konraths
(2011) interpretation of the Knobe effect, we tell more than we can know precisely
because, as per the Polanyi phrase used above, we know more than we can tell, when we
interpret the actions of others.32 The ethical judgments elicited by Knobe express a
moral ethos, one that the respondents do not have full awareness of. They are an expression of the respondents' embodied and tacit moral knowledge. Given the interrelation between ethos, morality and tacit knowledge, all of which are embodied in habitus,
and the fact that not only do the dispositions of habitus facilitate the formation of our
intuitive moral judgments but we also (empathically) ascribe habitus to others and
interpret their actions accordingly, neither the Knobe effect nor the fact that philosophers are as susceptible as everyday moral agents to order-effects in making moral
judgments should be all that surprising.

32 However, contra Sripada and Konrath's (2011) suggestion, this is certainly a normative or moral assessment. Normativity is more than the formal expression of principle.

A Brief Note on the Situationist Challenge

The situationist challenge is something leveled at virtue ethics and Aristotelian accounts of moral character (Doris 2002; Harman 2009). Its essence is that theoretically irrelevant influences can impact moral behaviors and that this fact undermines the notion of stable, or global, virtues such as compassion or empathy. The psychological research underpinning the challenge motivates, in Harman's words (2009), skepticism about character traits and suggests the conclusion that we lack moral character or, at least, that we lack the type of moral character (neo)Aristotelian moral theories appear to
rely on. Psychological research into character traits such as compassion, for example,
suggests that compassionate behavior is dependent on mood (Alfano 2014, 36; Doris
2002, 28-32; Webber 2006, 10). Thus, having had a bad day, the compassionate person
is less likely to help someone in need. Such examples call into question the idea that we,
or our characters, can be considered truly compassionate. There is no shortage of responses to these points and I do not intend to go into them here. Suffice to say that the
situationists seem to insist that, if we are to lay claim to virtuous dispositions, then they
must be understood in a highly determinist manner. It is also worth noting that, when
considered sociologically, social character or habitus applies to classes or groups of
individuals. Thus, whilst doctors or women are, on the whole, more compassionate or
empathic, some doctors and women may deviate from this norm either globally or in
particular instances. They have these dispositions as they occupy positions within (various) social fields, the ethos of which considers this a normative (orthodox) demand of
being a doctor and characteristic of femininity. Heterodoxy is, of course, possible, but it
is not without its consequences.
Bourdieu is often criticized for offering an overly determinist account (Jenkins
1982). Such views suggest that we (our natures and our practices) are not determined by social location and trajectory, at least not to the degree that his critics take him to be
suggesting. This critique can be compared to the situationist challenge. The point
appears to be that Aristotle is suggesting that the virtues are highly determinative of
practice or action and that, empirically speaking, this appears not to be the case. The
responses given by those who defend both Bourdieu and virtue ethics suggest that reading their theoretical proposal as fully determinist, or even as determinist as their critics
would have it, is misguided. Focusing on the debate surrounding Bourdieu's social
theory, we find attempts to reconcile, or hybridize, habitus and reflexivity (Adams
2006). Such innovations can be seen as an attempt to include our knowledge practices
within habitus. Depending on one's perspective, this is either an exciting advance in
social theory or proof that the determinist critique is correct. After all, what could be
more determinist than suggesting our reflective abilities are products of sociological
forces? However, this is precisely what is being proposed when we consider the role of
eidos with regard to ethical thinking. It seems to me that the inevitable conclusion is
that our agency is not only socially, culturally and historically conditioned but also a
product of these conditions.33 Thus, we can only overcome the social conditions of
production by engaging in the (socio-cultural) promotion of reflexive practices such as
ethics. Such practices are always and unavoidably rooted in a particular form of disciplinary inquiry, which is to say, an intellectual tradition or culture. The idea of objective enquiry understood as a practice that can transcend the social conditions of its
production is deeply problematic. Objectively constituted enquiries should aim at the
critical development of what has gone before, as transcending it is an unrealistic and
unreasonable goal (Johnson 2014, xii).

33 This is an increasingly common view in philosophy (Christman 2009) as well as a basic commitment of almost all social theory (Archer 2003; Martin and Dennis 2010; Lahire 2010; Smith 2007).
34 In the light of Ravaisson's (2008) analysis, Carlisle (2010) considers habit as mediating between freedom and necessity. Habit is positioned between autonomous agency and social determinism.
Some might consider this a blow to the possibility of objective, absolute or non-relativist morality. I, however, consider it a blow to the particular rationalist assumptions that underpin a particular form of ethics, one usually called applied ethics. Furthermore, rather than thinking in terms of agency and habitus as placing conditions on the way in which it is exercised, we might think in terms of freedom. This is something we might consider to be facilitated by habit and habitus, and positioned between (autonomous) agency and determinism.34 As Bourdieu puts it, the only form of durable
freedom is that given by the mastery of an art, whatever the art (1999, 340). Mastering
an art is a matter of habit, rehearsal and the development of expert, virtuoso or skillful
practice; as such, it is a matter of disposition and habitus. Paradoxically, the freedom of
the violinist is predicated on their embodied ability to play their instrument without
reflection. The musician is only free to play as they have placed limits on their freedom
through their training. Whilst repetition has its place, such training is not simply a
matter of repetition or rote learning. The violinist who plays with an orchestra or as
part of a quartet responds to the music played by others just as these respond to the
violinist. Even the soloist responds to their surroundings; they do not formulaically
repeat a rote-learned performance. Similarly, our reflective ethical freedom is only possible as a consequence of our dispositional embodiment of ethos, and the ontological
complicity that exists between habitus and field. Human freedom (by which, lest we forget, we mean adult human freedom) is predicated on our moral socialization. Moral
agency implies responsibility and accountability and, as such, it must be considered
with respect to some ethos. Pace the violinist, our reflective ethical freedom is paradoxically predicated on our socially produced dispositions and moral habitus; as such, both
morality and ethics should be considered arts that one should seek to master.35

35 This does not imply that morality and ethics are a branch of aesthetics, at least no more than the suggestion that medical practice is both an art and a science does.
Rather than think of moral character as an immutable and always active form of character, or self, we should see it as akin to social forms of character or as habitus. For example, a doctor's medical habitus, and the underlying system of dispositions, is not always active or in use. Furthermore, even when a doctor is engaged in the practice of medicine they may not always perform at their best. In the first instance, our practices always exhibit a natural degree of variation. Even in the face of high levels of repetition (habituation), the performance of, say, athletes or stage actors will vary; we are not, after all, robots. In the second instance, practice is produced through the interaction of habitus and field, and both of these phenomena have short, medium and long-term histories that are present in the moment. Having an argument with one's spouse
in the morning may have consequences for the afternoon performance of doctors, athletes and actors. And it may have consequences for the moral actions of moral agents.
Just as the medium and long-term histories of field and habitus have consequences for
the practices they produce, so do our short-term histories. We might then conclude
that an unavoidable consequence of our social, cultural and historical existence, of the
nature of human being, is that our (moral and ethical) freedom is a function of the way
our (moral and ethical) practices are improvisations, forged in the interaction of habitus (socio-moral character) and field.

Conclusion

The aim of this paper has been to set out some social theoretical concepts and to show
that they can be used to shed light on some recent research in moral psychology. My
broader aim has been to suggest that the existing dialogue between moral philosophy
and moral psychology should be expanded to include social theory and the developing
bodies of work identified as the sociology of morality and the anthropology of ethics.
Although one cannot say the same of X-Phi, most, if not all, moral psychology is social
psychology. Therefore, there should be some scope to forge greater interdisciplinary
connections. Thus, whilst one can see my complaint as yet another iteration of the view
that social psychology contains too much psych and not enough ssh (Anon.
Editorial 2014, 149), it is not aimed at absolving the sociologist (or anthropologist) of
their part in this state of affairs. In my view, habitus is an excellent candidate for a
bridging concept that can be analyzed, articulated and operationalized in all of the relevant domains: psychology; philosophy; sociology; and anthropology. In an interdisciplinary world, a broadly Bourdieuan approach, where the need for conceptual flexibility and variation is not only acknowledged but considered a virtue, has much to recommend it. Given this view, and the fact that Bourdieu's habitus already evinces connections to a variety of domains including psychology (Lizardo 2004) and, perhaps more importantly, cognitive anthropology (Strauss and Quinn 1998, 44–47), the above account can be considered as providing a starting point for further investigation.

References

Anon. Editorial. (2014). Characterology. Contemporary Sociology: A Journal of Reviews 43 (2), 149–154.
Abend, G. (2010). What's New and What's Old about the New Sociology of Morality. In S. Hitlin & S. Vaisey (Eds.), Handbook of the Sociology of Morality (pp. 561–584). London, UK: Springer.
Abend, G. (2013). What the Science of Morality Doesn't Say About Morality. Philosophy of the Social Sciences 43 (2), 157–200.
Adams, M. (2006). Hybridizing Habitus and Reflexivity. Sociology 40(3), 511–528.
Alfano, M. (2013). Character as Moral Fiction. Cambridge, UK: Cambridge University Press.
Anderson, A. (2005). The Way We Argue Now: A Study in the Cultures of Theory. Princeton:
Princeton University Press.
Archer, M.S. (2003). Structure, Agency and the Internal Conversation. Cambridge, UK: Cambridge University Press.
Bateson, G. (1958). Naven: A Survey of the Problems Suggested by a Composite Picture of the
Culture of a New Guinea Tribe Drawn from Three Points of View. Stanford: Stanford University Press.
Bourdieu, P. (1993). Sociology in Question. London, UK: Sage.
Bourdieu, P. (1977). Outline of a Theory of Practice. Cambridge, UK: Cambridge University
Press.
Bourdieu, P. (1979). Public Opinion Does Not Exist. In A. Mattelart & S. Siegelaub (Eds.), Communication and Class Struggle: An Anthology (pp. 124–130). New York, USA: International General.

Bourdieu, P. (1981). Men and Machines. In K. Knorr-Cetina & A.V. Cicourel (Eds.), Advances in
Social Theory and Methodology (pp. 304–317). London, UK: Routledge.
Bourdieu, P. (1992a). The Logic of Practice. Cambridge, UK: Polity Press.
Bourdieu, P. (1992b). Towards a Policy of Morality in Politics. In W.R. Shea & A. Spadafora
(Eds.), From the Twilight of Probability. Canton: Science History Publications.
Bourdieu, P. (1996a). On the Family as a Realized Category. Theory Culture and Society 13 (3),
19–26.
Bourdieu, P. (1996b). Distinction: A Social Critique of the Judgement of Taste. Reprint. Cambridge, MA: Harvard University Press.
Bourdieu, P. (1998). Practical Reason: On the Theory of Action. Cambridge, UK: Polity Press.
Bourdieu, P. (1999). Scattered Remarks. European Journal of Social Theory 2 (3), 334–340.
Bourdieu, P. (2000). Pascalian Meditations. Cambridge, UK: Polity Press.
Bourdieu, P. (2002). Habitus. In J. Hillier & E. Rooksby (Eds.), Habitus: A Sense of Place (pp. 43–49). Aldershot: Ashgate.
Bourdieu, P., & Passeron, J.C. (2000). Reproduction in Education, Society and Culture. 2nd Edition. London, UK: Sage.
Bourdieu, P., & Wacquant, L. (1992). An Invitation to Reflexive Sociology. Cambridge, UK: Polity Press.
Brubaker, R. (1993). Social Theory as Habitus. In C. Calhoun, E. LiPuma, & M. Postone (Eds.),
Bourdieu: Critical Perspectives (pp. 212–234). Chicago: University of Chicago Press.
Carlisle, C. (2013). The Question of Habit in Theology and Philosophy: From Hexis to Plasticity.
Body & Society 19 (2–3), 30–57.
Carlisle, C. (2010). Between Freedom and Necessity: Félix Ravaisson on Habit and the Moral Life. Inquiry 53 (2), 123–145.
Christman, J. (2009). The Politics of Persons: Individual Autonomy and Socio-Historical Selves.
Cambridge, UK: Cambridge University Press.
Cushman, F., Young, L. & Greene, J.D. (2010). Multi-System Moral Psychology. In J. Doris
(Ed.), The Moral Psychology Handbook. Oxford: Oxford University Press.
Daston, L., & Galison, P. (2007). Objectivity. Cambridge, MA: MIT Press.
Doris, J.M. (2002). Lack of Character: Personality and Moral Behavior. Cambridge, UK: Cambridge University Press.
Edel, M., & Edel, A. (2000). Anthropology and Ethics: The Quest for Moral Understanding. London, UK: Transaction Publishers.
Emmerich, N. (2013). Medical Ethics Education: An Interdisciplinary and Social Theoretical
Perspective. London, UK: Springer.
Emmerich, N. (2014). Bourdieus Collective Enterprise of Inculcation: The Moral Socialisation
and Ethical Enculturation of Medical Students. British Journal of Sociology of Education,
Online First.
Fassin, D. (Ed.) (2012). A Companion to Moral Anthropology. Malden, MA: Wiley-Blackwell.
Fassin, D., & Lz, S. (Eds.) (2013). Moral Anthropology: A Critical Reader. London, UK:
Routledge.
Foucault, M. (2003). The Birth of the Clinic. New Ed. London, UK: Routledge.

Fuller, S. (2012). Precautionary and Proactionary as the New Right and the New Left of the Twenty-First Century Ideological Spectrum. International Journal of Politics, Culture, and Society 25 (4), 157-174.
Fuller, S., & Lipinska, V. (2014). The Proactionary Imperative: A Foundation for Transhumanism. London, UK: Palgrave Macmillan.
Geertz, C. (1957). Ethos, World-View and the Analysis of Sacred Symbols. The Antioch Review 17 (4), 421.
Gellner, E. (1992). Reason and Culture: A Sociological and Philosophical Study of the Role of Rationality and Rationalism. Oxford, UK: Blackwell.
Gilligan, C. (1993). In a Different Voice: Psychological Theory and Women's Development. Cambridge, MA: Harvard University Press.
Gregg, B. (2003). Thick Moralities, Thin Politics: Social Integration Across Communities of Belief. Durham, NC: Duke University Press.
Haidt, J. (2001). The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment. Psychological Review 108 (4), 814-834.
Haidt, J. (2012). The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York, NY: Pantheon.
Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral Dumbfounding: When Intuition Finds No Reason. Unpublished Manuscript, University of Virginia. http://commonsenseatheism.com/wp-content/uploads/2011/08/Haidt-Moral-Dumfounding-When-Intuition-Finds-NoReason.pdf
Hämäläinen, N. (2009). Is Moral Theory Harmful in Practice? Relocating Anti-Theory in Contemporary Ethics. Ethical Theory and Moral Practice 12 (5), 539-553.
Harman, G. (2009). Skepticism about Character Traits. The Journal of Ethics 13 (2-3), 235-242.
Hitlin, S., & Vaisey, S. (Eds.) (2010). Handbook of the Sociology of Morality. London, UK: Springer.
Ignatow, G. (2009). Why the Sociology of Morality Needs Bourdieu's Habitus. Sociological Inquiry 79 (1), 98-114.
Jenkins, R. (1982). Pierre Bourdieu and the Reproduction of Determinism. Sociology 16 (2), 270-278.
Johnson, M. (2014). Morality for Humans: Ethical Understanding from the Perspective of Cognitive Science. Chicago: University of Chicago Press.
Kahneman, D. (2011). Thinking, Fast and Slow. London: Penguin.
Kleinman, A. (1995). Writing at the Margin: Discourse between Anthropology and Medicine. Berkeley, USA: University of California Press.
Knobe, J. (2003). Intentional Action and Side Effects in Ordinary Language. Analysis 63 (3), 190-194.
Kohlberg, L. (1981). Essays on Moral Development. Vol. 1. San Francisco, CA: Harper & Row.
Kolodny, N. (1996). The Ethics of Cryptonormativism: A Defense of Foucault's Evasions. Philosophy & Social Criticism 22 (5), 63-84.
Kroeber, A.L. (1963). Anthropology. New York, NY: Harcourt, Brace & World.
Kukla, R. (2014a). Living with Pirates: Common Morality and Embodied Practice. Cambridge Quarterly of Healthcare Ethics 23 (1), 75-85.

Kukla, R. (2014b). Response to Strong and Beauchamp: At World's End. Cambridge Quarterly of Healthcare Ethics 23 (1), 99-103.
Lahire, B. (2010). The Plural Actor. Cambridge, UK: Polity Press.
Laidlaw, J. (2013). The Subject of Virtue: An Anthropology of Ethics and Freedom. Cambridge, UK: Cambridge University Press.
Lizardo, O. (2004). The Cognitive Origins of Bourdieu's Habitus. Journal for the Theory of Social Behaviour 34 (4), 375-401.
Luke, H. (2003). Medical Education and Sociology of Medical Habitus: It's Not About the Stethoscope! Dordrecht: Kluwer Academic Publishers.
Lynch, M. (2001). Ethnomethodology and the Logic of Practice. In T.R. Schatzki, K. Knorr-Cetina, & E. von Savigny (Eds.), The Practice Turn in Contemporary Theory (pp. 131-48). London, UK: Routledge.
Madge, C. (1964). Society in the Mind: Elements of Social Eidos. London, UK: Faber & Faber.
Martin, P.J., & Dennis, A. (Eds.) (2010). Human Agents and Social Structures. Manchester, UK: Manchester University Press.
Meisenhelder, T. (2006). From Character to Habitus in Sociology. The Social Science Journal 43 (1), 55-66.
Mol, A. (2008). The Logic of Care: The Problem of Patient Choice. London, UK: Routledge.
Nederman, C.J. (1989). Nature, Ethics, and the Doctrine of Habitus: Aristotelian Moral Psychology in the Twelfth Century. Traditio 45, 87-110.
Noble, G., & Watkins, M. (2003). So, How Did Bourdieu Learn To Play Tennis? Habitus, Consciousness and Habituation. Cultural Studies 17 (3), 520-539.
Pettit, D., & Knobe, J. (2009). The Pervasive Impact of Moral Judgment. Mind & Language 24 (5), 586-604.
Polanyi, M. (2009). The Tacit Dimension. Reissue. Chicago, USA: University of Chicago Press.
Ravaisson, F. (2008). Of Habit (trans. C. Carlisle & M. Sinclair). London, UK: Continuum.
Reay, D. (2004). 'It's All Becoming a Habitus': Beyond the Habitual Use of Habitus in Educational Research. British Journal of Sociology of Education 25 (4), 431-444.
Rust, J., & Schwitzgebel, E. (2013). The Moral Behavior of Ethicists and the Power of Reason. In H. Sarkissian & J. Wright (Eds.), Advances in Experimental Moral Psychology (pp. 91-109). London, UK: Bloomsbury Academic.
Sayer, A. (2010). Reflexivity and the Habitus. In M.S. Archer (Ed.), Conversations about Reflexivity (pp. 108-22). London, UK: Routledge.
Schüklenk, U., & Pacholczyk, A. (2010). Dignity's Wooly Uplift. Bioethics 24 (2), ii.
Schwitzgebel, E. (2009). Do Ethicists Steal More Books? Philosophical Psychology 22 (6), 711-725.
Schwitzgebel, E. (2013). Do Ethics Classes Influence Student Behavior? University of California, Riverside. http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/EthicsClasses.htm
Schwitzgebel, E., & Cushman, F. (2012). Expertise in Moral Reasoning? Order Effects on Moral Judgment in Professional Philosophers and Non-Philosophers. Mind & Language 27 (2), 135-153.
Schwitzgebel, E., & Rust, J. (2009). The Moral Behaviour of Ethicists: Peer Opinion. Mind 118 (472), 1043-1059.

Schwitzgebel, E., & Rust, J. (2010). Do Ethicists and Political Philosophers Vote More Often Than Other Professors? Review of Philosophy and Psychology 1 (2), 189-199.
Schwitzgebel, E., & Rust, J. (Forthcoming). The Moral Behaviour of Ethicists. In J. Sytsma & W. Buckwalter (Eds.), Blackwell Companion to Experimental Philosophy. http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/EthBehBlackwell.htm
Sfard, A. (2010). Thinking as Communicating: Human Development, the Growth of Discourses, and Mathematizing. New York, NY: Cambridge University Press.
Sinclair, S. (1997). Making Doctors: An Institutional Apprenticeship. Oxford, UK: Berg Publishers.
Smith, R. (2007). Being Human: Historical Knowledge and the Creation of Human Nature. Manchester, UK: Manchester University Press.
Sripada, C.S., & Konrath, S. (2011). Telling More Than We Can Know About Intentional Action. Mind & Language 26 (3), 353-380.
Strauss, C., & Quinn, N. (1998). A Cognitive Theory of Cultural Meaning. Cambridge, UK: Cambridge University Press.
Taylor, C. (1993). To Follow a Rule. In C. Calhoun, E. LiPuma, & M. Postone (Eds.), Bourdieu: Critical Perspectives (pp. 45-60). Chicago: University of Chicago Press.
Thompson, P. (2012). Field. In M. Grenfell (Ed.), Pierre Bourdieu: Key Concepts, 2nd Ed. (pp. 67-81). Stocksfield, UK: Acumen Publishing.
Toulmin, S.E. (1972). Human Understanding. Princeton, NJ: Princeton University Press.
Webber, J. (2006). Virtue, Character and Situation. Journal of Moral Philosophy 3 (2), 193-213.
Wittgenstein, L. (2009). Philosophical Investigations. 4th Ed. Oxford, UK: Wiley-Blackwell.
Wolff, J. (1998). Fairness, Respect, and the Egalitarian Ethos. Philosophy & Public Affairs 27 (2), 97-122.
Zahle, J. (2013). Practices and the Direct Perception of Normative States: Part I. Philosophy of the Social Sciences 43 (4), 493-518.
Zahle, J. (2014). Practices and the Direct Perception of Normative States: Part II. Philosophy of the Social Sciences 44 (1), 74-85.
Zigon, J. (2008). Morality: An Anthropological Perspective. Oxford, UK: Berg.

Pragmatism, Religion, and Ethics


A Reminder from Rorty
Alissa MacMillan

Abstract
Richard Rorty argues that concerns of democracy, that is, questions of politics and ethics, should take precedence over those of philosophy. Worries about the social good should precede metaphysical concerns and arguments about conceptions of the self. If we first come up with a theory of the self and then use this theory as our base for taking on ethical and political questions, we are doing things backwards, says Rorty. The argument of this paper is that, nearly 25 years later, Rorty's insight is one we need to take seriously and one useful for taking on questions about the relationship between psychology and ethics. In subtle ways, especially in the study of religion and related work in ethics, we tend to fall back on a conception of the self, justifying ethical claims based on those conceptions, forgetting the important pragmatist insight about the priority of politics and ethics. The paper focuses on Rorty's essay "The Priority of Democracy to Philosophy" and some evidence of this tendency in current uses of the work of the pragmatist philosopher Robert Brandom. From this, I assess some of the prospects and limits of the use of cognitive science in the study of religion and ethics.

Introduction

One important insight coming out of the American pragmatist tradition is the priority
given to pressing and present doubts. Talk of metaphysics, of foundations, of objective,
universal claims, is replaced with a primary focus on current worries, social problems,
and on dealing first and foremost with the vital and live questions facing a given society. This broad pragmatist perspective holds from its more technical side, beginning
with Charles Peirce (1966), to its more current social theoretical expression, with its
attention given to democracy, ethics, religion, and the social good.
In this spirit, Richard Rorty argues in his 1991 essay "The Priority of Democracy to Philosophy" that concerns of democracy, that is, questions of politics and ethics, should take precedence over those of philosophy. Worries about the social good should precede metaphysical concerns and, in particular, arguments about conceptions of the self. As Rorty argues, scholars of the past, in true pragmatist spirit, rightly rejected using, for example, God to justify a political or ethical program. More recently, and more pertinent to us, Rorty points to John Rawls (e.g. 1985) and Thomas Jefferson (Rorty cites 1905) as arguing against the tendency to do the same with conceptions of the human being. If we first come up with a theory of the self and then use this theory as our base for taking on ethical and political questions, we are doing things backwards, Rorty says. Also appealing to John Dewey (see 1990) as a clear voice from the pragmatist tradition, Rorty argues for giving primacy to pressing doubts, to questions of utmost concern in our society, as opposed to appealing to a prior imposition of the weighted views of philosophers and theologians.
The argument of this paper is that, nearly 25 years later, Rorty's insight is one we need to take seriously and one useful for taking on questions about the relationship between psychology and ethics. In subtle ways, especially in the study of religion and related work in ethics, we tend to appeal to a conception of the self, justifying ethical claims based on those conceptions, forgetting the important pragmatist insight about prioritizing politics and ethics. This is not to say that debates about human nature are not of value and well worth engaging, but this analysis seeks to point out some of their ethical and political limits.
This paper focuses on Rorty's essay and some evidence of this tendency in current uses of the work of the pragmatist philosopher Robert Brandom, also briefly assessing some of the prospects and limits of the use of cognitive science in the study of religion and ethics in light of this analysis. In Brandom's linguistic theory (e.g. 1994), one grounded in pragmatics and identifying a normativity to the deepest aspects of linguistic exchange, scholars have begun, subtly, to appeal to a conception of the self in defense of democracy, undermining a founding pragmatist insight. The recent turn to cognitive science as a compelling resource for our view of the human being also reflects this tendency. While these avenues are essential for our work and invaluable lines of inquiry, questions remain regarding the relationship between these realms.
This analysis is central to questions like: how far can we take insights from psychology before we revert to a robust conception of the self as justification for our ethical claims? What must be the relationship between our conception of the self, using whatever resources we have, and our ethical projects? Rorty, I'll argue here, gives us an important warning and reminder.

Pragmatism, briefly

A philosophical tradition born in the late 1800s, American pragmatism sees itself as
providing a new method for resolving debates, in part as a response to what was seen as
the totalizing, universalizing, and primarily metaphysical debates of the long-dominant
philosophical traditions. Peirce, William James, and Dewey, often considered the classical pragmatists, see pragmatism as a method first and foremost, one for clarifying
ideas and put to use for settling doubts and contending with practical concerns, their
work focusing on matters like knowledge, truth, belief, and meaning.1 James, who also
turns the pragmatic method on religion (1999), calls pragmatism an attitude of orientation, or "The attitude of looking away from first things, principles, categories, supposed necessities; and of looking towards last things, fruits, consequences, facts" (1978, 32). Pragmatism provides a critique of aspects of Cartesianism, dualisms, and rationality, seeking to embrace uncertainty over certainty, a perpetual process of inquiry over fixed principles and ideas; pragmatists turn away from metaphysical concerns and toward the practical, to find out "what definite difference it will make to you and me" (ibid., 30).

1 For a good historical account of the emergence of the tradition, see Menand 2002. And see Brandom 2002 for a philosophical overview.
Some central points fundamental to the view include an underlying fallibilism and anti-foundationalism: beliefs and truths are never the final or last word and are always able to be proven wrong. Pragmatism also maintains a rejection of the kinds of claims, including metaphysical ones, that universalize in a way that stands outside experience. With a scientific attitude, inquiry begins with the human being, and the human being is always implicated in her inquiry. Inquiry is a rooted, perspectival task; there is no access to a bird's-eye view of the world, a view philosophers of the past might have sought.
Pragmatism has played itself out in various ways, a range of interpretive directions taken following its earliest articulation, but the methodological focus, fallibilism, and primary interest in pressing social concerns remain central to the tradition. While former philosophies, from Aristotle through to Descartes and beyond, might seek firm and final answers to metaphysical questions, the pragmatists see this as getting things backwards. As Dewey writes: "Thus philosophy in its classic form became a species of apologetic justification for belief in an ultimate reality in which the values which should regulate life and control conduct are securely enstated" (1990, 23). The pragmatists reject this ultimate reality and the security and stability of values as part of that reality.
Instead, values are perpetually made, formed, and reformed in the context of human
social life.
Dewey also critiques the traditional philosophical quest for certainty (ibid.) so
common in the field: "If men had associated their ideas about values with practical activity instead of with cognition of antecedent Being, they would not have been troubled by the findings of science. They would have welcomed the latter" (ibid., 34). Dewey flips the script on traditional philosophy (ibid., 21-39), objecting to the appeal to a notion of Being, here one example of a sweeping metaphysical claim, as first justification for particular values or ethical claims. Using insights from science, itself recognized as a fallible enterprise, is his preferred form of philosophical and democratic activity.
When it comes to a conception of the individual, pragmatists also call into question
the priority traditionally given to rationality, especially a rationality that renders the
human being in some way special or separate from the rest of nature. For the pragmatist, in general, reason itself is a human faculty, formed and shaped in a social, cultural,
historical setting, and, especially beginning with James, psychology plays an equally
central role in accounts of the individual (1957). So, no rational, autonomous self, fixed
by nature, somehow standing outside of experience, exists prior to experience, unformed by culture, history, and experience. Charles Taylor (1989) also offers an articulation and defense of this perspective.
As Rorty explains:
Anthropologists and historians of science have blurred the distinction between innate rationality and the product of acculturation... The result is to erase the picture of the self
common to Greek metaphysics, Christian theology, and Enlightenment rationalism: the
picture of an ahistorical nature center, the locus of human dignity, surrounded by an adventitious and inessential periphery (Rorty 1985, 258).

The self, then, has been brought down to earth, rendered natural and subject to the
same causes and forces as any other being.2 Rorty adds:
What counts as rational or as fanatical is relative to the group to which we think it necessary to justify ourselves, to the body of shared belief that determines the reference of the word "we." The Kantian identification with a central transcultural and ahistorical self is
thus replaced by a quasi-Hegelian identification with our own community, thought of as a
historical product (ibid., 259).

There is no final account of the self, based in a fixed form of rationality. The self is contingent, historical, and formed relative to a given culture and community. And this
contingency of the self, as will be argued, means that there is no fixed and final version
of the human being to which we might appeal in our philosophical engagement. The
human being is shaped and reshaped in new social and cultural contexts.

2 The main claim of this paper is grounded in debates coming out of the American pragmatist tradition, but some analogous views are developed in the phenomenological tradition. Rorty himself touches on some of these similarities in Rorty 1980.

Rorty's claim

With the pragmatist recognition of the perspectival nature of our own inquiries, that is, the view that we are the ones who make and remake those inquiries and the values implicated in them, along with the fallible nature of truth, there has been a rejection of this primary appeal to metaphysical or totalizing claims. As Dewey explains, this is to search for certainty and immutability where there is none, to appeal to an unchanging truth when reality is one of change (1990, 239). In light of some of these pragmatist insights, including the conception of the individual at work in our theories, the priority of doubts and social problems to philosophy, and the rejection of a reliance on first principles and metaphysics, Rorty's argument identifies a way in which we might be undermining some of these insights, or falling back on old philosophical habits that were rejected by the pragmatists. As Rorty sees it, we are falling back on a tendency to appeal to metaphysical or totalizing claims to justify a political or ethical position.
Rorty presents the question of whether there is any sense in which liberal democracy needs philosophical justification at all (Rorty 1985, 260). Does democracy, and we can think of this as our example of a practical, situated, political and ethical practice and tradition, need something, maybe not God but something like a conception of the self, as justification for its value or truth? His answer to this question is no. For
Rorty, ethics and politics, the kind we practically engage as members of a society or
community, do not require prior philosophical justification.
Situating the debate in the pragmatist tradition, and appealing to Rawls and Dewey's rejection of the primacy of metaphysical claims, Rorty explains, "Rawls, following up on Dewey, shows us how liberal democracy can get along without philosophical presuppositions" (Rorty 1990, 261). He claims:
For purposes of social theory, we can put aside such topics as an ahistorical human nature,
the nature of selfhood, the motive of moral behavior, and the meaning of human life. We
treat these as irrelevant to politics as Jefferson thought questions about the Trinity and
about transubstantiation (ibid., 261-262).

Rorty argues that Rawls was doing this very thing in his political theory. "It is not that we know, on antecedent philosophical grounds, that it is of the essence of human beings to have rights, and then proceed to ask how a society might preserve and protect these rights" (ibid., 263), Rorty explains. This is getting things backwards. He continues, "Rawls does not believe that for the purposes of political theory, we need to think of ourselves as having an essence that precedes and antedates history [...]" (ibid.). Indeed, as Rawls explains of his theory, "The veil of ignorance [...] has no metaphysical implications concerning the nature of the self; it does not imply that the self is ontologically prior to the facts about persons that the parties are excluded from knowing" (Rawls 1985, 238). Rorty gets from Rawls's work the clear view that "Rawls wants views about man's nature and purpose to be detached from politics" (Rorty 1991, 263).3 Just as we don't need a conception of God to back or justify an ethical or political position, so too do we not need a conception of the self to justify those positions. To appeal to a metaphysical claim is to do something similar to appealing to a religious claim, one not shared by the entire community. Justification is instead generated in the context of the community, by the community, and using claims that work for that community; justification is an ongoing process and an ongoing experiment in cooperation (ibid., 274). Nothing stands over and above those debates; there is no metaphysical trump card ("this is what God says" or "this is how the human being is") for use in any and all contexts.
He reads Rawls as relegating questions about the point of human existence, or the
meaning of human life (ibid., 263), to the realm of the private. As Rorty sees it, Rawls
holds that "[a] liberal democracy will not only exempt opinions on such matters from legal coercion, but also aim at disengaging discussions of such questions from discussions of social policy" (ibid.). What you see as the true nature of the self is your own concern, not an issue to be appealed to on the hearing floor, in policy making, in the ethical and political questions that constitute the formation and maintenance of a democracy. Rorty continues, "As citizens and social theorists, we can be as indifferent to philosophical disagreements about the nature of the self as Jefferson was to theological differences about the nature of God" (ibid.).
But, of course, the issue is somewhat complicated, especially in light of Rorty's own view of the self. Although it should be considered a private matter, or one for the philosophers, Rorty still sees it as an incredibly important part of what it is to be human and what is required for social progress.4 And being in the philosophy business, he does have a preferred conception: a Humean self as centerless, as a historical contingency all the way through (ibid., 267). There is no fixed core or center to what it is to be human, a core often attributed to a rational, autonomous, free self; we are instead shaped and reshaped in our historical moments. And to stress a point that Charles Taylor makes (1989), for someone who shares this view of the self, how we think about these questions, that is, how we think about conceptions of the human being, will contribute to shaping who we are substantively, which will necessarily have an impact on the kinds of ethical problems we have and the approaches we take to those problems.5 Because the
human being is always implicated in her inquiries and activities, a conception of the self might very well play a role in the kind of political arrangement we have. And, he also finds that it may be that this version of the self, one defended by Rorty, Taylor, and others, one that makes the community constitutive of the self, does comport well with liberal democracy (ibid., 261). So, one of the dominant views of the self at the moment happens to be one that is well suited to democracy.

3 Rorty himself admits that he has read Rawls as arguing for an ontological priority to the self, this priority found in the rationality of the individual, but that this was a misreading (Rorty 1991, 277, fn 21).
4 See Rorty (1999), especially Chap. 3 and 4, "A World Without Substances and Essences" and "Ethics Without Principles." Both reiterate his point about our need to avoid using knowledge of our own nature to tell us what we should do in the ethical arena.
5 Nathan Emmerich pointed out to me the slippery nature of this insight. As he explains, we can think of human beings as having a first and second nature, part of our first nature being that we are creatures with a second, reflexive nature. In this way, our indifference to or rejection of the nature of the human being as a source of ethical or political justification, which Rorty defends, is itself a socially informed position. Rorty would likely agree with this important point.
But Rorty's point is that, although it might happen to be a fitting conception of the self for our own democracy, we do not need and should not use this kind of conception of the self in defense of that democracy. Even if conceptions of the self have an impact on the kinds of practical democratic creatures we are (and Rorty famously maintains that they do6), it still does not mean that our being those practical, democratic creatures is a source of justification for democracy. This is again to give philosophy priority over democracy.7

6 "[...] changing the way we understand ourselves through the stories we weave to make sense of our lives is at the heart of Rorty's conception of human progress," writes Christopher Voparil in the General Introduction to Rorty (2010, 42).
7 Using other language, it is to render a description of human nature into a prescription, to take what human nature is and turn it into what an (already preferred) politics ought to look like. Or to return to a warning from David Hume.
While the study of religion, philosophy, and ethics has worked long and hard to shed
its theological husk, turning to a naturalistic, humanistic, historical, scientific approach
to make sense of religious beliefs and practices, Rorty points to our tendency to engage
in the same theological moves, using non-theological concepts. It might be clear to us
now that appeals to God can no longer serve as justification for our political or ethical
claims, but we cannot make the same mistake, replacing God with the self. It might
seem appealing when our vision of the self comports with our visions of politics or
ethics, but this might not always be the case. To appeal to any metaphysical conception,
to use that conception as a first justification for a political or moral view, is to fall back
on the old form of philosophy, to appeal to a truth or value that lies outside the world,
one that claims a kind of objectivity where none exists. Justification needs to instead be
born in and from the process of inquiry, always open to reassessment.

Uses of Robert Brandom

To illustrate the slippery nature of this issue, I want to point to one recent example of
this tendency in the use of the work of the pragmatist philosopher (and former student of Rorty) Robert Brandom by some scholars of religion, ethics, and politics. After introducing some of the general, current engagement with Brandom's work, the section will describe some key elements of his thought and then point to ways in which uses of his work seem to speak to Rorty's pragmatist warning, his being an example of how easy it can be to subtly revert to metaphysical claims in politics and ethics.
Brandom's linguistic account, found especially in Making It Explicit (1994), is seen
as a potential tool for better describing religion and religious practices in particular,
including concepts like belief, objectivity, and authority. Grounded in pragmatics, and
identifying a normativity at the deepest levels of linguistic exchange, his theory is seen
as carefull