SPEAKER 1 Rational (s3927388):

Define/identify the topic: Philosophers and social scientists have long debated how moral decisions
are reached. To some scholars, moral decision-making is fundamentally rational and is mediated by
deliberate, controlled, and reflective moral reasoning. Others have argued that instinctive,
emotional, and intuitive modes of decision-making serve as primary mediators in moral decision-
making instead.

Stating team’s interpretation of the topic: Having reviewed the available body of research, our
affirmative team argues that moral decision-making is rational, not emotional. That is, moral
reasoning is based on explicit practical reasons, involving choices about what to do or intend to
do, how to achieve one's goals, and what goals one should have in the first place. Evidence
supporting this view will be covered by our members, Anh Tran, Bao Le, and Anh Pham.

Argument 1: We begin with the premise that research on moral judgment has been dominated by
rationalist models, in which moral judgment is thought to be caused by moral reasoning, one of
which is the Cognitive Moral Development model by Lawrence Kohlberg. It has been among the
most popular lines of research into ethical behavior across different situations. This stage model ties
moral decision-making to cognitive development, assuming that, over time, people shift from selfish
reasoning to more principled moral judgments. At the pre-conventional level (stages 1
and 2), moral judgments are made based on straightforward, direct consequences on oneself (that
is, punishments and rewards). Following this, reasoning at the conventional level (stages 3 and 4)
follows the guidelines or standards of acceptable conduct established by the laws, rules, and norms.
Lastly, at the post-conventional level (stages 5 and 6), moral judgment criteria override the authority
of social norms as the person acquires a stronger personal commitment to self-selected universal
principles. Kohlberg demonstrated his model by constructing moral dilemmas like a man stealing
expensive medicines to save his wife and interviewing people on the reasons for their judgment of
these cases. People’s reasoning patterns across age groups confirmed the model.

These findings have been well-replicated. For example, Lawrence Walker conducted a longitudinal
study with both hypothetical and personally generated real-life dilemmas. When interviewed about
their choices and judgment, participants gave reasons in a manner consistent with Kohlberg’s Moral
Development model. There is also cross-cultural evidence: the same stage pattern has been empirically
recorded in China by Judy Tsui.

SPEAKER 1 Emotional:

Stating team’s interpretation of the topic: Taking on this debate, our negative team assumes the
opposite perspective and argues that moral decision-making is essentially emotional, not rational.
That is, people’s moral stances are driven by emotions and intuitions, which are quick and automatic
responses. Our team members, Anh Nguyen, Duong Do, and Vy Vu will demonstrate the evidence
for our belief.

Argument 1: The theoretical basis of moral intuition is Jonathan Haidt’s Social Intuitionist model,
which was famously introduced in his article “The Emotional Dog and Its Rational Tail”. It states that
moral judgment is generally the result of quick, automatic evaluations and reasoning only follows
that evaluation due to learned social demands. For example, one feels a quick flash of revulsion at
the thought of incest and knows intuitively that something is wrong. But instead of saying "I don't
know, I can't explain it, I just know it's wrong," people feel socially compelled to give reasons for
these hunches, coming up with arguments in an ex post facto manner, such as claiming that incest
causes birth defects. Here, moral reasoning is a post hoc element, constructed only after the decision
in order to justify it. This point is especially important for criticizing the rationalist Kohlberg’s
findings from moral dilemmas. Given that his study and those replicating it were conducted in
academic settings where participants were asked to provide reasons for their judgments, participants
would likely put on the “thinking cap” and try to verbally justify their decisions even if these came
from automatic intuitions.

Haidt’s model is supported by an intriguing study by Haidt and Hersh (2001), who interviewed
politically conservative and liberal college students about controversial homosexual activities. They
observed the phenomenon termed “dumbfounding,” that is, participants voiced strong opinions that
homosexual relationships were morally wrong without being able to explain why. Regression
analysis also showed that participants’ emotional reactions predicted their moral judgments better
than their reasons or justifications. The evidence presented suggests that moral decisions can be made
without definitive reasons.

SPEAKER 2 Rational:

Rebut: There are several critiques of Jonathan Haidt’s intuitionist model. Firstly, he regarded
reasoning only as a post hoc product. However, there is evidence that moral reasoning does disrupt
and override moral intuitions. Monteith and colleagues (2002), for instance, found that white
participants used prospective reasoning to prevent the manifestation of racial stereotypical biases,
even though such biases are automatic and intuitive. Secondly, Haidt also failed to explain why we
often revise our earlier moral judgments and sometimes attribute such changes to others’ persuasive
arguments. This is easily explained once the role of reasoning is acknowledged: we rationally consider
others’ reasoning. Regarding the Haidt and Hersh (2001) interview mentioned by the negative team,
the researchers, interestingly, also wrote: “one conservative woman began by condemning
homosexuality, but as she thought about the possibility that sexual orientation is innate rather than
chosen [she said], If you get right down to it, then their act shouldn’t be condemned either.”
Here, her reasoning changed her moral judgment!

Argument 2: Meanwhile, aside from Lawrence Kohlberg’s moral development model which
characterizes moral reasoning at different developmental stages, Shaun Nichols proposed a theory
explaining how moral rules and representations are learned with Statistical Learning in cognitive
science. Statistical learning refers to figuring out how regularly features and objects co-occur in
the environment over space and time. For example, suppose you have two dice, one with four sides
and the other with ten. A friend rolls one of them several times and gets three, two, four, two, and
one. You would guess that it is the four-sided die, not the ten-sided one, as it would be a suspicious
coincidence if the die had ten sides yet every roll landed between one and four.
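The "suspicious coincidence" in this example can be made concrete with a small Bayesian sketch. This is our own illustration, assuming fair dice and equal prior belief in either die; it is not a calculation from Nichols' work:

```python
from fractions import Fraction

rolls = [3, 2, 4, 2, 1]  # the friend's five rolls, all between 1 and 4

# Likelihood of seeing exactly these rolls under each hypothesis,
# assuming each side of a fair die is equally likely.
lik_d4 = Fraction(1, 4) ** len(rolls)    # every roll fits a 4-sided die
lik_d10 = Fraction(1, 10) ** len(rolls)  # a 10-sided die could also produce
                                         # them, but each roll is less probable

# With equal priors, the posterior odds equal the likelihood ratio.
odds = lik_d4 / lik_d10                  # (10/4)^5 = 97.65625 to 1
posterior_d4 = lik_d4 / (lik_d4 + lik_d10)

print(float(odds))                       # 97.65625
print(round(float(posterior_d4), 3))     # 0.99
```

After only five rolls, the four-sided hypothesis is already favored by odds of nearly 100 to 1, which is why the ten-sided alternative feels like a suspicious coincidence.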

Simple principles like this can explain how children learn complicated rules of moral systems based
on observation. Nichols and other theorists wonder why children think it’s wrong for people to litter
but not wrong for people to leave litter that’s already on the ground. It’s unlikely that parents
explicitly tell their kids that “you shouldn’t litter yourself, but you don’t need to pick up litter you
see”. Rather, it’s enough that parents show disapproval only toward acts of littering and not to
people who leave litter on the ground. If the rule about littering also applied to people leaving litter
on the ground, it would be a suspicious coincidence that this is never mentioned. Hence, Nichols’
theory helps explain how moral rules are rationally learned.
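The same logic can be sketched for the littering case. In this toy model, which is our own illustration rather than Nichols' actual account, we assume that if the broad rule ("also pick up litter you see") were in force, roughly half of a parent's disapproval episodes would target people leaving litter in place; never observing such an episode then increasingly favors the narrow rule:

```python
from fractions import Fraction

def posterior_narrow(n_episodes):
    """Posterior belief in the narrow rule ("don't drop litter") after
    n disapproval episodes, none of which targeted leaving litter.
    Assumes equal priors over the two candidate rules."""
    lik_narrow = Fraction(1, 1)               # narrow rule: always consistent
    lik_broad = Fraction(1, 2) ** n_episodes  # broad rule: observing zero
                                              # "pick it up" reactions becomes
                                              # exponentially less likely
    return lik_narrow / (lik_narrow + lik_broad)

print(float(posterior_narrow(1)))   # about 0.667 after one episode
print(float(posterior_narrow(10)))  # about 0.999 after ten episodes
```

A handful of everyday observations is enough to push the learner toward the narrow rule, which is the core of the "suspicious coincidence" argument.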

SPEAKER 2 Emotional:

Rebut: We argue that Shaun Nichols’ statistical moral learning is fundamentally limited in explaining
the acquisition of more complex moral rules. Indeed, he assumed that children learned moral rules
with statistical inferences drawn from observations of their daily life, which was explained using the
littering example. However, littering is a very common daily occurrence, unlike murder, assault,
theft, and fraud. Most children are taught very little about homicide, beyond simple injunctions such
as “Thou shalt not kill,” and in fact, most parents actively prevent their children from witnessing any
images, discussion, or other information about this topic because it’s upsetting and inappropriate.
How, then, can children learn complex moral rules about murder, which do not simply stop at not
murdering but extend to more subtle distinctions, such as attempted murder; murder versus failing to
perform a life-saving rescue; or the necessary elements of self-defense against a murder charge? There is simply not
enough daily exposure for children to statistically learn such rules.

Argument 2: We find empirical instances of infants’ altruism particularly strong in falsifying both
Kohlberg’s and Nichols’ theories while providing support for Haidt’s Intuitionist model. Indeed, in Felix
Warneken’s study, human infants as young as 14 to 18 months old help others attain their
goals, for example, by helping them to fetch out-of-reach objects or opening cabinets for
them. They do this without any reward from adults, and very likely with no concern for
reciprocation and reputation. These findings suggest that human infants are naturally altruistic, and
that this altruism is essentially intuitive, as infants of this age are arguably incapable of the abstract
reasoning that such helping behaviors would otherwise require.

Further evidence that moral reasoning matters less than moral emotions comes from the study of
psychopaths. Cleckley's case studies present chilling portraits of people in whom reasoning has
become dissociated from moral emotions. Psychopaths know the rules of social behavior and they
understand the harmful consequences of their actions on others. They simply do not care.
Cleckley's psychopaths show a general lack of major affective reactions, particularly those triggered
by suffering, condemnation, or attachment. They steal from friends, dismember live animals, and
even murder their parents for insurance benefits without remorse or shame. Therefore, moral
reasoning alone, detached from emotional moral intuitions, is not enough for morality.

SPEAKER 3 Rational:

Rebut: There are limitations to the evidence presented by the negative team. Firstly, Warneken’s
study on what he termed infants’ altruism is essentially observational. That is, while the behaviors of
infants were observed, there were no insights into what these behaviors meant to them. We don’t
know if it was the manifestation of innate altruistic drives or simple copying of adults’ behaviors
through observational learning. Secondly, Cleckley’s purposive sampling of psychopaths may have
been skewed toward extreme cases of misconduct. Interestingly, a recent meta-analysis by Marshall
and colleagues found that there is only a small relationship between non-standard moral judgment
and psychopathic traits, suggesting that psychopathic individuals only exhibit subtle differences in
moral judgment compared to others. Therefore, emotions are not all that important to moral
decision-making. The same meta-analysis, importantly, also reported that normal people’s anterior
cingulate cortex (involved in cognitive conflict) was found with fMRI to be more active during the
judgment of the famously difficult footbridge dilemma: whether to push a man off a bridge to stop
a runaway train. The authors interpreted this as a further suggestion that the emotion-based moral
response was being ‘mentally challenged’ by rationality.

Summary: To wrap up, our affirmative team argued that morality is rational, not emotional.
Theoretical and empirical evidence supporting our belief has been demonstrated throughout. We
first drew on an influential model in moral psychology known as the Cognitive Moral Development
model by Lawrence Kohlberg. This model characterizes the nature of moral reasoning in people
across different developmental stages. The self-focused reasoning based on immediate punishments
and rewards in children would be gradually replaced by more societal and universal principles. This
postulation has been demonstrated by a substantial body of research on moral dilemmas and has
been replicated cross-culturally. We also looked into Shaun Nichols’ statistical moral learning, which
explains how moral representations and rules are learned by children through rational statistical
inferences. Lastly, we briefly mentioned how fMRI supported the role of rationality in judging moral
dilemmas. In light of all the provided evidence, we are convinced that rationality has the upper hand
in moral decision-making.

SPEAKER 3 Emotional:

Rebut: In response to the neuroimaging result cited by the affirmative team, we cite another fMRI
study. Specifically, using the same footbridge dilemma, Robert Blair found that
normal people were more likely to judge that it was wrong to push the man off the bridge. When
normal people make the judgment not to push, fMRI demonstrates increased activation in brain
areas associated with emotion, which are the amygdala and the ventromedial prefrontal cortex.
Therefore, even if the anterior cingulate cortex is activated to inhibit the automatic emotional
reactions as stated by the affirmative team, it is not enough to suppress that strong moral intuition.
Here, it can be inferred that rationality is not capable of overriding emotions in moral judgments.

Summary: To summarize, our negative team argued that morality is emotional, not rational. Our
viewpoint has been supported by both theoretical and empirical data. To begin with, we mentioned
Jonathan Haidt’s Social Intuitionist model, postulating that moral decisions are driven by emotional
intuitions, which are instantaneous and automatic. Moral reasoning is only something people make
up after the decision to justify their choice due to social demands. This model has been supported by
intriguing data from an interview on controversial sexual relationships where participants failed to
rationalize their strong moral stance. Next, we presented a study on infants’ altruistic behaviors.
These babies behaved morally without any tangible rewards and punishments from adults, all the
while being too young for abstract reasoning. Further, we provided insights into how emotions are
necessary for making moral decisions using the cases of people diagnosed with psychopathy and
their antisocial behaviors. Finally, we provided neuroimaging data which showed the activation of
brain regions associated with emotions when participants were made to judge a challenging moral
dilemma. It also demonstrated how rationality failed to override emotions in that instance. All in all,
we believe that the case for emotions is strong when it comes to moral decision-making.
