
For: Kohlberg (Rationalist). Cognitive developmental models of moral development, regnant in the second half of the twentieth century, attributed moral decision-making almost exclusively to rational processes.

Against: Haidt (Sentimentalist)

Both (dual process): Kahneman

Structure

- S1 (Ra): Introduce morality, the topic of debate; state point of view; Kohlberg's moral development + evidence
- S2 (Sen): Critique Kohlberg (theory, evidence) and the "thinking cap" in academic research settings => state point of view; Haidt's emotional dog and rational tail + evidence
- S3 (Ra): Critique Haidt (rationality can override emotionality) => statistical learning (an alternative to emotions as the basis of morality)
- S4 (Sen): Critique statistical learning => psychopathy and/or infants' altruistic behaviors (emotions are important)
- S5 (Ra): Critique the evidence => (referencing Kahneman's dual process) rationality shown through fMRI (Kant's dilemma)
- S6 (Sen): But fMRI also showed emotionality in Kant's dilemma; intuitive morality is faster; judgment in emotionally salient conditions

Statistical learning (original source):
https://academic.oup.com/book/39649/chapter-abstract/339621602?redirectedFrom=fulltext#no-access-message

Kahneman's "Thinking, Fast and Slow" (physical book)

R. J. R. Blair: https://pubmed.ncbi.nlm.nih.gov/17707682/

SPEAKER 1R:

Define/identify the topic: Philosophers and social scientists have long debated how moral decisions
are reached. To some scholars, moral decision-making is fundamentally rational and is mediated by
deliberate, controlled, and reflective moral reasoning. Others have argued that instinctive, emotive,
and intuitive modes of decision-making serve as the primary mediators in moral decision-making
instead.

Stating team’s interpretation of the topic: Having reviewed the available body of evidence, our
affirmative team argues that moral decision-making is rational, not emotional. That is, moral
reasoning is based on explicit practical reason, which involves choices about what to do or intend
to do, including how to achieve one's goals and what goals one should have in the first place.
Evidence supporting this view will be covered by our members A, B, and C.
Argument 1: We begin with the premise that research on moral judgment has been dominated by
rationalist models, in which moral judgment is thought to be caused by moral reasoning. In fact, the
Cognitive Moral Development model by Lawrence Kohlberg has been among the most popular lines
of research into ethical behavior across different situations. This stage model ties moral decision-making to cognitive development, assuming that, over time, people progress from selfish judgments to more principled ones. At the pre-conventional level (stages 1 and 2), moral judgments are made based on the straightforward, direct consequences for oneself (that is, punishments
and rewards). Following this, the emphasis of reasoning at the conventional level (stages 3 and 4) is
on following the guidelines or standards of acceptable conduct established by outside groups, such
as classmates, families, and society. At the principled level (stages 5 and 6), moral judgment criteria
override the authority of social norms as the person acquires a stronger personal commitment to
self-selected universal principles.

Such postulation has been empirically supported by research. For instance, Lawrence Walker
conducted a longitudinal study with 233 subjects who ranged in age from 5 to 63 years. They were
provided with both hypothetical and personally generated real-life dilemmas (like a man having to
steal expensive medicines for his dying wife) and interviewed on how they would morally judge
these actions and, importantly, the reasons for these judgments. Responses were scored for moral
orientation and moral stage. Subjects' reasons for their moral decisions progressed through the stage sequence Kohlberg proposed, thus supporting the theory. There is also some cross-cultural evidence for the theory, which has been empirically replicated in Australia and China by Judy Tsui and Carolyn Windsor.

SPEAKER 1E:

Stating team’s interpretation of the topic: Taking on this debate, our negative team assumes the
opposite perspective and argues that moral decision-making is essentially emotional, not rational.
That is, people's moral stances are culturally driven intuitions: quick, automatic responses. Our team members D, E, and F will present the evidence for our position.

Argument 1: Jonathan Haidt proposed an influential theory called the Social Intuitionist model. This
de-emphasizes rationality and focuses instead on social and cultural influences. It states that moral
judgment is generally the result of quick, automatic evaluations and reasoning only follows that
evaluation due to learned social demands. For example, one feels a quick flash of revulsion at the
thought of incest and knows intuitively that something is wrong. But instead of saying "I don't know, I can't explain it, I just know it's wrong," people are socially pressed to give reasons for these hunches, constructing arguments in an ex post facto manner, such as that incest causes birth defects. This
is especially important in criticizing rationalist Kohlberg’s findings with moral dilemmas. Given that
his study and others replicating it were conducted in academic settings where participants were
asked to provide reasons for their judgments, they would likely put on the “thinking cap” and try to
verbally justify their decisions even if these came from automatic intuitions.

Haidt's model is supported by an interesting study by Haidt and Hersh (2001), who interviewed politically conservative and liberal college students about controversial homosexual activities. They
observed the phenomenon of “dumbfounding,” that is, the voicing of strong opinions without the
ability to explain one’s position. Regression analysis showed that emotional reactions reported by
the participants predicted their moral judgments (of approval or disapproval) better than their
reasons or justifications.

SPEAKER 2R:
Rebut: There are several critiques of Jonathan Haidt’s intuitionist model. Firstly, he proposed that
moral judgments come from quick, automatic emotional reactions and reasoning is only a post-hoc
product. However, there is evidence that moral reasoning does disrupt and override moral
intuitions. Monteith and colleagues (2002), for instance, found that white participants in their study
used prospective reflection to prevent racial stereotypical biases from manifesting toward black
people. They also experienced guilt about the automatic racially biased response, knowing that it fell short of their rational standards. Secondly, Haidt also failed to account for the fact that we often
come to have different intuitions from ones we had before and sometimes attribute such changes in
our moral perspective to others’ persuasive arguments. This is easily explicable if the role of
reasoning is acknowledged. Regarding the Haidt and Hersh 2001 interview mentioned by the negative team, the researchers, interestingly, also noted that "one conservative woman began by condemning homosexuality, but as she thought about the possibility that sexual orientation is innate rather than chosen [she said], 'If you get right down to it, then their act shouldn't be condemned either.'" Here, her own reasoning changed her moral judgment!

Argument 2:

Meanwhile, aside from Lawrence Kohlberg’s moral development model which characterizes moral
reasoning at different developmental stages, Shaun Nichols proposed a theory explaining how moral
representations are achieved through statistical learning, drawing on the cognitive sciences. Nichols proposes that people learn moral rules by making rational statistical inferences from the evidence they observe. To explain statistical learning, consider this example. You have two dice, one with four sides and the other with ten. Imagine that a friend picks one of the dice at random, rolls it several times, and comes up with the results: three, two, four, two, and one. You would then be likely to think that your friend rolled the four-sided die, not the ten-sided one, as it would be a suspicious coincidence if the ten-sided die produced only numbers from one to four.
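The dice intuition can be made precise with a toy Bayesian calculation (an illustration of ours, not taken from Nichols' text): with equal priors on the two dice, compare how likely the five observed rolls are under each hypothesis.

```python
# Toy Bayesian version of the "suspicious coincidence" dice example.
# A friend picks a 4-sided or a 10-sided die (equal prior) and rolls:
rolls = [3, 2, 4, 2, 1]

# Likelihood of the observed rolls under each hypothesis
# (every face is equally likely, and all rolls here are <= 4)
p_d4 = (1 / 4) ** len(rolls)
p_d10 = (1 / 10) ** len(rolls)

# Posterior probability of the 4-sided die (equal priors cancel out)
posterior_d4 = p_d4 / (p_d4 + p_d10)
print(f"P(4-sided die | rolls) = {posterior_d4:.3f}")  # ≈ 0.990
```

The ten-sided die could have produced these rolls, but squeezing five rolls into the range one to four is precisely the suspicious coincidence that tips the inference toward the four-sided die.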

Simple principles like this can explain how children learn complicated rules of moral systems based
on the evidence they observe, Nichols said. He and other theorists wonder why children think it’s
wrong for people to litter but not wrong for people to leave litter that’s already lying on the ground.
It is unlikely that parents explicitly tell their kids that “you shouldn’t litter yourself but you don’t
need to pick up litter you see”. Rather, he proposed that it’s enough that parents show disapproval
exclusively toward acts of littering and not toward people who leave litter on the ground. If the rule about
littering also applied to people leaving litter on the ground, it would be a suspicious coincidence that
this is never mentioned. So, there is subtle evidence in the environment that children can use to
make these rational inferences. Hence, Nichols’ theory helps explain how moral rules are rationally
learned.
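The same logic can be sketched for the littering rule (with illustrative assumptions of ours: a fixed number of disapproval episodes, sampled uniformly from whatever the rule actually forbids):

```python
# Size-principle sketch of the littering inference.
# H_narrow: the rule forbids only littering.
# H_broad:  the rule forbids littering AND leaving litter on the ground.
n_episodes = 10  # parental disapproval episodes, all aimed at littering

# Under H_broad, each episode should target "leaving litter" about half
# the time, so 10 out of 10 littering episodes is a suspicious coincidence.
p_data_given_broad = (1 / 2) ** n_episodes
p_data_given_narrow = 1.0  # every episode must concern littering

# Posterior for the narrow rule, assuming equal priors
posterior_narrow = p_data_given_narrow / (p_data_given_narrow + p_data_given_broad)
print(f"P(narrow rule | evidence) = {posterior_narrow:.4f}")  # ≈ 0.9990
```

With only ten observations, the child can already be quite confident that the rule targets littering itself, which is the "subtle evidence in the environment" the argument appeals to.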

SPEAKER 2E:

Rebut: We argue that Shaun Nichols’ statistical moral learning is fundamentally limited in explaining
the acquisition of more complex moral rules. Indeed, he assumed that children learn moral rules through statistical inferences drawn from observations of daily life, as illustrated by the littering example. However, littering is a very common daily occurrence, unlike murder, assault,
theft, and fraud. Most children are taught very little about homicide, beyond simple injunctions such
as “Thou shalt not kill,” and, in fact, most parents actively shield their children from any images,
discussion, or other information about this topic because it is so emotionally upsetting. How, then,
can children learn complex moral rules about murder, which does not simply stop at not murdering,
but more subtle related distinctions such as attempted murder; the difference between murder and
failing to perform a life-saving rescue; or the necessary elements of defenses to murder, such as self-
defense? There is simply not enough daily exposure for children to statistically learn such rules.

Argument 2: We find empirical instances of infants' altruism particularly damning to both Kohlberg's and Nichols' theories while providing strong support for Haidt's Intuitionist model. Indeed, in a study
by Felix Warneken, human infants as young as 14 to 18 months of age help others attain their
goals, for example, by helping them to fetch out-of-reach objects or opening cabinets for
them. They do this irrespective of any reward from adults, and very likely with no concern for
such things as reciprocation and reputation. These results suggest that human infants are naturally altruistic, and that this altruism is essentially intuitive, as infants of this age are arguably incapable of the abstract reasoning that would otherwise underlie helping behaviors.

Further evidence that moral reasoning matters less than moral emotions comes from the study of
psychopaths. Cleckley's case studies present chilling portraits of people in whom reasoning has
become dissociated from moral emotions. Psychopaths know the rules of social behavior and they
understand the harmful consequences of their actions on others. They simply do not care about
those consequences. Cleckley's psychopaths show a general poverty of major affective reactions,
particularly those that would be triggered by the suffering of others, condemnation, or attachment.
Psychopaths can steal from their friends, dismember live animals, and even murder their parents to
collect insurance benefits without showing any trace of remorse or, when caught, of shame.
Therefore, moral reasoning alone, detached from emotional moral intuitions, is not enough for
moral decisions.

SPEAKER 3R:

Rebut: There are limitations to the evidence presented by the negative team. Firstly, Warneken’s
study on what he termed infants’ altruism is essentially observational. That is, while the behaviors of
infants were observed, there were no insights into what these behaviors meant to the infants.
Therefore, no motivation can be pinpointed. We are not sure if it was the manifestation of innate
altruistic drives or simple copying of adults’ behaviors through observational learning. Secondly,
Cleckley’s purposive sampling of psychopaths may have been skewed toward extreme cases of
misconduct. Interestingly, a recent meta-analysis in 2018 by Marshall and colleagues found that
there is only a small relationship between non-standard moral judgment and psychopathic traits,
suggesting that perhaps psychopathic individuals may only exhibit subtle differences in moral
judgment in comparison to others. Therefore, emotions may not be all that important to moral
decision-making.

Continuing with this line of psychopathy research, it has also been found that in the famous Kant's dilemma, people diagnosed with psychopathy are more likely to decide to push the fat man off the bridge, thus stopping the train. For normal people, this is classed as a difficult moral dilemma, and fMRI has shown the anterior cingulate cortex (involved in cognitive conflict) to be more active during the judgment process. This further suggests that the emotion-based moral response was being 'mentally challenged' by rationality.

Summary: In conclusion, our affirmative team argued that morality is rational, not emotional.
Theoretical and empirical evidence supporting our position has been presented throughout. We
first drew on an influential model in moral psychology known as the Cognitive Moral Development
model by Lawrence Kohlberg. This model characterizes the nature of moral reasoning in people
across different developmental stages. The self-focused reasoning based on immediate punishments
and rewards in children would be gradually replaced by more societal and universal principles. This
postulation has been demonstrated by a substantial body of research on moral dilemmas and has
been replicated cross-culturally. We also looked into Shaun Nichols’ statistical moral learning, which
explains how moral representations and rules are learned by children through rational statistical
inferences. Lastly, we briefly mentioned how fMRI supported the role of rationality in judging moral
dilemmas. In light of all the provided evidence, we are convinced that rationality has the upper
hand.

SPEAKER 3E:

Rebut: In response to the neuroimaging result cited by the affirmative team, we cite another fMRI
study to rebut it. Specifically, with the same Kant’s moral dilemma, Robert Blair found that normal
people were more likely to judge that it’s wrong to push the man off the bridge, unlike psychopaths
who were more willing to push. When normal people make that judgment, fMRI demonstrates
increased activation in brain areas that are associated with emotion, which are the amygdala and
the ventromedial prefrontal cortex. Therefore, even if the anterior cingulate cortex is activated to
inhibit the automatic emotional reaction as stated by the affirmative team, it is not enough to
override the intuitive emotional reaction in people with normal functioning.

Summary: To summarize, our negative team argued that morality is emotional, not rational. Our
viewpoint has been supported by both theoretical and empirical data. To begin with, we mentioned
Jonathan Haidt’s Social Intuitionist model, which posits that moral decisions are driven by emotional
intuitions, which are instantaneous and automatic. Moral reasoning is only something people make
up after the decision to justify their choice due to social demands. This model has been supported by
intriguing data from an interview on controversial sexual relationships where participants failed to
rationalize their strong moral stance. Next, we presented a study on infants’ altruistic behaviors.
These babies behaved morally without any tangible rewards and punishments from adults, all the
while being too young for abstract reasoning. Further, we provided insights into how emotions are
necessary for making moral decisions using the cases of people diagnosed with psychopathy and
their antisocial behaviors. Finally, we provided neuroimaging data which showed the activation of
brain regions associated with emotions when participants were made to judge a challenging moral dilemma. It also demonstrated how rationality failed to override emotions in that case. All in all, we
believe that the case for emotions is strong when it comes to moral decision-making.
