Ismail 1

Name: Elyud Ismail
Date: May 15, 2013
Submitted to: Professor Richard Holton
Assignment 3

For many years, moral psychologists have been divided on the question of where moral judgments originate. That is, when one considers a controversial situation in which one might be forced to act against one's moral values, how does one determine that performing, or failing to perform, that action is morally acceptable? Philosophers have long advocated one of two possible explanations. The first is the rationalist model, according to which all moral determinations are made through reasoned analysis of the consequences of the action; its most notable champion was Immanuel Kant. The other is the emotion-based model, which holds that we judge moral actions primarily by our emotional reactions to them. David Hume was the champion of this model, and it is commonly referred to as the Humean model. This way of thinking has garnered widespread support, but there is empirical and neurological work that poses serious challenges to it. Such evidence suggests that determinations of morality might not be as clear-cut as either model alone would have it, and some philosophers have suggested that we consider a combination of the two models when studying morality. One such philosopher is Joshua Greene. In his study, documented in a paper titled "The Neural Basis of Cognitive Conflict and Control in Moral Judgment," he develops a framework that makes room both for emotional reactions to a moral dilemma and for subsequent rational interpretation and analysis of that dilemma.


Before diving into his analysis, Greene first distinguishes between two types of moral dilemma. The first is the personal moral dilemma, in which the agent actively performs an action that directly affects another agent. The example he gives is the footbridge scenario, in which an agent must sacrifice one man by pushing him onto a set of train tracks in order to prevent the deaths of five others who would otherwise be hit by an approaching trolley. The second type is the impersonal dilemma, which, unlike the personal dilemma, involves performing an action that affects others indirectly, with the agent physically detached from the consequence at a basic level. An example of this is the trolley case, where the agent simply has to divert the path of a trolley so that it kills only one person instead of five. In this case, the agent who performs the diversion is physically removed from the consequence. What Greene found when he conducted experiments testing people's responses to the various dilemmas was that subjects were more prepared to do the rational thing in the impersonal moral dilemma than in the personal one. This is particularly striking because, from a strictly utilitarian point of view, the consequence in both cases is identical. One would therefore expect identical responses from both sets of subjects if just one of the models mentioned above (the Kantian or the Humean) were correct. Clearly there is interplay between the two broad models, and Greene's example illustrates it. So what exactly were Greene's findings? When he asked his subjects whether it was morally acceptable to manipulate the tracks so as to divert the trolley onto the path that had only one person on it (the impersonal moral dilemma), most said yes.
But when he rephrased the question so that, instead of manipulating the tracks, one had to physically push another person onto the tracks, sacrificing him to save five others, most respondents found it morally
unacceptable to do so (the personal moral dilemma). Clearly there is tension between a strictly rational, utilitarian approach and a strictly emotion-driven approach to a problem that has essentially the same ending either way. Greene believes that when it comes to impersonal moral dilemmas, we typically assess the morality of an action, or determine whether someone is morally responsible for its consequences, by applying a utilitarian model. This model advocates the greatest satisfaction of the greatest number of people. Therefore, if we had the power to save five lives by sacrificing one, a utilitarian would say to go for it. On the other hand, when we are faced with personal moral dilemmas, like the footbridge example cited above, we typically make determinations of morality based primarily on our emotional reaction. What dominates this time is the rival of utilitarianism, deontology. This is the principle that one should stick to one's moral and ethical values no matter the circumstances. Therefore it shouldn't matter that five lives could be saved; it is morally unacceptable, according to deontology, to sacrifice one life. In addition to responding primarily on the basis of emotional reactions, Greene noticed that subjects presented with the personal moral dilemma took some time before making a final determination. This suggested to him that the conclusion drawn above, that personal moral dilemmas are often resolved by emotion, might be misleading. If people take that long to respond, they must be weighing the different options, going through different permutations, and selecting the best outcome. But Greene's analysis of the neurological evidence suggests something a bit different.
He suggests that what in fact happens is that, when faced with moral dilemmas, our minds identify a certain conflict and then take time to reflect on it before coming to a moral judgment. Conflicts mainly arise from the tension between
the principles of utilitarianism and deontology mentioned above, to which we humans are attuned from early childhood. Greene suggests that evolution played a large role in shaping our moral reactions to various situations, and that we are therefore born with an innate ability to determine, at least roughly, the morality of certain actions. On the other hand, through our interactions with others in the community, as well as through formal education, we internalize the basic utilitarian principle of achieving the greatest benefit for the greatest number of people. When faced with personal moral dilemmas, these evolutionary deontological instincts come into direct confrontation with the utilitarian principles we have internalized over a lifetime. Neurological evidence analyzed by Greene suggests that a region of the human brain known as the anterior cingulate cortex (ACC) is dedicated to identifying such conflicts. When he presented personal moral dilemmas to his subjects, Greene observed increased activity in this region. This suggests that moral determinations, particularly personal ones, are not simply emotional knee-jerk reactions but rather the result of cognitive negotiations between emotion and rationality. Once such a conflict has been identified, Greene showed, another region of the brain, one in charge of cognitive control and abstract reasoning, is triggered, causing the delayed reflection time for a given personal moral dilemma. This region is called the dorsolateral prefrontal cortex (DLPFC). This suggests not only that our moral judgments are not governed solely by our emotional reactions, as philosophers such as Hume would have us believe, but that we actually go through a deliberation period in which we reason out why certain actions can be deemed morally acceptable or not. This does not, of course, mean that one will necessarily arrive at the most rational conclusion; what it means is that, as mentioned before, it takes more than an
emotional shot of hormones for us to determine the moral acceptability of a certain action. One caveat is that the reflection time Greene observed among his subjects varied with the difficulty of the personal moral dilemma. Greene distinguishes between difficult and easy personal moral dilemmas, and he explains the distinction by way of example. As an instance of a difficult moral dilemma, he gives the hypothetical of a crying baby: a father is faced with the choice of smothering his crying baby to death in order to avoid being discovered by guards who are out to kill everyone, including the child. An instance of a comparatively easy personal moral dilemma is Greene's infanticide example, in which a mother must decide whether to get rid of her unwanted newborn child. Greene's subjects showed very different deliberation times when asked to make a decision on behalf of the characters in these examples. As expected, people were much quicker to decide in the easy case than in the difficult one. Greene then performed a further analysis of the subjects who considered the difficult moral dilemma. In particular, he wanted to see whether different regions of the brain were triggered in those who said it was acceptable to smother the child as opposed to those who said otherwise. What he found was that, just as with the personal versus impersonal dilemmas, there seemed to be a tension between utilitarian considerations and emotional, deontological instincts, a tension that produced brain activity in the ACC as well as the DLPFC. What this empirical and neurological evidence tells us about morality is that it is a constant struggle between two distinct forces: the emotional instinct, which has its roots in our evolutionary history, and reason, which we develop as part of growing up.
Whenever we're faced with a moral dilemma, depending on its type, the two
axes are constantly struggling for control. The conclusion from the evidence summarized above seems to be that for impersonal and easy personal moral dilemmas, the rational (more utilitarian) axis generally wins out, while for difficult personal moral dilemmas we usually succumb to our emotions and render a judgment that is not completely rational from a strictly utilitarian point of view.