
WHY STUDY ETHICS?

It is clear that we often disagree about questions of value. Should same-sex marriage be legal? Should
women have abortions? Should drugs such as marijuana be legalized? Should we torture terrorists in
order to get information from them? Should we eat animals or use them in medical experiments? These
sorts of questions are sure to expose divergent ideas about what is right or wrong.

Discussions of these sorts of questions often devolve into unreasonable name-calling, foot-stomping,
and other questionable argument styles. The philosophical study of ethics aims to produce good
arguments that provide reasonable support for our opinions about practical topics. If someone says that
abortion should (or should not) be permitted, he or she needs to explain why this is so. It is not enough
to say that abortion should not be permitted because it is wrong or that women should be allowed to
choose abortion because it is wrong to limit women’s choices. To say that these things are wrong is
merely to reiterate that they should not be permitted. Such an answer begs the question. Circular,
question-begging arguments are fallacious. We need further argument and information to know why
abortion is wrong or why limiting free choice is wrong. We need a theory of what is right and wrong,
good or evil, justified, permissible, and unjustifiable, and we need to understand how our theory applies
in concrete cases. The first half of this text will discuss various theories and concepts that can be used to help us avoid begging the question in debates about ethical issues. The second half looks in detail at a number of these issues.

It is appropriate to wonder, at the outset, why we need to do this. Why isn’t it sufficient to simply state
your opinion and assert that “x is wrong (or evil, just, permissible, etc.)”? One answer to this question is
that such assertions do nothing to solve the deep conflicts of value that we find in our world. We know
that people disagree about abortion, same-sex marriage, animal rights, and other issues. If we are to
make progress toward understanding each other, if we are to make progress toward establishing some
consensus about these topics, then we have to understand why we think certain things are right and
others are wrong. We need to make arguments and give reasons in order to work out our own
conclusions about these issues and in order to explain our conclusions to others.

It is also insufficient to appeal to custom or authority in deriving our conclusions about moral issues.
While it may be appropriate for children to simply obey their parents’ decisions, adults should strive for
more than conformity and obedience to authority. Sometimes our parents and grandparents are
wrong—or they disagree among themselves. Sometimes the law is wrong—or laws conflict. And
sometimes religious authorities are wrong—or authorities do not agree. To appeal to authority on moral
issues, we would first have to decide which authority is to be trusted and believed. Which religion
provides the best set of moral rules? Which set of laws in which country is to be followed? Even within
the United States, there is currently a conflict of laws with regard to some of these issues: some states
have legalized medical marijuana or physician-assisted suicide, others have not. The world’s religions
also disagree about a number of issues: for example, the status of women, the permissibility of abortion,
and the question of whether war is justifiable. And members of the same religion or denomination may
disagree among themselves about these issues. To begin resolving these conflicts, we need critical philosophical inquiry into basic ethical questions. In Chapter 2, we discuss the world’s diverse religious traditions and ask whether
there is a set of common ethical ideas that is shared by these traditions. In this chapter, we clarify what
ethics is and how ethical reasoning should proceed.

WHAT IS ETHICS?

On the first day of an ethics class, we often ask students to write one-paragraph answers to the
question, “What is ethics?”

How would you answer? Over the years, there have been significant differences of opinion among our
students on this issue. Some have argued that ethics is a highly personal thing, a matter of private
opinion. Others claim that our values come from family upbringing. Other students think that ethics is a
set of social principles, the codes of one’s society or particular groups within it, such as medical or legal
organizations. Some write that many people get their ethical beliefs from their religion.

One general conclusion can be drawn from these students’ comments: We tend to think of ethics as the
set of values or principles held by individuals or groups. I have my ethics and you have yours; groups—
professional organizations and societies, for example—have shared sets of values. We can study the
various sets of values that people have. This could be done historically and sociologically. Or we could
take a psychological interest in determining how people form their values. But philosophical ethics is a
critical enterprise that asks whether any particular set of values or beliefs is better than any other. We
compare and evaluate sets of values and beliefs, giving reasons for our evaluations. We ask questions
such as, “Are there good reasons for preferring one set of ethics over another?” In this text, we examine
ethics from a critical or evaluative standpoint. This examination will help you come to a better
understanding of your own values and the values of others.

Ethics is a branch of philosophy. It is also called moral philosophy. In general, philosophy is a discipline
or study in which we ask—and attempt to answer—basic questions about key areas or subject matters
of human life and about pervasive and significant aspects of experience. Some philosophers, such as
Plato and Kant, have tried to do this systematically by interrelating their philosophical views in many
areas. According to Alfred North Whitehead, “Philosophy is the endeavor to frame a coherent, logical,
necessary system of general ideas in terms of which every element of our experience can be
interpreted.”1 Some contemporary philosophers have given up on the goal of building a system of
general ideas, arguing instead that we must work at problems piecemeal, focusing on one particular
issue at a time. For instance, some philosophers might analyze the meaning of the phrase to know, while
others might work on the morality of lying. Some philosophers are optimistic about our ability to
address these problems, while others are more skeptical because they think that the way we analyze the
issues and the conclusions we draw will always be influenced by our background, culture, and habitual
ways of thinking. Most agree, however, that these problems are worth wondering about and caring
about.

We can ask philosophical questions about many subjects. In the philosophical study of aesthetics,
philosophers ask basic or foundational questions about art and objects of beauty: what kinds of things
do or should count as art (rocks arranged in a certain way, for example)? Is what makes something an
object of aesthetic interest its emotional expressiveness, its peculiar formal nature, or its ability to
reveal truths that cannot be described in other ways? In the philosophy of science, philosophers ask
whether scientific knowledge gives us a picture of reality as it is, whether progress exists in science, and
whether the scientific method discloses truth. Philosophers of law seek to understand the nature of law
itself, the source of its authority, the nature of legal interpretation, and the basis of legal responsibility.
In the philosophy of knowledge, called epistemology, we try to answer questions about what we can
know of ourselves and our world, and what it means to know something rather than just to believe it. In
each area, philosophers ask basic questions about the particular subject matter. This is also true of
moral philosophy.

Ethics, or moral philosophy, asks basic questions about the good life, about what is better and worse,
about whether there is any objective right and wrong, and how we know it if there is.

One objective of ethics is to help us decide what is good or bad, better or worse. This is generally called
normative ethics. Normative ethics defends a thesis about what is good, right, or just. Normative ethics
can be distinguished from metaethics. Metaethical inquiry asks questions about the nature of ethics,
including the meaning of ethical terms and judgments. Questions about the relation between
philosophical ethics and religion—as we discuss in Chapter 2—are metaethical. Theoretical questions
about ethical relativism—as discussed in Chapter 3—are also metaethical. The other chapters in Part I
are more properly designated as ethical theory. These chapters present concrete normative theories;
they make claims about what is good or evil, just or unjust.

From the mid-1930s until recently, metaethics predominated in English-speaking universities. In doing
metaethics, we often analyze the meaning of ethical language. Instead of asking whether the death
penalty is morally justified, we would ask what we meant in calling something “morally justified” or
“good” or “right.” We analyze ethical language, ethical terms, and ethical statements to determine what
they mean. In doing this, we function at a level removed from that implied by our definition. It is for this
reason that we call this other type of ethics metaethics—meta meaning “beyond.” Some of the
discussions in this chapter are metaethical discussions—for example, the analysis of various senses of
“good.” As you will see, much can be learned from such discussions.

ETHICAL AND OTHER TYPES OF EVALUATION

“That’s great!” “Now, this is what I call a delicious meal!” “That play was wonderful!” All of these statements express approval of something. They do not tell us much about the meal or the play, but they do imply that the speaker thought they were good. These are evaluative statements. Ethical statements or judgments are also evaluative. They tell us what the speaker believes is good or bad. They do not
simply describe the object of the judgment—for example, as an action that occurred at a certain time or
that affected people in a certain way. They go further and express a positive or negative regard for it. Of
course, factual matters are relevant to moral evaluation. For example, factual judgments about whether
capital punishment has a deterrent effect might be relevant to our moral judgments about it. So also
would we want to know the facts about whether violence can ever bring about peace; this would help
us judge the morality of war. Because ethical judgments often rely on such empirical information, ethics
is often indebted to other disciplines such as sociology, psychology, and history. Thus, we can distinguish
between empirical or descriptive claims, which state factual beliefs, and evaluative judgments, which
state whether such facts are good or bad, just or unjust, right or wrong. Evaluative judgments are also
called normative judgments. Moral judgments are evaluative because they “place a value,” negative or
positive, on some action or practice, such as capital punishment.

“That is a good knife” is an evaluative or normative statement. However, it does not mean that the knife
is morally good. In making ethical judgments, we use terms such as good, bad, right, wrong, obligatory,
and permissible. We talk about what we ought or ought not to do. These are evaluative terms. But not
all evaluations are moral in nature. We speak of a good knife without attributing moral goodness to it. In
so describing the knife, we are probably referring to its practical usefulness for cutting. Other
evaluations refer to other systems of values. When people tell us that a law is legitimate or
unconstitutional, that is a legal judgment. When we read that two articles of clothing ought not to be
worn together, that is an aesthetic judgment. When religious leaders tell members of their communities what they ought to do, that is a religious matter. When a community teaches people to bow
before elders or use eating utensils in a certain way, that is a matter of custom. These various normative
or evaluative judgments appeal to practical, legal, aesthetic, religious, or customary norms for their
justification.

How do other types of normative judgments differ from moral judgments? Some philosophers believe
that it is a characteristic of moral “oughts” in particular that they override other “oughts,” such as
aesthetic ones. In other words, if we must choose between what is aesthetically pleasing and what is
morally right, then we ought to do what is morally right. In this way, morality may also take precedence
over the law and custom. The doctrine of civil disobedience relies on this belief, because it holds that we
may disobey certain laws for moral reasons. Although moral evaluations differ from other normative evaluations, this is not to say that there is no relation between them. Consider the contrast:

Descriptive (empirical) judgment: Capital punishment acts (or does not act) as a deterrent.

Normative (moral) judgment: Capital punishment is justifiable (or unjustifiable).

We also evaluate people, saying that a person is good or evil, just or unjust. Because these evaluations
also rely on beliefs in general about what is good or right, they are also normative. For example, the
judgment that a person is a hero or a villain is based upon a normative theory about good or evil sorts of
people.

TRAITS OF MORAL PRINCIPLES

A central feature of morality is the moral principle. We have already noted that moral principles are
guides for action, but we must say more about the traits of such principles. Although there is no
universal agreement on the characteristics a moral principle must have, there is a wide consensus about
five features: (1) prescriptivity, (2) universalizability, (3) overridingness, (4) publicity, and (5) practicability. Several of these will be examined in chapters throughout this book, but let’s briefly consider them
here.

First is prescriptivity, which is the commanding aspect of morality. Moral principles are generally put
forth as commands or imperatives, such as “Do not kill,” “Do no unnecessary harm,” and “Love your
neighbor.” They are intended for use: to advise people and influence action. Prescriptivity shares this
trait with all normative discourse and is used to appraise behavior, assign praise and blame, and
produce feelings of satisfaction or guilt.

Second is universalizability. Moral principles must apply to all people who are in a relevantly similar
situation. If I judge that an act is right for a certain person, then that act is right for any other relevantly
similar person. This trait is exemplified in the Golden Rule, “Do to others what you would want them to
do to you.” We also see it in the formal principle of justice: It cannot be right for you to treat me in a
manner in which it would be wrong for me to treat you, merely on the ground that we are two different
individuals.4

Universalizability applies to all evaluative judgments. If I say that X is a good thing, then I am logically
committed to judge that anything relevantly similar to X is a good thing. This trait is an extension of the
principle of consistency: we ought to be consistent about our value judgments, including our moral judgments. Take any act that you are contemplating doing and ask, “Could I will that everyone act
according to this principle?”

Third is overridingness. Moral principles have predominant authority and override other kinds of
principles. They are not the only principles, but they take precedence over other considerations,
including aesthetic, prudential, and legal ones. The artist Paul Gauguin may have been aesthetically justified in abandoning his family to devote his life to painting beautiful Pacific Island pictures, but
morally he probably was not justified, and so he probably should not have done it. It may be prudent to
lie to save my reputation, but it probably is morally wrong to do so. When the law becomes egregiously
immoral, it may be my moral duty to exercise civil disobedience. There is a general moral duty to obey
the law because the law serves an overall moral purpose, and this overall purpose may give us moral
reasons to obey laws that may not be moral or ideal. There may come a time, however, when the
injustice of a bad law is intolerable and hence calls for illegal but moral defiance. A good example would
be laws in the South prior to the Civil War requiring citizens to return runaway slaves to their owners.

Fourth is publicity. Moral principles must be made public in order to guide our actions. Publicity is
necessary because we use principles to prescribe behavior, give advice, and assign praise and blame. It
would be self-defeating to keep them a secret.

Fifth is practicability. A moral principle must have practicability, which means that it must be workable
and its rules must not lay a heavy burden on us when we follow them. The philosopher John Rawls
speaks of the “strains of commitment” that overly idealistic principles may cause in average moral
agents.5 It might be desirable for morality to require more selfless behavior from us, but the result of
such principles could be moral despair, deep or undue moral guilt, and ineffective action. Accordingly,
most ethical systems take human limitations into consideration.

Although moral philosophers disagree somewhat about these five traits, the above discussion offers at
least an idea of the general features of moral principles.

DOMAINS OF ETHICAL ASSESSMENT

At this point, it might seem that ethics concerns itself entirely with rules of conduct that are based solely on evaluating acts. However, it is more complicated than that. Most ethical analysis falls into one
or more of the following domains: (1) action, (2) consequences, (3) character traits, and (4) motive.
Again, all these domains will be examined in detail in later chapters, but an overview here will be
helpful.

Let’s examine these domains using an altered version of the Kitty Genovese story. Suppose a man
attacks a woman in front of her apartment and is about to kill her. A responsible neighbor hears the
struggle, calls the police, and shouts from the window, “Hey you, get out of here!” Startled by the
neighbor’s reprimand, the attacker lets go of the woman and runs down the street where he is caught
by the police.

1.2 Agency

If, as the result of an earthquake, a boulder were to break off from the face of a cliff and kill an
unfortunate mountaineer below, it wouldn’t make sense to hold either the boulder or the Earth morally
accountable for her death. If, on the other hand, an angry acquaintance dislodged the rock, aiming to kill
the mountaineer for the sake of some personal grudge, things would be different. Why?

One of the key differences between the two deaths is that the second, unlike the first, involves
“agency.” This difference is a crucial one, as agency is often taken to be a necessary condition or
requirement of moral responsibility. Simply put, something can only be held morally responsible for an
event if that something is an agent. Angry acquaintances are agents, but the Earth is not (assuming, of course, that the Earth isn’t some animate, conscious being). This seems obvious enough, but what
precisely is agency, and why does it matter?

Agency for many involves the exercise of freedom. Freedom is usually taken to require the ability to act
otherwise or in ways contrary to the way one is currently acting or has acted in the past. For many
holding this point of view, being responsible (and thence an agent) means possessing a “free will”
through which one can act independently of desires and chains of natural causes. Of course, there are
also many philosophers who don’t think much of this conception of freedom. Most of these critics nevertheless accept using the term for actions that proceed in a causal way from one’s self or one’s own character in the absence of external compulsion, coercion, or mental defect. (These philosophers are called “compatibilists.”)

Conditions of agency

For thinkers following Aristotle (384–322 BCE), agency requires that one understand what one is doing,
what the relevant facts of the matter are, and how the causal order of the world works to the extent
that one is able to foresee the likely consequences of chosen courses of action.

It’s also important that an agent possess some sort of self-understanding – that is, some sense of self-
identity, knowledge of who and what one is, what one’s character and emotional architecture are like,
what one is capable and not capable of doing. Self-knowledge is important because it doesn’t normally
make sense to think of someone as a free agent who is unaware of what he or she does – for example,
while asleep or during an unforeseen seizure. It can still make sense to treat such conduct as the result of agency, however, if the impairments that lead to the unconscious conduct are the result
of one’s own free choices. For example, consider someone who voluntarily gets drunk while piloting an
airliner, knowing full well what’s likely to happen; or consider someone else whose ignorance about the
small child standing behind the car he has just put into gear results from negligence, from his not
bothering to look out the rear window.

For Immanuel Kant (1724–1804), the ability to reason is crucial to agency. In Kant’s Critique of Practical
Reason (1788), what’s important is that one act unselfishly, purely on the basis of reason or a certain
kind of rational principle (a categorical imperative), instead of on the basis of desire or fear. Only this
sort of rational action qualifies for Kant as truly moral action, because even acting well on the basis of
desire ultimately boils down to the same thing as acting in other ways for the sake of desire. Desires and
fears simply come over us, the result of natural and social causes beyond our control. To act strictly from
desire is to be a slave to desire. Only by acting on the basis of reason alone are we, for Kant,
autonomous – that is, self-governing beings who legislate moral laws of action to ourselves.

Other conditions of agency

But perhaps it’s wrong to regard feelings and desires as irrelevant. Indeed, shouldn’t moral agency also
be understood to require the capacity to sympathize with others, to be distressed by their suffering, and
to feel regret or remorse after harming others or acting immorally? Would it make sense to regard as a
moral or free agent a robot that behaved rationally and that possessed all the relevant information but
didn’t have any inner, affective life? It’s not obvious what the answer to this question is. Star Trek’s Mr
Spock, for example, seemed to be a moral agent, even though the only reason he had for condemning
immoral acts was that they were “illogical.”

Similarly, it might be thought that the right social conditions must be in place for moral agency to be
possible. Could people truly be moral agents capable of effective action without public order and
security, sufficient means of sustenance, access to information and communication, education, a free
press, and an open government? But again, this is far from obvious. Although it seems true that when
civilization breaks down immorality or amorality rises, it also seems excessively pessimistic to conclude
that moral agency is utterly impossible without many of the supports and constraints of society.

Types of agent

It may seem strange to consider things like corporations or nations or mobs or social classes as agents,
but the issue often arises in reflections about whether one should make judgments that attribute
collective responsibility. People did speak of the guilt of the German nation and demand that all
Germans contribute to war reparations after World War I. When the government of a truly democratic
nation goes to war, because its policy in some sense expresses “the will of the people,” the country
arguably acts as though it were a kind of single agent. People also, of course, speak collectively of the
responsibilities of the ruling class, corporations, families, tribes, and ethnic groups. Because human life
is populated by collectives, institutions, organizations, and other social groupings, agency can
sometimes be dispersed or at least seem irremediably unclear. These “gray zones,” as thinkers like Primo Levi (The Drowned and the Saved, 1986) and Claudia Card (The Atrocity Paradigm, 2002) have called them,
make determining agency in areas like sexual conduct and political action exceedingly difficult.

There are three ways of understanding how we talk about collectives as agents. One is that it’s just
mistaken and that collectives cannot be agents. The second is that collectives are agents in some
alternative, perhaps metaphorical sense – that they are like real agents but not quite the same as them.
The third is that collectives are as much agents as individual people, who are themselves perhaps not as
singular, cohesive, and unified as many would like to believe.

1.4 Autonomy

The legitimacy of “living wills” or “advance directives” is at present a hotly contested social and moral
issue. Expressing people’s preferences should they become unable to do so because of illness or injury,
these curious documents aim to make sure that physicians treat individuals as they wish, not as others
think best. The 2005 case of Terri Schiavo, the brain-damaged American woman whose husband and
parents fell into a sensational and painful legal wrangle concerning her wishes, illustrates all too well the
reasons people write living wills.

Proponents of the practice argue that one of the most important bases for the human capacity to act in
moral ways is the ability not only to choose and to act on those choices but also to choose for oneself, to
be the author of one’s own life. This capacity is known as “autonomy.” But what does it mean to be
autonomous?

Autonomy requires, at the very least, an absence of compulsion. If someone is compelled to act rightly
by some internal or external force – for instance, to return a lost wallet packed with cash – that act isn’t
autonomous. So, even though the act was morally proper, because it was compelled it merits little
praise.

For philosophers like Immanuel Kant (1724–1804), this is why autonomy is required for truly moral
action. Kant argues in his Critique of Practical Reason (1788) and elsewhere that autonomously acting
without regard for one’s desires or interests is possible because people are able to act purely on the
basis of rational principles given to themselves by themselves. Indeed, the word “autonomous” derives
from the Greek for self (auto) and law (nomos) and literally means self-legislating, giving the law to
one’s self. Actions done through external or internal compulsion are, by contrast, “heteronomous” (the
law being given by something hetero or “other”). In this way autonomy differs from, though also
presupposes, metaphysical freedom, which is commonly defined as acting independently of the causal
order of nature. Political freedom, of course, has to do with people’s relationship to government and
other people regardless of their relationship to systems of cause and effect. But theories of political
freedom also draw upon the concept of autonomy.

Politics

Conceptions of autonomy are important politically, because one’s ideas about politics are often bound
up with one’s ideas about what people are and what they’re capable of doing or not doing. Those who
think that people are not capable or little capable of self-legislating, self-regulating action are not likely
to think that people are capable of governing themselves.

Liberal democratic theory, however, depends upon that ability. The authority of government in liberal
democracies draws its justification from the consent of the governed. Through systems of elections and
representation the people of democracies give the law to themselves. Liberal democracies are also
configured to develop certain institutions (like the free press) and to protect political and civil rights
(such as the rights to privacy and property) toward the end of ensuring people’s ability to act
autonomously and effectively. In this way, liberal democrats recognize autonomy not only as an intrinsic
human capacity but also as a political achievement and an important element of human well-being.
The legitimacy of liberal democracy is therefore threatened by claims that human beings are not the
truly autonomous agents we believe ourselves to be. And there is no shortage of people prepared to
argue this view. Many critics maintain that people really can’t act independently of their passions, of
their families, of the societies in which they live, of customs, conventions, and traditions, of structures of
privilege, exploitation, and oppression, including internalized oppression. Some go as far as to claim that
the sort of autonomy liberal democrats describe is the fantasy of wealthy, white, European and North
American males – or, worse, a privilege they enjoy only because they deny it to others. Still other critics
regard the idea as a mystifying ideology through which the ruling class deludes people about the
exploitive system under which they labor.

1 Relative Ethics

These may seem to be very broad ethical questions, yet the existence of child labor, breast ironing, female circumcision, and divergent sexual practices makes them very real questions – and in some cases,
where children’s lives are at stake, quite urgent. People have thought about and struggled with these
kinds of questions about the origins of ethics for many centuries. When one faces these hard questions,
thinks about the philosophical problem of the origins of ethics, and becomes aware of the great variety
of human customs the world over, it becomes tempting to say that right and wrong are just a matter of
opinion, since what is regarded as right or wrong in one culture may not be seen in the same way in
another culture. Right and wrong seem culturally relative. Also, some practices that were once regarded
as right, either a century ago or 20 years ago, are nowadays regarded as wrong. Ethical standards seem
to change, and there is so much disagreement between cultural practices that ethical relativism, the
view that right and wrong are always relative, seems justified.

Those who defend the idea that ethics is relative emphasize the differences among our ethical
judgments and the differences among various ethical traditions. Some relativists call these cultural and
ethical traditions folkways. This is a helpful concept for understanding ethical relativism because it
points out to us that the ways and customs are simply developed by average people (folk) over long
periods of time. Here is how the early twentieth-century social scientist William G. Sumner describes the
folkways:

The folkways . . . are not creations of human purpose and wit. They are like products of natural forces
which men unconsciously set in operation, or they are like the instinctive ways of animals, which are
developed out of experience, which reach a final form of maximum adaptation to an interest, which are
handed down by tradition and admit of no exception or variation, yet change to meet new conditions,
still within the same limited methods, and without rational reflection or purpose. From this it results
that all the life of human beings, in all ages and stages of culture, is primarily controlled by a vast mass
of folkways handed down from the earliest existence of the race. (Sumner 1906: 19–20)

Something is right, an ethical relativist will say, if it is consistent with a given society’s folkways and wrong if it goes against a society’s folkways. Relative ethics will say that in cultures where female
circumcision has taken place for centuries, it is right to continue to circumcise young girls, and wrong to
attempt to change this tradition.
Relativists believe that ethical differences between cultures are irreconcilable. On their view,
irreconcilable differences are actually quite predictable because each society today has its own unique
history and it is out of this history that a society’s ethical values and standards have been forged. Around
the globe, each society has its own unique history; consequently, each society has its own unique set of
ethical standards. Relativists would say that if there are any agreements between cultures on ethical
values, standards, or issues, we should not place any importance on that accidental fact, because, after
all, the true nature of ethics is relative, and the origin of ethics lies in each society’s unique history.

1.2 Universal Ethics

Not everyone, though, is content with the relativist's rather skeptical answer to the question about the ultimate nature and origin of ethics. Critics of relativism contend that not everything in ethics is relative, because some aspects of ethics are universal. Those who hold this view are called ethical universalists. In contrast to the ethical relativist, who claims that all ethics is relative, the universalist contends that there are at least some ethical values, standards, or principles that are not relative. And this somewhat modest claim is all that a universalist needs to challenge the relativist's generalization
that all ethics is relative. An easy way to grasp what universalists are talking about is to consider the
concept of universal human rights. The Universal Declaration of Human Rights was created in 1948 by
the United Nations General Assembly. It has inspired close to 100 bills of rights for new nations. People
who believe in universal human rights hold ethical universalism: they believe there are certain rights
that all human beings have, no matter what culture or society they belong to. An ethical relativist will
deny this, and maintain that rights are meaningful only within a particular cultural tradition, not in a
universal sense.

1.3 Cultural Relativism or Ethical Relativism?

In order to achieve a bit more clarity on the issue of relativism, we must consider the difference between cultural relativism and ethical relativism.

Cultural relativism is the observation that, as a matter of fact, different cultures have different practices,
standards, and values. Child labor, breast ironing, divergent sexual practices, and female circumcision
are examples of practices that are customary in some cultures and would be seen as ethical in those
cultures. In other cultures, however, such practices are not customary, and are seen as unethical. If we
took the time to study different cultures, as anthropologists and other social scientists do, we would
see that there is no shortage of examples such as these. As the anthropologist Ruth Benedict has put it:
“The diversity of cultures can be endlessly documented” (1934: 45).

As examples, consider wife and child battering, polygamy, cannibalism, or infanticide. There are some
cultures (subcultures at least) that endorse these practices as morally acceptable. Western culture, by
contrast, regards these practices as immoral and illegal. It seems to be true, therefore, just as a matter
of fact, that different cultures have different ethical standards on at least some matters. By comparing
different cultures, we can easily see differences between them, not just on ethical matters, but on many
different levels.
What we need to notice about ethical relativism, in contrast with cultural relativism, is that ethical
relativism makes a much stronger and more controversial claim. Ethical relativism is the view that all
ethical standards are relative, to the degree that there are no permanent, universal, objective values or
standards. This view, though, cannot be justified by simply comparing different cultures and noticing the
differences between them. The ethical relativist’s claim goes beyond observation and predicts that all
ethical standards, even the ones we have not yet observed, will always be relative.

1.4 Cultural Relativism and Universal Ethics

A universalist will respond to ethical relativism by pointing out that very general basic values – not
specific moral rules or codes – are recognized, at least implicitly, to some extent in all societies. Even
though on the surface, in particular actions or mores, there seems to be unavoidable disagreement, a
universalist will observe that there are general values that provide the foundations of ethics. One
ambition, then, for the universalists who wish to immerse themselves in cultural studies, is not only to
attempt to understand and appreciate other cultures' perspectives and experiences, but to detect what common ground – which common values – is shared by the different cultures. Certainly there is cultural
difference on how these values are manifested, but according to universalism, the values themselves
represent more than arbitrary social conventions.

An ethical universalist, then, can agree that there are cultural differences and accept that some social
practices are merely conventional. In other words, ethical universalism is consistent with cultural
relativism (see Diagram 1.1).

Although ethical universalism is consistent with cultural relativism, the extensive research into different cultures and societies conducted by social scientists in the first half of the twentieth century has helped link ethical relativism and cultural relativism in our minds. But the distinction
between cultural relativism and ethical relativism is an important one to have in hand when one is
reading the works of social scientists, for they can move from one to the other and back again without
our noticing.

2.7 Egoism

“All sensible people are selfish,” wrote Ralph Waldo Emerson (1803–82). Nowadays, conventional
wisdom is that one doesn’t even have to be sensible to be selfish – because in fact everyone is always
selfish. In some circles, a belief in genuine altruism is taken as a sign of naivety.

Emerson’s line, however, need not inspire cynicism. The question, “Can egoism be morally justified?” is
clearly not self-contradictory and needs to be answered. Furthermore, if being good and being selfish
happen to require the same things, then selfishness would be something to celebrate.

Psychological egoism

First, however, something must be said about the view that, as a matter of fact, everyone is at heart an
egoist. People may not do what’s in their own best interests, but they will, according to the
psychological egoist, only do what they believe is in their own best interests. Apparent counterexamples
are just that – apparent. Take the sentiments expressed in Bryan Adams’s soppy ballad, “(Everything I
Do) I Do It For You.” Echoing countless other love songs, Adams sings “Take me as I am, take my life. / I
would give it all, I would sacrifice.” Yet even this extreme profession of selflessness can easily be seen as
masking a deeper selfishness. Why, after all, is he saying this? For the purposes of seduction, of course.
He may believe he is sincere, but then perhaps this is one of nature’s tricks: only by fooling the seducer
can the seduction be successful. Besides, even if he’s telling the truth, what does that show? That he
would rather die than be without his love? Selfishness again! Death is better than being miserable for
him.

This view – known as psychological egoism – can be very persuasive. But although you can always
explain away altruistic behavior in selfish terms, it’s not clear why we should prefer a selfish explanation
over an altruistic one simply because it’s possible to do so.

From a logical point of view it’s important to see that from the fact that the act is pleasing it doesn’t
follow that the act was done for the sake of the pleasure. From the fact that saving a drowning swimmer
makes one feel good, for example, it doesn’t follow that the saving was done for the sake of the good
feeling. Pleasure may be a happy result of an action while not being the reason for the action.

There’s also an objection that can be brought against the egoistic hypothesis from the point of view of
scientific method – it can’t be tested. If every act can be interpreted as selfish, it’s not even possible to
construct an experiment that might falsify the hypothesis. If someone saves a drowning swimmer, he
did it for selfish reasons. If he doesn’t save the drowning swimmer, he didn’t do it for selfish reasons.
Admissible hypotheses must, at least in principle, be somehow testable. And since every possible act can
be interpreted as selfish, no observation could ever in principle test psychological egoism.

Ethical egoism

Even if psychological egoism is true, however, it only says something about the facts of human
psychology. It doesn’t say anything about whether or not being egoistic is rational or moral – whether
one ought to be selfish. In short, it leaves all the big ethical questions unanswered. Ethicists cannot
avoid the question of whether egoism is morally justified.

Adam Smith (1723–90) took a stab at an answer, at least in part, by arguing that selfishness in economic
affairs is morally justified because it serves the common good in the most efficient way: “It is not from
the benevolence of the butcher, the brewer, or the baker, that we expect our dinner,” he wrote, “but
from their regard to their own interest. We address ourselves, not to their humanity but their self-love,
and never talk to them of our own necessities but of their advantages.” Smith’s argument in The Wealth
of Nations does not, however, justify what is known as ethical egoism: the view that it’s always ethical
to act in one’s own interests. Even though it may be true that egoism is an efficient route to the
common good in certain contexts, it’s implausible that it’s always so. Contrary to popular conception,
Smith’s general moral theory is, in fact, decidedly not egoistic, grounding morality instead in sympathy,
moral sentiment, and an unselfish “impartial spectator.” Smith does not defend ethical egoism as a
universal or even general principle. To do that, one needs to argue that egoism is itself morally
justifiable, that it’s justifiable even if it doesn’t serve as a means to some other good.

Rational egoism

So, how might one argue that egoism is ethically justified? Well, many believe that ethics must be
rational. Moral laws might not be entirely derived from rational principles, but at the very least ethics
must accord with reason, and not command anything contrary to reason – that is, anything that’s
inconsistent, self-contradictory, or conceptually incoherent. So, if ethics must be rational, and one may
rationally (consistently, etc.) act for the sake of self-interest, then acting selfishly meets at least a
rationality test for morality.

It’s not at all clear, however, how acting rationally for the sake of self-interest is in any ethical sense
decisive. Helping oneself seems no more or less rational than helping someone else. Might one not act
rationally for the sake of immoral aims? Indeed, many would argue that aims or goals cannot be
established by rationality alone.

Perhaps the most important question with regard to this issue is whether there’s any conflict between
self-interest and altruism anyway. Many ancient Greek philosophers, including Plato and Aristotle,
wouldn’t have seen any conflict between egoism and altruism because they thought that if one behaves
badly one ultimately harms oneself. The greedy man, for example, is never at peace with himself,
because he is never satisfied with what he has. In contrast, as Plato had Socrates say before his own
execution, “a good man cannot be harmed either in life or in death.” That may be too optimistic a view.
But the idea that being good is a form of “enlightened self-interest” is plausible.

But does enlightened self-interest give people a reason for being altruistic, or does it show genuine
altruism isn’t possible? Some would argue that any act that’s in one’s self-interest cannot be called
altruistic, even if it helps others: the concept of altruism excludes self-interested actions, even those
that coincide with the interests of others. An alternative view holds that altruism and self-interest are
compatible: the fact that do-gooders know that doing good helps them in no way diminishes the extent
to which what they do is done for others. The dilemma can be posed with regard to the Bryan Adams
song. Is he lying when he says everything he does, he does it for her, if he also does it for himself? Or has
he just conveniently neglected to point out that his altruism requires no self-sacrifice?

2.8 Hedonism

Why be moral? One way to try to answer this question is to consider why it would be a good thing if
every moral problem were actually sorted out. What would everyone being good actually lead to? World
peace. No one dying of hunger. Everyone being free. Justice reigning supreme. And what would be so
good about that?

The obvious answer is that then everyone would be happy – or at least, as happy as is humanly possible.
So, the point of being good is that it would lead to a happier world.

If this is right, then the basis of morality is hedonism: the view that the only thing that is of value in itself
is happiness (or pleasure, though for simplicity we will talk only of happiness for now), and the only
thing bad in itself is unhappiness (or pain). This might seem a surprising conclusion. After all, hedonism
is usually associated with the selfish pursuit of fleeting pleasures. So, how can it be the basis of
morality?

Happiness as the ultimate good

The answer to this question must start with an explanation of why happiness is the only good. Aristotle
(384–322 BCE) thought this was evidently true, because there are things done for their own sake and
things done for the sake of something else. Things done for the sake of something else are not valuable
in themselves, but only instrumentally valuable, as means to an end. Those things done for their own
sake, in contrast, are intrinsically valuable, as ends in themselves. Of all the good things in life, only
happiness, it seems, is prized for its own sake. Everything else is valued only because it leads to
happiness. Even love is not valued in itself – a love that makes us permanently miserable is not worth
having.

There is, however, nothing in this conclusion that entails pursuing selfish, fleeting pleasures. Epicurus
(341–271 BCE), one of the first hedonic philosophers, understood this well. He thought that no one
could be happy if he or she permanently sought intense pleasures, especially of the fleeting kind (what
he called kinetic or active pleasures). Rather, to be truly happy – or, perhaps better, “content” – one
needs a certain calm, tranquillity, and peace of mind (static pleasures). And if we see that happiness has
value in itself, then we have reason to be concerned with the happiness of others, not just our own.
Hence, Epicurus concluded, “It is impossible to live a pleasant life without living wisely and honorably
and justly, and it is impossible to live wisely and honorably and justly without living pleasantly.”

One of the most important hedonic ethics of the modern era is the utilitarianism of Jeremy Bentham
(1748–1832) and John Stuart Mill (1806–73). From the same premise – that pleasure and happiness are
the only goods, and pain and unhappiness the only evils – they concluded that actions are right in so far
as they promote the greatest happiness of the greatest number and wrong in so far as they diminish it.

Precisely what?

One of the recurring problems for hedonic philosophies is pinning down just what it is that is supposed
to be intrinsically valuable. Is it pleasure – by which we mean pleasant sensations? Or is it happiness, in
which case what is that? A stable state of mind? A temporary feeling of well-being? Objectively
flourishing? Or is each of these good in itself?

The problem is a persistent and serious one, for if we understand happiness and pleasure in
conventional senses, it becomes far from clear that they are intrinsic goods, above all others. Moreover,
philosophers’ attempts to precisely define the crucial qualities of pleasure (as Bentham did, for example,
by pointing to properties like “intensity” and “duration”) are notoriously slippery. Critics of Mill’s work
argue that, if he were serious, he would have to admit that the life of a contented pig is better than that
of a troubled philosopher. Mill tried to reply to this by distinguishing between higher pleasures of the
mind and lower pleasures of the body (Utilitarianism, 1861).

But what makes higher pleasures higher? Mill thought “competent judges,” who had experienced both,
would prefer a life with some higher pleasures than one with only lower ones, but not vice versa. Yet,
even if this were true, it doesn’t seem to be the case that the higher pleasures are preferred simply
because they are more pleasurable. If, however, there are other reasons for choosing them, then
hedonic considerations are not the only important ones after all.

Robert Nozick made an even stronger argument against hedonism in a thought experiment in which he
asked if one would choose to live happily in a virtual world or less happily in the real one. Almost
everyone, he suggested, would prefer the real world, which suggests people prefer reality to
happiness. If that’s right, then happiness is not the only thing that’s good in itself. It seems that truth
and authenticity are, as well.

AYER AND EMOTIVISM

The next player in the story is Alfred Jules Ayer (1910–1989), who was influenced by both Hume’s and
Moore’s presentations of the fact–value problem. Hume and Moore each showed two things. First, they
explained why there is a fact–value problem; second, they offered solutions to the problem by showing
what moral value really is. For Hume, the problem involves the fallacy of deriving ought from is, and his
solution is that moral value rests on emotional reactions. For Moore, the problem involves the
naturalistic fallacy, and his solution involves intuitively recognizing moral goodness within things.

Ayer also takes this two-pronged approach. First, he argues that the fact–value problem arises because
moral statements cannot pass a critical test of meaning called the verification principle. Second,
expanding on Hume, his solution is that moral utterances are only expressions of feelings, a position
called emotivism. Let’s look at each of these components.

Ayer’s Theory

Regarding the verification principle, in the 1930s, Ayer went to Vienna to study with a group of
philosophers called the “Logical Positivists,” who believed that the meaning of a sentence is found in its
method of verification. According to that test, all meaningful sentences must be either

(a) Tautologies (statements that are true by definition and of the form “A is A” or reducible to such
statements) or

(b) Empirically verifiable (statements regarding observations about the world, such as “The book is
red”).

Based on this test, mathematical statements such as “All triangles have three sides” are meaningful
because they are tautologies. The statement “The Empire State Building is in New York City” is
meaningful because it is empirically verifiable.

What, though, about value statements such as “Charity is good”? According to the above test, they are
meaningless because they are neither tautologies nor verifiable statements. That is, it is not true by
definition that charity is good, and there is no way to empirically verify whether charity is good.
Similarly, accord- ing to the above test, a theological statement such as “God is guiding your life” is
meaningless because it is neither a tautology nor empirically verifiable. Ayer makes his point about the
meaninglessness of value utterances here:

[T]he fundamental ethical concepts are unanalyzable, inasmuch as there is no criterion by which one can
test the validity of the judgments in which they occur. ... The reason why they are unanalyzable is that
they are mere pseudo-concepts. The presence of an ethical symbol in a proposition adds nothing to its
factual content. Thus if I say to someone, “You acted wrongly in stealing that money,” I am not stating
anything more than if I had simply said, “You stole that money.” In adding that the action is wrong, I am
not making any further statement about it.4

His argument is essentially this:

(1) A sentence is meaningful if and only if it can be verified.
(2) Moral sentences cannot be verified.
(3) Therefore, moral sentences are not meaningful.

Thus, there is a fact–value problem insofar as moral utterances fail the verification test and are not
factual statements.

Ayer’s solution to the fact–value problem is that moral utterances function in a special nonfactual way.
Although they are indeed factually meaningless, they are not just gibberish. For Ayer, utterances such as
“Charity is good” express our positive feelings about charity in much the same way as if we shouted out

“Charity—hooray!” Similarly, the utterance “Murder is wrong” expresses our negative feelings about
murder just as if we shouted “Murder—boo!” The view that moral utterances merely express our
feelings is called emotivism. Ayer emphasizes that moral utterances don’t even report our feelings; they
just express our feelings. Here’s the difference:

■ Reported feeling: “Charity is good” means “I have positive feelings about charity.”
■ Expressed feeling: “Charity is good” means “Charity—hooray!”

Even reports of feelings are in some sense factual: It is either true or false that “I have positive feelings
about charity,” and I can empirically verify this with a psychological analysis of my mental state.
However, the emotional expression “Charity—hooray!” is like a grunt or a sigh; there is nothing to
factually report.

Philosophers have introduced two terms to distinguish between factual and nonfactual utterances:
cognitive and noncognitive. When a statement has factual content, it is cognitive: We can know (or
“cognize”) its truth value—whether it is true or false. When a statement lacks factual content, it is
noncognitive: It has no truth value. Traditional moral theories all claim to be cognitivist: They all claim
that moral statements have truth value. Here is how four traditional theories would give a cognitivist
interpretation of the moral utterance “Charity is good”:

■ Egoism: Charity maximizes self-interest.
■ Utilitarianism: Charity maximizes general pleasure.
■ Kantianism: Charity is a rational duty.
■ Virtue theory: Charity promotes human flourishing.

Moore’s intuitionist solution to the fact–value problem is also cognitivist because for him “Charity is
good” means “Charity has the indefinable property of moral goodness” (which, according to Moore, we
know to be true through moral intuition). For Ayer, all these cognitivist theories are misguided. Because
moral utterances like “Charity is good” do not pass the test for meaning by the verification principle,
they cannot be cognitive. The content that they have is only noncognitive and takes the form of
expressing our feelings.

Ayer’s account of emotivism directly attacks many of our cherished assumptions about morality. We
typically think that moral utterances are factually meaningful— not so according to Ayer. We typically
think that morality involves some use of our reasoning ability—again, not so for Ayer. What’s perhaps
most unsettling about Ayer’s theory is its implication that ethical disagreement is fundamentally a
disagreement in attitude. Suppose you and I disagree about whether abortion is morally permissible
and we debate the issue—in a civilized way without any emotional outbursts. In Ayer’s view, this is still
simply a matter of us having underlying emotional attitudes that conflict; it is not really a disagreement
about facts of the matter.

Criticisms of Emotivism

Several objections to Ayer’s emotivism were quickly forthcoming after the appearance of his book. A
first criticism was that the verification theory of meaning, upon which Ayer’s emotivism was founded,
had serious problems.

Specifically, it did not pass its own test. Here in brief is the principle:

Verification principle: A statement is meaningful if and only if it is either tautological or empirically verifiable.

We now ask the question, “Is the verification principle itself either tautological or empirically
verifiable?” The answer is that it is not, which means that the verification principle is meaningless. If
that’s the case, then we are not obliged to use the verification principle as a test for moral utterances.
The rest of Ayer’s emotivist analysis of morality thus falls apart.

Second, there is a problem with the emotivist view that ethical disagreements are fundamentally
disagreements in attitude. Specifically, this blurs an important distinction between having reasons for
changing attitudes and having causes that change our attitudes. Suppose again that you and I are
debating the abortion issue. Consider now two methods of resolving our dispute. Method 1 involves you
giving me a series of reasons in support of your position, and I eventually agree with you. Method 2
involves a surgeon operating on my brain in a way that alters my emotional attitude about the abortion
issue. Method 1 involves reasons behind my changed view, and Method 2 involves causes for my
changed view. The emotivist theory cannot easily distinguish between these two methods of attitude
change. One way or another, according to emotivism, changes in attitude will come only through some
kind of causal manipulation of our emotions. This is a problem because virtually everyone would
agree that there is a major difference between what is going on in Method 1 and Method 2, and it is
only the former that is a legitimate way of resolving moral disagreements.

Third, morality seems deeper than mere emotions or acting on feelings or attitudes. Moral judgments
are universalizable: If it is wrong for Jill to steal, then it is wrong for anyone relevantly similar to Jill to
steal. Emotivism reduces morality to isolated emotive expressions or attitudes that don’t apply
universally. It makes more sense to see morality as a function of applying principles such as “It is wrong
to steal,” which has a universal element.

Ayer’s version of emotivism is rather extreme, and it is no surprise that it creates so many problems. A
more moderate version of emotivism was later proposed by Charles Leslie Stevenson (1908–1979) in
his book Ethics and Language (1944).5 Stevenson agrees that moral utterances have an emotive
component that is noncognitive. However, he argues that moral utterances sometimes have cognitive
elements too. Moral utterances are so complex, Stevenson says, that we cannot give a specific pattern
that applies to all moral utterances all the time.

Nevertheless, a typical moral utterance like “Charity is good” might have these specific components:

■ Emotive expression (noncognitive): “Charity—hooray!”
■ Report about feelings (cognitive): “I approve of charity.”
■ Description of other qualities (cognitive): “Charity has qualities or relations X, Y, and Z” (for example, reduces suffering, reduces social inequality).

Stevenson’s suggestion is reasonable. If we are unhappy with Ayer’s extreme emotivism, we can still
accept that there is some noncognitive emotive element to moral utterances. Indeed, considering how
frequently emotion enters into our moral evaluations, such as the opening example from the Weblog,
we will want to recognize at least a more limited role of emotive expressions within moral discussions.

1. Examples
In Book I of Plato’s Republic, Cephalus defines ‘justice’ as speaking the truth and paying
one’s debts. Socrates quickly refutes this account by suggesting that it would be wrong to
repay certain debts—for example, to return a borrowed weapon to a friend who is not in his
right mind. Socrates’ point is not that repaying debts is without moral import; rather, he
wants to show that it is not always right to repay one’s debts, at least not exactly when the
one to whom the debt is owed demands repayment. What we have here is a conflict between
two moral norms: repaying one’s debts and protecting others from harm. And in this case,
Socrates maintains that protecting others from harm is the norm that takes priority.
Nearly twenty-four centuries later, Jean-Paul Sartre described a moral conflict the resolution
of which was, to many, less obvious than the resolution to the Platonic conflict. Sartre (1957)
tells of a student whose brother had been killed in the German offensive of 1940. The student
wanted to avenge his brother and to fight forces that he regarded as evil. But the student’s
mother was living with him, and he was her one consolation in life. The student believed that
he had conflicting obligations. Sartre describes him as being torn between two kinds of
morality: one of limited scope but certain efficacy, personal devotion to his mother; the other
of much wider scope but uncertain efficacy, attempting to contribute to the defeat of an
unjust aggressor.
While the examples from Plato and Sartre are the ones most commonly cited, there are many
others. Literature abounds with such cases. In Aeschylus’s Agamemnon, the protagonist
ought to save his daughter and ought to lead the Greek troops to Troy; he ought to do each
but he cannot do both. And Antigone, in Sophocles’s play of the same name, ought to
arrange for the burial of her brother, Polyneices, and ought to obey the pronouncements of
the city’s ruler, Creon; she can do each of these things, but not both. Areas of applied ethics,
such as biomedical ethics, business ethics, and legal ethics, are also replete with such cases.

2. The Concept of Moral Dilemmas


What is common to the two well-known cases is conflict. In each case, an agent regards
herself as having moral reasons to do each of two actions, but doing both actions is not
possible. Ethicists have called situations like these moral dilemmas. The crucial features of a
moral dilemma are these: the agent is required to do each of two (or more) actions; the agent
can do each of the actions; but the agent cannot do both (or all) of the actions. The agent thus
seems condemned to moral failure; no matter what she does, she will do something wrong
(or fail to do something that she ought to do).
The Platonic case strikes many as too easy to be characterized as a genuine moral dilemma.
For the agent’s solution in that case is clear; it is more important to protect people from harm
than to return a borrowed weapon. And in any case, the borrowed item can be returned later,
when the owner no longer poses a threat to others. Thus in this case we can say that the
requirement to protect others from serious harm overrides the requirement to repay one’s
debts by returning a borrowed item when its owner so demands. When one of the conflicting
requirements overrides the other, we have a conflict but not a genuine moral dilemma. So in
addition to the features mentioned above, in order to have a genuine moral dilemma it must
also be true that neither of the conflicting requirements is overridden (Sinnott-Armstrong
1988, Chapter 1).

3. Problems
It is less obvious in Sartre’s case that one of the requirements overrides the other. Why this is
so, however, may not be so obvious. Some will say that our uncertainty about what to do in
this case is simply the result of uncertainty about the consequences. If we were certain that
the student could make a difference in defeating the Germans, the obligation to join the
military would prevail. But if the student made little difference whatsoever in that cause, then
his obligation to tend to his mother’s needs would take precedence, since there he is virtually
certain to be helpful. Others, though, will say that these obligations are equally weighty, and
that uncertainty about the consequences is not at issue here.
Ethicists as diverse as Kant (1971/1797), Mill (1979/1861), and Ross (1930, 1939) have
assumed that an adequate moral theory should not allow for the possibility of genuine moral
dilemmas. Only recently—in the last sixty years or so—have philosophers begun to
challenge that assumption. And the challenge can take at least two different forms. Some will
argue that it is not possible to preclude genuine moral dilemmas. Others will argue that even
if it were possible, it is not desirable to do so.
To illustrate some of the debate that occurs regarding whether it is possible for any theory to
eliminate genuine moral dilemmas, consider the following. The conflicts in Plato’s case and
in Sartre’s case arose because there is more than one moral precept (using ‘precept’ to
designate rules and principles), more than one precept sometimes applies to the same
situation, and in some of these cases the precepts demand conflicting actions. One obvious
solution here would be to arrange the precepts, however many there might be, hierarchically.
By this scheme, the highest ordered precept always prevails, the second prevails unless it
conflicts with the first, and so on. There are at least two glaring problems with this obvious
solution, however. First, it just does not seem credible to hold that moral rules and principles
should be hierarchically ordered. While the requirements to keep one’s promises and to
prevent harm to others clearly can conflict, it is far from clear that one of these requirements
should always prevail over the other. In the Platonic case, the obligation to prevent harm is
clearly stronger. But there can easily be cases where the harm that can be prevented is
relatively mild and the promise that is to be kept is very important. And most other pairs of
precepts are like this. This was a point made by Ross in The Right and the Good (1930,
Chapter 2).
The second problem with this easy solution is deeper. Even if it were plausible to arrange
moral precepts hierarchically, situations can arise in which the same precept gives rise to
conflicting obligations. Perhaps the most widely discussed case of this sort is taken from
William Styron’s Sophie’s Choice (1980; see Greenspan 1983 and Tessman 2015, 160–163).
Sophie and her two children are at a Nazi concentration camp. A guard confronts Sophie and
tells her that one of her children will be allowed to live and one will be killed. But it is
Sophie who must decide which child will be killed. Sophie can prevent the death of either of
her children, but only by condemning the other to be killed. The guard makes the situation
even more excruciating by informing Sophie that if she chooses neither, then both will be
killed. With this added factor, Sophie has a morally compelling reason to choose one of her
children. But for each child, Sophie has an apparently equally strong reason to save him or
her. Thus the same moral precept gives rise to conflicting obligations. Some have called such
cases symmetrical (Sinnott-Armstrong 1988, Chapter 2).

4. Dilemmas and Consistency


We shall return to the issue of whether it is possible to preclude genuine moral dilemmas.
But what about the desirability of doing so? Why have ethicists thought that their theories
should preclude the possibility of dilemmas? At the intuitive level, the existence of moral
dilemmas suggests some sort of inconsistency. An agent caught in a genuine dilemma is
required to do each of two acts but cannot do both. And since he cannot do both, not doing
one is a condition of doing the other. Thus, it seems that the same act is both required and
forbidden. But exposing a logical inconsistency takes some work; for initial inspection
reveals that the inconsistency intuitively felt is not present. Allowing OA to designate that
the agent in question ought to do A (or is morally obligated to do A, or is morally required
to do A), that OA and OB are both true is not itself inconsistent, even if one adds
that it is not possible for the agent to do both A and B. And even if the situation is
appropriately described as OA and O¬A, that is not a contradiction; the contradictory
of OA is ¬OA. (See Marcus 1980 and McConnell 1978, 273.)
Similarly rules that generate moral dilemmas are not inconsistent, at least on the usual
understanding of that term. Ruth Marcus suggests plausibly that we “define a set of rules as
consistent if there is some possible world in which they are all obeyable in all circumstances
in that world.” Thus, “rules are consistent if there are possible circumstances in which no
conflict will emerge,” and “a set of rules is inconsistent if there are no circumstances, no
possible world, in which all the rules are satisfiable” (Marcus 1980, 128 and 129). Kant,
Mill, and Ross were likely aware that a dilemma-generating theory need not be inconsistent.
Even so, they would be disturbed if their own theories allowed for such predicaments. If this
speculation is correct, it suggests that Kant, Mill, Ross, and others thought that there is an
important theoretical feature that dilemma-generating theories lack. And this is
understandable. It is certainly no comfort to an agent facing a reputed moral dilemma to be
told that at least the rules which generate this predicament are consistent because there is a
possible world in which they do not conflict. For a good practical example, consider the
situation of the criminal defense attorney. She is said to have an obligation to hold in
confidence the disclosures made by a client and to be required to conduct herself with candor
before the court (where the latter requires that the attorney inform the court when her client
commits perjury) (Freedman 1975, Chapter 3). It is clear that in this world these two
obligations often conflict. It is equally clear that in some possible world—for example, one
in which clients do not commit perjury—that both obligations can be satisfied. Knowing this
is of no assistance to defense attorneys who face a conflict between these two requirements
in this world.
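Marcus’s definition lends itself to a toy formalization. The sketch below is illustrative only: the two-boolean “worlds” and the encodings of the attorney’s confidentiality and candor rules are hypothetical choices of mine, not anything from Marcus or Freedman. It shows how a rule set can be Marcus-consistent (jointly obeyable in some possible world) while still generating conflicts in others.

```python
from itertools import product

# A possible world here is a pair of booleans:
# (client_commits_perjury, attorney_discloses_to_court).
worlds = list(product([False, True], repeat=2))

# Rule 1: hold client disclosures in confidence (never disclose).
confidentiality = lambda w: not w[1]
# Rule 2: candor before the court (disclose whenever the client commits perjury).
candor = lambda w: w[1] if w[0] else True

rules = [confidentiality, candor]

# Marcus-consistency: some possible world satisfies every rule.
consistent = any(all(r(w) for r in rules) for w in worlds)

# Conflict-prone: some world (one where the client commits perjury)
# makes the rules jointly unsatisfiable.
conflicting = any(not all(r(w) for r in rules) for w in worlds)

print(consistent, conflicting)  # True True
```

The world in which the client never commits perjury satisfies both rules at once, so the set is consistent in Marcus’s sense; the perjury world shows why that is no comfort to the attorney.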
Ethicists who are concerned that their theories not allow for moral dilemmas have more than
consistency in mind. What is troubling is that theories that allow for dilemmas fail to
be uniquely action-guiding. A theory can fail to be uniquely action-guiding in either of two
ways: by recommending incompatible actions in a situation or by not recommending any
action at all. Theories that generate genuine moral dilemmas fail to be uniquely action-
guiding in the former way. Theories that have no way, even in principle, of determining what
an agent should do in a particular situation have what Thomas E. Hill, Jr. calls “gaps” (Hill
1996, 179–183); they fail to be action-guiding in the latter way. Since one of the main points
of moral theories is to provide agents with guidance, that suggests that it is desirable for
theories to eliminate dilemmas and gaps, at least if doing so is possible.
But failing to be uniquely action-guiding is not the only reason that the existence of moral
dilemmas is thought to be troublesome. Just as important, the existence of dilemmas does
lead to inconsistencies if certain other widely held theses are true. Here we shall consider two
different arguments, each of which shows that one cannot consistently acknowledge the
reality of moral dilemmas while holding selected (and seemingly plausible) principles.
The first argument shows that two standard principles of deontic logic are, when conjoined,
incompatible with the existence of moral dilemmas. The first of these is the principle of
deontic consistency
(PC) OA → ¬O¬A
Intuitively this principle just says that the same action cannot be both obligatory and
forbidden. Note that as initially described, the existence of dilemmas does not conflict with
PC. For as described, dilemmas involve a situation in which an agent ought to do A, ought
to do B, but cannot do both A and B. But if we add a principle of deontic logic, then we
obtain a conflict with PC:
(PD) □(A → B) → (OA → OB)
Intuitively, PD just says that if doing A brings about B, and if A is obligatory (morally
required), then B is obligatory (morally required). The first argument that generates
inconsistency can now be stated. Premises (1), (2), and (3) represent the claim that moral
dilemmas exist.
1. OA
2. OB
3. ¬C(A & B) [where ‘¬C’ means ‘cannot’]
4. □(A → B) → (OA → OB) [where ‘□’ means physical necessity]
5. □¬(B & A) (from 3)
6. □(B → ¬A) (from 5)
7. □(B → ¬A) → (OB → O¬A) (an instantiation of 4)
8. OB → O¬A (from 6 and 7)
9. O¬A (from 2 and 8)
10. OA and O¬A (from 1 and 9)
Line (10) directly conflicts with PC. And from PC and (1), we can conclude:
11. ¬O¬A
And, of course, (9) and (11) are contradictory. So if we assume PC and PD, then the
existence of dilemmas generates an inconsistency of the old-fashioned logical sort. (Note: In
standard deontic logic, the ‘□’ in PD typically designates logical necessity. Here I take it to
indicate physical necessity so that the appropriate connection with premise (3) can be made.
And I take it that logical necessity is stronger than physical necessity.)
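The derivation above can be checked mechanically. The sketch below is a hypothetical propositional encoding of my own: premise (3), the ‘cannot’, is modeled by restricting the set of feasible worlds, ‘□’ is truth in all feasible worlds, and PD is applied exactly as stated. None of the names come from the literature.

```python
from itertools import product

A, B = 0, 1  # indices for the two actions, treated as propositions

# Premise 3 (¬C(A & B)): no feasible world has both A and B done.
worlds = [w for w in product([False, True], repeat=2) if not (w[A] and w[B])]

def box_implies(p, q):
    """□(p → q): q is true in every feasible world where p is true."""
    return all(q(w) for w in worlds if p(w))

def apply_PD(obligations, p, q):
    """PD: if □(p → q) and p is obligatory, then q becomes obligatory."""
    if box_implies(p, q) and p in obligations:
        obligations.append(q)

do_A = lambda w: w[A]
do_B = lambda w: w[B]
not_A = lambda w: not w[A]

obligations = [do_A, do_B]          # premises 1 and 2: OA and OB
apply_PD(obligations, do_B, not_A)  # steps 5–9: derive O¬A

# Step 10 vs. PC (OA → ¬O¬A): both A and ¬A are now obligatory.
pc_violated = do_A in obligations and not_A in obligations
print(pc_violated)  # True
```

Because every feasible world verifies B → ¬A, PD promotes O¬A from OB, and the obligation set then contains both OA and O¬A, exactly the conflict with PC that line (10) records.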
Two other principles accepted in most systems of deontic logic entail PC. So if PD holds,
then one of these additional two principles must be jettisoned too. The first says that if an
action is obligatory, it is also permissible. The second says that an action is permissible if and
only if it is not forbidden. These principles may be stated as:
(OP) OA → PA
and
(D) PA ↔ ¬O¬A
Principles OP and D are basic; they seem to be conceptual truths (Brink 1994, section IV).
The second argument that generates inconsistency, like the first, has as its first three
premises a symbolic representation of a moral dilemma.
1. OA
2. OB
3. ¬C(A & B)
And like the first, this second argument shows that the existence of dilemmas leads to a
contradiction if we assume two other commonly accepted principles. The first of these
principles is that ‘ought’ implies ‘can’. Intuitively this says that if an agent is morally
required to do an action, it must be possible for the agent to do it. This principle seems
necessary if moral judgments are to be uniquely action-guiding. We may represent this as
4. OA → CA (for all A)
The other principle, endorsed by most systems of deontic logic, says that if an agent is
required to do each of two actions, she is required to do both. We may represent this as
5. (OA & OB) → O(A & B) (for all A and all B)
The argument then proceeds:
6. O(A & B) → C(A & B) (an instance of 4)
7. OA & OB (from 1 and 2)
8. O(A & B) (from 5 and 7)
9. ¬O(A & B) (from 3 and 6)
So if one assumes that ‘ought’ implies ‘can’ and if one assumes the principle represented in
(5)—dubbed by some the agglomeration principle (Williams 1965)—then again a
contradiction can be derived.
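The second argument can be checked in the same toy setting as the first. Again this is a hypothetical encoding of my own: ‘can’ is truth in at least one feasible world, and agglomeration is applied as stated in (5).

```python
from itertools import product

A, B = 0, 1  # indices for the two actions, treated as propositions

# Premise 3: the agent cannot do both, so no feasible world has A and B.
worlds = [w for w in product([False, True], repeat=2) if not (w[A] and w[B])]

def can(p):
    """C(p): p is doable, i.e. true in at least one feasible world."""
    return any(p(w) for w in worlds)

do_A = lambda w: w[A]
do_B = lambda w: w[B]
both = lambda w: w[A] and w[B]

obligations = [do_A, do_B]  # premises 1 and 2: OA and OB
obligations.append(both)    # step 8: agglomeration yields O(A & B)

# Steps 6 and 9: 'ought' implies 'can', yet C(A & B) is false, so ¬O(A & B).
contradiction = both in obligations and not can(both)
print(contradiction)  # True: O(A & B) and ¬O(A & B) together
```

Agglomeration puts O(A & B) into the obligation set, while the emptied-out feasible worlds make C(A & B) false; with ‘ought’ implies ‘can’, that yields ¬O(A & B), reproducing the clash between lines (8) and (9).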

5. Responses to the Arguments


Now obviously the inconsistency in the first argument can be avoided if one denies either PC
or PD. And the inconsistency in the second argument can be averted if one gives up either
the principle that ‘ought’ implies ‘can’ or the agglomeration principle. There is, of course,
another way to avoid these inconsistencies: deny the possibility of genuine moral dilemmas.
It is fair to say that much of the debate concerning moral dilemmas in the last sixty years has
been about how to avoid the inconsistencies generated by the two arguments above.
Opponents of moral dilemmas have generally held that the crucial principles in the two
arguments above are conceptually true, and therefore we must deny the possibility of genuine
dilemmas. (See, for example, Conee 1982 and Zimmerman 1996.) Most of the debate, from
all sides, has focused on the second argument. There is an oddity about this, however. When
one examines the pertinent principles in each argument which, in combination with
dilemmas, generates an inconsistency, there is little doubt that those in the first argument
have a greater claim to being conceptually true than those in the second. (One who
recognizes the salience of the first argument is Brink 1994, section V.) Perhaps the focus on
the second argument is due to the impact of Bernard Williams’s influential essay (Williams
1965). But notice that the first argument shows that if there are genuine dilemmas, then
either PC or PD must be relinquished. Even most supporters of dilemmas acknowledge that
PC is quite basic. E.J. Lemmon, for example, notes that if PC does not hold in a system of
deontic logic, then all that remains are truisms and paradoxes (Lemmon 1965, p. 51). And
giving up PC also requires denying either OP or D, each of which also seems basic. There
has been much debate about PD—in particular, questions generated by the Good Samaritan
paradox—but still it seems basic. So those who want to argue against dilemmas purely on
conceptual grounds are better off focusing on the first of the two arguments above.
Some opponents of dilemmas also hold that the pertinent principles in the second
argument—the principle that ‘ought’ implies ‘can’ and the agglomeration principle—are
conceptually true. But foes of dilemmas need not say this. Even if they believe that a
conceptual argument against dilemmas can be made by appealing to PC and PD, they have
several options regarding the second argument. They may defend ‘ought’ implies ‘can’, but
hold that it is a substantive normative principle, not a conceptual truth. Or they may even
deny the truth of ‘ought’ implies ‘can’ or the agglomeration principle, though not because of
moral dilemmas, of course.
Defenders of dilemmas need not deny all of the pertinent principles. If one thinks that each
of the principles at least has some initial plausibility, then one will be inclined to retain as
many as possible. Among the earlier contributors to this debate, some took the existence of
dilemmas as a counterexample to ‘ought’ implies ‘can’ (for example, Lemmon 1962 and
Trigg 1971); others, as a refutation of the agglomeration principle (for example, Williams
1965 and van Fraassen 1973). A common response to the first argument is to deny PD. A
more complicated response is to grant that the crucial deontic principles hold, but only in
ideal worlds. In the real world, they have heuristic value, bidding agents in conflict cases to
look for permissible options, though none may exist (Holbo 2002, especially sections 15–
17).
Friends and foes of dilemmas have a burden to bear in responding to the two arguments
above. For there is at least a prima facie plausibility to the claim that there are moral
dilemmas and to the claim that the relevant principles in the two arguments are true. Thus
each side must at least give reasons for denying the pertinent claims in question. Opponents
of dilemmas must say something in response to the positive arguments that are given for the
reality of such conflicts. One reason in support of dilemmas, as noted above, is simply
pointing to examples. The case of Sartre’s student and that from Sophie’s Choice are good
ones; and clearly these can be multiplied indefinitely. It will be tempting for supporters of
dilemmas to say to opponents, “If this is not a real dilemma, then tell me what the agent
ought to do and why?” It is obvious, however, that attempting to answer such questions is
fruitless, and for at least two reasons. First, any answer given to the question is likely to be
controversial, certainly not always convincing. And second, this is a game that will never
end; example after example can be produced. The more appropriate response on the part of
foes of dilemmas is to deny that they need to answer the question. Examples as such cannot
establish the reality of dilemmas. Surely most will acknowledge that there are situations in
which an agent does not know what he ought to do. This may be because of factual
uncertainty, uncertainty about the consequences, uncertainty about what principles apply, or
a host of other things. So for any given case, the mere fact that one does not know which of
two (or more) conflicting obligations prevails does not show that none does.
Another reason in support of dilemmas to which opponents must respond is the point about
symmetry. As the cases from Plato and Sartre show, moral rules can conflict. But opponents
of dilemmas can argue that in such cases one rule overrides the other. Most will grant this in
the Platonic case, and opponents of dilemmas will try to extend this point to all cases. But the
hardest case for opponents is the symmetrical one, where the same precept generates the
conflicting requirements. The case from Sophie’s Choice is of this sort. It makes no sense to
say that a rule or principle overrides itself. So what do opponents of dilemmas say here?
They are apt to argue that the pertinent, all-things-considered requirement in such a case is
disjunctive: Sophie should act to save one or the other of her children, since that is the best
that she can do (for example, Zimmerman 1996, Chapter 7). Such a move need not be ad
hoc, since in many cases it is quite natural. If an agent can afford to make a meaningful
contribution to only one charity, the fact that there are several worthwhile candidates does
not prompt many to say that the agent will fail morally no matter what he does. Nearly all of
us think that he should give to one or the other of the worthy candidates. Similarly, if two
people are drowning and an agent is situated so that she can save either of the two but only
one, few say that she is doing wrong no matter which person she saves. Positing a disjunctive
requirement in these cases seems perfectly natural, and so such a move is available to
opponents of dilemmas as a response to symmetrical cases.
Supporters of dilemmas have a burden to bear too. They need to cast doubt on the adequacy
of the pertinent principles in the two arguments that generate inconsistencies. And most
importantly, they need to provide independent reasons for doubting whichever of the
principles they reject. If they have no reason other than cases of putative dilemmas for
denying the principles in question, then we have a mere standoff. Of the principles in
question, the most commonly questioned on independent grounds are the principle that
‘ought’ implies ‘can’ and PD. Among supporters of dilemmas, Walter Sinnott-Armstrong
(Sinnott-Armstrong 1988, Chapters 4 and 5) has gone to the greatest lengths to provide
independent reasons for questioning some of the relevant principles.

6. Moral Residue and Dilemmas


One well-known argument for the reality of moral dilemmas has not been discussed yet. This
argument might be called “phenomenological.” It appeals to the emotions that agents facing
conflicts experience and our assessment of those emotions.
Return to the case of Sartre’s student. Suppose that he joins the Free French forces. It is
likely that he will experience remorse or guilt for having abandoned his mother. And not
only will he experience these emotions, this moral residue, but it is appropriate that he does.
Yet, had he stayed with his mother and not joined the Free French forces, he also would have
appropriately experienced remorse or guilt. But either remorse or guilt is appropriate only if
the agent properly believes that he has done something wrong (or failed to do something that
he was all-things-considered required to do). Since no matter what the agent does he will
appropriately experience remorse or guilt, then no matter what he does he will have done
something wrong. Thus, the agent faces a genuine moral dilemma. (The best known
proponents of arguments for dilemmas that appeal to moral residue are Williams 1965 and
Marcus 1980; for a more recent contribution, see Tessman 2015, especially Chapter 2.)
Many cases of moral conflict are similar to Sartre’s example with regard to the agent’s
reaction after acting. Certainly the case from Sophie’s Choice fits here. No matter which of
her children Sophie saves, she will experience enormous guilt for the consequences of that
choice. Indeed, if Sophie did not experience such guilt, we would think that there was
something morally wrong with her. In these cases, proponents of the argument (for
dilemmas) from moral residue must claim that four things are true: (1) when the agent acts,
she experiences remorse or guilt; (2) that she experiences these emotions is appropriate and
called for; (3) had the agent acted on the other of the conflicting requirements, she would
also have experienced remorse or guilt; and (4) in the latter case these emotions would have
been equally appropriate and called for (McConnell 1996, pp. 37–38). In these situations,
then, remorse or guilt will be appropriate no matter what the agent does and these emotions
are appropriate only when the agent has done something wrong. Therefore, these situations
are genuinely dilemmatic and moral failure is inevitable for agents who face them.
There is much to say about the moral emotions and situations of moral conflict; the positions
are varied and intricate. Without pretending to resolve all of the issues here, it will be pointed
out that opponents of dilemmas have raised two different objections to the argument from
moral residue. The first objection, in effect, suggests that the argument is question-begging
(McConnell 1978 and Conee 1982); the second objection challenges the assumption that
remorse and guilt are appropriate only when the agent has done wrong.
To explain the first objection, note that it is uncontroversial that some bad feeling or other is
called for when an agent is in a situation like that of Sartre’s student or Sophie. But the
negative moral emotions are not limited to remorse and guilt. Among these other emotions,
consider regret. An agent can appropriately experience regret even when she does not believe
that she has done something wrong. For example, a parent may appropriately regret that she
must punish her child even though she correctly believes that the punishment is deserved.
Her regret is appropriate because a bad state of affairs is brought into existence (say, the
child’s discomfort), even when bringing this state of affairs into existence is morally
required. Regret can even be appropriate when a person has no causal connection at all with
the bad state of affairs. It is appropriate for me to regret the damage that a recent fire has
caused to my neighbor’s house, the pain that severe birth defects cause in infants, and the
suffering that a starving animal experiences in the wilderness. Not only is it appropriate that I
experience regret in these cases, but I would probably be regarded as morally lacking if I did
not. (For accounts of moral remainders as they relate specifically to Kantianism and virtue
ethics, see, respectively, Hill 1996, 183–187 and Hursthouse 1999, 44–48 and 68–77.)
With remorse or guilt, at least two components are present: the experiential component,
namely, the negative feeling that the agent has; and the cognitive component, namely, the
belief that the agent has done something wrong and takes responsibility for it. Although this
same cognitive component is not part of regret, the negative feeling is. And the experiential
component alone cannot serve as a gauge to distinguish regret from remorse, for regret can
range from mild to intense, and so can remorse. In part, what distinguishes the two is the
cognitive component. But now when we examine the case of an alleged dilemma, such as
that of Sartre’s student, it is question-begging to assert that it is appropriate for him to
experience remorse no matter what he does. No doubt, it is appropriate for him to
experience some negative feeling. To say, however, that it is remorse that is called for is to
assume that the agent appropriately believes that he has done something wrong. Since regret
is warranted even in the absence of such a belief, to assume that remorse is appropriate is
to assume, not argue, that the agent’s situation is genuinely dilemmatic. Opponents of
dilemmas can say that one of the requirements overrides the other, or that the agent faces a
disjunctive requirement, and that regret is appropriate because even when he does what he
ought to do, some bad will ensue. Either side, then, can account for the appropriateness of
some negative moral emotion. To get more specific, however, requires more than is
warranted by the present argument. This appeal to moral residue, then, does not by itself
establish the reality of moral dilemmas.
Matters are even more complicated, though, as the second objection to the argument from
moral residue shows. The residues contemplated by proponents of the argument are diverse,
ranging from guilt or remorse to a belief that the agent ought to apologize or compensate
persons who were negatively impacted by the fact that he did not satisfy one of the
conflicting obligations. The argument assumes that experiencing remorse or guilt or
believing that one ought to apologize or compensate another are appropriate responses only
if the agent believes that he has done something wrong. But this assumption is debatable, for
multiple reasons.
First, even when one obligation clearly overrides another in a conflict case, it is often
appropriate to apologize to or to explain oneself to any disadvantaged parties. Ross provides
such a case (1930, 28): one who breaks a relatively trivial promise in order to assist someone
in need should in some way make it up to the promisee. Even though the agent did no wrong,
the additional actions promote important moral values (McConnell 1996, 42–44).
Second, as Simon Blackburn argues, compensation or its like may be called for even when
there was no moral conflict at all (Blackburn 1996, 135–136). If a coach rightly selected
Agnes for the team rather than Belinda, she still is likely to talk to Belinda, encourage her
efforts, and offer tips for improving. This kind of “making up” is just basic decency.
Third, the consequences of what one has done may be so horrible as to make guilt inevitable.
Consider the case of a middle-aged man, Bill, and a seven-year-old boy, Johnny. It is set in a
midwestern village on a snowy December day. Johnny and several of his friends are riding
their sleds down a narrow, seldom used street, one that intersects with a busier, although still
not heavily traveled, street. Johnny, in his enthusiasm for sledding, is not being very careful.
During his final ride he skidded under an automobile passing through the intersection and
was killed instantly. The car was driven by Bill. Bill was driving safely, had the right of way,
and was not exceeding the speed limit. Moreover, given the physical arrangement, it would
have been impossible for Bill to have seen Johnny coming. Bill was not at fault, legally or
morally, for Johnny’s death. Yet Bill experienced what can best be described as remorse or
guilt about his role in this horrible event (McConnell 1996, 39).
At one level, Bill’s feelings of remorse or guilt are not warranted. Bill did nothing wrong.
Certainly Bill does not deserve to feel guilt (Dahl 1996, 95–96). A friend might even
recommend that Bill seek therapy. But this is not all there is to say. Most of us understand
Bill’s response. From Bill’s point of view, the response is not inappropriate, not irrational,
not uncalled-for. To see this, imagine that Bill had had a very different response. Suppose
that Bill had said, “I regret Johnny’s death. It is a terrible thing. But it certainly was not my
fault. I have nothing to feel guilty about and I don’t owe his parents any apologies.” Even if
Bill is correct intellectually, it is hard to imagine someone being able to achieve this sort of
objectivity about his own behavior. When human beings have caused great harm, it is natural
for them to wonder if they are at fault, even if to outsiders it is obvious that they bear no
moral responsibility for the damage. Human beings are not so finely tuned emotionally that
when they have been causally responsible for harm, they can easily turn guilt on or off
depending on their degree of moral responsibility. (See Zimmerman 1988, 134–135.)
Work in moral psychology can help to explain why self-directed moral emotions like guilt or
remorse are natural when an agent has acted contrary to a moral norm, whether justifiably or
not. Many moral psychologists describe dual processes in humans for arriving at moral
judgments (see, for example, Greene 2013, especially Chapters 4–5, and Haidt 2012,
especially Chapter 2). Moral emotions are automatic, the brain’s immediate response to a
situation. Reason is more like the brain’s manual mode, employed when automatic settings
are insufficient, such as when norms conflict. Moral emotions are likely the product of
evolution, reinforcing conduct that promotes social harmony and disapproving actions that
thwart that end. If this is correct, then negative moral emotions are apt to be experienced, to
some extent, any time an agent’s actions are contrary to what is normally a moral
requirement.
So both supporters and opponents of moral dilemmas can give an account of why agents who
face moral conflicts appropriately experience negative moral emotions. But there is a
complex array of issues concerning the relationship between ethical conflicts and moral
emotions, and only book-length discussions can do them justice. (See Greenspan 1995 and
Tessman 2015.)

7. Types of Moral Dilemmas


In the literature on moral dilemmas, it is common to draw distinctions among various types
of dilemmas. Only some of these distinctions will be mentioned here. It is worth noting that
both supporters and opponents of dilemmas tend to draw some, if not all, of these
distinctions. And in most cases the motivation for doing so is clear. Supporters of dilemmas
may draw a distinction between dilemmas of type V and type W. The upshot is typically a
message to opponents of dilemmas: “You think that all moral conflicts are resolvable. And
that is understandable, because conflicts of type V are resolvable. But conflicts of
type W are not resolvable. Thus, contrary to your view, there are some genuine moral
dilemmas.” By the same token, opponents of dilemmas may draw a distinction between
dilemmas of type X and type Y. And their message to supporters of dilemmas is this: “You
think that there are genuine moral dilemmas, and given certain facts, it is understandable why
this appears to be the case. But if you draw a distinction between conflicts of
types X and Y, you can see that appearances can be explained by the existence of
type X alone, and type X conflicts are not genuine dilemmas.” With this in mind, let us
note a few of the distinctions.
One distinction is between epistemic conflicts and ontological conflicts. (For different
terminology, see Blackburn 1996, 127–128.) The former involve conflicts between two (or
more) moral requirements and the agent does not know which of the conflicting requirements
takes precedence in her situation. Everyone concedes that there can be situations where one
requirement does take priority over the other with which it conflicts, though at the time
action is called for it is difficult for the agent to tell which requirement prevails. The latter
are conflicts between two (or more) moral requirements, and neither is overridden. This is
not simply because the agent does not know which requirement is stronger; neither is.
Genuine moral dilemmas, if there are any, are ontological. Both opponents and supporters of
dilemmas acknowledge that there are epistemic conflicts.
There can be genuine moral dilemmas only if neither of the conflicting requirements is
overridden. Ross (1930, Chapter 2) held that all moral precepts can be overridden in
particular circumstances. This provides an inviting framework for opponents of dilemmas to
adopt. But if some moral requirements cannot be overridden—if they hold absolutely—then
it will be easier for supporters of dilemmas to make their case. Lisa Tessman has
distinguished between negotiable and non-negotiable moral requirements (Tessman 2015,
especially Chapters 1 and 3). The former, if not satisfied, can be adequately compensated or
counterbalanced by some other good. Non-negotiable moral requirements, however, if
violated produce a cost that no one should have to bear; such a violation cannot be
counterbalanced by any benefits. If non-negotiable moral requirements can conflict—and
Tessman argues that they can—then those situations will be genuine dilemmas and agents
facing them will inevitably fail morally. It might seem that if there is more than one moral
precept that holds absolutely, then moral dilemmas must be possible. Alan Donagan,
however, argues against this. He maintains that moral rules hold absolutely, and apparent
exceptions are accounted for because tacit conditions are built into each moral rule
(Donagan 1977, Chapters 3 and 6, especially 92–93). So even if some moral requirements
cannot be overridden, the existence of dilemmas may still be an open question.
Another distinction is between self-imposed moral dilemmas and dilemmas imposed on an
agent by the world, as it were. Conflicts of the former sort arise because of the agent’s own
wrongdoing (Aquinas; Donagan 1977, 1984; and McConnell 1978). If an agent made two
promises that he knew conflicted, then through his own actions he created a situation in
which it is not possible for him to discharge both of his requirements. Dilemmas imposed on
the agent by the world, by contrast, do not arise because of the agent’s wrongdoing. The case
of Sartre’s student is an example, as is the case from Sophie’s Choice. For supporters of
dilemmas, this distinction is not all that important. But among opponents of dilemmas, there
is a disagreement about whether the distinction is important. Some of these opponents hold
that self-imposed dilemmas are possible, but that their existence does not point to any deep
flaws in moral theory (Donagan 1977, Chapter 5). Moral theory tells agents how they ought
to behave; but if agents violate moral norms, of course things can go askew. Other opponents
deny that even self-imposed dilemmas are possible. They argue that an adequate moral
theory should tell agents what they ought to do in their current circumstances, regardless of
how those circumstances arose. As Hill puts it, “[M]orality acknowledges that human beings
are imperfect and often guilty, but it calls upon each at every new moment of moral
deliberation to decide conscientiously and to act rightly from that point on” (Hill 1996, 176).
Given the prevalence of wrongdoing, if a moral theory did not issue uniquely action-guiding
“contrary-to-duty imperatives,” its practical import would be limited.
Yet another distinction is between obligation dilemmas and prohibition dilemmas. The
former are situations in which more than one feasible action is obligatory. The latter involve
cases in which all feasible actions are forbidden. Some (especially Vallentyne 1987 and
1989) argue that plausible principles of deontic logic may well render obligation dilemmas
impossible; but they do not preclude the possibility of prohibition dilemmas. The case of
Sartre’s student, if genuinely dilemmatic, is an obligation dilemma; Sophie’s case is a
prohibition dilemma. There is another reason that friends of dilemmas emphasize this
distinction. Some think that the “disjunctive solution” used by opponents of dilemmas—
when equally strong precepts conflict, the agent is required to act on one or the other—is
more plausible when applied to obligation dilemmas than when applied to prohibition
dilemmas.
As moral dilemmas are typically described, they involve a single agent. The agent ought, all
things considered, to do A, ought, all things considered, to do B, and she cannot do
both A and B. But we can distinguish multi-person dilemmas from single agent ones. The
two-person case is representative of multi-person dilemmas. The situation is such that one
agent, P1, ought to do A, a second agent, P2, ought to do B, and though each agent can do
what he ought to do, it is not possible both for P1 to do A and P2 to do B. (See Marcus
1980, 122 and McConnell 1988.) Multi-person dilemmas have been called “interpersonal
moral conflicts.” Such conflicts are most theoretically worrisome if the same moral system
(or theory) generates the conflicting obligations for P1 and P2. A theory that precludes
single-agent moral dilemmas remains uniquely action-guiding for each agent. But if that
same theory does not preclude the possibility of interpersonal moral conflicts, not all agents
will be able to succeed in discharging their obligations, no matter how well-motivated or
how hard they try. For supporters of moral dilemmas, this distinction is not all that important.
They no doubt welcome (theoretically) more types of dilemmas, since that may make their
case more persuasive. But if they establish the reality of single-agent dilemmas, in one sense
their work is done. For opponents of dilemmas, however, the distinction may be important.
This is because at least some opponents believe that the conceptual argument against
dilemmas applies principally to single-agent cases. It does so because the ought-to-do
operator of deontic logic and the accompanying principles are properly understood to apply
to entities who can make decisions. To be clear, this position does not preclude that
collectives (such as businesses or nations) can have obligations. But a necessary condition
for this being the case is that there is (or should be) a central deliberative standpoint from
which decisions are made. This condition is not satisfied when two otherwise unrelated
agents happen to have obligations both of which cannot be discharged. Put simply, while an
individual act involving one agent can be the object of choice, a compound act involving
multiple agents is difficult so to conceive. (See Smith 1986 and Thomason 1981.) Erin
Taylor (2011) has recently argued that neither universalizability nor the principle that ‘ought’
implies ‘can’ ensures that there will be no interpersonal moral conflicts (what she calls
“irreconcilable differences”). These conflicts would raise no difficulties if morality required
trying rather than acting, but such a view is not plausible. Still, moral theories should
minimize cases of interpersonal conflict (Taylor 2011, pp. 189–190). To the extent that the
possibility of interpersonal moral conflicts raises an intramural dispute among opponents of
dilemmas, that dispute concerns how to understand the principles of deontic logic and what
can reasonably be demanded of moral theories.

8. Multiple Moralities
Another issue raised by the topic of moral dilemmas is the relationship among various parts
of morality. Consider this distinction. General obligations are moral requirements that
individuals have simply because they are moral agents. That agents are required not to kill,
not to steal, and not to assault are examples of general obligations. Agency alone makes
these precepts applicable to individuals. By contrast, role-related obligations are moral
requirements that agents have in virtue of their role, occupation, or position in society. That
lifeguards are required to save swimmers in distress is a role-related obligation. Another
example, mentioned earlier, is the obligation of a defense attorney to hold in confidence the
disclosures made by a client. These categories need not be exclusive. It is likely that anyone
who is in a position to do so ought to save a drowning person. And if a person has
particularly sensitive information about another, she should probably not reveal it to third
parties regardless of how the information was obtained. But lifeguards have obligations to
help swimmers in distress when most others do not because of their abilities and contractual
commitments. And lawyers have special obligations of confidentiality to their clients because
of implicit promises and the need to maintain trust.
General obligations and role-related obligations can, and sometimes do, conflict. If a defense
attorney knows the whereabouts of a deceased body, she may have a general obligation to
reveal this information to family members of the deceased. But if she obtained this
information from her client, the role-related obligation of confidentiality prohibits her from
sharing it with others. Supporters of dilemmas may regard conflicts of this sort as just
another confirmation of their thesis. Opponents of dilemmas will have to hold that one of the
conflicting obligations takes priority. The latter task could be discharged if it were shown
that one of these two types of obligations always prevails over the other. But such a claim is
implausible; for it seems that in some cases of conflict general obligations are stronger, while
in other cases role-related duties take priority. The case seems to be made even better for
supporters of dilemmas, and worse for opponents, when we consider that the same agent can
occupy multiple roles that create conflicting requirements. The physician, Harvey Kelekian,
in Margaret Edson’s (1999/1993) Pulitzer Prize winning play, Wit, is an oncologist, a
medical researcher, and a teacher of residents. The obligations generated by those roles lead
Dr. Kelekian to treat his patient, Vivian Bearing, in ways that seem morally questionable
(McConnell 2009). At first blush, anyway, it does not seem possible for Kelekian to
discharge all of the obligations associated with these various roles.
In the context of issues raised by the possibility of moral dilemmas, the role most frequently
discussed is that of the political actor. Michael Walzer (1973) claims that the political ruler,
qua political ruler, ought to do what is best for the state; that is his principal role-related
obligation. But he also ought to abide by the general obligations incumbent on all.
Sometimes the political actor’s role-related obligations require him to do evil—that is, to
violate some general obligations. Among the examples given by Walzer are making a deal
with a dishonest ward boss (necessary to get elected so that he can do good) and authorizing
the torture of a person in order to uncover a plot to bomb a public building. Since each of
these requirements is binding, Walzer believes that the politician faces a genuine moral
dilemma, though, strangely, he also thinks that the politician should choose the good of the
community rather than abide by the general moral norms. (The issue here is whether
supporters of dilemmas can meaningfully talk about action-guidance in genuinely dilemmatic
situations. For one who answers this in the affirmative, see Tessman 2015, especially
Chapter 5.) Such a situation is sometimes called “the dirty hands problem.” The expression,
“dirty hands,” is taken from the title of a play by Sartre (1946). The idea is that no one can
rule without becoming morally tainted. The role itself is fraught with moral dilemmas. This
topic has received much attention recently. John Parrish (2007) has provided a detailed
history of how philosophers from Plato to Adam Smith have dealt with the issue. And C.A.J.
Coady (2008) has suggested that this reveals a “messy morality.”
For opponents of moral dilemmas, the problem of dirty hands represents both a challenge
and an opportunity. The challenge is to show how conflicts between general obligations and
role-related obligations, and those among the various role-related obligations, can be
resolved in a principled way. The opportunity for theories that purport to have the resources
to eliminate dilemmas—such as Kantianism, utilitarianism, and intuitionism—is to show
how the many moralities under which people are governed are related.

9. Conclusion
Debates about moral dilemmas have been extensive during the last six decades. These
debates go to the heart of moral theory. Both supporters and opponents of moral dilemmas
have major burdens to bear. Opponents of dilemmas must show why appearances are
deceiving. Why are examples of apparent dilemmas misleading? Why are certain moral
emotions appropriate if the agent has done no wrong? Supporters must show why several of
many apparently plausible principles should be given up—principles such as PC, PD, OP, D,
‘ought’ implies ‘can’, and the agglomeration principle. And each side must provide a general
account of obligations, explaining whether none, some, or all can be overridden in particular
circumstances. Much progress has been made, but the debate is apt to continue.
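For readers who want the tension among these principles made explicit, the standard conceptual argument against dilemmas can be sketched as a short derivation. The formalization below is supplied here for illustration and follows the usual presentation rather than any one author's notation: O abbreviates "the agent ought to do" and ◇ abbreviates "the agent can do."

```latex
% A genuine dilemma supplies premises 1-3; the principles named above
% then generate a contradiction, so something must be given up.
\begin{align*}
&1.~ OA && \text{the agent ought to do } A\\
&2.~ OB && \text{the agent ought to do } B\\
&3.~ \neg\Diamond(A \wedge B) && \text{the agent cannot do both}\\
&4.~ O(A \wedge B) && \text{from 1 and 2, by the agglomeration principle}\\
&5.~ \Diamond(A \wedge B) && \text{from 4, since `ought' implies `can'}\\
&6.~ \text{contradiction} && \text{from 3 and 5}
\end{align*}
```

Supporters of dilemmas accept that situations satisfying premises 1–3 occur and so must reject agglomeration or 'ought' implies 'can'; opponents retain the principles and deny that any real situation satisfies all three premises.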

ETHICAL REASONING AND ARGUMENTS


It is important to know how to reason well in thinking or speaking about ethical matters. This
is helpful not only in trying to determine what to think about controversial ethical matters but
also in arguing for something you believe is right and in critically evaluating positions held
by others.
The Structure of Ethical Reasoning and Argument
To be able to reason well in ethics you need to understand what constitutes a good argument. We
can do this by looking at an argument’s basic structure. This is the structure not only of
ethical arguments about what is good or right but also of arguments about what is the case or
what is true.
Suppose you are standing on the shore and a person in the water calls out for help. Should
you try to rescue that person? You may or may not be able to swim. You may or may not be
sure you could rescue the person. In this case, however, there is no time for reasoning, as you
would have to act promptly. On the other hand, if this were an imaginary case, you would
have time to think through the reasons for and against trying to rescue the person. You might
conclude that if you could actually rescue the person, then you ought to try to do it. Your
reasoning might go as follows:
Every human life is valuable. Whatever has a good chance of saving such a life should be
attempted. My swimming out to rescue this person has a good chance of saving his life.
Therefore, I ought to do so.
Or you might conclude that you could not save this person, and your reasoning might go like
this:
Every human life is valuable. Whatever has a good chance of saving such a life should be
attempted. In this case, there is no chance of saving this life because I cannot swim. Thus, I
am not obligated to try to save him (although, if others are around who can help, I might be
obligated to try to get them to help).
Some structure like this is implicit in any ethical argument, although some are longer and
more complex chains than the simple form given here. One can recognize the reasons in an
argument by their introduction through key words such as since,
because, and given that. The conclusion often contains terms such as thus and therefore.
The reasons supporting the conclusion are called premises. In a sound argument, the
premises are true and the conclusion follows from them. In the case presented earlier,
then, we want to know whether you can save this person and also whether his life is valuable.
We also need to know whether the conclusion actually follows from the premises. In the case
of the earlier examples, it does. If you say you ought to do what will save a life and you can
do it, then you ought to do it. However, there may be other principles that would need to be
brought into the argument, such as whether and why one is always obligated to save someone
else’s life when one can.
To know under what conditions a conclusion actually follows from the premises, we would
need to analyze arguments in much greater detail than we can do here. Suffice it to say,
however, that the connection is a logical connection—in other words, it must make rational
sense. You can improve your ability to reason well in ethics first by being able to pick out
the reasons and the conclusion in an argument. Only then can you subject them to critical
examination in ways we suggest here.
Evaluating and Making Good Arguments
Ethical reasoning can be done well or done poorly. Ethical arguments can be constructed
well or constructed poorly. A good argument is a sound argument. It has a valid form in
which the conclusion actually follows from the premises, and the premises or reasons given
for the conclusion are true. An argument is poorly constructed when it is fallacious or when
the reasons on which it is based are not true or are uncertain. An ethical argument always
involves some claim about values—for example, that saving a life is good. These value-
based claims must be established through some theory of values. Part I of this book examines
different theories that help establish basic values.
Ethical arguments also involve conceptual and factual matters. Conceptual matters are those
that relate to the meaning of terms or concepts. For example, in a case of lying, we would
want to know
what lying actually is. Must it be verbal? Must one have an intent to deceive? What is deceit
itself? Other conceptual issues central to ethical arguments may involve questions such as,
“What constitutes a ‘person’?” (in arguments over abortion, for example) and “What is ‘cruel
and unusual punishment’?” (in death penalty arguments, for example). Sometimes,
differences of opinion about an ethical issue are a matter of differences not in values but in
the meaning of the terms used.
Ethical arguments often also rely on factual claims. In our example, we might want to know
whether it was actually true that you could save the drowning person. In arguments about the
death penalty, we may want to know whether such punishment is a deterrent. In such a
case, we need to know what scientific studies have found and whether the studies themselves
were well grounded. To have adequate factual grounding, we will want to seek out a range of
reliable sources of information and be open-minded. The chapters in Part II of this book
include factual material that is relevant to ethical decisions about the topics under
consideration.
It is important to be clear about the distinction between facts and values when dealing with
moral conflict and disagreement. We need to ask whether we disagree about the values
involved, about the concepts and terms we are employing, or about the facts connected to the
case.
There are various ways in which reasoning can go wrong or be fallacious. We began this
chapter by considering the fallacy of begging the question or circular argument. Such
reasoning draws on the argument’s conclusion to support its premises, as in “abortion is
wrong because it is immoral.” Another familiar problem of argumentation is the ad hominem
fallacy. In this fallacy, people say something like, “That can’t be right because just look who
is saying it.” They look at the source of the opinion rather than the reasons given for it. You
can find out more about these and other fallacies from almost any textbook in logic or critical
thinking.
You also can improve your understanding of ethical arguments by making note of a
particular type of reasoning that is often used in ethics: arguments
from analogy. In this type of argument, one compares familiar examples with the issue
being disputed. If the two cases are similar in relevant ways, then whatever one concludes
about the first familiar case one should also conclude about the disputed case. For example,
Judith Jarvis Thomson (as discussed in Chapter 11) once asked whether it would be ethically
acceptable to “unplug” someone who had been attached to you and who was using your
kidneys to save his life. If you say that you are justified in unplugging, then a pregnant
woman is also justified in doing the same with regard to her fetus. The reader is prompted to
critically examine such an argument by asking whether or not the two cases were similar in
relevant ways—that is, whether the analogy fits.
Finally, we should note that giving reasons to justify a conclusion is also not the same as
giving an explanation for why one believes something. A woman might explain that she does
not support euthanasia because that was the way she was brought up or that she is opposed
to the death penalty because she cannot stand to see someone die. To justify such beliefs, one
would need rather to give reasons that show not why one does, in fact, believe something but
why one should believe it. Nor are rationalizations justifying reasons. They are usually
reasons given after the fact that are not one’s true reasons. Rationalizations are usually
excuses, used to explain away bad behavior. These false reasons are given to make us look
better to others or ourselves. To argue well about ethical matters, we need to examine and
give reasons that support the conclusions we draw.
ETHICAL THEORY
Good reasoning in ethics usually involves either implicit or explicit reference to an ethical
theory. An ethical theory is a systematic exposition of a particular view about the nature
and basis of good or right. The theory provides reasons or norms for judging acts to be
right or wrong; it provides a justification for these norms. These norms can then be used as a
guide for action. We can diagram the relationship between ethical theories and moral
decision making as a ladder, with theory at the top and concrete cases at the bottom.
In practice, we can start at the ladder’s top or
bottom. At the top, at the level of theory, we can start by clarifying for ourselves what we
think are basic ethical values. We then move downward to the level of principles generated
from the theory. The next step is to apply these principles to concrete cases. We can also start
at the bottom of the ladder, facing a particular ethical choice or dilemma. We can work our
way back up the ladder, thinking through the principles and theories that implicitly guide our
concrete decisions. Ultimately and ideally, we come to a basic justification, or the elements
of what would be an ethical theory. If we look at the actual practice of thinking people as
they develop their ethical views over time, the movement is probably in both directions. We
use concrete cases to reform our basic ethical views, and we use the basic ethical views to
throw light on concrete cases.
An example of this movement in both directions would be if we start with the belief that
pleasure is the ultimate value and then find that applying this value in practice leads us to do
things that are contrary to common moral sense or that are repugnant to us and others. We
may then be forced to look again and possibly alter our views about the moral significance of
pleasure. Or we may change our views about the rightness or wrongness of some particular
act or practice on the basis of our theoretical reflections. Obviously, this sketch of moral
reasoning is quite simplified. Feminists and others have criticized this model of ethical
reasoning, partly because it claims that ethics is governed by general
principles that are supposedly applicable to all ethical situations. Does this form of reasoning
give due consideration to the particularities of individual, concrete cases? Can we really
make a general judgment about the value of truthfulness or courage that will help us know
what to do in particular cases in which these issues play a role?
Value and the Quest for the Good
What sorts of things are valuable? Some items that we value are rather trivial, such as a new pair of
shoes or one’s preferred brand of soda. Yes, we enjoy them, but they have no real urgency. Other things,
though, seem to be of ultimate importance, and at the top of that list many of us would place the value
of human life. After all, it is hard to find value in anything unless we’re alive to experience it. Some of us
might even claim to place an absolute value on human life. Now suppose I told you that I had invented a
marvelous Convenience Machine that would save everyone an enormous amount of time and energy in
our daily routines. However, the downside of the Convenience Machine is that its use would result in
the deaths of over 75,000 Americans per year. Would you use this machine? Perhaps you’d refuse on
the grounds that the value of life exceeds any amount of convenience.

But suppose our economy centered on the use of this machine, and without it, the nation would be
thrown into an unparalleled economic depression. Perhaps you would still refuse to use it and insist that
we change our economic expectations rather than continually sacrifice so many lives.

Well, we in fact have this Convenience Machine in several brands: Chevrolet, Ford, Chrysler, Toyota,
Honda, Mercedes, and so on. Motor vehicle accidents in the United States result in about 30,000 deaths
a year; another 50,000 deaths are caused by diseases brought on by automobile pollution. So how much
do we really value life? Perhaps not as much as we often claim, and we certainly do not value life as an
absolute. Some people say that it is the quality of life rather than life itself that is valuable. The ancient
Greeks and Romans believed that when life became burdensome, one had the obligation to commit suicide.

Human life is just one example of a wide range of things that we find valuable, and a complete list of
them would probably be impossible to create. Nicholas Rescher, though, classifies some basic values into these eight categories:1

1. Material and physical value: health, comfort, physical security
2. Economic value: economic security, productiveness
3. Moral value: honesty, fairness, kindness
4. Social value: generosity, politeness, graciousness
5. Political value: freedom, justice
6. Aesthetic value: beauty, symmetry, grace
7. Religious value: piety, obedience, faith
8. Intellectual value: intelligence, clarity, knowledge

It is easy enough to devise a list of values like this: just think about what you do during the day and
reflect on what is most important to you. What is less easy, though, is understanding why things are
valuable to begin with and what, if anything, our various values have in common. In this chapter, we
explore the notion of value and how value connects with issues of morality.
TYPES OF VALUES
Intrinsic and Instrumental Value

When we look at Rescher’s list of basic values, we see that some seem to be valuable for their own sake,
such as beauty and justice, while others are valuable because of their beneficial consequences, such as
physical and economic security. The essential difference here is between intrinsic and instrumental
goods. Intrinsic goods are good because of their nature and are not derived from other goods. By
contrast, instrumental goods are worthy of desire because they are effective means of attaining our
intrinsic goods. Plato makes this distinction in his book, The Republic, where the characters Socrates and
Glaucon are talking:

SOCRATES:

Tell me, do you think there is a kind of good which we welcome not because we desire its consequences
but for its own sake: joy, for example, and all the harmless pleasures which have no further
consequences beyond the joy which one finds in them?

GLAUCON:

Certainly, I think there is such a good.

SOCRATES:

Further, there is the good which we welcome for its own sake and also for its consequences, knowledge,
for example, and sight and health. Such things we somehow welcome on both accounts.

GLAUCON:

Yes.

SOCRATES:

Are you also aware of a third kind, such as physical training, being treated when ill, the practice of
medicine, and other ways of making money? We should say that these are wearisome but beneficial to
us; we should not want them for their own sake, but because of the rewards and other benefits which
result from them.2

The question “What things are good or valuable?” is ambiguous. We need first to separate the kinds of
values or goods there are. In the above, Socrates distinguishes three kinds of goods: (1) purely intrinsic
goods (of which simple joys are an example); (2) purely instrumental goods (of which medicine and
making money are examples); and (3) combination goods (such as knowledge, sight, and health), which
are good in themselves and good as a means to further goods.
The essential difference is between intrinsic and instrumental goods. We consider some things good or
worthy of desire (desirable) in themselves and other things good or desirable only because of their
consequences. Intrinsic goods are good because of their nature. They are not derived from other goods,
whereas instrumental goods are worthy of desire because they are effective means of attaining our
intrinsic goods.

We may further distinguish an instrumental good from a good instrument. If something is an


instrumental good, it is a means to attaining something that is intrinsically good; but merely to be a
good instrument is to be an effective means to any goal, good or bad. For example, poison is a good
instrument for murdering someone, but murder is not an intrinsically good thing; thus poison, in this use
at least, is not an instrumental good.

Many things that we value are instrumental values. Socrates in our selection from The Republic
mentions two instrumental values: medicine and money. Medicine is an instrumental good in that it can
hardly be valued for its own sake. We can ask “What is medicine for?” The answer is, “It is to promote
health.” But is health an intrinsic value or an instrumental one? Can we ask “What is health for?” Some
will agree with Socrates that health is good for itself and for other things as well, such as happiness and
creative activity. Others will dispute Socrates’ contention and judge health to be wholly an instrumental
good.

Money is Socrates’ other example of an instrumental value. Few, if any, of us really value money for its
own sake, but almost all of us value it for what it can buy. When we ask “What is money for?” we arrive
at such goods as food and clothing, shelter and automobiles, and entertainment and education. But are
any of these really intrinsic goods, or are they all instrumental goods? When we ask, for example, “What
is entertainment for?” what answer do we come up with? Most of us would mention enjoyment or
pleasure, Socrates’ example of an intrinsic good. Can we further ask “What is enjoyment or pleasure
for?” We examine this question in the next section, but, before we do, we need to ask whether the
notion of intrinsic values makes any sense.

Are there any intrinsic values? Are there any entities whose values are not derived from something
else—that is, that are sought for their own sake, that are inherently good, good in themselves? Or are all
values relative to desirers—that is, instrumental to goals that are the creation of choosers? Those who
espouse the notion of intrinsic value usually argue that pleasure is an example of an intrinsic value and pain an example of an intrinsic
disvalue: It is good to experience pleasure and bad to experience pain. Naturally, these philosophers
admit that individual experiences of pleasure can be bad, because they result in some other disvalue
such as a hangover after a drinking spree. Similarly, individual painful experiences can be valuable, for
example, having a painful operation to save one’s life. The intrinsicalist affirms that pleasure is just
better than pain. We can see this straight off. We do not need any arguments to convince us that
pleasure is good or that gratuitous pain is intrinsically bad. Suppose we see a man torturing a child and
order him to stop at once. If he replies, “I agree that the child is experiencing great pain, but why should
I stop torturing her?” we would suspect some mental aberration on his part.
The nonintrinsicalist denies that the preceding arguments have any force. The notion that the
experience itself could have any value is unclear. It is only by our choosing pleasure over pain that the
notion of value begins to have meaning. In a sense, all value is extrinsic, or a product of choosing. Many
existentialists, most notably Jean-Paul Sartre, believe that we invent our values by arbitrary choice. The
freedom to create our values and thus to define ourselves is godlike and, at the same time, deeply
frightening, for we have no one to blame for our failures but ourselves. “We are condemned to
freedom.... Value is nothing else but the meaning that you choose. One may choose anything so long as
it is done from the ground of freedom.”3

But this seems wrong. We do not choose most of our values in the same way we choose between two
different majors or whether to have soup or salad with our meal. We cannot help valuing pleasure,
health, happiness, and love and disvaluing pain and suffering. With regard to the fundamental values,
they choose us, not we them. Even Sartre’s condition for choosing a value, freedom, is not a value that
we choose but have thrust upon us by our nature. We could override our freedom for other values, but
we can no more choose whether to value it or not value it than we can choose whether or not to be
hungry or thirsty after being deprived of food or drink for days. It is as though God or evolution
preprogrammed us to desire these basic goods. And when we find someone who does not value (or
claims not to value) happiness, freedom, or love, we tend to explain this anomaly as a product of
unfortunate circumstances.

The Value of Pleasure

Philosophers divide into two broad camps: hedonists and nonhedonists. The hedonist (from hedon,
Greek for “pleasure”) asserts that all pleasure is good, that pleasure is the only thing good in itself, and
that all other goodness is derived from this value. An experience is good in itself if and only if it provides
some pleasure. Sometimes, this definition is widened to include the lessening of pain, pain being seen as
the only thing bad in itself. For simplicity’s sake, we will use the former definition, realizing that it may
need to be supplemented by reference to pain.

Hedonists subdivide into two categories: (1) sensualism, the view that equates all pleasure with sensual
enjoyment; and (2) satisfactionism, the view that equates all pleasure with satisfaction or enjoyment,
which may not involve sensuality. Satisfaction is a pleasurable state of consciousness such as we might

experience after accomplishing a successful venture or receiving a gift. The opposite of sensual
enjoyment is physical pain; the opposite of satisfaction is displeasure or dissatisfaction.

The Greek philosopher Aristippus (ca. 435–366 BCE) espoused the sensualist position; that is, the only (or primary) good was sensual pleasure, and this goodness was defined in terms of its intensity.

This was also Mustapha Mond’s philosophy in Aldous Huxley’s Brave New World. The brave new world is
a society of the future where people have been liberated from disease, violence, and crime through
immunization, genetic engineering, and behavior modification. They are protected from depression and
unhappiness through a drug, soma, which offers them euphoric sensations. Mustapha Mond, the
brilliant manager of the society, defends this hedonistic utopia against one of the few remaining
malcontents, the “Savage,” who complains that something of value is missing in this “utopia.” The
following dialogue is between Mustapha Mond, the genius technocrat who governs the brave new
world, and the malcontent, “Savage,” who believes that this hedonic paradise lacks something.

SAVAGE:

Yes, that’s just like you. Getting rid of everything unpleasant instead of learning to put up with it.
Whether ’tis better in the mind to suffer the slings and arrows of outrageous fortune, or to take arms
against a sea of troubles and by opposing end them.... But you don’t do either. Neither suffer nor
oppose. You just abolish the slings and arrows. It’s too easy.... Isn’t there something in living
dangerously?

MUSTAPHA MOND:

There’s a great deal in it.... Men and women must have their adrenals stimulated from time to time....
It’s one of the conditions of perfect health. That’s why we’ve made the VPS treatment compulsory.

SAVAGE: VPS?

MUSTAPHA MOND:

Violent Passion Surrogate. Regularly once a month. We flood the whole system with adrenin. It’s the
complete physiological equivalent of fear and rage ... without any of the inconveniences.

SAVAGE:

But I like the inconvenience.

MUSTAPHA MOND:

In fact you’re claiming the right to be unhappy.... Not to mention the right to grow old and ugly and
impotent; the right to have syphilis and cancer; the right to have too little to eat; the right to live in
constant apprehension of what may happen tomorrow; the right to be tortured by unspeakable pains of
every kind.

SAVAGE (after a long silence):

I claim them all.

MUSTAPHA MOND (shrugging his shoulders):

You’re welcome.

We would probably agree that the brave new world is lacking something. The sensuous version of
pleasure is too simple.

Most hedonists since the third century BCE follow Epicurus (342–270 BCE), who had a broader view of
pleasure:
Life is not made pleasant through continued drinking and partying, or sexual encounters, or feasts of fish
and other such things as a costly banquet offers. It is sober contemplation which examines into the
reasons for all choice and avoidance, and which chases away vain opinions from which the greater part
of the confusion arises which troubles the mind.5

The distinction between pleasure as satisfaction and as sensation is important, and failure to recognize
it results in confusion and paradox. One example of this is the paradox of masochism. How can it be that
the masochist enjoys—that is, takes pleasure in—pain, which is the opposite of pleasure? “Well,” the
hedonist responds, “because of certain psychological aberrations, the masochist enjoys (as satisfaction)
what is painful (as sensation).” But he or she does not enjoy (as sensation) what is painful (as sensation).
There is also a two-level analysis to explain the masochist’s behavior: On a lower, or basic, level, he is
experiencing either pain or dissatisfaction, but on a higher level, he approves and finds satisfaction from
that pain or dissatisfaction.

Nonhedonists divide into two camps: monists and pluralists. Monists believe that there is a single
intrinsic value, but it is not pleasure. Perhaps it is a transcendent value, “the Good,” which we do not
fully comprehend but that is the basis of all our other values. This seems to be Plato’s view. Pluralists, by
contrast, generally admit that pleasure or enjoyment is an intrinsic good, but they add that there are
other intrinsic goods as well, such as knowledge, friendship, aesthetic beauty, freedom, love, moral
goodness, and life itself.

Hedonists such as Jeremy Bentham (1748–1832) argue that although these qualities are good, their
goodness is derived from the fact that they bring pleasure or satisfaction. Such hedonists ask of each of
the previously mentioned values, “What is it for?” What is knowledge for? If it gave no one any
satisfaction or enjoyment, would it really be good? Why do we feel there is a significant difference
between knowing how many stairs there are in New York City and whether or not there is life after
death? We normally do not value knowledge of the first kind, but knowledge of the second kind is
relevant for our enjoyment.

The hedonist asks, “What are friendship and love for?” If we were made differently and got no
satisfaction out of love and friendship, would they still be valuable? Are they not highly valuable,
significant instrumental goods because they bring enormous satisfaction? Even moral commitment or
conscientiousness is not good in itself, argues the hedonist. Morality is not intrinsically valuable but is
meant to serve human need, which in turn has to do with bringing about satisfaction. And life certainly is not intrinsically good; it is quality that counts. An amoeba or a permanently comatose patient has life but no intrinsic value. Only when consciousness appears does the possibility for value arrive. Consciousness is a necessary but not a sufficient condition for satisfaction.

Consider the possibility of living in a Pleasure Machine. We have invented a complex machine into which
people may enter to find pure and constant pleasure. Attached to their brains will be electrodes that
send currents to the limbic area of the cerebral cortex and other parts of the brain, producing very
powerful sensations of pleasure. When people get into the machine, they experience these wonderful
feelings. Would you enter such a machine?
If all you want is pleasure or satisfaction, then the Pleasure Machine seems the right choice. You’re
guaranteed all the pleasure you’ve ever dreamed of— without frustration or competition from other
people. But if you want to do something and be something (for example, have good character or a
certain quality of personality) or experience reality (for example, friendship and competition), then you
might think twice about this choice. Is the Pleasure Machine not just another addiction—like alcohol,
heroin, cocaine, or crack? Once in the machine, would we become forever addicted to it? Furthermore,
if all you want is pleasure, why not just hire someone to tickle you for a lifetime? Wouldn’t we become
tired of being passive blobs—even if it was pleasurable? Most of us would reject such an existence as
equivalent to that of a drugged cockroach.

Or suppose there were two worlds with the same number of people and the same amount of total
pleasure, but in World I the people were selfish and even evil, whereas in World II the people were
deeply moral. Wouldn’t it seem that World II was intrinsically better than World I?

Or imagine two lives, those of Suzy and Izzy. Suzy possesses 100 hedons (units of pleasure), even though
she is severely retarded and physically disabled, whereas Izzy enjoys great mental acumen and physical
prowess but has only 99 hedons. Isn’t it obvious that Izzy has the better life? But hedonists are committed to saying that Suzy’s life is better, which seems implausible.

It was these sorts of cases that led John Stuart Mill (1806–1873, to be examined in Chapter 7)—in his
classic work, Utilitarianism—to modify the hedonic doctrine, admitting that “it is better to be a human
dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied.”6 He suggested
that there were different qualities of pleasure and that those who had experienced the different kinds
could distinguish among them. Whether the notion of quality of pleasure can save hedonism is a
controversial matter, but many of us feel uneasy with the idea that pleasure alone is good. Some
broader notion, such as happiness or object of desire, seems a more adequate candidate for what we
mean by “value.”

FOUNDATIONAL NATURE OF VALUES


Are Values Objective or Subjective?
Do we desire the Good because it is good, or is the Good good because we desire it? The objectivist
holds that values are worthy of desire whether or not anyone actually desires them; they are somehow
independent of us. The subjectivist holds, to the contrary, that values are dependent on desirers, are
relative to desirers.

The classic objectivist view on values (the absolutist version) was given by Plato (428–348 BCE), who
taught that the Good was the highest form, inexpressible, godlike, independent, and knowable only
after a protracted education in philosophy. We desire the Good because it is good. Philosophers in the
Platonic tradition hold to the independent existence of values apart from human or rational interest.
For example, G. E. Moore claims that the Good is a simple, unanalyzable quality, such as the color
yellow, but one that must be known through intuition. Moore believes that a world with beauty is more
valuable than one that is a garbage dump, regardless of whether there are conscious beings in those
worlds:
Let us imagine one world exceedingly beautiful. Imagine it as beautiful as you can ... and then imagine
the ugliest world you can possibly conceive. Imagine it simply one heap of filth.7

Moore asks us whether, even if there were no conscious beings who might derive pleasure or pain in
either world, we would prefer the first world to exist rather than the second. Moore believes that it is
obvious that the beautiful world is inherently better, but the objector asks, “What good is such a world if
there is no one (even God) to enjoy it?”

Other, weaker objectivist versions treat values as emergent properties, or qualities in the nature of
things. That is, just as the wetness of water is not in the H2O molecules but in the interaction of our
nervous system with millions of those molecules, and just as smoothness is not in the table that I am
touching but in the relationship between the electrical charges of the subatomic particles of which the
table is made up and my nervous system, so values (or good qualities) emerge in the relationship
between conscious beings and physical and social existence. They are synergistic entities, depending on
both our nature and their objective properties.

For example, if we were not beings with desires, we would not be in a position to appreciate values; but
once there are such beings, certain things—such as pleasure, knowledge, freedom, friendship, and
health—will be valuable, and others—such as pain, suffering, boredom, loneliness, disease, and death—
will be disvalued or not valued for their own sake. This synergistic view recognizes both a subjective and
an objective aspect to value.

Subjectivism treats values as merely products of conscious desire. The American pragmatist Ralph Barton Perry (1876–1957) states that a value is simply the object of interest.8 Values are created by desires, and they are valuable just to that degree to which they are desired: The stronger the desire, the
desires, and they are valuable just to that degree to which they are desired: The stronger the desire, the
greater the value. The difference between the subjectivist and the weak objectivist position (or mixed
view) is simply that the subjectivist makes no normative claims about “proper desiring,” instead judging
all desires as equal. Anything one happens to desire is, by definition, a value, a good.

The objectivist responds that we can separate the Good from what one desires. We can say, for
example, that Joan desires more than anything else to get into the Pleasure Machine, but it is not good;
or that John desires more than anything else to join the Satanic Society, where he will pursue evil for its own sake, but it is not good (for John). There is something just plain bad about the Pleasure Machine and the Satanic Society, even if
Joan and John never experience any dissatisfaction on account of them.

On the other hand, suppose Joan does not want to have any friends and John does not want to know
any history, literature, philosophy, or science. The objectivist would reply that it really would be an
objectively good thing if Joan did have friends and if John knew something about history, literature,
philosophy, and science.

Perhaps a way to adjudicate the disagreement between the subjectivist and the objectivist is to imagine
an Ideal Desirer, a person who is impartial and has maximal knowledge of the consequences of all
actions. What the Ideal Desirer chooses is by definition the “good,” and what he or she disdains is the
“bad.” If so, we can approximate such an ideal perspective by increasing our understanding and ability
to judge impartially. The study of philosophy, especially moral philosophy, has as one of its main goals
such an ability.
The Relation of Value to Morality
Typically, value theory is at the heart of moral theory. The question, however, is whether moral right
and wrong are themselves intrinsic values (as Kant states, the moral law is “a jewel that shines in its own
light”) or whether rightness and wrongness are defined by their ability to further nonmoral values such
as pleasure, happiness, health, and political harmony. To begin to understand this question and to get
an overview of the workings of morality, let me offer a schema of the moral process (Figure 4.1), which
may help in locating the role of values in moral theory.

The location of values in the schema of the moral process (box 3) indicates that values are central to the
domain of morality. They are the source of principles (box 4) and rooted in the forms of life (box 2).
Examples of values are life, loving relationships, freedom, privacy, happiness, creative activity,
knowledge, health, integrity, and rationality. From our values, we derive principles (box 4), which we
may call action-guiding value “instantiators” or “exemplifiers” (because they make clear the action-
guiding or prescriptive force latent in values). From the value “life,” we derive the principles “Promote
and protect life” and/or “Thou shall not kill.” From the value “freedom,” we derive the principle “Thou
shall not deprive another of his or her freedom.” From the value “privacy,” we derive the principle
“Respect every person’s privacy.” From the value “happiness,” we derive the principle “Promote human
happiness,” and so forth with all the other values.

This schema makes no judgment as to whether values are objective or subjective, intrinsic or
instrumental. Neither does it take a stand on whether values or principles are absolute; they need not
be absolute. Most systems allow that all or most values and principles are overrideable. That is, they are
considerations that direct our actions, and whenever they clash, an adjudication must take place to
decide which principle overrides the other in the present circumstances.

7. ACTIONS (failure here: weakness of will, which leads to guilt)

6. DECISIONS (failure here: perverse will, which leads to guilt)

5. JUDGMENTS: weighing competing principles (failure here: error in application)

4. PRINCIPLES (the normative question: What ought I to do?)

3. VALUES: objects of desire, or objects existing independently of desires

2. FORMS OF LIFE: hierarchies of beliefs, values, and practices; cultures or ways of life

1. RATIONAL JUSTIFICATION of ethical theories: impartiality, freedom, and knowledge (the ideal conditions)

FIGURE 4.1 Schema of the moral process

We often find ourselves in moral situations in which one or more principles apply. We speak of making a
judgment as to which principle applies to our situation or which principle wins out in the competition
when two or more principles apply (box 5). The correct principle defines our duty. For example, we have
the opportunity to cheat on a test and immediately judge that the principle of honesty (derived from the value of integrity) applies to our situation. Or there might be an interpersonal disagreement in which
two or more people differ on which of two values outweighs the other in importance, as when Mary
argues that Jill should not have an abortion because the value of life outweighs Jill’s freedom and bodily
integrity, but John argues that Jill’s freedom and bodily integrity outweigh the value of life.

Even after we judge which principle applies, we are not yet finished with the moral process. We must
still decide to do the morally right act. Then finally, we must actually do the right act.

Note the possibilities for failure all along the way. We may fail to apply the right principle to the
situation (the arrow between boxes 4 and 5). For example, we may simply neglect to bring to mind the
principle against cheating. This is a failure of application. But even after we make the correct judgment,
we may fail to make the right choice, deciding to cheat anyway. In this case, we have a perverse will (the
arrow between boxes 5 and 6). Finally, we may make the correct choice but fail to carry out our decision
(the arrow between boxes 6 and 7). We call this weakness of will: We mean to do the right act but
simply are too morally weak to accomplish it. In our example, we meant to refrain from cheating but
could not control ourselves. “The good that I would, I do not, but the evil that I would not, that I do.”9

A more controversial matter concerns the deep structure in which values are rooted. Some theories
deny that there is any deep structure but assert instead that values simply exist in their own right—
independently, as it were. More often, however, values are seen as rooted in whole forms of life (box
2) that can be actual or ideal, such as Plato’s hierarchical society or Aristotle’s aristocracy or the Judeo-
Christian notion of the kingdom of God (the ideal synagogue or church). Ways of life or cultures are
holistic and hierarchical combinations of beliefs, values, and practices.

The deepest question about morality is whether and how these forms of life are justified (box 1). Are
some forms of life better or more justified than others? If so, how does one justify a form of life?
Candidates for justification are ideas such as God’s will, human happiness, the flourishing of all creation,
the canons of impartiality and knowledge, a deeply rational social contract (Hobbes and Rawls), and the
like. For example, a theist might argue that the ideal system of morality (that is, the ideal form of life) is
justified by being commanded by God. A utilitarian would maintain that the ultimate criterion is the
promotion of welfare or utility. A naturalist or secular humanist might argue that the ideal system is
justified by the fact that it best meets human need or promotes human flourishing or that it would be
the one chosen by ideally rational persons. Some ethicists would make level 2 the final source of
justification, denying that there is any ideal justification at all. These are the ethical relativists, who
contend that each moral system is correct simply by being chosen by the culture or individual.

The main point of the schema, however, is not to decide on the exact deep structure of morality but to
indicate that values are rooted in cultural constructs and are the foundation for moral principles upon
which moral reasoning is based. We could also devise a similar schema for the relationship between
values and virtues (to be discussed in Chapter 9). Each virtue is based on a value and each vice on a
disvalue.

THE GOOD LIFE


Finally, we want to ask what kind of life is most worth living. Aristotle (384–322 BCE) wrote long ago that
what all people seek is happiness:

There is very general agreement; for both the common person and people of superior refinement say
that it is happiness, and identify living well and doing well with being happy; but with regard to what
happiness is they differ, and the many do not give the same account as the wise. For the former think it
is some plain and obvious thing, like pleasure, wealth or honor.10

What is happiness? Again, the field divides up among objectivists, subjectivists, and combination
theorists. The objectivists, following Plato and Aristotle, distinguish happiness from pleasure and speak
of a single ideal for human nature; if we do not reach that ideal, then we have failed. Happiness (from
the Greek eudaimonia, literally meaning “good demon”) is not merely a subjective state of pleasure or
contentment but the kind of life we would all want to live if we understood our essential nature. Just as
knives and forks and wheels have functions, so do species, including the human species. Our function
(sometimes called our “essence”) is to live according to reason and thereby to become a certain sort of
highly rational, disciplined being. When we fulfill the ideal of living the virtuous life, we are truly happy.

Plato speaks of happiness as “harmony of the soul.” Just as the body is healthy when it is in harmony
with itself and the political state is a good state when it is functioning harmoniously, so the soul is happy
when all its features are functioning in harmonious accord, with the rational faculty ruling over the
spirited and emotional elements. Although we no doubt know when we are happy and feel good about
ourselves, the subjective feeling does not itself define happiness, for people who fail to attain human
excellence can also feel happy via self-deception or ignorance.

The objectivist view fell out of favor with the rise of the evolutionary account of human nature, which
undermined the sense of a preordained essence or function. Science cannot discover any innate telos,
or goal, toward which all people must strive. The contemporary bias is in favor of value pluralism—that is, the
view that there are many ways of finding happiness: “Let a thousand flowers bloom.” This leads to
subjectivism.

The subjectivist version of happiness states that happiness is in the eyes of the beholder. You are just as
happy as you think you are—no more, no less. The concept is not a descriptive one but a first-person
evaluation. I am the only one who decides or knows whether I am happy. If I feel happy, I am happy,

even though everyone else despises my lifestyle. Logically, happiness has nothing to do with virtue,
although—because of our social nature—it usually turns out that we will feel better about ourselves if
we are virtuous.

The combination view tries to incorporate aspects of both the objectivist and the subjectivist views. One
version is John Rawls’s “plan of life” conception of happiness: There is a plurality of life plans open to
each person, and what is important is that the plan be an integrated whole, freely chosen by the person,
and that the person be successful in realizing his or her goals. This view is predominantly subjective in that it recognizes the person as the autonomous chooser of goals and a plan. Even if a person should choose a life plan

whose only pleasure is to count blades of grass in various geometrically shaped areas such as park
squares and well-trimmed lawns, ... our definition of the good forces us to admit that the good for this
man is indeed counting blades of grass.11

However, Rawls recognizes an objective element in an otherwise subjective schema. There are primary
goods that are necessary to any worthwhile life plan: “rights and liberties, powers and opportunities,
income and wealth ... self-respect ... health and vigor, intelligence and imagination.”12 The primary
goods function as the core (or the hub of the wheel) from which may be derived any number of possible
life plans (the spokes). But unless these primary goods (or most of them) are present, the life plan is not
an authentic manifestation of an individual’s autonomous choice of his or her own selfhood. Thus, it is
perfectly possible that people believe themselves to be happy when they really are not.

Although subjectivist and plan-of-life views dominate the literature today, there is some movement back
to an essentialist, or Aristotelian, view of happiness as a life directed toward worthwhile goals. Some
lifestyles are more worthy than others, and some may be worthless. Philosopher Richard Kraut asks us
to imagine a man who has as his idea of happiness the state of affairs of being loved, admired, or
respected by his friends and who would hate to have his “friends” only pretend to care for him. Suppose
his “friends” really do hate him but “orchestrate an elaborate deception, giving him every reason to
believe that they love and admire him, though in fact they don’t. And he is taken in by the illusion.”13
Can we really call this man happy?

Or suppose a woman centers her entire life around an imaginary Prince Charming. She refuses to date—
let alone marry—perfectly eligible young men; she turns down educational travel opportunities lest they
distract her from this wonderful future event; for 95 years, she bores all her patient friends with tales of
the prince’s imminent appearance. As death approaches at age 96, after a lifetime of disappointment,
she discovers that she’s been duped; she suddenly realizes that what appeared to be a happy life was a
stupid, self-deceived, miserable existence. Would we say that our heroine was happy up until her
deathbed revelation? Do these thought experiments not indicate that our happiness depends, at least to
some extent, on reality and not simply on our own evaluation?

Or suppose we improve on our Pleasure Machine, turning it into a Happiness Machine. This machine is a
large tub that is filled with a chemical solution. Electrodes are attached to many more parts of your
brain. You work with the technician to program all the “happy experiences” that you have ever wanted.
Suppose that includes wanting to be a football star, a halfback who breaks tackles like a dog shakes off
fleas and who has a fondness for scoring last-minute game-winning touchdowns. Or perhaps you’ve
always wanted to be a movie star and to bask in the public’s love and admiration. Or maybe you’ve
wanted to be the world’s richest person, living in the splendor of a magnificent castle, with servants
faithfully at your beck and call. In fact, with the Happiness Machine you can have all of these plus
passionate romance and the love of the most beautiful (or handsome) people in the world. All these
marvelous adventures would be simulated, and you would truly believe you were experiencing them.
Would you enter the Happiness Machine?
What if I told you that once you were unplugged, you could either stay out or go in for another round, but that no one who entered the machine ever chose to leave of his or her own accord, having become addicted to its pleasures and believing that reality could never match its ecstasy? Now you have an
opportunity to enter the Happiness Machine for the first time. Will you enter? If not, are you not voting
against making the subjectivist view (or even the plan-of-life view) the sole interpretation of happiness?

When I ask this question in class, I get mixed responses. Many students say they would enter the
Happiness Machine; most say they would not. I myself would not, for the same reason that I do not use
drugs and rarely watch television or spectator sports—because some very important things are missing
that are necessary for the happy life. What are these vital missing ingredients?

1. Action. We are entirely passive in the machine, mere spectators. But the good life requires
participation in our own destiny. We don’t just want things to happen to us; we want to accomplish
things, even at the risk of failure.

2. Freedom. Not only do we want to do things, but we want to make choices. In the Happiness
Machine, we are entirely determined by a preordained plan—we cannot do otherwise. In fact, we
cannot do anything but react to what has been programmed into the machine.

3. Character. Not only do we want to do things and act freely, but we also want to be something
and someone. To have character is to be a certain kind of person, ideally one who is trustworthy, worthy
of respect, and responsible for one’s actions. In the machine, we lose our identity. We are defined only
by our experience but have no character. We are not persons who act out of set dispositions, for we
never act at all. We are mere floating blobs in a glorified bathtub.

4. Relationships. There are no real people in our Happiness Machine life. We subsist in splendid
solipsism. All the world is a figment of our imagination as dictated by the machine; our friends and loved
ones are mere products of our fancy. But we want to love and be loved by real people, not by
phantasms.

In sum, the Happiness Machine is a myth: all appearance and no reality, a bliss bought at too high a price,
a deception! If this is so and if reality is a necessary condition for the truly worthwhile life, then we
cannot be happy in the Happiness Machine. But neither can we be happy outside of the Happiness

Machine when the same necessary ingredients are missing: activity, freedom, moral character, loving
relationships, and a strong sense of reality.

The objective and subjective views of happiness assess life from different perspectives, with the
objectivist assuming that there is some kind of independent standard of assessment and the subjectivist
denying it. Even though there seems to be an immense variety of lifestyles that could be considered
intrinsically worthwhile or happy and even though some subjective approval or satisfaction seems
necessary before we are willing to attribute the adjective “happy” to a life, there do seem to be limiting
conditions on what may count as happy. We have a notion of fittingness for the good life, which would
normally exclude being severely retarded, being a slave, or being a drug addict (no matter how satisfied)
and which would include being a deeply fulfilled, autonomous, healthy person. It is better to be Socrates
dissatisfied than to be the pig satisfied, but only the satisfied Socrates is happy.
This moderate objectivism is set forth by John Stuart Mill. Happiness, according to Mill, is

not a life of rapture; but moments of such, in an existence made up of few and transitory pains, many
and various pleasures, with a decided predominance of the active over the passive, and having as the
foundation of the whole, not to expect more from life than it is capable of bestowing.14

This conception of happiness is worth pondering. It includes activity, freedom, and reality components,
which exclude being satisfied by the passive experience in the Happiness Machine, and it supposes that
some pleasing experiences are better than others. I would add to Mill’s definition the ingredients of
moral character and loving relations. A closer approximation might go like this:

Happiness is a life in which there exists free action (including meaningful work), loving relations, and
moral character and in which the individual is not plagued by guilt and anxiety but is blessed with peace
and satisfaction.

The satisfaction should not be confused with complacency; rather, it means contentment with one’s
lot—even as one strives to improve it. Whether this neo-objectivist, Millian view of happiness is
adequate, you must decide.

CONCLUSION
In this chapter, we have seen that there is a range of ways to dissect the notion of moral goodness.
Some goods are intrinsic because of their nature and are not derived from other goods, and others are
instrumental because they are effective means of attaining intrinsic goods. Goods are often connected
with pleasure; sensualism equates all pleasure with sensual enjoyment, whereas satisfactionism
identifies all pleasure with satisfaction or enjoyment, which may not involve sensuality. There is a
debate whether values are objective or subjective. Plato held the former position, maintaining that values have an independent existence apart from human or rational interest; Perry held the latter view, that values are merely products of conscious desire. Although value theory is at the center of
moral theory, there is dispute about whether the moral notions of right and wrong are themselves
intrinsic values. Finally, there is the issue of how values are connected with human happiness and the
good life, particularly whether there is a human purpose, or telos, that defines our capacity for
happiness in terms of specific values.

Technically, culture is always “in the news,” and not just in the arts and entertainment section of our
newspapers. It is like unacknowledged water to a fish, or the oxygen we breathe. Yet recently culture
has been an explicit topic of debate. After Mitt Romney took flak for saying that the power of culture
was responsible for the different living standards of Israelis and Palestinians and some tried to
understand how pop culture might have influenced Aurora, Colorado, shooter James Holmes, it is
worthwhile to examine the ways that culture does and does not influence our behavior.

Romney’s invocation of culture as a means of explaining how one group of people succeeds and another
doesn’t may be misleading because Israel’s culture has been through fits and starts and is still
hammering out a coherent identity. As David Brooks has written, though it might seem strange to an outsider, Israel was not always considered to have such a modern culture. “The odd thing is that
Israel has not traditionally been strongest where the Jews in the Diaspora were strongest,” Brooks
writes. “Instead of research and commerce, Israelis were forced to devote their energies to fighting and
politics.” Only recently have Israeli research and intellectual exchange blossomed to become hallmarks
of that society, Brooks writes.

Many have attempted to describe the great intellectual achievements of the Jews, both those in the diaspora and those who have returned to Israel. In his book The Brain and Its Self, the Hungarian Jewish
neurochemist Joseph Knoll writes that struggling to survive in the ghettos of Europe and perforce
acquiring neurochemical drives allowed the Jewish people to transmit superior brain development to
the next generation. “In retrospect we may say that to survive Jews were always required to better
exploit the physiological endowments of their brains,” he writes.

So in this important way, culture does matter quite a bit to how we behave and how we think. Knoll’s
assessment is in line with what influential psychologist and neuroscientist Merlin Donald has written on
culture’s influence on our brain functioning — and even our brain structure. Donald holds that language
has the biggest impact on brain structure but that culture influences brain functioning to a great extent.
In his book A Mind So Rare, he writes:

“The social environment includes many factors that impinge on development, from bonding and
competitive stress to the social facilitation of learning. These can affect brain functioning in many ways,
but usually they have no direct influence on functional brain architecture. However, symbolizing cultures
own a direct path into our brains and affect the way major parts of the executive brain become wired up
during development. This is the key idea behind the notion of deep enculturation... This process entails
setting up the very complex hierarchies of cognitive demons (automatic programs) that ultimately
establish the possibility of new forms of thought. Culture effectively wires up functional subsystems in
the brain that would not otherwise exist.”
This is not to say that culture is responsible for everything we do and think. Indeed, the very formation
of the culture that helped the diaspora Jews succeed was a result of circumstance, rather horrific
circumstance. And sometimes glomming onto the idea of culture’s potency can have disastrous results.
The now discredited broken windows theory held that a culture of crime can quickly take root if citizens
are not bonded together to keep up their neighborhoods and remain serious about punishing minor
crimes. The theory resulted in an uptick in intense community policing, but was not actually
responsible for the drop in crime rates of the late 1990s. It did result in the incarceration of many
African Americans for petty crimes.

Using culture as the lens to explain success and failure also obscures more widespread (and harder to
control) socioeconomic and biological factors. To truly understand culture’s role in shaping us, we must
understand that culture is not just the inert repository of ideas and customs we all live with, but that it
too is shaped by various factors. As President Obama wrote in The Audacity of Hope, fending off claims
that black culture is to blame for African Americans’ plight, “In other words, African Americans
understand that culture matters but that culture is shaped by circumstance. We know that many in the
inner city are trapped by their own self-destructive behavior but that those behaviors are not innate.” It
is naive to believe, as the now discredited New Yorker writer Jonah Lehrer did, that culture creates a
person. Culture shapes us, but many events mold culture, and we shape these just as much.

To blame our culture for the shootings in Aurora, Colorado, would be wrongheaded, and many in the
media have pointed this out for reasons beyond psychological self-defense. Even if culture is a primary
factor in our lives, and that largely depends on the person’s receptivity to culture, it would be nearly
impossible to create a culture ahead of time that is conducive to producing better behavior and
healthier thoughts. This is because much of culture depends on our biological and evolutionary
hardware, which is in flux. And our evolutionary heritage is largely one of aggression and violence,
despite our pains to sublimate these influences through cultural activities like art and religion. Thus, if
we are to blame anything for a tragic mass shooting, it must be our vestigial aggression.

Interestingly, some scientists believe that culture may be adaptive and thus help our brains function
better to help us reproduce more successfully. This would cast culture in relief as something that is both
important for our survival and also subject to the whims of those harder to control and much bigger
forces in life. At the least, it absolves filmmakers who explore issues of violence and responsibility, like
those that made the most recent Batman installment. More broadly, it could account in part for how
some cultures help their members achieve.

Yet we shouldn’t get too hung up on pitting cultures against each other, as Romney did in Israel. In
his Lyrical and Critical Essays, Albert Camus writes, “Men express themselves in harmony with their land.
And superiority, as far as culture is concerned, lies in this harmony and nothing else. There are no higher
or lower cultures. There are cultures that are more or less true.” The goal should be to emulate the
truest, noblest aspects of every culture and try to learn about each culture’s people. The benefits to
brain development or reproduction would surely be just as great in exploring others’ ways of life as
immersing oneself in a single nation’s or group’s traditions, however beneficial that one culture may be.

Highlights

Cultures vary substantially in both moral judgments and moral
behaviors.

Cultural variation in morality within societies can be as great as
cultural variation between societies.


Cultural factors contributing to this variation include religion, social
ecology (weather, crop conditions, population density, pathogen
prevalence, residential mobility), and regulatory social institutions such
as kinship structures and economic markets.


This variability raises questions for normative theories of morality, but
also holds promise for future descriptive work on moral thought and
behavior.
We review contemporary work on cultural factors affecting moral judgments
and values, and those affecting moral behaviors. In both cases, we highlight
examples of within-societal cultural differences in morality, to show that these
can be as substantial and important as cross-societal differences. Whether
between or within nations and societies, cultures vary substantially in their
promotion and transmission of a multitude of moral judgments and behaviors.
Cultural factors contributing to this variation include religion, social ecology
(weather, crop conditions, population density, pathogen prevalence,
residential mobility), and regulatory social institutions such as kinship
structures and economic markets. This variability raises questions for
normative theories of morality, but also holds promise for future descriptive
work on moral thought and behavior.
Current Opinion in Psychology 2016, 8:125–130
This review comes from a themed issue on Culture
Edited by Michele J Gelfand and Yoshihisa Kashima
Available online 21st September 2015

http://dx.doi.org/10.1016/j.copsyc.2015.09.007
2352-250X/© 2015 Elsevier Ltd. All rights reserved.

There is no question in current moral psychology about whether culture is
important for morality — it is, and recent work is beginning to show exactly
how. Most major theories in moral psychology include a primary role for
cultural transmission of shared norms and values in predicting moral thought
and action [1, 2, 3, 4•, 5]. For instance, cultural learning (in which cultures
differentially build on universally available intuitive systems) is one of the
central tenets of Moral Foundations Theory [3], which was based in part on
Shweder's comparisons of cultures in the three ethics of Autonomy,
Community, and Divinity [1]. The cultural ubiquity of moral norms and values
is a testament to the central role morality plays in holding societies together.
Human beings are a physically weak species whose evolutionary success
depended on the ability to cooperate and live in groups. As such, shared
norms — and their enforcement — are essential [6]. Indeed, children as young
as three years old comprehend and enforce moral norms on behalf of others
[7].
In this paper we review contemporary work on cultural factors affecting moral
judgments and values, and those affecting moral behaviors. We define these
broadly, as any judgments and behaviors people find morally relevant; cross-
cultural research has shown great variety in the very definitions of ‘moral’ or
‘immoral,’ for instance with Westerners using immoral to connote primarily
harmful actions, and Chinese to connote primarily uncivilized actions [8•]. For
both moral judgments and moral behaviors we highlight examples of within-
societal cultural differences in morality, to show that these can be as
substantial and important as cross-societal differences. We end by discussing
future directions for psychological work on culture and morality.

Moral judgments and values


Multifaceted psychological measurement of morality has opened the door to studying cross-cultural similarities and differences in moral judgments across a variety of content domains.
Some domains like honesty are consistently endorsed as morally important across cultural
contexts [9]. However, cultural variations in whether moral concerns focus on individual rights
or communal social duties predict moralization of a broader range of personal and interpersonal
actions [10, 11]. Cultural variations in moral focus affect not only which behaviors individuals
will find morally relevant, but also the extent to which their personal values will be reflected in
their attitudes about social issues. For example, endorsement of self-transcendence values (e.g.
believing that the universal well-being of others is important) strongly predicts prosocial and
pro-environmental attitudes in individual rights-focused cultures, where investing one's own
resources in collective goods is seen as a personal choice. However, the same value–attitude
relationship is attenuated in cultures emphasizing duties toward one's community, as personal
resources are culturally expected to contribute to the common good [12•].
As individualism–collectivism research would suggest, research using multifaceted measurement
has shown that while Western, Educated, Industrialized, Rich, and Democratic (WEIRD) [13•]
cultures are generally more apt to endorse moral codes emphasizing individual rights and
independence, non-WEIRD cultures tend to more strongly moralize duty-based communal
obligations and spiritual purity [8•, 14, 15, 16]. In turn, individuals in autonomy-endorsing
cultures view personal actions such as sexual behaviors as a matter of individual rights, whereas
those in community-endorsing cultures are more likely to see them as a collective moral concern
[10]. These societal prescriptions of what one should do to be a moral person facilitate
endorsement of congruent personal values. Further, whether one's cultural prescriptions provide
a range of morally acceptable responses or only one moral course of action affects the extent to
which individuals’ social attitudes and behaviors are able to reflect personal — rather than
systemic — moral values [17].
These same cross-cultural differences in moral prescriptions of duty versus individual rights also
inform interpersonal moral judgments and moral dilemma responses. In trolley-type dilemmas,
respondents are asked whether they should sacrifice one person (say, by pulling a lever to
redirect a runaway trolley) in order to save several others. While most people across cultures will
say that flipping the lever is the morally right choice, those in collectivist cultures are more likely
to also consider additional contextual information when forming judgments, such as whether or
not it is their place (or duty) to act [18]. This relational consideration in turn leads to less
admonishment of individuals who do not flip the lever, and fewer character attributions of
actions made in absence of their broader contextual meaning [19].
Even when there is cross-cultural agreement in the moral importance of abstract concepts like
justice or welfare, cultural differences can emerge in the perceived meaning of these concepts
[8•, 20]. For people in autonomy-emphasizing cultures, justice and fairness are often viewed as a
matter of equity, in which outcomes are proportional to personal effort regardless of the potential
detriment to less-deserving others. By comparison, people in duty-based, communal cultures
often view justice and fairness as an issue of equality, in which all individuals deserve equal
outcomes and moral judgments are based on whether a self-beneficial outcome will cause others
to suffer [21•, 22, 23].
Factors contributing to cultural differences
In addition to elaborating cultural differences in moral values, current research is also addressing
factors that can help to explain them. One source of cultural variation in moral values,
particularly ones pertaining to fairness and prosocial behavior, can be found in social institutions
such as kinship structures and economic markets [24]. For example, higher degrees of market
integration are associated with greater fairness in anonymous interpersonal transactions [6].
Ecological factors can also promote certain kinds of moral norms and values. For instance,
pathogen prevalence predicts endorsement of loyalty, authority, and purity concerns, which may
discourage behaviors leading to disease contagion [25]. Similarly, exposure to high levels of
threat (e.g. natural disasters or terrorism) produces morally ‘tight’ cultures in which violations of
moral norms related to cooperation and interpersonal coordination are more harshly punished
[26]. And residential mobility in a culture is associated with greater preference for egalitarianism
over loyalty when it comes to preferred interaction partners [27].
Religion is one of the strongest cultural influences on moral values [28], and in a large cross-
national study of values religious values varied between nations more than any other single
factor [29••]. But religious values also vary hugely within nations and societies. For example,
Protestants, Catholics, and Jews, all of whom coexist within many nations, differ in how much
moral weight they give to impure thoughts versus impure actions, with Protestants more strongly
condemning ‘crimes of the mind’ (e.g. thinking about having an affair) [30•].
Cultural differences within societies
While cross-national comparisons of moral judgments have existed for decades, recent work is
showing that cultural differences within nations and societies can be just as substantial. For
example, within the US individuals from higher social classes make more utilitarian decisions in
moral dilemmas than do those from lower classes [31]. Also within the US, state-level analyses
show substantial variation in tightness (rigidly enforced rules and norms) vs. looseness (less rigid
norms, more tolerance of deviance) [32]. Antecedents of tightness (compared to looseness)
include ecological and man-made threats such as natural disasters, lack of resources, and disease
prevalence, and outcomes of tightness include higher social stability, incarceration rates, and
inequality, and lower homelessness, drug use, creativity, and happiness. Thus, the factors
contributing to within-nation variations in tightness-looseness are largely the same as those
contributing to cross-nation variations [33••].
Political ideology has emerged as an important dimension for within-society cultural differences
in morality. Moral Foundations Theory [3] has described ideological debates about moralized
issues as liberal/left-wing cultures (vs. conservative/right-wing cultures) preferentially building
more on Care and Fairness foundations than Loyalty, Authority, and Purity foundations [34, 35].
These left-wing/right-wing differences have been replicated within several different nations and
world areas [16]. Moral foundation endorsements and judgments can vary as much within
nations (vegetarian vs. omnivore subcultures) as between nations (US vs. India) [36].
Moral behavior
The moral status of specific social behaviors can vary widely across cultures [24]. At an extreme,
the most morally repugnant actions in one cultural context (such as killing one's daughter
because she has been raped) can be seen as morally required in another cultural context [37].
And individual-difference and situational factors known to affect prosocial behavior (such as
trait religiosity and religious priming) do so only through culturally transmitted norms, beliefs,
and practices [38, 39].
There has been less work on cultural differences in moral behaviors than moral judgments, and
the vast majority of the moral behavior work has been limited to behaviors in economic games.
Though recent cross-cultural moral research has revealed considerable differences in donations,
volunteering, helpfulness, and cheating (for instance showing less helping of strangers in cultures
prioritizing ingroup embeddedness) [40, 41, 42], most often research has focused on cooperation
(i.e. working together to achieve the same end). This work indicates that there are strong
differences in cooperation between WEIRD and non-WEIRD cultures [43], as well as between
relatively similar industrialized countries [44]. However, it appears that cross-cultural variability
is sensitive to the costs associated with cooperating and with free-riding (benefiting from others’
cooperation while not cooperating oneself). When punishment for free-riding is not a possibility,
intercultural differences are substantially reduced [43]; such differences are similarly lessened
when cooperation is less personally costly [45••].
There are also strong cultural differences in patterns of reciprocity — both positive (rewarding
proven cooperators [44]) and negative (punishing freeloaders [43, 46]). Again, these differences
exist even between WEIRD countries [44]. Cross-cultural differences in antisocial punishment
(the punishment of cooperators) appear to be especially pronounced. While in some countries
(USA, Australia) antisocial punishment is exceptionally rare, in others (Greece, Oman) people
actually punish cooperators as much as free-riders [47]. Relatedly, recent work has uncovered
cultural differences in rates of third-party punishment (i.e. costly punishment made by an agent
for an interaction in which they were not involved [48]), which is more prevalent in cultures with
low social mobility and strong social ties [49].
Factors contributing to cultural differences
Various overlapping factors may account for these differences, including cultural norms,
environmental and structural variables, and demographic and economic factors. Cooperation and
punishment norms vary considerably across cultures, and these differences translate into
meaningful behavioral differences. For instance, antisocial punishment appears to be especially
pervasive in cultures that lack a strong norm of civic cooperation [47]. Historical cultural
traditions also shape moral judgments. Purity behavior is also strongly influenced by cultural
norms. For example, because of their traditional emphasis on the face as a locus of public self-
representation, Southeast Asians are more likely to cleanse their faces following a moral
transgression in order to reduce guilt and negative self-judgment, whereas people from WEIRD
cultures tend to cleanse their hands [50]. But where do these norms come from in the first place?
Research indicates that social-ecological factors — such as a community's staple crops [51] and
population size [6] — contribute to cooperation differences because they alter the types of
behaviors that are required for communities to thrive. There is also growing evidence that
exposure to markets might contribute to moral differences, by increasing positive interaction
experiences, thus encouraging more trust, and, ultimately, increasing cooperation [6, 52].
Cultural differences within societies
There is also evidence of moral differences between groups in the same nation or society. For
instance, even within a single city, residential mobility (the frequency with which people change
where they live) has been associated with less prosocial (and more antisocial)
behavior [53••, 54]. In terms of cooperation, though within-culture variability may be lower than
between-culture variability overall, in the absence of threats of free-rider punishment, there
appears to be even more variability within cultures than between cultures, likely due to
considerable differences in punishment habits between cultures [43].
One specific within-culture difference in cooperation is that low-income people in WEIRD
cultures appear more cooperative than wealthy people [55]. Lower income people are also more
generous with their time, more charitable, and less likely to lie, cheat, or break driving laws
[55, 56•]. At least in part, these differences seem to stem from wealthy people's greater
acceptance of greed [56•].
A sizeable amount of research also indicates there are within-culture moral differences that result
from religious diversity. Though some types of religiosity appear to contribute to in-group bias
[57, 58], recent research has primarily focused on the positive consequences of religious belief.
Religious people appear to naturally act more prosocially [59], and priming religious concepts
increases generosity and reduces cheating, though only among people who hold religious beliefs
[38]. Many explanatory mechanisms have been proposed for religious prosociality [60], but from
a social psychological perspective, promising explanations include the bonds and sentiments
arising from communal activities such as ritual and synchronous movement [28, 61, 62] (see also
[63] in this issue for more on religion and culture).

Future directions
Research on the role of culture in morality, and on the role of morality in culture, will continue to
thrive in coming years. This work is likely to have an increasing societal impact as the role of
moral concerns in intergroup conflicts becomes more well-understood. Sacred moral values
(those people refuse to exchange for mundane resources like money) such as honor or holy land
have been shown to play an exacerbating role in intergroup conflicts [64, 65, 66], and this role
has been shown to vary across cultures (e.g. playing particular roles in Iran and Egypt [67, 68]).
Pluralist approaches to moral judgment [3, 4•] can help delineate which values have such
exacerbating effects in which cultural and relational contexts.

Conclusion
Cultures vary substantially in their promotion and transmission of a multitude of moral
judgments and behaviors. Cultural factors contributing to this variation include religion, social
ecology (weather, crop conditions, population density, pathogen prevalence, residential
mobility), and regulatory social institutions such as kinship structures and economic markets.
Notably, variability in moral thought and action can be just as substantial within societies as
across societies. Such variability brings up many difficult normative questions for any science of
morality, such as what criteria could allow anyone to claim a specific action or practice is
objectively moral or immoral [69]. But at the descriptive level, this variability offers untold
opportunities for future moral psychology as it continues to identify the antecedents, sources, and
structures of our moral lives.

PartV The Limits of Ethics

5.1 Akrasia

Oscar Wilde’s character Lord Darlington famously remarks in Lady Windermere’s Fan (1892), “I can
resist anything except temptation.” He is, alas, not alone in this. Most of us have at some time done
something that we’ve known to be wrong but found ourselves unable to resist doing. Aristotle (384–322
BCE) called this failing akrasia (lack of self-mastery or moral “incontinence”; Nicomachean Ethics, VII 1–
10), otherwise known as moral “weakness” (astheneia), or “weakness of the will.”

This phenomenon has puzzled philosophers for centuries. Why do we do what we know or believe we
should not? There are various explanations.

According to Socrates, as portrayed by Plato (427–347 BCE), all wrongdoing is the result of ignorance. People act
badly simply because they are ignorant about what’s truly good or right – in that situation or generally.
On this view akrasia is impossible, since if we truly knew what was right we’d never choose not to do it.
Apparent examples of akrasia are therefore not what they seem: people never do what they truly know
is wrong. If someone has an affair, for example, and says “I know it is wrong,” the adulterer is being
disingenuous. He or she may know it involves deceit or hurt, but on balance somehow the adulterer
thinks going ahead is still justifiable.

Augustine (354–430), on the other hand, saw wrongdoing as a characteristic of human sinfulness.
People clearly know the good but choose the bad, anyway; sometimes they even do what’s bad because
it’s bad, as a form of rebellion.

According to Aristotle, people, through the immediate urgings of passion, act without thinking, or at
least without thinking clearly. If they had thought about the issue more carefully and deliberately, they
might well have acted differently; but the need came over them with sudden forcefulness. Desire overwhelms deliberation, prompting an act that might be described as “akratic impetuosity.”

Aristotle also talks of “akratic weakness.” Here immediacy isn’t the issue. People take the time to think
things through and come to the right decision about how to act. But sometimes they simply can’t bring
themselves to act that way because they are overwhelmed by sustained passions, especially desire or
anger, perhaps also fear.

What’s at stake

Which account we take to be true (Plato, Aristotle, or Augustine) affects how we evaluate the extent to
which people can be expected to realize moral rectitude. Just because something is the ethically right
thing to do, is it reasonable to expect people to be able to do it? How much should the presence of
strong emotion mitigate one’s judgment about an ethical lapse or a morally wrong action?
Consider, for example, the distinction drawn between someone coolly, in a premeditated and carefully
planned way, murdering someone; and cases where someone kills another in a fit of rage triggered by
some traumatic event, such as the sudden discovery that the victim had murdered the killer’s child.
Many think of the cases as different because of what one understands about the power and nature of
passion and the reasonable limits of human moral restraint.

Acting well, doing what’s right, becoming and remaining virtuous are difficult things for human beings.
How much slack should they be given? When, if ever, might the force of passion be thought of as so
strong as to render an action non-voluntary? How generous and forgiving should one be in moral
judgment?

The activities of businesses and corporations, for instance, are sometimes held to be about one thing and
one thing only: profit. Whether one is kind, honest, generous, and trustworthy is irrelevant to the
conduct of commercial affairs – unless being that way helps maximize profit. This view can be presented
as a critique of capitalism, as stark realism, or perhaps even as a defense of capitalism (by arguing that
amoral conduct in the market actually produces the best outcomes for everyone, as if, as Adam Smith
(1723–90) maintained, the market were guided by a beneficent “invisible hand”).

In war, too, amoralists argue, there is only one objective: victory. Anything that contributes to victory is permissible: lying, killing, stealing, destroying property, etc. In fact, as in the context of commerce, obeying moral rules will probably inhibit one from realizing the goal of war.

Politics, too, has been described as an amoral context. Machiavelli (1469–1527) famously described
how the successful leader must be prepared to present the appearance of moral rectitude but in reality
be prepared to engage in the most ruthless vice in order to obtain and secure power. Many who
maintain that national politics should be governed by moral principle nevertheless argue that
international politics, like war, is entirely amoral. Those holding these views sometimes prefer to be
called political “realists” rather than amoralists.

Drawing the line

If it’s accepted that some human activities fall outside moral consideration, where do we draw the line
that separates the moral from the amoral?

One way of doing this is to appeal to divine principle and argue that there are some activities that divine
commands neither require nor prohibit. Perhaps tugging gently on one's earlobe is neither moral nor immoral, although tugging on it in order to send a signal to someone across the room to steal something would be. Another way of sectioning off the moral from the amoral is to use the harm and happiness
principles. Those acts that lead to or at least are likely to lead to some sort of harm, especially serious
harm, are to be regarded as immoral; while those that contribute to happiness or are likely to contribute
to happiness are to be regarded as moral. Activities, however, that contribute neither to harm nor to
happiness or are likely not to do so are amoral. It’s not likely, in most contexts, that a few tugs on one’s
ear will contribute to people’s happiness or unhappiness in any way. So, that action is perfectly amoral –
unless one argues that the opportunity cost of tugging on one’s ear rather than doing something else is
an immoral waste of resources.
More radical is the claim that there’s no line to be drawn, anyway, since morality is an illusion, and the
world is in fact entirely amoral. Even if we don’t go quite this far, many see actual moral codes as in
some sense a sham or a deceit or an instrument by which the strong manipulate the weak. Joseph
Conrad’s Heart of Darkness (1902) and André Gide’s The Immoralist (1902), for example, are both
fictional narratives about Europeans who see the moral systems that had seemed so solid crumble
before their eyes. It seems an affliction suffered by many. Even recent continental philosophers like Gilles Deleuze (1925–95) and Jacques Derrida (1930–2004) have held that one should subvert the objectionable dimensions of what goes by the name of ethics and morality, standing in a posture of permanent critique against it.

The trouble is that even those most cynical about established moralities seem not to be fully fledged
amoralists, since their righteous indignation itself requires that they hold some values. Calls to rebellion,
freedom, and critique may entail subverting existing moral orders, but they seem also to imply
moralities themselves.

5.3 Bad faith and self-deception

Poor old Barbra Streisand and Donna Summer. In “No More Tears” they sang that they always dreamed
they’d find the perfect lover, but he turned out to be just like every other man. Still, it wasn’t their fault
it all went so wrong. “I had no choice from the start,” they sang, “I’ve gotta listen to my heart.”

At the risk of being pedantic, however, surely we do all have the power to choose whether or not to get
involved with a lover, and how far we take the relationship? The trouble is that we would rather kid
ourselves that we are not in control. That way, we avoid responsibility for the consequences of our
actions. But given how common this sort of rationalization is, doesn’t it threaten our capacity to make
moral choices?

Self-deception

The very concept of “self-deception” is a curious one, for it requires that one is both the liar (who knows
the truth) and the victim of the lie (from whom the truth has been hidden). But how is this even
possible?

Perhaps the self isn’t a unitary whole but is actually somehow fractured into discrete parts. One of the
most popular ways of explaining self-deception this way is to divide the self between the conscious and
the unconscious. Sigmund Freud (1856–1939) is perhaps most famous for this gesture. But the same
general idea recurs in various forms throughout the history of ideas.

For Immanuel Kant (1724–1804), the self one is able to observe is only an empirical, superficial self,
behind which deeper selves lie. One might say, in fact, that modern questions about self-deception and
an unconscious begin with Descartes’s (1596–1650) worry, in Meditations on First Philosophy, about
whether or not he is possessed by a demon and whether he may be the source of his own possibly false
ideas about the world and God.

Søren Kierkegaard (1813–55) criticized the modern, scientific, rationalistic age and what passes for
Christianity in it in terms of self-deception. The modern world lulls people into a self-deceptive state in
which they pretend they’re leading meaningful lives predicated on faith and reason, when really they
are steeped in a deep despair or malaise.

It's characteristic of this despair, for Kierkegaard, that people are unconscious of it, refusing to admit it to themselves. They therefore live in an inauthentic state, failing to become authentic, passionate selves. Instead each merely exists as what the Kierkegaardian novelist Walker Percy described in The Moviegoer (1961) as an "Anyone living Anywhere" – not as a true individual but as a neutral, indefinite "one." As Kierkegaard wrote in The Sickness unto Death, "the specific character of despair is this: it is unaware of being despair."

Bad faith

A specific form of self-deception has been called "bad faith" (mauvaise foi), a term of criticism developed by existentialist thinkers like Jean-Paul Sartre (Being and Nothingness, 1943) and Simone de Beauvoir (Ethics of Ambiguity, 1947). It means a number of things, none of them good.

In the first place, bad faith is an effort to avoid the anxiety and responsibility humans must bear because
they are free. To avoid freedom and its responsibility, people say in bad faith that they are merely the
products of society, the results of their upbringing, the unchangeable effects of natural causes. In doing
so they deny their capacities to choose as subjects and stress their status as objects. But, according to
the existentialists, all this is said in bad faith, because on some level it is immediately evident to people
that they are free consciousnesses.

Second, as strange as this sounds, bad faith is manifest when people try to pretend that they are
something, that they have an essence. But people have no fixed essence which defines their being. At
every instant we must choose to be something (a "husband," a "waiter," a "woman," a "homosexual,"
“French,” “American,” “black,” or “white,” an “evolved animal”). But this choice can’t be fixed, solidified,
or made permanent; as soon as the choice is made it’s transcended into a new moment where a new
free choice must be made. Nevertheless, one’s present identities (“I am a leftist Lithuanian professor”)
are claimed as if they were real and enduring.

People also engage in a third form of bad faith when they deny others the same freedom they would
have for themselves. The problem with doing so isn’t simply logical, one of consistency. It also stems
from our knowing that everyone else is also a free consciousness and that practically speaking each
person’s freedom depends on the freedom of others. The urgent effort to prove, for example, that black
Africans weren’t equal to European whites betrays the fact that the slaveholders knew that blacks were
enslaved humans like themselves, not sub-human animals. Those who oppress others, who characterize
them as “cockroaches” (as the Hutu militia characterized their Tutsi victims) and “vermin” (as Nazis
characterized Jews), typically do so in bad faith.

For some, the widespread prevalence of self-deception in humans makes them skeptical of the human
capacity to make authentic, moral choices. We must doubt not only the sincerity of others, but also that
of our own moral reasoning. Might we not be kidding ourselves when we argue for moral values as if
they were authentically our own? For the existentialists, however, there are grounds for optimism. We
can be truly free and avoid bad faith. If we do not share this optimism, then we have to accept that moral discourse will always be infected with self-deception.
5.4 Casuistry and rationalization

Xiao is a manager for a large multinational mining company. He takes ethics very seriously, which is why
he is concerned about his latest project. It requires him to pay a bribe, forcibly evict indigenous people
from their land, employ children, and destroy an important, bio-diverse habitat. He reasons, however,
that bribes are just the local way of doing things, as is the practice of employing children, who actually
make an important contribution to stretched household budgets. The evicted people will get
compensation, and it is not as though western countries don’t have compulsory purchase orders. As for
the environmental damage, the company has pledged to create a new sanctuary near the site. Anyway,
if he refuses, the company will simply get someone else to do it. Xiao is uncomfortable, but his
conscience is appeased.

Are the justifications for Xiao’s actions adequate, or are they merely convenient ways for him to excuse
what’s really morally abhorrent behavior? It’s impossible to tell from such a brief description, but the
suspicion is certainly that a more impartial examination of the relevant rights and wrongs may come to a
different conclusion as to the morality of his actions.

This kind of danger is ever-present in the real world of practical ethics, particularly in business ethics. It
would be too cynical to suggest that the authors of corporate ethics policies are always simply trying to
provide a respectable veneer for their employers’ callous self-interest. But whether the relevant conduct
is commercial or personal, it’s easy to end up looking for moral justifications for what one really wants
to do, even if one's desire to be good is sincere. By contrast, it's hard to assess fairly, dispassionately, and objectively the morality of an action in which one has an interest. There are always arguments to be
found for and against any given action, and since ethics is not like mathematics, it’s easy to give more
weight to the reasons that suit than to those that don’t.

Casuistry

Finding justifications for what one wants to do anyway is sometimes described as "casuistry." But in fact this is a little misleading, since genuine casuistry is a sincere attempt to arrive at solutions to moral
dilemmas on a case-by-case basis, by appeal to paradigm cases and precedents, rather than to a
particular moral framework. This makes it particularly useful for solving real-world debates, since it does
not assume a consensus of moral theory among those attempting to resolve the dilemma. Among other instances, casuistry has a rather noble history in English common law, and it in part grounds the common legal practice today of citing precedent cases to justify rulings. All casuistry requires is that
everyone agrees what the right thing to do is in certain given circumstances, which people holding
different theoretical commitments often do. This is why, although it is not usually described in this way,
a lot of work in applied bioethics today takes the form of casuistry.

Because, however, casuistic thinking leaves a lot of room for interpretation and is not about applying a set of clear moral principles, it's open to abuse, which is why it got a bad name. The Catholic Blaise Pascal (1623–62), in his Provincial Letters (1656–7), for example, lambasted the Church for misusing casuistry to rationalize the sinful behavior of the powerful and privileged; and, of course, a host of Protestant reformers shared his view, reserving special criticism for Jesuit abuses of the casuist method. Where there is a need for subtlety and interpretation there is also room for self-serving evasiveness and rationalization.
Correcting bias

But how can one employ casuistry properly and make sure that its reasoning isn’t distorted by desire or
interest? First and foremost, one simply has to accept that everyone is prone to such distortions, even
those (perhaps especially those) who are utterly confident in their ability to make impartial assessments.

One must, therefore, in the second place, make a careful, conscious effort to correct biases – including biases that may seem imperceptible or from which one seems free. This takes real self-knowledge, vigilance, and care. A useful technique is to ask oneself honestly what solution one really wants to be justified and then make an extra effort to see opposing arguments in their strongest light. This kind of
self-monitoring can compensate for the natural, but regrettable, inclination to follow the arguments, not
where they lead (as Socrates advised in Plato’s Phaedo), but where we want them to go.

Understanding some of the mechanisms of self-deception, avoidance, and denial – as well as some of the typical things that people deny, avoid, or deceive themselves about – can help pull back the cloaks behind which immoral motives commonly hide themselves. Still, another effective technique is to
discuss one’s choice and the justifications for and against it with someone who is both disinterested and
competent in moral reasoning. A disinterested ear is often the best protection against a clever desire.

5.5 Fallenness

How are we to make sense of events like the Rwandan genocide, petty cruelties, and perhaps even
environmental degradation? Typically we look for the causes in poor socialization, ignorance, history, or
political dynamics. These travesties are not inevitable but could all be avoided if we could order our
societies and ourselves better.

But there is an older, now less fashionable, way of interpreting phenomena like these. Human beings
are inclined to evil because they are fallen. Sinfulness is a part of our nature, and to counter it we
require not simply moral and intellectual virtue, but theological virtue and divine assistance as well. In
short, a purely secular ethics which fails to take into account our fallen natures and the gap between us
and the divine is woefully inadequate.

Fallenness and sin

The Abrahamic religious traditions share, broadly speaking, an endorsement of the account of Genesis 2–3, where Adam and Eve eat the fruit taken from the tree of knowledge of good and evil (the very knowledge investigated by moral philosophy!). God had forbidden them to eat this fruit, so in punishment for their transgression He casts them out of Eden.

This transgression or sin and subsequent punishment is called the “Fall.” Its punishments have been
thought to include, variously interpreted, the pain of childbirth, the requirement to labor for
sustenance, mortality, the subordination of women to men, the weakening of the will, the perversion of
desire, and the darkening of the intellect.

These last three in particular suggest limits to what one may expect of people, ethically speaking.
Because the will has been weakened, humanity lacks the rectitude to adhere to moral principle in the
face of adversity or temptation. Because of the perversion of desire, the lust for earthly pleasures
(concupiscence), people can’t be expected to be consistently or naturally inclined to desire the good. On
the contrary, they can be expected to want what’s in fact bad for them and for others, what’s evil.
Because the intellect has been darkened, despite having eaten the fruit of the tree of moral knowledge,
people can be expected to be commonly ignorant about right and wrong and to possess limited
capacities to figure it out on their own. Many Christians hold the additional belief that all people are
born with original sin (the moral stain we inherit as descendants of Adam), and so all humans are
inherently subject to weakness and sin.

Despite their efforts to improve things on their own, people can be expected frequently to fail to be good. War, crime,
and vice of every sort are inevitable. Sins of the intellect and sins of the emotions will be pervasive.

Dealing with or denial of?

One might say that modernity has been in part the effort to overcome through reason and technology
the consequences of the Fall. Medicine and the health sciences work to reverse and limit pain and even
mortality. Machines reduce the need for labor. Modern science and philosophy raise claims to having
acquired knowledge, while modern ethics and political theory struggle to achieve practical wisdom.
René Descartes lays out much of this in his Discourse on Method (1637). But those who find the account
of fallenness compelling are likely to think that there’s vanity in the modern project, that humanity can
only overcome the Fall through divine assistance. For Christians this assistance is typically articulated through concepts such as grace, salvation, redemption, and the sacrifice of the Messiah or Christ.

Martin Heidegger, in Being and Time (1927), developed a different though also ethically relevant conception of "fallenness" (Verfallenheit, das Verfallen). Following Søren Kierkegaard's diagnosis of modern society's pathologies, Heidegger describes how in average everyday life individuals fall prey to idle busy-talk, habit, as well as practical, commercial, and technical projects in ways that alienate them from their authentic and "ownmost" ways of being (as well as from being, Sein, itself).

People who fall into this state of average everydayness can understand themselves only as the impersonal "they" understands them, in the way that what Heidegger calls das Man conceives things.
Individuals become average “they-selves,” one (as the neutral grammatical pronoun). To break out of
this fallenness and averageness and resolutely achieve authenticity is, one might say, the ethical
purpose of Heideggerian phenomenology (despite his claim that there is nothing moral or political about
it). Doing so requires, among other things (as it does for many existentialists), coming to terms with
human mortality, as well as the way we are vulnerable to falling.

Of course, if you do not accept Abrahamic theology, all this talk of fallenness might just sound like old-fashioned guff. But even without religious beliefs, the idea that human beings are by nature inclined
toward wrongdoing must be seriously considered. If accepted, it has major repercussions for what we
think to be possible ethically.
5.6 False consciousness

If you’ve ever heard someone say that they deserve what they have because they’ve earned it, you’ve
encountered an example of what some social critics call false consciousness.

But what on earth could be false about something that in many cases seems so obviously true? It’s
perhaps not false that people who say they’ve earned what they’ve got have worked very hard for it and
perhaps exercised remarkable intelligence, creativity, and sacrifice. There is, however, no divine or
natural law about what sort of return or reward someone is to receive for hard work, intelligence,
creativity, sacrifice, or anything else. It’s only the peculiar social arrangements of our society (as well as,
in many cases, a fair measure of good fortune) that have distributed to any particular individual the
precise amount he or she claims to have earned. Other social arrangements might have distributed far
less or far more.

So, we might define "false consciousness" briefly as a set of beliefs people hold, usually called ideologies, that obscure from them the real social-political-economic relationships that govern their lives and the true nature of the social-political-economic order in which they live. In 1893, a decade after the death of Karl Marx (1818–83), Friedrich Engels (1820–95) remarked in a letter to Franz Mehring that:

Ideology is a process accomplished by the so-called thinker consciously, it is true, but with a false consciousness. The real motive forces impelling him remain unknown to him; otherwise it simply would not be an ideological process. Hence he imagines false or seeming motive forces.

Nevertheless, Marx did lay the groundwork for much of what later thinkers made of the idea. Principally,
Marxian theories of false consciousness rely on Marx’s description in Das Kapital (1867) and elsewhere
of the way that capitalism distorts the self-understanding of the proletariat about its real situation.

Among the principal forms of false consciousness is the understanding people acquire about themselves
through what Marx and others have called the fetishism of commodities. “Fetishism” is a process
whereby people project value upon things and then pretend or convince themselves that it’s there
intrinsically. So, people come to believe that diamonds or BMWs have great intrinsic value, when in fact
they are shiny pebbles and machines whose value comes only from the social world in which they’re
situated. A BMW is likely to have little or no value to a nomadic herdsman in the Himalayas. A diamond
or a stock certificate would have had no value to an ancient Spartan.

Updating the idea

Later critics like Guy Debord (1931–94) and Jean Baudrillard (b. 1929) have described the way in which
devices like the media and advertising convince people that they’re defined and have value to the
extent they buy or own certain things and imitate the images that pervade their lives. In Debord’s terms,
“spectacle” replaces human social relations. In Baudrillard’s formulation, we become images of images,
imitations of imitations, simulacra not of real things but rather of other simulacra. People even begin to
prefer imitations or cyber-realities to reality itself. For example, people prefer Disney Europe to Europe,
resorts to beaches, malls to neighborhoods, Internet relationships to flesh and blood, video games to
sport. The wars people know are not real wars but the spectacular images they see on TV.

Frankfurt School critics like Theodor Adorno (1903–69) describe how even the simplest dimensions of
our lives – even things like lipstick and pop music – hide oppressions at the very time they advance
them.

Even the predominant liberal political beliefs with which people understand and justify the social relations they do observe are, according to many critics, instruments of false consciousness. Talk of
“free” markets blinds people to the coercion and manipulation that are endemic to them. Talk of
“freedom of speech” obscures how speech only actually matters politically if one has access to the
media. Talk of “property rights” masks how the ideology of private property makes it possible for vast
concentrations of it to deprive others of their holdings and degrade the natural world with impunity.

The limit on ethical deliberation implied here, then, is that people steeped in false consciousness cannot be expected to reach sound ethical conclusions when their understanding of themselves and their world is deeply distorted in a way that prevents them from understanding many of the ethically salient features of the realities they face.

Of course, the critique only makes sense if you accept that the various beliefs comprising "false consciousness" are indeed false. They may not be. Moreover, the accusation of false consciousness might sometimes be turned on its accusers. Is it false consciousness to deny that the value of goods is determined by markets, for example? At its worst, saying that something is an example of false consciousness can thus degenerate into mere name-calling: you don't accept what I see as the truth, therefore you must be the victim of false consciousness. Those who wish to level the charge of "false consciousness," therefore, will do well not only to describe the content of the false consciousness they've identified but also to present an error theory which accounts for the mechanism or reasons why reasonable people see things so wrongly. Otherwise it will be difficult to get around the presumption of clear-sightedness.

5.7 Free will and determinism

In law and in everyday morality, people make allowances for mitigating circumstances. A wife who
murders her husband may be given a lighter sentence if she can show that he frequently battered her
and that she committed her crime under sustained stress. People who can demonstrate diminished
responsibility due to mental illness, chronic or acute, will (or at least should) receive more treatment
and less punishment. It’s also widely accepted that to a certain extent a difficult upbringing can make
someone more likely to turn to crime.

What this shows is that people do not believe that free will is all-powerful. Sometimes people’s actions
are partly determined by what has happened to them, and this makes them less responsible for what
they do. But what if free will normally makes less of a contribution to our actions than we think, or even
plays no role at all? What if, when closely scrutinized, the very concept of free will doesn’t make sense?
Wouldn’t that totally undercut our common sense notions of responsibility and blame?

Ted Honderich (b. 1933) maintains that free will doesn't exist at all and that our ordinary ideas of moral responsibility will have to go. On Honderich's view, moral responsibility only makes sense if one accepts "origination": the view that the first causes of human actions originate within human agents themselves, and that these first causes are not themselves caused by anything outside the agents. Honderich
argues that there can be no such thing as origination. Human beings are as much part of the natural, material world as anything else, and in this world everything that happens is the effect of past causes. Causes determine their effects necessarily and uniformly. There is, therefore, simply no room for something called free will to step in and change the physical course of events, whether in the brain or in the ordinary world of human experience that J. L. Austin (1911–60) called the world of "medium-sized dry goods." It follows, then, that determinism is true, and that most ideas we have about moral responsibility are false.

More radically, does the concept of origination even make sense? If nothing at all causes human
decisions of the will, then, as David Hume argued, they’re no different from random events (Enquiry
Concerning Human Understanding, 1748; Section VIII). But it hardly seems palatable to maintain that
moral responsibility rests on something random, a matter of pure chance, without any cause.

Compatibilism

Talking about free acts, in moral discourses and otherwise, may still be acceptable, however, through a strategy known as "compatibilism." This theory accepts that human actions are as much caused by prior events as any others. But it also holds that it makes perfect sense to say that people have free will, so long as by "free will" one means that human actions are not the result of external coercion or outside force. So long as the proximate (that is, nearest) causes of an action are in some
sense within or part of the person acting, especially if the act flows from the actor’s character, the act
can meaningfully be described as a “free” act. If one jumps through a window because one chooses to
do so, it’s done freely (even if that choice was caused). If one is thrown through a window against one’s
wishes, one’s act of defenestration is not a free one. On this account, however, it still seems true to say
that people really could not do other than they do, and that, for many, still undercuts what is necessary
to attribute moral responsibility.
Harry Frankfurt (b. 1929) has argued, using what have come to be known as “Frankfurt-style” cases
(“Alternative Possibilities and Moral Responsibility,” 1969), that even if it’s true that one can’t do
otherwise, it still can make sense to describe one’s action as free. Suppose, for example, someone
possesses a secret device to force you to do X but won’t use it unless you try to do something else
besides X. If you do in fact choose to do X, says Frankfurt, it’s true both that you couldn’t do otherwise
(that alternatives weren’t possible) and that you chose freely. But for many, the simple idea even in
these cases that people really could not do other than they do undercuts what is necessary to attribute
moral responsibility.

Saving free will

The ability to act otherwise than one does is one way to define freedom. Other definitions include ideas
like acting independently of the causal order of the natural world, acting on the basis of reason alone,
acting independently of desire, acting at any time in opposition to one’s current line of action. In any
case, using a variety of definitions, many philosophers have tried to save free will, or at least freedom. In
the Critique of Practical Reason (1788), for example, Kant advanced a “transcendental argument” for the
reality of free will: people recognize that they have moral duties, but moral duties can only exist if
people have free will. Therefore, since in order for morality to make sense free will must exist, it’s
reasonable to “postulate” that people have free wills – even though there is and in fact can be no proper
proof for it and even though some plausible arguments maintain that it doesn’t exist.

Thomas Nagel (b. 1937) adopts a position similar to Kant’s, arguing that free will seems undeniably not
to exist from a third-person point of view on the world – and undeniably to exist from a first-person
point of view. Humans thus seem condemned to endure a perpetual “double vision” understanding of
the reality of free will.

A weaker argument for free will might be described this way: irrespective of the ultimate truth, people
somehow have to act as though they have free will. This seems to be psychologically true: no matter
what people cling to intellectually, they always seem to feel and act as though they’re free. But as a
philosophical solution this option seems unsatisfactory, as it seems to imply that everyone must
inevitably live under a delusion.

Jean-Paul Sartre maintained, in Being and Nothingness (1943), that human freedom is immediately,
phenomenologically evident to consciousness. On the one hand, that option seems to be a disappointing
cop-out – an attempt to resolve the issue through mere assertion rather than careful argument. If
someone simply replies, “Well, I don’t see it that way,” the debate reaches an impasse. All the Sartrean
can respond with is: “Look again.” But, on the other hand, perhaps for many serious philosophical
issues, at some point one reaches what Ludwig Wittgenstein (1889–1951) called bedrock, where one
simply has to make a fundamental philosophical decision, or where ultimately one simply sees it or
doesn’t. Perhaps Sartre’s appeal to what’s simply evident is enough to cut the Gordian knot.

Things for those on the other side of the barricades aren’t easy either. The challenge for those who
reject both origination and Sartrean immediacy is to explain how one can make sense of moral
responsibility while simultaneously not ignoring the disquieting implications determinism has for it. It’s a
tough row to hoe, but an important one. Indeed, this is perhaps one of the most vibrant philosophical
debates today.

5.8 Moral luck

Aisha was driving home through London one day when her mobile phone rang. She didn’t have a
hands-free set, but she answered it anyway. When the conversation finished, she put the phone down and
carried on with her life. Had she been caught by the police, she would have faced a large fine and could
have lost her license.

At the same time, somewhere else, Sophia was also driving home, and she too answered a mobile
phone call manually. But as she was talking, a child ran out into the road in front of her. Distracted, and
with only one hand on the steering wheel, she was unable to avoid a collision. The child died as a result.
Sophia is now facing a prison sentence of up to 14 years. Had she not been on the phone at the time,
she would have avoided killing the child.

What’s particularly interesting about this comparison is that the only difference between Sophia and
Aisha is luck. Had a child run into the road in front of Aisha, she too would have become a killer. So we
have two women, both of whom performed the same acts; but in one case that act led to the death of a
child and in the other case it did not – and only luck determined which was which. Is it fair that one
woman is punished while the other is not?

Can luck enter into morality?

One’s moral standing isn’t usually considered to be a matter of luck or fortune. But situations such as
this suggest it may play a very important role. The law certainly won’t treat the two women equally,
even though their characters and behavior may be just the same. Morally speaking, most would also
consider Sophia more culpable than Aisha, even though Aisha was driving just as dangerously. The
implication seems to be that how good or bad one is depends partly on what the consequences of one’s
actions are, but consequences are in turn determined in part by luck.

Accepting luck as a factor in moral status is certainly a counter-intuitive view, and one with which many
today disagree (interestingly, the ancients seem to have taken fortuna more seriously). We might justify
the resistance to luck by arguing that although the law does and perhaps has to distinguish between
reckless driving that leads to death and reckless driving that doesn’t, morally speaking both women are
in truth equally culpable. Perhaps contemporary moral intuitions that distinguish the two are distorted
by the knowledge of what consequences actually follow. Perhaps either Aisha should be morally
condemned a lot more, or Sophia should be condemned a lot less. Perhaps recognizing that only good
fortune prevents most drivers from becoming careless killers should yield more sympathy for the killers.
Indeed, how many of us can honestly claim to drive with due care and attention at all times?

To deny that moral luck exists at all, however, one needs to deny that actions become better or worse
depending on what their consequences are, since what actually happens is almost always beyond
anyone’s full control. But this option also seems counter-intuitive: surely it does matter what actually
happens. To judge people purely on the basis of their intentions or on the nature of the act itself seems
to diminish the importance of what actually happens.

Constitutive luck

There is another kind of moral luck, known as constitutive luck. How good or bad one is depends a great
deal on one’s personality or character. But character is formed through both nature and nurture, and by
the time one becomes mature enough to be considered a morally responsible adult, these character
traits are more or less set. So, for example, a kind person hasn’t fully chosen to be kind: that’s how she
grew up. Certainly many cruel and nasty people were themselves mistreated as children; that abuse
almost certainly affected the way their personalities developed. Since people don’t choose their genes,
or their parents, or their culture of origin, or a lot of the other factors that affect moral development,
there therefore seems to be another important element of luck in morality.

Martha Nussbaum has argued in The Fragility of Goodness (1986) that for the ancient Greeks not only
does a good life depend upon constitutive luck, it also depends upon good luck in the sense of avoiding
increased danger. The very attempt to be good, says Nussbaum, makes one vulnerable to many bad
things that don’t threaten the vicious. For example, the attempt to fulfill their duties led Hector,
Agamemnon, Antigone, and Oedipus each to tragic ends. Perhaps Socrates might be thought of this way
as well.

Given that the role of luck or fortune in life seems indubitable, but the idea of moral luck oxymoronic,
isn’t the best solution to say that where luck enters in, morality cannot be found? Yet, that too is a
controversial road to follow. Screening out those dimensions of a situation attributable to luck may
leave little left to praise or blame. So, however one looks at it, accepting the role of luck presents a
major challenge to judgments of moral praise and blame – but perhaps something essential, too.

5.9 Nihilism

In the Coen Brothers’ 1998 film The Big Lebowski, nihilism is compared to one of the vilest creeds in
human history – and found wanting. On discovering that the people menacing his friend “the Dude” are
nihilists, and not Nazis as he had thought, the character Walter says, “Nihilists! Jesus! Say what you like
about the tenets of National Socialism, Dude, at least it’s an ethos.”

“Nihilism” is often used as a term of criticism and even abuse. It’s most often hurled by those who wish
to defend “absolute” or divinely grounded morals against those they believe subvert them or the
institutions built around them. But the term has also sometimes been used by the subversives
themselves.

Deriving from the Latin nihil, meaning “nothing,” modern usage of the term “nihilism” seems to have
developed in the wake of its use in Ivan Turgenev’s 1862 novel, Fathers and Sons. It came to
characterize Russian social critics and revolutionaries of the nineteenth century like Alexander Herzen
(1812–70), Mikhail Bakunin (1814–76), and Nikolai Chernyshevsky (1828–89), who were associated
with anarchism and socialism as well as with modern, secular, western materialism generally.

Anarchism, socialism, secularism, and materialism are not, of course, nothing. They comprise very
specific truth-claims and moral values. But achieving their realization and acceptance requires the
destruction or annihilation of the old order – of traditional morals and values and social systems said to
be grounded in something divine or transcendent. After all, these thinkers aimed at the creation of a
new, better world, a truly good world. But creating that world demanded first violently erasing the
old world.

The threat of nihilism

But there’s more to the charge of nihilism than the subversion of things based upon tradition and
religion. Concepts and theories described as nihilistic are commonly taken to imply negative claims like
these: (a) that there is no truth; (b) that there is no right or wrong, good or evil; (c) that life has no
meaning; and even (d) that it’s not possible to communicate meaningfully with one another. In short,
any theory not ultimately grounded or finally justifiable may be subject to the charge of nihilism,
whether its proponents realize it or not.

Most recently, intellectual movements collected under the moniker “post-modernism” – like
post-structuralism and deconstruction – have been called nihilistic. But nearly all things modern have also
been subject to the charge – modern science, evolutionary theory, the Protestant Reformation,
existentialism, pragmatism, modern relativism, rationalism, Kantianism, etc.

There’s often a logical criticism wrapped up in all of this, a critique of consistency or coherence. The
claim that “there is no truth” is itself a truth-claim. The claim that “language cannot communicate
meanings” itself depends upon the ability of language to communicate. But does the claim that there
are no values (no right and wrong) involve holding a value?

Thinkers like Friedrich Nietzsche (1844–1900) and Martin Heidegger (1889–1976) have held that in a
perverse way it does. As they see it, it’s a short hop from asserting that “nothing has value” to positively
affirming the value of nothing. That is to say, nihilistic ideas and social movements, say the critics,
inevitably lead to grotesque outpourings of violence and destruction.

Since nihilism cannot provide any foundation, ground, or reason for morality, ultimately “everything is
permitted.” Since everything is permitted, nothing is prohibited. That nothing’s prohibited ought
somehow to be exhibited and made manifest; therefore every act (even the most extreme acts) ought
to happen. Some blame nihilism, therefore, for everything from the French Revolution’s Terror, the
Holocaust, and the Soviet gulags to pornography, drug abuse, abortion, divorce, petty crime, and rock
and roll.

Overcoming nihilism

Traditionalists blame the modern abandonment of God for these maladies and prescribe a return to
tradition, absolutes, and a religiously based society. One of the most influential analyses of the nihilistic
characteristics of the modern world, however, inverts this diagnosis and places responsibility for nihilism
squarely upon the western philosophical and religious traditions themselves.

Nihilism, says Nietzsche, actually results from the Christian-Platonic tradition, from its attempts to
acquire truth that is singular, universal, and unchanging, together with its promoting the morals
developed by a weak and conquered people. One might call these pathologies the “God’s Eye”
conception of truth and “slave” morality. After centuries of careful philosophical scrutiny philosophers
have learned that truth of that sort is unavailable to humans. The frustration and exhaustion of this
disappointing realization (the realization that “God is dead”) together with the soporific effects of slave
morality have finally resulted in thanatos or the desire for nothingness and death, even the desire to
wreak revenge upon the world for this disappointment.

For Nietzsche, our task is not to return to the pathological traditions and philosophies that produced
nihilism but, rather, to overcome nihilism. Overcoming nihilism requires first recognizing and taking
responsibility for the fact that we are the source and creators of value. Next, overcoming nihilism
demands that we find within ourselves the strength to make new affirmative values, healthy values that
honor our human finitude, our embodiedness, and our desires, that love the human fate (amor fati) and
don’t lead to nihilism. Existentialism has in many ways followed Nietzsche in trying to achieve this
project.

5.10 Pluralism

Jean-Paul Sartre (1905–80) told a story of a young man who was caught in a dilemma between his duties
to his country and to his mother. Should he join the Free French Forces to fight Nazism or look after his
sick, aged parent? Many moral theories would maintain that there must be some way of determining
which of these duties carries more weight. Sartre disagreed because he thought it was finally up to each
individual to choose his or her values, and no person or system could do it on anyone else’s behalf. But
there’s another explanation for why the dilemma could be irresolvable: perhaps there are many values
worth holding, no objective way of determining which should take priority over others, and sometimes
these values simply conflict. This is the position known as pluralism, a doctrine most closely
associated with Isaiah Berlin (1909–97).

Pluralism and relativism

Many critics claim that pluralism amounts to no more than relativism, so it is worth addressing this
accusation directly in order to clarify what pluralism entails.

Relativism holds that there are no absolute moral values and that what’s right or wrong is always
relative to a particular person, place, time, species, culture, and so on. This position, however, differs
from pluralism in a number of important respects. For one thing, the pluralist may well believe that
moral values are not relative. For example, she might claim that the young man in Sartre’s example
really, objectively, has responsibilities to both his mother and his country. Nevertheless, the nature of
morality is such that these duties cannot be weighed up against each other with any kind of
mathematical precision to determine which has priority over the other. They both have a claim on him,
yet he cannot adhere to both.

But conflicts among moral claims may not simply be a matter of imprecision. For the pluralist, there are
many different values worth holding and many moral claims that may be made upon us. As W. D. Ross
(1877–1971) and others have argued, goods, duties, values, claims, and principles may be irreducibly
plural and complex. In certain cases, the constituents of this plurality may stand in conflict, and that
conflict is simply incommensurable – that is, there may simply be no way to reconcile them.

Even if the pluralist does not hold that moral values are objective, the reason she has for claiming that
moral values are plural and in conflict may not collapse into crude relativism. While there may be many
ways in which human life has value, there isn’t an unlimited variety. Some moral options – for example,
genocide – are not permissible. In addition, living in accordance with one option may in fact close off
others. Take the example of the values of communal and individual life. There’s value in living the kind
of life in which one is very closely wedded to one’s community, and there’s a different kind of value in
living as an autonomous, unencumbered individual. But if one lives to reap the benefits of one of these
ways, the benefits of the other must be sacrificed. So, the values of community and individuality may be
both equally important yet incommensurable.

This approach isn’t a form of relativism because it’s consistent with the idea that both ways of life have
absolute value. Nor, again, is just any way of living valuable: there are limits to the plurality of value.
While both community and individuality have value, racial purity does not.

The consequences of pluralism

The key claim of pluralism is simply that at least some values defy being pinned down in a hierarchy,
whereas many other systems of morality contend that it will always be possible to determine which of
our values are more fundamental than others and should thus take priority when there’s a clash.

In practice, this means one has to accept that not all moral disputes can be resolved to everyone’s
satisfaction, and this isn’t just because some people are mistaken in what they see as most important. If
pluralists are right, then there are serious limits on the extent to which moral disagreements can be
settled. Sometimes, the best we can do is to negotiate and reach accommodations with others, not
actually agree on what value is superior to others.

This is particularly important for multicultural societies, where the plurality of values is more evident. A
common ground can’t always be found, but people must still live with each other. The pluralist warns
that insisting that all moral disagreements are in principle resolvable forces people to conclude that
those who disagree with them are fundamentally wrong, irrational, and immoral. That in turn generates
tension and conflict, often violence. Pluralism offers the promise of a more peaceable alternative.

5.11 Power

The discourses orbiting around the recent war in Iraq include many arguments that the war is unjust,
unnecessary, poorly executed, or illegal. Dealing with these arguments directly is one of the main ways
in which the morality of the war has been debated. But there has been another way of criticizing these
arguments, one that refuses to take any of the arguments at face value. This starts with the question cui
bono – who benefits? Ask this question, many people say, and you will find the real reasons for war – or
opposition to it. What people actually say is beside the point.

This approach reflects a strand in philosophy that analyzes events and discourses in terms of power
relations. Look at the disagreeing parties in the debate and you’ll find that each has some sort of
interest in the stance it takes. The stance, then, whatever it appears to be, is fundamentally a device for
protecting, securing, or enhancing its own power. The discourses about promoting democracy,
advancing human rights, ensuring national security, upholding the requirements of international law,
are therefore often or even always deployed to advance other agendas. Those opposed to the war have
claimed these agendas might include securing access to oil, undermining Saudi power in the region,
protecting Israel, stemming the advance of Russian and European power, weakening international
institutions, galvanizing domestic support for the current government, creating a distraction and
financial crisis to justify the dismantling of American social programs, transferring wealth to the
shareholders of specific corporations, or weakening Islam. Those in favor of the war can also claim the
anti-war movement is motivated by the desire to increase the power of Europe, the left, Ba’athists, or
Islam.

Taken to its extreme, this kind of analysis claims that, instead of making us excellent, or piling up
treasures in heaven, or making more people happier, morality is largely, even completely, about power.
Moral principles and moral terms are actually clever instruments of manipulation.

Marx, Foucault, and hierarchy

There are many ways to think about the way power works. One way is in a top-down fashion, where
those above (the powerful) exert their power over those below (the powerless or less powerful). The
classical Marxian model seems to follow this rule: owner/slave, lord/serf, capitalist/proletarian; that is,
those who control the means of production (on top)/those who work the means of production (below).
One of the things power of this sort can do is dictate the terms of moral and immoral, right and wrong,
just and unjust.

So, slave owners, aristocrats, and capitalists invent systems of morality and politics that explain, justify,
and secure their dominant position. Some people are born slaves and are intrinsically well suited to it,
the aristocrat Aristotle (384–322 BCE) claimed. Slavery is actually good for slaves, American slavers
argued. God has established the hierarchy where lords rule, said the lords. Their blood is superior. They
create, cultivate, and sustain the refinements of civilization in ways the lower classes cannot. Capitalists
have worked harder and smarter. They’ve been frugal, thrifty, diligent, disciplined, and have invested
wisely.

It’s no wonder, then, that Karl Marx and Friedrich Engels asserted that “The ruling ideas of society are in
every epoch the ideas of the ruling class” (The German Ideology, I. B, 1845–6).

But, of course, power isn’t simply exerted in a top-down way. Those underneath often struggle against
those above, sometimes successfully. Those who occupy lower rungs in the hierarchy often marshal
clever and effective forms of resistance and opposition.

There are, however, other models of power besides the top-down and bottom-up channels of hierarchy.
Sometimes power struggles exist among those on the same rung. Sometimes players in power struggles
change sides or play both sides against each other. Sometimes different power games go on at the same
time, some along the lines of sex, other times through ideas about race, mental illness, criminality,
economic status, political affiliation, family role, species, and personal history. Often these lines of
power and struggle conflict with one another. Sometimes an individual may even be torn in different
directions by different moral discourses, different lines of struggle.

For thinkers like Michel Foucault (1926–84), there is no grand system governing society – no single
capitalist system, patriarchy, imperialist or racist order, etc. Rather, there are countless power
relationships constantly changing, realigning, breaking apart, and reconfiguring. Power is more like a
kaleidoscope or a plate of spaghetti than a pyramid or a chain. On this view, to see something like the
Iraq war as being purely about one group exerting its power over another is far too simplistic.

5.12 Radical particularity

In the debate preceding the invasion of Iraq in 2003, both supporters and critics appealed to past
precedents to strengthen their cases. Critics pointed to other attempts by western nations to interfere
with the internal affairs of other states, while supporters compared leaving Saddam Hussein in power to
the appeasement of Hitler.

Almost all moral debate requires some comparison. Similar cases require similar treatments, and what is
right in one instance is also right in another, relevantly similar one. But then, as Jacques Derrida (1930–
2004) puts it in The Gift of Death (1992): “tout autre est tout autre” (“every other is completely other”).
No two individuals are the same, let alone identical. No two situations are utterly alike. Words don’t
mean precisely the same thing to me as they do to you, not the same thing in this context as in another,
not the same thing on this reading as another, not the same thing this time as another. One might say
that the very concept of sameness is itself problematic. There are a number of ethical implications to
this.

The law, justice, and violence

Laws, rules, and principles are by definition general. None of them indicates precisely which rules apply
to which cases in which manner. None of them can say whether a particular circumstance presents an
exception. It’s not possible for them to do so. So, when people appeal to a law, principle, or rule in some
particular case, they can in fact only do so by making an utterly singular and unique decision, and that
decision cannot be strictly determined by anything general.

The impossibility of avoiding undeterminable, foundationless choices about what to do, how to live, and
what to believe was something Søren Kierkegaard (1813–55) emphasized as characteristic of the human
existential condition. It’s something that for him is most radically faced in a “leap of faith.” It’s a leap
that, like all ethical choices, no reason, no principle, no theory could ever fully justify. When made
“authentically,” decisions like this particularize the self in a radical way (Fear and Trembling, 1843).

Laws, rules, and principles by their very nature attempt to produce order, regularity, consistency, and
sameness in human practices. The same rewards are to be distributed for the same work; the same
punishments are to be administered for the same crime. Laws, etc., like moral theories, would pretend
to create an utterly closed system – a system that deals in a regular fashion with the same sort of cases in
the same way without any arbitrary judgment. But if the presumption of sameness is baseless, then isn’t
it the case that this effort to make things the same necessarily involves a kind of violence against
particularity? Mustn’t the effort to expel the arbitrary, to close or complete that which cannot be closed
or completed, necessarily lead to violence against whatever resists, what must resist? In short, aren’t
ethical rules, as rules, themselves unethical?

To the inevitably unethical nature of ethics, Derridian justice responds with what might be called
permanent critique (echoing Leon Trotsky’s call for “permanent revolution”). Permanent critique
prevents or at least limits the way laws, rules, and principles must be used violently by subverting the
fantasy of sameness and non-arbitrariness that captivates those who wield them.

It’s a stirring call to arms. But what positive ideals of justice and morality does this make possible? What
vision of a good or at least better society can such a view of justice and ethics yield us? The worry is that
in its refusal to be pinned down and to accept any appeal to the general or the universal, such a
permanent critique becomes hollow.

5.13 The separateness of persons

Jane is an easy-going, hard-working person who does not let misfortune bother her. She has a
moderately well paid job and has recently bought a small car, which gives her some pleasure, even
though she doesn’t use it very much. Mary, in contrast, is lazy and hard to please. But one thing she
would really like is a car, which she can’t currently afford, partly because she doesn’t work very hard. If
she had one, she’d be much more content. Mary and Jane both think that people should do whatever
would increase the sum total of happiness. So Mary tries to persuade Jane that she has a moral duty to
give her the car. After all, it will make Mary much happier, whereas Jane will soon get over the loss –
she always does. What reason has Jane to say no?

Most people would think that Mary’s suggestion is outrageous. Jane has worked to get her car, while
Mary has been relatively idle. Yet, Mary is saying she should have Jane’s car, not because that would be
a kind and generous thing for Jane to do, but because it’s the morally right thing. Ridiculous, no?

The trouble is that if one takes act utilitarianism seriously, Mary has a strong argument. Utilitarianism
insists that everyone’s interests should be considered equally, and that the right action is the one that
increases the general happiness. This opens up the possibility that some people should be made worse
off, even though they have done nothing to deserve any deprivation, simply because that would result
in an increase in the general happiness.

What this seems to violate is a principle known as the “separateness of persons.” Individuals are not
simply carriers of welfare, happiness, or utility that can be topped up, plundered, or combined like cups
of water in order to achieve a fairer distribution of these goods. Harm to one individual cannot be compensated by
benefits to another. If a person chooses to sacrifice some of his or her own welfare for the sake of
another, that’s an act of generosity, not the fulfillment of a moral obligation. Any moral system that
ignores this – as utilitarianism allegedly does – is therefore flawed.

Against the separateness of persons

It’s possible, however, to argue that the separateness of persons has no real moral significance, and that
its apparent obviousness is illusory. For instance, in the case of Mary and Jane, other forms of
utilitarianism, for example rule utilitarianism, just wouldn’t demand that Jane give Mary her car. If one
considers the whole picture, it’s clear that a society operating upon rules that reward the lazy or don’t
allow individuals to keep the fruits of their labors will be dysfunctional, resentment-ridden, and unproductive.
So, contrary to appearances, utilitarianism doesn’t necessarily require that the separateness of Jane’s
person be denied on moral grounds in order to deal with Mary’s request.

Still, it’s not clear at all either that people are fully separate (see 3.12 Individual/collective) or that, even
if they are, it follows logically that redistributions of goods are unjust. Redistributions may be desirable
for non-utilitarian reasons, say for reasons of duty or virtue. In addition, once one accepts that transfers
of welfare may be limited by other considerations (e.g. the desire for security and stability of property
and for effort and creativity to be rewarded), the idea that such transfers are unjust becomes less
plausible. European welfare states, for example, routinely redistribute wealth from the rich to the poor
through the taxation system, and most Europeans think this is a requirement of justice, not an affront to
it.

Furthermore, the principle of the separateness of persons may lead to repellent consequences of its
own. For example, suppose that the lives of many millions could be significantly improved by reducing
the quality of life of a few of the best off in a very small way, a way that left them still much better off
than the rest. Unyielding insistence on honoring the separateness of persons would, however, prohibit
anyone from doing so. Is that prohibition something we should be morally willing to accept?

5.14 Skepticism

In June 2002, a local council of elders in the Pakistani village of Meerwala allegedly sentenced 29-year-
old Mukhtar Mai to be gang raped by the male members of another local family in retribution for an
allegedly improper relationship that Mukhtar’s teenage brother had developed with one of the female
members of the other family. International criticism of the sentence, as well as criticism from many
quarters within Pakistan, was fierce.

But who’s to say, and on what basis, that this punishment is unjust or just? Is it even possible to justify
any moral claim, principle, or conclusion in anything but a provisional way? Are there really any moral
“facts” or “truths” about her sentence, at least any that can actually be known? Even if there are, is
there any reason to act morally or to care about morality’s commands? The constellation comprising
these and other questions has come to be called “moral skepticism.”

Moral skeptics commonly hold that moral beliefs have purely subjective or internal bases, usually in
feeling, and that no objective or external dimensions of the world can either explain or define moral
practice and language. So, on this score, egoists, hedonists, and even moral sentiment thinkers would
qualify as skeptics.

This recent usage, however, deviates from earlier usages, and overlaps quite a bit with moral nihilism.
Ancient Hellenistic skeptics, like Pyrrho of Elis and Sextus Empiricus, seem to have held more cautious
attitudes toward the possibility of moral truth. Rather than concluding negatively or positively about
whether some doctrine is true, these skeptics withheld judgment, neither affirming nor denying. This, in
turn, led them to a tranquil, undisturbed state (ataraxia), freeing skeptics from the conflict and
disturbance of dogmatic belief. In particular, Hellenistic skeptics refused the Stoics’ claim that people
can apprehend the natural law and moral cataleptic impressions, which supposedly provide an
indubitable and secure ground for moral argument and judgment. Although caricatures like those
presented by Diogenes Laertius (probably third century CE) depict skeptics as paralyzed and unable to
act (unable to move out of the way of runaway carts, for example), Hellenistic skeptics did act and
reflect about action. Instead of pretending to absolute, divine, indubitable or universal moral truths,
skeptics recommend deferring to custom, to what seems natural, and to the imperatives of feeling.

Early modern thinkers like Michel de Montaigne (1533–92) followed the ancients in this understanding
of skepticism, criticizing dogmatists and rationalists for trying to become angels but instead becoming
monstrous (“Of Experience,” in Essays). For Montaigne, it’s better to accept that one is no more than a
finite, history- and culture-bound human being.

Answering skepticism

Many of the claims that motivate moral skepticism are accepted by those who nonetheless believe
meaningful morality is still possible. Non-cognitivists, for example, accept that there are no moral facts
as such, but they still believe that moral discourse is meaningful and fruitful. What tips people over to
skepticism is the nagging concern that morality may only be possible if there are absolute moral facts
that we can know, but that there are no such facts. As with other forms of skepticism, critics claim that
moral skepticism only gets off the ground because it sets an impossibly high standard for what can
qualify as genuine ethics, and then complains that nothing can meet that standard.

On this view, the serious claims of skepticism simply undermine arrogant moralists who purport to base
their claims on the apprehension of universal natural rights, divine moral principles, natural law, or the
commands of reason. In any case, skepticism recommends that if effective moral criticism is to be made,
it must be done in ways that make sense in terms of the feelings, customs, traditions, and natural
psychological features of those involved.

5.15 Standpoint

G. W. F. Hegel’s 1807 classic, Phenomenology of Spirit, tells an interesting story about the relationship
between a master and a slave. While at the outset, the master in every way appears to hold a superior
position to the slave, by the end of Hegel’s exposition, we find that things are decidedly more complex
and that the slave has achieved certain capacities denied the master – including the capacity to
apprehend various truths the master cannot know. Karl Marx (1818–83) adopted this “master–slave
dialectic,” substituting the exploited working class for the slave and the exploiting ruling class for the
master. Jean-Paul Sartre (1905–80), too, drew on the idea, using it to devastating effect when
he defended violent rebellion against colonialism in his Preface to Frantz Fanon’s Wretched of the Earth
(1963). The insight common to all three thinkers is that things look very different from different points
of view. This insight underwrites a branch of philosophy that’s come to be called “standpoint theory.”

The claims of standpoint theory

In its most basic form, standpoint theory argues two propositions: First, what appears to be true or good
or right to people is intrinsically related to the social, economic, and gendered position from which they
see it. Second, moral reasoning is neither uniform nor universal. For a very long time, philosophers have
held that reasoning is the same for any rational being at any place and any time, like 2 + 2 = 4. But if
moral reasoning is tied to one’s standpoint, then those in different standpoints will reason about ethics
differently. Contrary to simple relativism, however, not all standpoints are morally or epistemologically
equivalent.

While, for example, the wealthy may believe they understand the world better than the poor, the
situation is actually just the reverse. The wealthy, because of their snobbery and their fear of the poor,
isolate themselves in protected enclaves – seeing the world only from the top of the skyscraper, as it
were. The poor, by contrast know both life at the bottom (where they live) and life at the top (where
they work). Similarly, minorities know their own communities as well as the larger majority society
because they must circulate in both. Those belonging only to majority races and religions, however, tend
to know only themselves.

It has been feminist theorists, however, who have most fully developed the concept of “standpoint.”
Women, say these theorists, hold distinctive standpoints both as subordinates in the patriarchy and in
their roles as mothers, caregivers, and the organizers of various social networks. Theorists like Sara
Ruddick, in her book Maternal Thinking (1989), have accordingly argued that maternal practices render
women more ethically competent to understand and resolve moral and political conflicts.

Attractions

One advantage often attributed to standpoint theory is that it allows theorists to attribute specific
abilities to a class of people without claiming that the members of that class possess them in an
essential way or by nature. If blacks or women, for example, possess superior capacities of some sort,
they do so not because of some inherent essence that defines them but rather simply through their
contingently occupying certain standpoints in the social order. So, in fact, males can adopt at least some
of what are at present female standpoints when they start thinking and acting from that standpoint,
when they take up “maternal practices.”

If standpoint theory is correct, then significant, perhaps decisive weight must be given to voices from
standpoints that have long been ignored or silenced, from the accounts, judgments, and narratives
articulated by the oppressed. For example, with regard to issues of the sexual harassment of women,
women’s voices must be placed in the foreground. Moral assessments concerning the poor, the working
classes, prisoners, and racial minorities must be attentive to the way things look from their standpoints.

Critique

Sometimes it’s easy to tell whose standpoint has been neglected. For example, it was clear that the
voices of blacks under South African apartheid should have been given a greater hearing. But perhaps
some cases aren’t so clear. In the case of the Israel–Palestine conflict, each adversary claims the
standpoint of the oppressed, besieged, and victimized: Israeli Jews claim a privileged standpoint as
victims of present and historical anti-Semitism surrounded by avowed enemies; Palestinian Arabs claim
the standpoint of the dispossessed and of those living under illegal, brutal, racist occupation. How does
one rank or adjudicate the competing claims of different standpoints?

Moreover, doesn’t their superior education, access to information, and opportunity for travel tip the
balance back in favor of the standpoints of privilege? Isn’t it true that oppression brings deprivation
rather than elevation, ignorance rather than understanding? If standpoint theory is right, then, doesn’t
it lead to the rather incredible conclusion that since the oppressed understand things better and possess
better moral capacities, oppression, and deprivation aren’t quite so bad after all? Or at least doesn’t it
lead to this strange trade-off: privileged ignorance on the one side or oppressed wisdom on the other?
Which would you choose?

There’s also the danger of presenting the viewpoint of a particular social group as being more
homogeneous than it really is. Can we really speak of a single, uniform standpoint that, say, all women, all
workers, all members of a minority class, or even all slaves share? Or would that mask the individuality
of people who happen to belong to a certain group?

5.16 Supererogation

Siblings Sly, Freddie, and Rose always entered the national lottery together, and one day they won $3
million – $1 million each. Sly spent some and invested some, but gave nothing away. Freddie gave away
20 percent to charity. Rose, however, bought herself a bottle of cheap champagne and gave away the
remaining $999,975 to provide clean water for thousands of people in Tanzania.

When we think about what morality demands of us, many think that it requires a certain lack of
selfishness. Sly may not be the most evil person alive, but a good person would have shared their good
fortune at least a little, perhaps as much as Freddie. But Rose’s generosity seems to go over and above
what could reasonably be expected of her. Giving all her winnings away is said to be a supererogatory
act. People praise such acts as good, but they don’t criticize those who do not perform them. This is
because it’s generally recognized that acts like Rose’s involve doing more than one is morally obliged to
do.

The exceptional nature of supererogatory acts means that they’re thought to merit special praise. For
example, the Congressional Medal of Honor is presented to a soldier who “distinguishes himself
conspicuously by gallantry and intrepidity at the risk of his life above and beyond the call of duty.” A
soldier’s simply performing his or her duty is respectable and honorable, but merely dutiful conduct
doesn’t merit an award like this. There are, it therefore seems, morally praiseworthy forms of conduct in
addition to those that morality requires. There is, one might say, “heroic virtue” in addition to “ordinary
virtue.” Tzvetan Todorov raises this issue with particular poignancy in his reflections on moral life in
concentration camps, Facing the Extreme (1996).

A special category?

Some moral theories, however, accommodate the supererogatory more easily than others.
Deontological or duty-based ethics tend to specify a limited range of acts that people are duty-bound to
perform, thereby leaving plenty of space to do more, if one so wishes. But act consequentialist theories
can seem actually to require things that one would ordinarily think of as supererogatory.

For example, let’s imagine Rose has a comfortable home and lifestyle before she wins the lottery. The
extra pleasure she will get out of life from the winnings (the increase in marginal utility, as economists
like to say) is therefore fairly minimal, considering that most research seems to suggest that once a
comfortable material standard of living has been achieved, happiness does not increase much more with
increased wealth. If, however, she spends the money on clean water provision for Tanzanians,
thousands of people see their welfare and happiness increase significantly. Since this is the course of
action that yields the best consequences by far, it would seem wrong for her not to do it. So, what
seems like a heroic action turns out to be one everyone in her position should be expected to perform.
Act consequentialists, therefore, would seem to be committed to the view that supererogatory acts are
very rare.

Exceptional but not supererogatory

This needn’t mean, however, that for consequentialists the intuition that some moral actions are more
heroic than others is simply mistaken. It could be accepted, for example, that although people are
equally bound by all moral duties, human nature and social circumstances make some duties much
harder to perform than others. Rose isn’t to be praised, therefore, because what she did was beyond
her duties, but because the vast majority of human beings would find fulfilling this duty very difficult.

Another way to save the intuition that some acts are exceptionally praiseworthy without recourse to
the supererogatory is to claim that some duties have a stronger claim on us than others. For example,
the duty not to kill others makes so strong a claim that we legislate against it. The duty to be honest
with our spouse seems to make a slightly weaker claim. Hence, lying to one’s spouse about a serious
matter isn’t something people consider a sufficiently serious breach of duty to pass laws against it; but it
is considered serious enough to warrant various kinds of reprimand and social sanction. The duty to give
away wealth seems to make an even weaker claim. Not giving away a portion of one’s wealth, therefore,
although thought by many to be a violation of duty, doesn’t make a sufficiently strong claim upon us to
warrant much disapproval at all.

One problem with this solution, however, is that while it explains why sometimes people aren’t
punished for failing in their duties, it doesn’t explain why they’re praised in extraordinary ways for
fulfilling them. It’s not just that people don’t blame those who fail to give away a portion of their wealth;
they vigorously praise people who do.

It remains a serious possibility, therefore, that we should all act like Rose in the same circumstances and
that our surprise that she was so generous does not show that she acted above the call of duty, but that
we so often fail to fulfill the duties that fall upon us.

5.17 Tragedy

An airplane has been hijacked and is heading for a major city, where the hijackers say it will be
deliberately crashed, bringing devastation and death to thousands. The air force commander doesn’t
believe it’s right to kill civilians, especially those on one’s own side of a conflict. But the only way he can
stop the suicide mission is to order the plane shot down above an unpopulated area, killing
approximately 200 innocent passengers – as well as, of course, the hijackers.

Most people would say that the commander is right to order the plane shot down. Yet, no matter how
one looks at it, the decision involves killing 200 innocent people. It’s true that it seems likely that they’re
going to die anyway. But isn’t there a moral difference between killing and letting die? If someone’s
going to die soon, does that mean it’s okay to kill that person? Isn’t killing the innocent, even to save
other innocents, morally wrong?

No good can come from it

One might say that this is an example of a moral tragedy. In the dramatic sense, a tragedy occurs when a
bad outcome is the inevitable consequence, usually of the protagonist’s fatal flaw. By contrast, a moral
tragedy occurs when, no matter what one does, something morally bad must result, and the best one
can hope for is to do the least bad thing. In morally tragic situations the choice is not between the good
and the bad, but the more and less bad. Indeed, according to Martha Nussbaum (b. 1947), trying to lead
a morally good life exposes one to moral tragedy. Goodness, in her rendering, is a fragile thing. Others,
following Stanley Cavell (b. 1926), have argued that the pathological qualities of certain philosophical
conundrums, especially those related to skepticism, lead to tragic results, at least in the dramatic sense.

Although the thought that some choices leave us with no truly good option seems perfectly
understandable, there is nonetheless something odd about saying that someone did wrong if what he or
she did was the best thing they could do under the circumstances. For this reason it might be thought
that, contrary to appearances, moral tragedy is impossible: there’s always some best thing that one can
do; and if that is indeed what one does, one does no wrong. But there are several ways of explaining the
seeming paradox of rightly choosing the wrong thing while retaining the idea of moral tragedy.

Good and bad; right and wrong

The key is to distinguish between the good and the bad, and two senses of right and wrong. If one thinks
of “good” and “bad” as pertaining to outcomes or consequences, and “right” and “wrong” as
pertaining to actions, then it clearly is possible for right actions to have bad outcomes (and wrong
actions good ones). In this schema, it’s quite easy to explain moral tragedy in terms of people doing the
right thing, even though what results is a foreseeable bad. Moral tragedy, on this view, is about the
inevitability of bad consequences, not of performing a wrong act.

This solution, however, isn’t available to consequentialists, for whom an action must be wrong if its
consequences are bad. They do, however, have another way of making moral tragedy sound more
plausible. “Right” and “wrong” also bear the senses of “correct” and “incorrect.” When someone chooses
the lesser of two evils, therefore, it’s true to say that they do wrong. But in another, important sense,
they did the right thing: they chose correctly among the options available to them. It doesn’t make what
they did morally right, but it absolves them of any blame for the bad
consequences.

Whether moral tragedy is or isn’t avoidable, to say that someone has behaved in a morally wrong but
nevertheless correct way such that he or she is not morally culpable looks like a rather uncomfortable
conceptual contortion. But perhaps it’s a necessary one. It is usual to think that if someone knowingly
acts wrongly and wasn’t forced to do so, then that person is to blame for the act. But perhaps it should
also be recognized that when there are no good options available, a person is, in a sense, forced to do
wrong. In such cases, therefore, although the wrong is done knowingly, because the wrong was forced
it’s not blameworthy. This seems particularly pertinent in the case of political leaders, who often do find
that their options are limited by circumstances. It’s not only when there’s only one choice that free will
is compromised.

Utilitarianism

Suppose you are on an island with a dying millionaire. With his final words, he begs you for one final
favor: “I’ve dedicated my whole life to baseball and for fifty years have gotten endless pleasure rooting
for the New York Yankees. Now that I am dying, I want to give all my assets, $5 million, to the Yankees.”
Pointing to a box containing money in large bills, he continues: “Would you take this money back to New
York and give it to the Yankees’ owner so that he can buy better players?” You agree to carry out his
wish, at which point a huge smile of relief and gratitude breaks out on his face as he expires in your
arms. After traveling to New York, you see a newspaper advertisement placed by your favorite charity,
World Hunger Relief Organization (whose integrity you do not doubt), pleading for $5 million to be used
to save 100,000 people dying of starvation in Africa. Not only will the $5 million save their lives, but it
will also purchase equipment and the kinds of fertilizers necessary to build a sustainable economy. You
decide to reconsider your promise to the dying Yankee fan, in light of this advertisement.

What is the right thing to do in this case? Consider some traditional moral principles and see if they help
us come to a decision. One principle often given to guide action is “Let your conscience be your guide.” I
recall this principle with fondness, for it was the one my father taught me at an early age, and it still
echoes in my mind. But does it help here? No, since conscience is primarily a function of upbringing.
People’s consciences speak to them in different ways according to how they were brought up.
Depending on upbringing, some people feel no qualms about committing violent acts, whereas others
feel the torments of conscience over stepping on a gnat. Suppose your conscience tells you to give the
money to the Yankees and my conscience tells me to give the money to the World Hunger Relief
Organization. How can we even discuss the matter? If conscience is the end of it, we’re left mute.

Another principle urged on us is “Do whatever is most loving”; Jesus in particular set forth the principle
“Love your neighbor as yourself.” Love is surely a wonderful value. It is a more wholesome attitude than
hate, and we should overcome feelings of hate if only for our own psychological health. But is love
enough to guide our actions when there is a conflict of interest? “Love is blind,” it has been said, “but
reason, like marriage, is an eye-opener.” Whom should I love in the case of the disbursement of the
millionaire’s money: the millionaire or the starving people? It’s not clear how love alone will settle
anything. In fact, it is not obvious that we must always do what is most loving. Should we always treat
our enemies in loving ways? Or is it morally permissible to feel hate for those who have purposely and
unjustly harmed us, our loved ones, or other innocent people? Should the survivors of Nazi
concentration camps love Adolf Hitler? Love alone does not solve difficult moral issues.

A third principle often given to guide our moral actions is the Golden Rule: “Do to others as you would
have them do to you.” This, too, is a noble rule of thumb, one that works in simple, commonsense
situations. But it has problems. First, it cannot be taken literally. Suppose I love to hear loud heavy-metal
music. Since I would want you to play it loudly for me, I reason that I should play it loudly for you even
though I know that you hate the stuff. Thus, the rule must be modified: “Do to others as you would have
them do to you if you were in their shoes.” However, this still has problems. If I were the assassin of
Robert Kennedy, I’d want to be released from the penitentiary; but it is not clear that he should be
released. If I put myself in the place of a sex-starved individual, I might want to have sex with the next
available person; but it’s not obvious that I (or anyone else) must comply with that wish. Likewise, the
Golden Rule doesn’t tell me to whom to give the millionaire’s money.

Conscience, love, and the Golden Rule are all worthy rules of thumb to help us through life. They work
for most of us, most of the time, in ordinary moral situations. But, in more complicated cases, especially
when there are legitimate conflicts of interests, they are limited.

A more promising strategy for solving dilemmas is that of following definite moral rules. Suppose you
decided to give the millionaire’s money to the Yankees to keep your promise or because to do otherwise
would be stealing. The principle you followed would be “Always keep your promise.” Principles are
important in life. All learning involves understanding a set of rules; as R. M. Hare says, “Without
principles we could not learn anything whatever from our elders.... Every generation would have to start
from scratch and teach itself.”1 If you decided to act on the principle of keeping promises, then you
adhered to a type of moral theory called deontology. In Chapter 1, we saw that deontological systems
maintain that the center of value is the act or kind of act; certain features in the act itself have intrinsic
value. For example, a deontologist would see something intrinsically wrong in the very act of lying.

If, on the other hand, you decided to give the money to the World Hunger Relief Organization to save an
enormous number of lives and restore economic solvency to the region, you sided with a type of theory
called teleological ethics. Sometimes, it is referred to as consequentialist ethics. We also saw in Chapter
1 that the center of value here is the outcome or consequences of the act. For example, a teleologist
would judge whether lying was morally right or wrong by the consequences it produced.

We have already examined one type of teleological ethics: ethical egoism, the view that the act that
produces the most amount of good for the agent is the right act. Egoism is teleological ethics narrowed
to the agent himself or herself. In this chapter, we will consider the dominant version of teleological
ethics: utilitarianism. Unlike ethical egoism, utilitarianism is a universal teleological system. It calls for
the maximization of goodness in society, that is, the greatest goodness for the greatest number, and not
merely the good of the agent.

CLASSIC UTILITARIANISM

In our normal lives we use utilitarian reasoning all the time; I might give money to charity when seeing
that it would do more good for needy people than it would for me. In time of war, I might join the
military and risk dying because I see that society’s needs at that time are greater than my own. As a
formal ethical theory, the seeds of utilitarianism were sown by the ancient Greek philosopher Epicurus
(342–270 BCE), who stated that “pleasure is the goal that nature has ordained for us; it is also the
standard by which we judge everything good.” According to this view, rightness and wrongness are
determined by pleasure or pain that something produces. Epicurus’s theory focused largely on the
individual’s personal experience of pleasure and pain, and to that extent he advocated a version of
ethical egoism. Nevertheless, Epicurus inspired a series of eighteenth-century philosophers who
emphasized the notion of general happiness, that is, the pleasing consequences of actions that impact
others and not just the individual. Francis Hutcheson (1694–1746) stated that “that action is best, which
procures the greatest happiness for the greatest numbers.” David Hume (1711–1776) introduced the
term utility to describe the pleasing consequences of actions as they impact people.

The classical expressions of utilitarianism, though, appear in the writings of two English philosophers
and social reformers: Jeremy Bentham (1748–1832) and John Stuart Mill (1806–1873). Their approach to
morality was nonreligious and they tried to reform society by rejecting unfounded rules of morality and
law.

Jeremy Bentham

There are two main features of utilitarianism, both of which Bentham articulated: the consequentialist
principle (or its teleological aspect) and the utility principle (or its hedonic aspect). The consequentialist
principle states that the rightness or wrongness of an act is determined by the goodness or badness of
the results that follow from it. It is the end, not the means, that counts; the end justifies the means. The
utility, or hedonist, principle states that the only thing that is good in itself is some specific type of state
(for example, pleasure, happiness, welfare). Hedonistic utilitarianism views pleasure as the sole good
and pain as the only evil. To quote Bentham, “Nature has placed mankind under the governance of two
sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as
what we shall do.”2 An act is right if it either brings about more pleasure than pain or prevents pain, and
an act is wrong if it either brings about more pain than pleasure or prevents pleasure from occurring.

Bentham invented a scheme for measuring pleasure and pain that he called the hedonic calculus. The
quantitative score for any pleasure or pain experience is obtained by summing the seven aspects of a
pleasurable or painful experience: its intensity, duration, certainty, nearness, fruitfulness, purity, and
extent. Adding up the amounts of pleasure and pain for each possible act and then comparing the
scores would enable us to decide which act to perform. With regard to our example of deciding between
giving the dying man’s money to the Yankees or to the African famine victims, we would add up the
likely pleasures to all involved, for all seven qualities. If we found that giving the money to the famine
victims would cause at least 3 million hedons (units of happiness) but that giving the money to the
Yankees would cause less than 1,000 hedons, we would have an obligation to give the money to the
famine victims.
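
Bentham's procedure of summing aspect scores and comparing totals can be sketched in a few lines. The seven aspect names come from the text; the numeric scores below are purely hypothetical illustrations, since Bentham supplies no fixed scale or method for measuring them.

```python
# Illustrative toy model of Bentham's hedonic calculus.
# The scores assigned to each aspect are hypothetical; Bentham
# gives no units or measurement procedure for them.

ASPECTS = ["intensity", "duration", "certainty", "nearness",
           "fruitfulness", "purity", "extent"]

def hedonic_score(scores):
    """Sum the seven aspect scores for one option (in 'hedons')."""
    return sum(scores[aspect] for aspect in ASPECTS)

# Hypothetical per-aspect scores for the two uses of the money.
famine_relief = {"intensity": 9, "duration": 9, "certainty": 8,
                 "nearness": 5, "fruitfulness": 9, "purity": 8,
                 "extent": 10}
yankees_gift = {"intensity": 4, "duration": 3, "certainty": 6,
                "nearness": 6, "fruitfulness": 2, "purity": 5,
                "extent": 3}

# Score each option, then pick the one with the larger total.
options = {"famine relief": hedonic_score(famine_relief),
           "Yankees gift": hedonic_score(yankees_gift)}
best = max(options, key=options.get)
print(options, "->", best)  # famine relief scores higher here
```

The sketch captures only the structure of the calculus: score each aspect, sum per option, choose the larger total. The objections below, about how scores could be assigned in the first place, apply precisely to the inputs this toy model takes for granted.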

There is something appealing about Bentham’s utilitarianism. It is simple in that there is only one
principle to apply: Maximize pleasure and minimize suffering. It is commonsensical in that we think that
morality really is about reducing suffering and promoting benevolence. It is scientific: Simply make
quantitative measurements and apply the principle impartially, giving no special treatment to ourselves
or to anyone else because of race, gender, personal relationship, or religion.

However, Bentham’s philosophy may be too simplistic in one way and too complicated in another. It
may be too simplistic in that there are values other than pleasure (as we saw in Chapter 6), and it seems
too complicated in its artificial hedonic calculus. The calculus is burdened with too many variables and
has problems assigning scores to the variables. For instance, what score do we give a cool drink on a hot
day or a warm shower on a cool day? How do we compare a 5-year-old’s delight over a new toy with a
30-year-old’s delight with a new lover? Can we take your second car from you and give it to Beggar Bob,
who does not own a car and would enjoy it more than you? And if it is simply the overall benefits of
pleasure that we are measuring, then if Jack or Jill would be “happier” in the Pleasure Machine or the
Happiness Machine or on drugs than in the real world, would we not have an obligation to ensure that
these conditions become reality? Because of such considerations, Bentham’s version of utilitarianism
was, even in his own day, referred to as the “pig philosophy” because a pig enjoying his life would
constitute a higher moral state than a slightly dissatisfied Socrates.

John Stuart Mill

It was to meet these sorts of objections and save utilitarianism from the charge of being a pig
philosophy that Bentham’s successor, John Stuart Mill, sought to distinguish happiness from mere
sensual pleasure. His version of the theory is often called eudaimonistic utilitarianism (from the Greek
eudaimonia, meaning “happiness”). He defines happiness in terms of certain types of higher-order
pleasures or satisfactions such as intellectual, aesthetic, and social enjoyments, as well as in terms of
minimal suffering. That is, there are two types of pleasures. The lower, or elementary, include eating,
drinking, sexuality, resting, and sensuous titillation. The higher include high culture, scientific
knowledge, intellectuality, and creativity. Although the lower pleasures are more intensely gratifying,
they also lead to pain when overindulged in. The higher pleasures tend to be more long term,
continuous, and gradual.

Mill argued that the higher, or more refined, pleasures are superior to the lower ones: “A being of
higher faculties requires more to make him happy, is capable probably of more acute suffering, and
certainly accessible to it at more points, than one of an inferior type,” but still he is qualitatively better
off than the person without these higher faculties. “It is better to be a human being dissatisfied than a
pig satisfied; better to be Socrates dissatisfied than a fool satisfied.”3 Humans are the kind of creatures
who require more to be truly happy. They want the lower pleasures, but they also want deep friendship,
intellectual ability, culture, the ability to create and appreciate art, knowledge, and wisdom.

But one may object, “How do we know that it really is better to have these higher pleasures?” Here, Mill
imagines a panel of experts and says that of those who have had a wide experience of pleasures of both
kinds almost all give a decided preference to the higher type. Because Mill was an empiricist, one who
believed that all knowledge and justified belief is based on experience, he relied on the combined
consensus of human history. By this view, people who experience both rock music and classical music
will, if they appreciate both, prefer Bach and Beethoven to Metallica. That is, we generally move up
from appreciating simple things (for example, nursery rhymes) to more complex and intricate things (for
example, poetry that requires great talent) rather than the other way around.

Mill has been criticized for not giving a better reply—for being an elitist and for unduly favoring the
intellectual over the sensual. But he has a point. Don’t we generally agree, if we have experienced both
the lower and the higher types of pleasure, that even though a full life would include both, a life with
only the former is inadequate for human beings? Isn’t it better to be Socrates dissatisfied than the pig
satisfied—and better still to be Socrates satisfied?

The point is not merely that humans wouldn’t be satisfied with what satisfies a pig but that somehow
the quality of the higher pleasures is better. But what does it mean to speak of better pleasure? The
formula he comes up with is this:
Happiness ... [is] not a life of rapture; but moments of such, in an existence made up of few and
transitory pains, many and various pleasures, with a decided predominance of the active over the
passive, and having as the foundation of the whole, not to expect more from life than it is capable of
bestowing.4

Mill is clearly pushing the boundaries of the concept of “pleasure” by emphasizing higher qualities such
as knowledge, intelligence, freedom, friendship, love, and health. In fact, one might even say that his
litmus test for happiness really has little to do with actual pleasure and more to do with a nonhedonic,
cultivated state of mind.

ACT- AND RULE-UTILITARIANISM

There are two classical types of utilitarianism: act- and rule-utilitarianism. In applying the principle of
utility, act-utilitarians, such as Bentham, say that ideally we ought to apply the principle to all of the
alternatives open to us at any given moment. We may define act-utilitarianism in this way:

Act-utilitarianism: An act is right if and only if it results in as much good as any available alternative.

One practical problem with act-utilitarianism is that we cannot do the necessary calculations to
determine which act is the correct one in each case, for often we must act spontaneously and quickly. So
rules of thumb are of practical importance, for example, “In general, don’t lie,” and “Generally, keep
your promises.” However, the right act is still that alternative that results in the most utility.

A second problem with act-utilitarianism is that it seems to fly in the face of fundamental intuitions
about minimally correct behavior. Consider Richard Brandt’s criticism of act-utilitarianism:

It implies that if you have employed a boy to mow your lawn and he has finished the job and asks for his
pay, you should pay him what you promised only if you cannot find a better use for your money. It
implies that when you bring home your monthly paycheck you should use it to support your family and
yourself only if it cannot be used more effectively to supply the needs of others.5

The alternative to act-utilitarianism is a view called rule-utilitarianism, elements of which we find in Mill’s
theory. Most generally, the position is this:

Rule-utilitarianism: An act is right if and only if it is required by a rule that is itself a member of a set of
rules whose acceptance would lead to greater utility for society than any available alternative.

Human beings are rule-following creatures. We learn by adhering to the rules of a given subject,
whether it is speaking a language, driving a car, dancing, writing an essay, rock climbing, or cooking. We
want to have a set of action-guiding rules by which to live. The act-utilitarian rule, to do the act that
maximizes utility, is too tedious for most purposes. Often, we don’t have time to decide whether lying
will produce more utility than truth telling, so we need a broad rule prescribing truthfulness that passes
the test of rational scrutiny. Rule-utilitarianism asserts that the best chance of maximizing utility is by
following the set of rules most likely to give us our desired results. Because morality is a social and
public institution, we need to coordinate our actions with others so that we can have reliable
expectations about other people’s behavior.

For the most sophisticated versions of rule-utilitarianism, three levels of rules will guide actions. On the
lowest level is a set of utility-maximizing rules of thumb, such as “Don’t lie” and “Don’t cause harm,”
that should always be followed unless there is a conflict between them. If these first-order rules conflict,
then a second-order set of conflict-resolving rules should be consulted, such as “It’s more important to
avoid causing serious harm than to tell the truth.” At the top of the hierarchy is a third-order rule
sometimes called the remainder rule, which is the principle of act-utilitarianism: When no other rule
applies, simply do what your best judgment deems to be the act that will maximize utility.

An illustration of this is the following: Suppose you promised to meet your teacher at 3 p.m. in his office.
On your way there, you come upon an accident victim stranded by the wayside who desperately needs
help. The two first-order rules in this situation are “Keep your promises” and “Help those in need when
you are not seriously inconvenienced in doing so.” It does not take you long to decide to break the
appointment with your teacher because it seems obvious in this case that the rule to help others
overrides the rule to keep promises. There is a second-order rule prescribing that the first-order rule of
helping people in need when you are not seriously inconvenienced in doing so overrides the rule to keep
promises. However, there may be some situation where no obvious rule of thumb applies. Say you have
$50 that you don’t really need now. How should you use this money? Put it into your savings account?
Give it to your favorite charity? Use it to throw a party? Not only is there no clear first-order rule to
guide you, but there is no second-order rule to resolve conflicts between first-order rules. Here and only
here, on the third level, the general act-utility principle applies without any other primary rule; that is,
do what in your best judgment will do the most good.

Debates between act- and rule-utilitarians continue today. Kai Nielsen, a staunch act-utilitarian, argues
that no rules are sacred; differing situations call forth different actions, and potentially any rule could be
overridden. He thus criticizes what he calls moral conservatism, which is any normative ethical theory
that maintains that there is a privileged moral principle, or cluster of moral principles, prescribing
determinate actions that it would always be wrong not to act in accordance with, no matter what the
consequences.

Nielsen argues further that we are responsible for the consequences of not only the actions that we
perform but also the nonactions that we fail to perform. He calls this “negative responsibility.” To
illustrate, suppose you are the driver of a trolley car and suddenly discover that your brakes have failed.
You are just about to run over five workers on the track ahead of you. However, if you act quickly, you
can turn the trolley onto a sidetrack where only one man is working. What should you do? One who
makes a strong distinction between allowing versus doing evil would argue that you should do nothing
and merely allow the trolley to kill the five workers. But one who denies that this is an absolute
distinction would prescribe that you do something positive to minimize evil. Negative responsibility
means that you are going to be responsible for someone’s death in either case. Doing the right thing,
the utilitarian urges, means minimizing the amount of evil. So you should actively cause the one death
to save the other five lives.6 Critics of utilitarianism contend either that negative responsibility is not a
strict duty or that it can be worked into other systems besides utilitarianism.

The Strengths of Utilitarianism

Utilitarianism has three positive features. The first attraction or strength is that it is a single principle, an
absolute system with a potential answer for every situation: Do what will promote the most utility. It’s
good to have a simple, action-guiding principle that is applicable to every occasion, even if it may be
difficult to apply (life’s not simple).

Its second strength is that utilitarianism seems to get to the substance of morality. It is not merely a
formal system that merely sets forth broad guidelines for choosing principles while offering no concrete
principles itself, such as the guideline “Do whatever you can universalize.” Rather, it has a material core: We should
promote human (and possibly animal) flourishing and reduce suffering. The first virtue gives us a clear
decision procedure in arriving at our answer about what to do. The second virtue appeals to our sense
that morality is made for people and that morality is not so much about rules as about helping people
and alleviating the suffering in the world.

As such, utilitarianism seems commonsensical. For instance, it gives us clear and reasonable guidance in
dealing with the Kitty Genovese case discussed in Chapter 1: We should call the police or do what is
necessary to help her, as long as helping her does not create more disutility than leaving her alone. And,
in the case of deciding what to do with the dead millionaire’s $2 million, something in us says that it is
absurd to keep a promise to a dead person when it means allowing hundreds of thousands of famine
victims to die. Far more good can be accomplished by helping the needy than by giving the money to the
Yankees.

A third strength of utilitarianism is that it is particularly well suited to address the problem of posterity,
namely, why we should preserve scarce natural resources for the betterment of future generations of
humans that do not yet exist. Expressed rhetorically, the question is “Why should I care about posterity;
what has posterity ever done for me?” In Chapter 6, we saw that the theory of ethical egoism failed to
give us an adequate answer to this problem. That is, the egoist gains nothing by preserving natural
resources for future generations that do not yet exist and thus can give no benefit to the egoist.
However, utilitarians have one overriding duty: to maximize general happiness. As long as the quality of
life of future people promises to be positive, we have an obligation to continue human existence, to
produce human beings, and to take whatever actions are necessary to ensure that their quality of life is
not only positive but high.

It does not matter that we cannot identify these future people. We may look upon them as mere
abstract placeholders for utility and aim at maximizing utility. Derek Parfit explains this using this
utilitarian principle: “It is bad if those who live are worse off than those who might have lived.” He
illustrates his principle this way. Suppose our generation has the choice between two energy policies:
the “Safe Energy Policy” and the “Risky Energy Policy.”7 The Risky Policy promises to be safe for us but is
likely to create serious problems for a future generation, say, 200 years from now. The Safe Policy won’t
be as beneficial to us but promises to be stable and safe for posterity—those living 200 years from now
and beyond. We must choose and we are responsible for the choice that we make. If we choose the
Risky Policy, we impose harms on our descendants, even if they don’t now exist. In a sense, we are
responsible for the people who will live because our policy decisions will generate different causal
chains, resulting in different people being born. But more important, we are responsible for their quality
of life because we could have caused human lives to have been better off than they are.

What are our obligations to future people? If utilitarians are correct, we have an obligation to leave
posterity to as good a world as we can. This would mean radically simplifying our lifestyles so that we
use no more resources than are necessary, keeping as much topsoil intact as possible, protecting
endangered species, reducing our carbon dioxide emissions, preserving the wilderness, and minimizing
our overall deleterious impact on the environment in general while using technology wisely.

CRITICISM OF UTILITARIANISM

Utilitarianism has been around for several centuries, but so too have its critics, and we need to
address a series of standard objections to utilitarianism before we can give it a “philosophically clean bill
of health.”

Problems with Formulating Utilitarianism

The first set of problems occurs in the very formulation of utilitarianism: “The greatest happiness for the
greatest number.” Notice that we have two “greatest” things in this formula: “happiness” and
“number.” Whenever we have two variables, we invite problems of determining which of the variables
to rank first when they seem to conflict. To see this point, consider the following example: I am offering
a $1,000 prize to the person who runs the longest distance in the shortest amount of time. Three people
participate: Joe runs 5 miles in 31 minutes, John runs 7 miles in 50 minutes, and Jack runs 1 mile in 6
minutes. Who should get the prize? John has fulfilled one part of the requirement (run the longest
distance), but Jack has fulfilled the other requirement (run the shortest amount of time).

This is precisely the problem with utilitarianism. On the one hand, we might concern ourselves with
spreading happiness around so that the greatest number obtain it (in which case, we should get busy
and procreate a larger population). On the other hand, we might be concerned that the greatest
possible amount of happiness obtains in society (in which case, we might be tempted to allow some
people to become far happier than others, as long as their increase offsets the losers’ diminished
happiness). So should we worry more about total happiness or about highest average?

Utilitarians also need to be clear about specifically whose happiness we are talking about: all beings that
experience pleasure and pain, or all human beings, or all rational beings. One criterion might exclude
mentally deficient human beings, and another might include animals. Finally, utilitarians need to
indicate how we measure happiness and make interpersonal comparisons between the happiness of
different people. We’ve seen Mill’s efforts to address this problem with his notion of higher pleasures;
we’ve also seen the additional complications that his solution creates.

None of these problems defeat utilitarianism as a workable theory, but they do place a heavy burden on
utilitarians to clarify the objectives of their theory.

The Comparative Consequences Objection

Another crucial problem with utilitarianism is that it seems to require a superhuman ability to look into
the future and survey a mind-boggling array of consequences of actions. Of course, we normally do not
know the long-term consequences of our actions because life is too complex and the consequences go
on into the indefinite future. One action causes one state of affairs, which in turn causes another state
of affairs, indefinitely, so that calculation becomes impossible. Recall the nursery rhyme:

For want of a nail, the shoe was lost; For want of a shoe, the horse was lost; For want of a horse, the
rider was lost; For want of a rider, the battle was lost; For want of a battle, the kingdom was lost; And all
for the want of a horseshoe nail.

Poor, unfortunate blacksmith; what utilitarian guilt he must bear all the rest of his days!

But it is ridiculous to blame the loss of one’s kingdom on the poor, unsuccessful blacksmith, and
utilitarians are not so foolish as to hold him responsible for the bad situation. Instead, following C. I.
Lewis, utilitarians distinguish two kinds of consequences: (1) actual consequences of an act and (2)
consequences that could reasonably have been expected to occur.8 Based on these two kinds of
consequences, there are two corresponding right actions. An act is absolutely right if it has the best
actual consequences (as per consequence 1). An act is objectively right if it is reasonable to expect that
it will have the best consequences (as per consequence 2).

Only objective rightness, that based on reasonable expectations, is central here. Actual rightness, based
on actual consequences, is irrelevant because this can only be determined after an action is performed
and we sit back and watch the series of actual consequences unfold. But when an agent is trying to
determine in advance how to act, the most that she can do is to use the best information available and
do what a reasonable person would expect to produce the best overall results. Suppose, for example,
that while Hitler’s grandmother was carrying little Adolph up the stairs to her home, she slipped and had
to choose between either dropping infant Adolph and allowing him to be fatally injured or breaking her
arm. According to the formula just given, it would have been absolutely right for her to let him be killed
because history would have turned out better. But, it would not have been within her power to know
that. She did what any reasonable person would do—she saved the baby’s life at the risk of injury to
herself. She did what was objectively right. The utilitarian theory holds that by generally doing what
reason judges to be the best act based on likely consequences, we will, in general, actually promote the
best consequences.

The Consistency Objection to Rule-Utilitarianism

An often-debated question about rule-utilitarianism is whether, when pushed to its logical limits, it must
either become a deontological system or transform itself into act-utilitarianism. As such, it is an
inconsistent theory that offers no truly independent standard for making moral judgments. Briefly, the
argument goes like this: Imagine that following the set of general rules of a rule-utilitarian system yields
100 hedons (positive utility units). We could always find a case where breaking the general rule would
result in additional hedons without decreasing the sum of the whole. So, for example, we could imagine
a situation in which breaking the general rule “Never lie” to spare someone’s feelings would create
more utility (for example, 102 hedons) than keeping the rule would. It would seem that we could always
improve on any version of rule-utilitarianism by breaking the set of rules whenever we judge that by
doing so we could produce even more utility than by following the set.

To illustrate more fully, consider this example. Suppose a disreputable former convict named Charley
has been convicted of a serious crime and sentenced to a severe punishment. You, the presiding judge,
have just obtained fresh evidence that if brought into court would exonerate Charley of the crime. But
you also have evidence, not admissible in court, that Charley is guilty of an equally heinous crime for
which he has not been indicted. The evidence suggests that Charley is a dangerous man who should not
be on the streets of our city. What should you do? An act-utilitarian would no doubt suppress the new
evidence in favor of protecting the public from a criminal. A rule-utilitarian has a tougher time making
the decision. On the one hand, he has the rule “Do not permit innocent people to suffer for crimes they
didn’t commit.” On the other hand, he has the rule “Protect the public from unnecessary harm.” The
rule-utilitarian may decide the matter by using the remainder principle, which yields the same result as
that of the act-utilitarian. This seems, however, to give us a counterintuitive result. Why not just be an
act-utilitarian and forgo the middle steps if that is what we are destined to reach anyway?

There may be other ways for the rule-utilitarian to approach this. He or she may opt for a different
remainder principle, one that appeals to our deepest intuitions: “Whenever two rules conflict, choose
the one that fits your deepest moral intuition.” Thus, the judge may very well decide to reveal the
evidence exonerating Charley, holding to the rule not to allow people to suffer for crimes for which
there is insufficient evidence to convict them. The rule-utilitarian argues that, in the long run, a rule that
protects such legally innocent but morally culpable people will produce more utility than following an
act-utilitarian principle. If we accept the second intuitionist version of the remainder principle, we may
be accused of being deontological intuitionists and not utilitarians at all.

How might we respond to this criticism of inconsistency? It may be more accurate to see moral
philosophy as complex and multidimensional so that both striving for the goal of utility and the method
of consulting our intuitions are part of moral deliberation and action. Thus, even if rule-utilitarianism
involves consulting moral intuitions, both of these elements may be intertwined and equally legitimate
parts of moral reasoning. What at first appears to be a problem of consistency is really just an indicator
of the multilayered nature of morality.

The No-Rest Objection

According to utilitarianism, one should always do that act that promises to promote the most utility.
But there is usually an infinite set of possible acts to choose from, and even if I can be excused from
considering all of them, I can be fairly sure that there is often a preferable act that I could be doing. For
example, when I am about to go to the cinema with a friend, I should ask myself if helping the homeless
in my community wouldn’t promote more utility. When I am about to go to sleep, I should ask myself
whether I could at that moment be doing something to help save the ozone layer. And, why not simply
give all my assets (beyond what is absolutely necessary to keep me alive) to the poor to promote utility?
Following utilitarianism, I should get little or no rest, and, certainly, I have no right to enjoy life when by
sacrificing I can make others happier. Peter Singer actually advocates an act-utilitarian position similar to
this. According to Singer, middle-class people have a duty to contribute to poor people (especially in
undeveloped countries) more than one-third of their income, and all of us have a duty to contribute
every penny above $30,000 we possess until we are only marginally better off than the worst-off people
on earth.

The problem with approaches like Singer’s is that they make morality too demanding, create a
disincentive to work, and fail to account for different levels of obligation. Thus, utilitarianism must be a
false doctrine. But rule-utilitarians have a response to this no-rest objection: A rule prescribing rest and
entertainment is actually the kind of rule that would have a place in a utility-maximizing set of rules. The
agent should aim at maximizing his or her own happiness as well as other people’s happiness. For the
same reason, it is best not to worry much about the needs of those not in our primary circle. Although
we should be concerned about the needs of poor people, it actually would promote disutility for the
average person to become preoccupied with these concerns. Singer represents a radical act-utilitarian
position that fails to give adequate attention to the rules that promote human flourishing, such as the
right to own property, educate one’s children, and improve one’s quality of life, all of which probably
costs more than $30,000 per year in many parts of North America. However, the utilitarian would
remind us, we can surely do a lot more for suffering humanity than we now are doing, especially if we
join together and act cooperatively. And we can simplify our lives, cutting back on unnecessary
consumption, while improving our overall quality.

The Publicity Objection

It is usually thought that moral principles must be known to all so that all may freely obey the principles.
But utilitarians usually hesitate to recommend that everyone act as a utilitarian, especially an act-
utilitarian, because it takes a great deal of deliberation to work out the likely consequences of
alternative courses of action. It would be better if most people acted simply as deontologists.9 Thus,
utilitarianism seems to contradict our requirement of publicity.

There are two responses to this objection. First, at best this objection only works against act-
utilitarianism, which at least in theory advocates sitting down and calculating the good and bad
consequences of each action that we plan to perform. Rule-utilitarianism, by contrast, does not focus on
the consequences of particular actions but on the set of rules that are likely to bring about the most
good. These rules indeed are publicized by rule-utilitarians.

A second response is one that act-utilitarians themselves might offer: The objection shows a bias only
toward publicity (or even democracy). It may well be that publicity is only a rule of thumb to be
overridden whenever there is good reason to believe that we can obtain more utility by not publicizing
act-utilitarian ideas.

However, this response places an unacceptably low value on the benefits of publicity. Since we need to
coordinate our actions with other people, moral rules must be publicly announced, typically through
legal statutes. I may profit from cutting across the grass to save a few minutes in getting to class, but I
also value a beautiful green lawn. We need public rules to ensure the healthy state of the lawn. So we
agree on a rule to prohibit walking on the grass, even when an individual shortcut might sometimes
maximize utility. There are many activities that may advance individual utility or even communal good
but would be disastrous if done regularly, such as cutting down trees to build houses or to make
newspapers or paper for books, valuable as those products are. So we regulate the lumber industry so
that every tree cut down is replaced with a new one and large forests are kept intact. Thus, moral rules must be publicly advertised,
often made into laws, and enforced. In short, while the publicity objection does not affect rule-
utilitarianism, it appears to be a serious obstacle to act-utilitarianism.

The Relativism Objection

Sometimes people accuse rule-utilitarianism of being relativistic because it seems to endorse different
rules in different societies. In one society, it may uphold polygamy, whereas in our society it defends
monogamy. In a desert society, it upholds a rule “Don’t waste water,” whereas in a community where
water is plentiful no such rule exists. But this is not really conventional relativism because the rule is not
made valid by the community’s choosing it but by the actual situation. In the first case, it is made valid
by an imbalance in the ratio of women to men and, in the second case, by the environmental factors
concerning the availability of water. Situationalism is different from relativism and consistent with
objectivism because it really has to do with the application of moral principles, in this case, the utility
principle.

But there is a more serious worry about rule-utilitarianism’s tendency toward relativism, namely, that it
might become so flexible that it justifies any moral rule. Asked why we support benevolence as a moral
rule, it seems too easy to respond, “Well, this principle will likely contribute to the greater utility in the
long run.” The fear is that the rule-utilitarian could give the same answer to rules that we consider
malevolent, such as torture. Shifting conceptions of general happiness will generate shifting moral rules.

How might the rule-utilitarian respond to this? David Hume, an early defender of utilitarian moral
reasoning, argued that human nature forces consistency in our moral assessments. Specifically, he
argues, there are “universal principles of the human frame” that regulate what we find to be agreeable
or disagreeable in moral matters. Benevolence, for example, is one such type of conduct that we
naturally find agreeable. Following Hume’s lead, the rule-utilitarian might ground the key components
of happiness in our common human psychological makeup rather than the result of fluctuating personal
whims. This would give utilitarianism a more objective foundation and thus make it less susceptible to
the charge of relativism.

CRITICISM OF THE ENDS JUSTIFYING IMMORAL MEANS

Chief among the criticisms of utilitarianism is that utilitarian ends might justify immoral means. There
are many dastardly things that we can do in the name of maximizing general happiness: deceit, torture,
slavery, even killing off ethnic minorities. As long as the larger populace benefits, these actions might be
justified. The general problem can be laid out in this argument:

(1) If a moral theory justifies actions that we universally deem impermissible, then that moral theory
must be rejected.

(2) Utilitarianism justifies actions that we universally deem impermissible.

(3) Therefore, utilitarianism must be rejected.

Let’s look at several versions of this argument.

The Lying Objection

William D. Ross has argued that utilitarianism is to be rejected because it leads to the counterintuitive
endorsement of lying when it serves the greater good. Consider two acts, A and B, that will both result
in 100 hedons (units of pleasure or utility). The only difference is that A involves telling a lie and B
involves telling the truth. The utilitarian must maintain that the two acts are of equal value. But this
seems implausible; truth seems to be an intrinsically good thing.

Similarly, in Arthur Koestler’s Darkness at Noon, we find this discussion of Communist philosophy in the
former Soviet Union:

History has taught us that often lies serve her better than the truth; for man is sluggish and has to be led
through the desert for forty years before each step in his development. And he has to be driven through
the desert with threats and promises, by imaginary terrors and imaginary consolations, so that he
should not sit down prematurely to rest and divert himself by worshipping golden calves.11

According to this interpretation, orthodox Soviet communism justified its lies through utilitarian ideas.
Something in us revolts at this kind of value system. Truth is sacred and must not be sacrificed on the
altar of expediency.

In response to this objection, utilitarians might agree that there is something counterintuitive in the
calculus of equating an act of lying with one of honesty; but, they argue, we must be ready to change
our culturally induced moral biases. What is so important about truth telling or so bad about lying? If it
turned out that lying really promoted human welfare, we’d have to accept it. But that’s not likely. Our
happiness is tied up with a need for reliable information (that is, truth) on how to achieve our ends, so
truthfulness will be a member of the rule-utility’s set. But where lying will clearly promote utility without
undermining the general adherence to the rule, we simply ought to lie. Don’t we already accept lying to
a gangster or telling white lies to spare people’s feelings?

The Integrity Objection

Bernard Williams argues that utilitarianism violates personal integrity by commanding that we violate
our most central and deeply held principles. He illustrates this with the following example:

Jim finds himself in the central square of a small South American town. Tied up against the wall are a
row of twenty Indians, most terrified, a few defiant, in front of them several armed men in uniform. A
heavy man in a sweat-stained khaki shirt turns out to be the captain in charge and, after a good deal of
questioning of Jim which establishes that he got there by accident while on a botanical expedition,
explains that the Indians are a random group of inhabitants who, after recent acts of protest against the
government, are just about to be killed to remind other possible protesters of the advantages of not
protesting. However, since Jim is an honored visitor from another land, the captain is happy to offer him
a guest’s privilege of killing one of the Indians himself. If Jim accepts, then as a special mark of the
occasion, the other Indians will be let off. Of course, if Jim refuses, then there is no special occasion, and
Pedro here will do what he was about to do when Jim arrived, and kill them all. Jim, with some
desperate recollection of schoolboy fiction, wonders whether if he got hold of a gun, he could hold the
captain, Pedro and the rest of the soldiers to threat, but it is quite clear from the setup that nothing of
that kind is going to work: any attempt of that sort of thing will mean that all the Indians will be killed,
and himself. The men against the wall, the other villagers, understand the situation, and are obviously
begging him to accept. What should he do?

Williams asks rhetorically,

How can a man, as a utilitarian agent, come to regard as one satisfaction among others, and a
dispensable one, a project or attitude round which he has built his life, just because someone else’s
projects have so structured the causal scene that that is how the utilitarian sum comes out?

In response to this criticism, the utilitarian can argue that integrity is not an absolute that must be
adhered to at all costs. Some alienation may be necessary for the moral life, and the utilitarian can take
this into account in devising strategies of action. Even when it is required that we sacrifice our lives or
limit our freedom for others, we may have to limit or sacrifice something of what Williams calls our
integrity. We may have to do the “lesser of evils” in many cases. If the utilitarian doctrine of negative
responsibility is correct, we need to realize that we are responsible for the evil that we knowingly allow,
as well as for the evil we commit.

The Justice Objection


With both of the previous problems, the utilitarian response was that we should reconsider whether
truth telling and personal integrity are values that should never be compromised. The situation is
intensified, though, when we consider standards of justice that most of us think should never be
dispensed with. Let’s look at two examples, each of which highlights a different aspect of justice.

First, imagine that a murder is committed in a racially volatile community. As the sheriff of the town, you
have spent a lifetime working for racial harmony. Now, just when your goal is being realized, this
incident occurs. The crime is thought to be racially motivated, and a riot is about to break out that will
very likely result in the death of several people and create long-lasting racial antagonism. You see that
you could frame a tramp for the crime so that a trial will find him guilty and he will be executed. There is
every reason to believe that a speedy trial and execution will head off the riot and save community
harmony. Only you (and the real criminal, who will keep quiet about it) will know that an innocent man
has been tried and executed. What is the morally right thing to do? The utilitarian seems committed to
framing the tramp, but many would find this appalling.

As a second illustration, imagine that you are a utilitarian physician who has five patients under your
care. One needs a heart transplant, one needs two lungs, one needs a liver, and the last two each need a
kidney. Now into your office comes a healthy bachelor needing an immunization. You judge that he
would make a perfect sacrifice for your five patients. Through a utility-calculus, you determine that,
without a doubt, you could do the most good by injecting the healthy man with a fatal drug and then
using his organs to save your five other patients.

These careless views of justice offend us. The very fact that utilitarians would even consider such actions,
misusing the legal system or the medical system to carry out their schemes, seems frightening.
It reminds us of the medieval Roman Catholic bishop’s justification for heresy hunts and inquisitions and
religious wars:

When the existence of the Church is threatened, she is released from the commandments of morality.
With unity as the end, the use of every means is sanctified, even cunning, treachery, violence, simony,
prison, death. For all order is for the sake of the community, and the individual must be sacrificed to the
common good.

Similarly, Arthur Koestler argues that this logic was used by the Communists in the Soviet Union to destroy
innocent people whenever it seemed to the Communist leaders that torture and false confessions
served the good of the state because “you can’t make an omelet without breaking eggs.”

How can the utilitarian respond to this? It won’t work this time simply to state that justice is not an
absolute value and may be overridden for the good of the whole society. The sophisticated rule-utilitarian
insists that it makes good sense to have a principle of justice to which we generally adhere. That is,
general happiness is best served when we adopt the value of justice. Justice should not be overridden by
current utility concerns because human rights themselves are outcomes of utility consideration and
should not be lightly violated. That is, because we tend subconsciously to favor our own interests and
biases, we institute the principle of rights to protect ourselves and others from capricious and biased
acts that would in the long run have great disutility. Thus, we must not undermine institutional rights
too easily. From an initial rule-utilitarian assessment, then, the sheriff should not frame the innocent
tramp, and the doctor should not harvest organs from the bachelor.

However, the utilitarian cannot exclude the possibility of sacrificing innocent people for the greater
good of humanity. Wouldn’t we all agree that it would be right to sacrifice one innocent person to
prevent an enormous evil? Suppose, for example, a maniac is about to set off a nuclear bomb that will
destroy New York City. He is scheduled to detonate the bomb in one hour. His psychiatrist knows the
lunatic well and assures us that there is one way to stop him: torture his 10-year-old daughter and
televise it. Suppose for the sake of the argument that there is no way to simulate the torture. Would you
not consider torturing the child in this situation? As the rule-utilitarian would see it, we have two moral
rules that are in conflict: the rule to prevent widespread harm and the rule against torture. To resolve
this conflict, the rule-utilitarian might appeal to this second-level conflict-resolving rule: We may
sacrifice an innocent person to prevent a significantly greater social harm. Or, if no conflict-resolving
rule is available, the rule-utilitarian can appeal to this third-level remainder rule: When no other rule
applies, simply do what your best judgment deems to be the act that will maximize utility. Using this
remainder rule, the rule-utilitarian could justify torturing the girl.

Thus, in such cases, it might be right to sacrifice one innocent person to save a city or prevent some
wide-scale disaster. In these cases, the rule-utilitarian’s approach to justice is in fact the same as the
above-mentioned approach to lying and compromising one’s integrity: Justice is just one more lower-
order principle within utilitarianism. The problem, clearly, is determining which kinds of wide-scale
disasters warrant sacrificing innocent lives. This question invariably comes up in wartime: In every
bombing raid, especially in the dropping of the atomic bombs on Hiroshima and Nagasaki, the
noncombatant–combatant distinction is overridden. Innocent civilian lives are sacrificed with the
prospect of ending the war. We seem to be making this judgment call in our decision to drive
automobiles and trucks even though we are fairly certain the practice will result in the death of
thousands of innocent people each year. Judgment calls like these highlight utilitarianism’s difficulty in
handling issues of justice.

CONCLUSION

We have seen that multilevel rule-utilitarianism satisfies the purposes of ethics, gives a clear decision
procedure for moral conduct, and focuses on helping people and reducing suffering in the world. It also
offers a compelling solution to the problem of posterity. Further, rule-utilitarianism has responses to all
the criticisms directed toward it. Whether the responses are adequate is another story. Perhaps it would
be better to hold off making a final judgment about utilitarianism until considering the next two
chapters, in which two other types of ethical theory are discussed.

Kant and Deontological Theories

Let’s look again at our opening story in Chapter 7 on utilitarianism. A millionaire makes a dying request
for you to donate $5 million to the Yankees. You agree but then are tempted to give the money to the
World Hunger Relief Organization instead. What should you do? The utilitarian, who focuses on the
consequences of actions, would tell you to act in a way that advances the greatest good for the greatest
number. In essence, the end justifies the means. Accordingly, breaking your promise to the millionaire
and donating to the World Hunger Relief Organization appears to be the way to go.

The deontological answer to this question, however, is quite the opposite. It is not the consequences
that determine the rightness or wrongness of an act but certain features in the act itself or in the rule of
which the act is a token or example. The end never justifies the means. For example, there is something
right about truth telling and promise keeping even when such actions may bring about some harm; and
there is something wrong about lying and promise breaking even when such actions may bring about
good consequences. Acting unjustly is wrong even if it will maximize expected utility.

In this chapter, we explore deontological approaches to ethics, specifically that of Immanuel Kant
(1724–1804). The greatest philosopher of the German Enlightenment and one of the most important
philosophers of all time, Kant was both an absolutist and a rationalist. He believed that we could use
reason to work out a consistent, nonoverridable set of moral principles.

KANT’S INFLUENCES

To understand Kant’s moral philosophy, it is helpful to know a little about his influences, and we will
consider two here. The first was the philosophical debate of his time between rationalism and
empiricism; the second was the natural law intuitionist theories that then dominated moral philosophy.

Rationalism and Empiricism

The philosophical debate between rationalism and empiricism took place in the seventeenth and
eighteenth centuries. Rationalists, such as René Descartes, Baruch Spinoza, Gottfried Leibniz, and
Christian Wolff, claimed that pure reason could tell us how the world is, independent of experience. We
can know metaphysical truths such as the existence of God, the immortality of the soul, freedom of the
will, and the universality of causal relations apart from experience. Experience may be necessary to
open our minds to these ideas, but essentially they are innate ideas that God implants in us from birth.
Empiricists, led by John Locke and David Hume, on the other hand, denied that we have any innate ideas
and argued that all knowledge comes from experience. Our minds are a tabula rasa, an empty slate,
upon which experience writes her lessons.

The rationalists and empiricists carried their debate into the area of moral knowledge. The rationalists
claimed that our knowledge of moral principles is a type of metaphysical knowledge, implanted in us by
God, and discoverable by reason as it deduces general principles about human nature. On the other
hand, empiricists, especially Francis Hutcheson, David Hume, and Adam Smith, argued that morality is
founded entirely on the contingencies of human nature and based on desire. Morality concerns making
people happy, fulfilling their reflected desires, and reason is just a practical means of helping them fulfill
their desires. There is nothing of special importance in reason in its own right. It is mainly a rationalizer
and servant of the passions. As Hume said, “Reason is, and ought only to be the slave of the passions, and
can never pretend to any other office than to serve and obey them.” Morality is founded on our feeling
of sympathy with other people’s sufferings, on fellow feeling. For such empiricists then, morality is
contingent upon human nature:

Human nature → Feelings and Desires → Moral principles

If we had a different nature, then we would have different feelings and desires, and hence we would
have different moral principles.
Kant rejected the ideas of Hutcheson, Hume, and Smith. He was outraged by the thought that morality
should depend on human nature and be subject to the fortunes of change and the luck of empirical
discovery. Morality is not contingent but necessary. It would be no less binding on us if our feelings were
different from what they are. Kant writes,

Every empirical element is not only quite incapable of being an aid to the principle of morality, but is
even highly prejudicial to the purity of morals; for the proper and inestimable worth of an absolutely
good will consists just in this, that the principle of action is free from all influence of contingent grounds,
which alone experience can furnish. We cannot too much or too often repeat our warning against this
lax and even mean habit of thought which seeks for its principle amongst empirical motives and laws;
for human reason in its weariness is glad to rest on this pillow, and in a dream of sweet illusions it
substitutes for morality a bastard patched up from limbs of various derivation, which looks like anything
one chooses to see in it; only not like virtue to one who has once beheld her in her true form.

No, said Kant, it is not our desires that ground morality but our rational will. Reason is sufficient for
establishing the moral law as something transcendent and universally binding on all rational creatures.

Act- and Rule-Intuitionism

Since the Middle Ages, one of the dominant versions of European moral philosophy was natural law
theory. In a nutshell, this view maintained that, through rational intuitions embedded in human nature
by God, we discover eternal and absolute moral principles. Medieval natural law philosopher Thomas
Aquinas argued that we have a special mental process called synderesis that gives us general knowledge
of moral goodness. From this knowledge, then, we derive a series of basic moral obligations. What is key
here is the idea that humans have a natural faculty that gives us an intuitive awareness of morality. This
general position is called intuitionism. During the seventeenth and eighteenth centuries, some sort of
intuitionism was assumed in most ethical theories, and Kant was heavily influenced by some of them.
Two basic forms emerged: act- and rule-intuitionism.

Act-intuitionism sees each act as a unique ethical occasion and holds that we must decide what is right
or wrong in each situation by consulting our conscience or our intuitions or by making a choice apart
from any rules. For each specific act that we consider performing, we must consult our conscience to
discover the morally right (or wrong) thing to do. An expression of act-intuitionism is found in the famous
moral sermons of Joseph Butler (1692–1752), a bishop within the Church of England. He writes,

[If] any plain honest man, before he engages in any course of action, ask[s] himself, Is this I am going
about right, or is it wrong? ... I do not in the least doubt but that this question would be answered
agreeably to truth and virtue, by almost any fair man in almost any circumstance.

Butler believed that we each have a conscience that can discover what is right and wrong in virtually
every instance. This is consistent with advice such as “Let your conscience be your guide.” We do not
need general rules to learn what is right and wrong; our intuition will inform us of those things. The
judgment lies in the moral perception and not in some abstract, general rule.

Act-intuitionism, however, has some serious disadvantages. First, it is hard to see how any argument
could take place with an intuitionist: Either you both have the same intuition about lying or you don’t,
and that’s all there is to it. If I believe that a specific act of abortion is morally permissible and you
believe it is morally wrong, then we may ask each other to look more deeply into our consciences, but
we cannot argue about the subject. There is a place for deep intuitions in moral philosophy, but
intuitions must still be scrutinized by reason and corrected by theory.

Second, it seems that rules are necessary to all reasoning, including moral reasoning, and act-
intuitionists seem to ignore this. You may test this by thinking about how you learn to drive a car, to do
long division, or to type. Even though you may eventually internalize the initial principles as habits so
that you are unconscious of them, one could still cite a rule that covers your action. For example, you
may no longer remember the rules for accelerating a car, but there was an original experience of
learning the rule, which you continue unconsciously to follow. Moral rules such as “Keep your promises”
and “Don’t kill innocent people” seem to function in a similar way.

Third, different situations seem to share common features, so it would be inconsistent for us to
prescribe different moral actions. Suppose you believe that it is morally wrong for John to cheat on his
math exam. If you also believe that it is morally permissible for you to cheat on the same exam, don’t
you need to explain what makes your situation different from John’s? If I say that it is wrong for John to
cheat on exams, am I not implying that it is wrong for anyone relevantly similar to John (including all
students) to cheat on exams? That is, morality seems to involve a universal aspect, or what is called the
principle of universalizability: If one judges that X is right (or wrong) or good (or bad), then one is
rationally committed to judging anything relevantly similar to X as right (wrong) or good (bad). If this
principle is sound, then act-intuitionism is misguided.

The other intuitionist approach, rule-intuitionism, maintains that we must decide what is right or wrong
in each situation by consulting moral rules that we receive through intuition. Rule-intuitionists accept
the principle of universalizability as well as the notion that in making moral judgments we are appealing
to principles or rules. Such rules as “We ought never to lie,” “We ought always to keep our promises,”
and “We ought never to execute an innocent person” constitute a set of valid prescriptions regardless of
the outcomes. The rule-intuitionist who had the greatest impact on Kant was the German philosopher
Samuel Pufendorf (1632–1694), the dominant natural law theorist of his time. Pufendorf describes the
intuitive process by which we acquire moral knowledge:

It is usually said that we have knowledge of this [moral] law from nature itself. However, this is not to be
taken to mean that plain and distinct notions concerning what is to be done or avoided were implanted
in the minds of newborn people. Instead, nature is said to teach us, partly because the knowledge of this
law may be attained by the help of the light of reason. It is also partly because the general and most
useful points of it are so plain and clear that, at first sight, they force assent.... Although we are not able
to remember the precise time when they first took hold of our understandings and possessed our minds,
we can have no other opinion of our knowledge of this law except that it was native to our beings, or
born together and at the same time with ourselves.

The moral intuitions that we have, according to Pufendorf, fall into three groups: duties to God, to
oneself, and to others. The duties in all these cases are moral rules that guide our actions. Within these
three groupings, the main rules of duty that Pufendorf advocates are these:

- To God. Know the existence and nature of God; worship God.
- To oneself. Develop one’s skills and talents; avoid harming one’s body, such as through gluttony
or drunkenness; do not kill oneself.
- To others. Avoid wronging others; treat people as equals; promote the good of others; keep
one’s promises.

Kant was influenced by Pufendorf in two ways. First, Kant was a rule-intuitionist of a special sort: He
believed that moral knowledge comes to us through rational intuition in the form of moral rules. As we
will see, Kant’s moral psychology is rather complex, and his conception of intuition draws on a distinct
notion of reason, which we don’t find in Pufendorf. Second, Kant accepted Pufendorf’s division of duties
toward God, oneself, and others. Duties toward God, Kant argues, are actually religious duties, not
moral ones. However, duties to oneself and others are genuine moral obligations.

THE CATEGORICAL IMPERATIVE

The principal moral rule in Kant’s ethical theory is what he calls the categorical imperative—essentially
meaning “absolute command.” Before introducing us to the specific rule itself, he sets the stage with an
account of intrinsic moral goodness.

Intrinsic Goodness and the Good Will

As we have noted, Kant wanted to remove moral truth from the zone of contingency and empirical
observation and place it securely in the area of necessary, absolute, universal truth. Morality’s value is
not based on the fact that it has instrumental value, that it often secures nonmoral goods such as
happiness; rather, morality is valuable in its own right:

Nothing can possibly be conceived in the world, or even out of it, which can be called good without
qualification, except the Good Will. Intelligence, wit, judgment, and the other talents of the mind,
however they may be named, or courage, resolution, perseverance, as qualities of temperament, are
undoubtedly good and desirable in many respects; but these gifts of nature also may become extremely
bad and mischievous if the will which is to make use of them, and which therefore constitutes what is
called character, is not good.... Even if it should happen that, owing to special disfavor of fortune, or the
stingy provision of a step-motherly nature, this Good Will should wholly lack power to accomplish its
purpose, if with its greatest efforts it should yet achieve nothing, and there should remain only the Good
Will, ... then, like a jewel, it would still shine by its own light, as a thing which has its whole value in
itself. Its usefulness or fruitfulness can neither add to nor take away anything from this value.

The only thing that is absolutely good, good in itself and without qualification, is the good will. All other
intrinsic goods, both intellectual and moral, can serve the vicious will and thus contribute to evil. They
are only morally valuable if accompanied by a good will. Even success and happiness are not good in
themselves. Honor can lead to pride. Happiness without good will is undeserved luck, ill-gotten gain.
Nor is utilitarianism plausible, for if we have a quantity of happiness to distribute, is it just to distribute it
equally, regardless of virtue? Should we not distribute it discriminately, according to moral goodness?
Happiness should be distributed in proportion to people’s moral worth.

How successful is Kant’s argument for the good will? Could we imagine a world where people always
and necessarily put nonmoral virtues to good use, where it is simply impossible to use a virtue such as
intelligence for evil? Is happiness any less good simply because one can distribute it incorrectly? Can’t
one put the good will itself to bad use as the misguided do-gooder might? As the aphorism goes, “The
road to hell is paved with good intentions.” Could Hitler have had good intentions in carrying out his
dastardly programs? Can’t the good will have bad effects?

Although we may agree that the good will is a great good, it is not obvious that Kant’s account is correct,
that it is the only inherently good thing. For even as intelligence, courage, and happiness can be put to
bad uses or have bad effects, so can the good will; and even as it does not seem to count against the
good will that it can be put to bad uses, so it should not count against the other virtues that they can be
put to bad uses. The good will may be a necessary element to any morally good action, but whether the
good will is also a sufficient condition to moral goodness is another question.

Nonetheless, perhaps we can reinterpret Kant so as to preserve his central insight. There does seem to
be something morally valuable about the good will, apart from any consequences. Consider the
following illustration. Two soldiers volunteer to cross enemy lines to contact their allies on the other
side. Both start off and do their best to get through the enemy area. One succeeds; the other does not
and is captured. But, aren’t they both morally praiseworthy? The success of one in no way detracts from
the goodness of the other. Judged from a commonsense moral point of view, their actions are equally
good; judged from a utilitarian or consequentialist view, the successful act is far more valuable than the
unsuccessful one. Here, we can distinguish the agent’s worth from the value of the consequences and
make two separate, nonconflicting judgments.

Hypothetical versus Categorical Imperatives

For Kant, all mention of duties (or obligations) can be translated into the language of imperatives, or
commands. As such, moral duties can be said to have imperative force. He distinguishes two kinds of
imperatives: hypothetical and categorical. The formula for a hypothetical imperative is “If you want A,
then do B.” For example, “If you want a good job, then get a good education,” or “If you want to be
happy, then stay sober and live a balanced life.” The formula for a categorical imperative is simply: “Do
B!” That is, do what reason discloses to be the intrinsically right thing to do, such as “Tell the truth!”
Hypothetical, or means–ends, imperatives are not the kind of imperatives that characterize moral
actions. Categorical, or unqualified, imperatives are the right kind of imperatives, because they show
proper recognition of the imperial status of moral obligations. Such imperatives are intuitive,
immediate, absolute injunctions that all rational agents understand by virtue of their rationality.

Kant argues that one must perform moral duty solely for its own sake (“duty for duty’s sake”). Some
people conform to the moral law because they deem it in their own enlightened self-interest to be
moral. But they are not truly moral because they do not act for the sake of the moral law. For example,
a businessman may believe that “honesty is the best policy”; that is, he may judge that it is conducive to
good business to give his customers correct change and high-quality products. But, unless he performs
these acts because they are his duty, he is not acting morally, even though his acts are the same ones
they would be if he were acting morally.

The kind of imperative that fits Kant’s scheme as a product of reason is one that universalizes principles
of conduct. He names it the categorical imperative (CI): “Act only according to that maxim by which you
can at the same time will that it should become a universal law.” The categorical imperative, for Kant, is
a procedure for determining the morality of any course of action. All specific moral duties, he writes,
“can be derived from this single imperative.” Thus, for example, duties to oneself such as developing
one’s talents and not killing oneself can be deduced from the categorical imperative. So too can duties
to others, such as keeping promises and helping those in need.

The first step in the categorical imperative procedure is for us to consider the underlying maxim of our
proposed action. By maxim, Kant means the general rule in accordance with which the agent intends to
act. For example, if I am thinking about assisting someone in need, my underlying maxim might be this:
“When I see someone in need, I should assist him or her when it does not cause an undue burden on
me.” The second step is to consider whether this maxim could be universalized to apply to everyone,
such as “When anyone sees someone in need, that person should assist him or her when it does not
cause an undue burden on the person.” If it can be universalized, then we accept the maxim, and the
action is moral. If it cannot be universalized, then we reject the maxim, and the action is immoral. The
general scheme of the CI procedure, then, is this:

Maxim of action → Universalize maxim → Accept successfully universalized maxim (reject unsuccessful maxim)

According to Kant, there is only one categorical imperative, but he presents three formulations of it:

- Principle of the law of nature. “Act as though the maxim of your action were by your will to
become a universal law of nature.”
- Principle of ends. “So act as to treat humanity, whether in your own person or in that of any
other, in every case as an end and never as merely a means.”
- Principle of autonomy. “So act that your will can regard itself at the same time as making
universal law through its maxims.”

The theme that ties all of these formulations together is universalizability: Can a particular course of
action be generalized so that it applies to any relevantly similar person in that kind of situation? For
Kant, determining whether a maxim can successfully be universalized hinges on which of the three
specific formulations of the categorical imperative that we follow. The bottom line for all three, though,
is that we stand outside our personal maxims and estimate impartially and impersonally whether our
maxims are suitable as principles for all of us to live by.

Let’s look at each of these formulations, beginning with the first and most influential, the principle of
the law of nature.

The Principle of the Law of Nature: Four Examples

Again, the CI principle of the law of nature is this: “Act as though the maxim of your action were by your
will to become a universal law of nature.” The emphasis here is that you must act analogous to the laws
of physics, specifically insofar as such laws are not internally conflicting or self-defeating. For example,
nature could not subsist with a law of gravity that had an object fall both up and down at the same time.
Similarly, a system of morality could not subsist when a universalized maxim has an internal conflict. If
you could consistently will that everyone would act on a given maxim, then there is an application of the
categorical imperative showing the moral permissibility of the action. If you could not consistently will
that everyone would act on the maxim, then that type of action is morally wrong; the maxim must then
be rejected as self-defeating.

The heart of this formulation of the CI is the notion of a “contradiction,” and there has been much
debate about exactly the kind of contradiction that Kant had in mind. John Stuart Mill famously criticized
this aspect of the CI: “[Kant] fails, almost grotesquely, to show that there would be any contradiction,
any logical (not to say physical) impossibility, in the adoption by all rational beings of the most
outrageously immoral rules of conduct” (Utilitarianism, Ch. 1). But contemporary American philosopher
Christine Korsgaard argues that there are three possible interpretations of what Kant meant by
“contradiction.” First, Kant might have meant that the universalization of such a maxim would be a
logical contradiction, where the proposed action would simply be inconceivable. Second, he might have
meant that it would be a teleological contradiction, where the maxim could not function as a law within
a purposeful and organized system of nature. Third, he might have meant that it would be a practical
contradiction, where my action would become ineffective for achieving my purpose if everyone tried
to use it for that purpose. Korsgaard believes that all three of these interpretations are supported by
Kant’s writings, and Kant himself may not have even seen any differences between the three. But, she
argues, the third one is preferable because it enables the universalization test to handle more cases
successfully. She writes,

What the test shows to be forbidden are just those actions whose efficacy in achieving their purposes
depends upon their being exceptional. If the action no longer works as a way of achieving the purpose in
question when it is universalized, then it is an action of this kind.

This formulation of the CI reveals a practical contradiction in my action insofar as it shows that I am
trying to get away with something that would never work if others did the same thing. It exposes
unfairness, deception, and cheating in what I am proposing.

Kant gives four examples of the application of this test: (1) making a lying promise, (2) committing
suicide, (3) neglecting one’s talent, and (4) refraining from helping others. The first and fourth of these
are duties to others, whereas the second and third of these are duties to oneself. Kant illustrates how
the CI principle of the law of nature works by applying it to each of these maxims.

Making a Lying Promise Suppose I need some money and am considering whether it would be moral to
borrow the money from you and promise to repay it without ever intending to do so. Could I say to
myself that everyone should make a false promise when he is in difficulty from which he otherwise
cannot escape? The maxim of my act is M:

M. Whenever I need money, I should make a lying promise while borrowing the money.

Can I universalize the maxim of my act? By applying the universalizability test to M, we get P:

P. Whenever anyone needs money, that person should make a lying promise while borrowing the
money.

But, something has gone wrong, for if I universalize this principle of making promises without intending
to keep them, I would be involved in a contradiction:
I immediately see that I could will the lie but not a universal law to lie. For with such a law [that is, with
such a maxim universally acted on] there would be no promises at all.... Thus my maxim would
necessarily destroy itself as soon as it was made a universal law.6

The resulting state of affairs would be self-defeating because no one in his or her right mind would take
promises as promises unless there was the expectation of fulfillment. Thus, the maxim of the lying
promise fails the universalizability criterion; hence, it is immoral. Now, I consider the opposite maxim,
one based on keeping my promise:

M1. Whenever I need money, I should make a sincere promise while borrowing it.

Can I successfully universalize this maxim?

P1. Whenever anyone needs money, that person should make a sincere promise while borrowing it.

Yes, I can universalize M1 because there is nothing self-defeating or contradictory in this. So, it follows,
making sincere promises is moral; we can make the maxim of promise keeping into a universal law.
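
The practical-contradiction reading of this test can be pictured schematically. The following is only an illustrative toy model, not Kant's own formalism: the maxim names and the efficacy judgments are supplied by hand, and the test simply asks whether a maxim achieves its purpose only so long as it remains exceptional.

```python
def efficacious(maxim: str, universalized: bool) -> bool:
    """Would acting on this maxim still achieve its purpose?"""
    if maxim == "lying promise":
        # Lying promises work only while promising is generally trusted,
        # so they fail once everyone acts on the maxim.
        return not universalized
    if maxim == "sincere promise":
        # Sincere promises work whether or not everyone makes them.
        return True
    raise ValueError(f"unknown maxim: {maxim}")


def forbidden(maxim: str) -> bool:
    # A maxim fails the test when its efficacy depends on being exceptional:
    # it works for the lone agent but self-destructs when universalized.
    return efficacious(maxim, universalized=False) and not efficacious(
        maxim, universalized=True
    )


print(forbidden("lying promise"))    # True: M cannot be willed as universal law
print(forbidden("sincere promise"))  # False: M1 universalizes without contradiction
```

The philosophical work, of course, lies in the judgments the `efficacious` function hard-codes; the sketch only shows the shape of the practical-contradiction criterion.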

Committing Suicide Some of Kant’s illustrations do not fare as well as the duty to keep promises. For
instance, he argues that the categorical imperative would prohibit suicide because we could not
successfully universalize the maxim of such an act. If we try to universalize it, we obtain the principle,
“Whenever it looks like one will experience more pain than pleasure, one ought to kill oneself,” which,
according to Kant, is a self-contradiction because it would go against the very principle of survival upon
which it is based. But whatever the merit of the form of this argument, we could modify the principle to
read “Whenever the pain or suffering of existence erodes the quality of life in such a way as to make nonexistence preferable to continued existence, one is permitted to commit suicide.” Why couldn’t this
(or something close to it) be universalized? It would cover the rare instances in which no hope is in sight
for terminally ill patients or for victims of torture or deep depression, but it would not cover the kinds of
suffering and depression most of us experience in the normal course of life. Kant seems unduly
absolutist in his prohibition of suicide.

Neglecting One’s Talent Kant’s other two examples of the application of the CI principle of the law of
nature are also questionable. In his third example, he claims that we cannot universalize a maxim to
refrain from developing our talents. But again, could we not qualify this and stipulate that under certain
circumstances it is permissible not to develop our talents? Perhaps Kant is correct in that, if everyone
selfishly refrained from developing talents, society would soon degenerate into anarchy. But couldn’t
one universalize the following maxim M3?

M3. Whenever I am not inclined to develop a talent, and this refraining will not seriously undermine the
social order, I may so refrain.

Refraining from Helping Others Kant’s last example of the way the CI principle of the law of nature
functions regards the situation of not coming to the aid of others whenever I am secure and
independent. He claims that I cannot universalize this maxim because I never know whether I will need
the help of others at some future time. Is Kant correct about this? Why could I not universalize a maxim
never to set myself a goal whose achievement appears to require the cooperation of others? I would
have to give up any goal as soon as I realized that cooperation with others was required. In what way is
this contradictory or self-defeating? Perhaps it would be selfish and cruel to make this into a universal
law, but there seems nothing contradictory or self-defeating in the principle itself. The problems with
universalizing selfishness are the same ones we encountered in analyzing egoism, but it is doubtful
whether Kant’s categorical imperative captures what is wrong with egoism. Perhaps he has other
weapons that do elucidate what is wrong with egoism (we return to this later).

COUNTEREXAMPLES TO THE PRINCIPLE OF THE LAW OF NATURE

Kant thought that he could generate an entire moral law from his categorical imperative. The above test
of universalizability advocated by Kant’s principle of the law of nature seems to work with such
principles as promise keeping and truth telling and a few other maxims, but it doesn’t seem to give us all
that Kant wanted. It has been objected that Kant’s categorical imperative is both too wide and too
unqualified. The charge that it is too wide is based on the perception that it seems to justify some
actions that we might consider trivial or even immoral.

Counterexample 1: Mandating Trivial Actions

For an example of a trivial action that might be mandated by the categorical imperative, consider the
following maxim M:

M. I should always tie my right shoe before my left shoe.

This generates the following principle P:

P. We should always tie our right shoe before our left shoe.

Can we universalize P without contradiction? It seems that we can. Just as we universalize that people
should drive cars on the right side of the street rather than the left, we could make it a law that
everyone should tie the right shoe before the left shoe. But it seems obvious that there would be no
point to such a law—it would be trivial. But it is justified by the categorical imperative.

It may be objected that all this counterexample shows is that it may be permissible (not obligatory) to
live by the principle of tying the right shoe before the left because we could also universalize the
opposite maxim (tying the left before the right) without contradiction. That seems correct.

Counterexample 2: Endorsing Cheating

Another counterexample, offered by Fred Feldman,7 appears to show that the categorical imperative
endorses cheating. Maxim M states:

M. Whenever I need a term paper for a course and don’t feel like writing one, I will buy a term paper
from Research Anonymous and submit it as my own work.

Now we universalize this maxim into a universal principle P:

P. Whenever anyone needs a term paper for a course and doesn’t feel like writing one, the person will
buy one from a suitable source and submit it as his or her own.

This procedure seems to be self-defeating. It would undermine the whole process of academic work
because teachers would not believe that research papers really represented the people who turned
them in. Learning would not occur; grades and transcripts would be meaningless, and the entire
institution of education would break down; the whole purpose of cheating would be defeated.
But suppose we made a slight adjustment to M and P, inventing M1 and P1:

M1. When I need a term paper for a course and don’t feel like writing one, and no change in the system
will occur if I submit a store-bought one, then I will buy a term paper and submit it as my own work.

P1. Whenever anyone needs a term paper for a course and doesn’t feel like writing it, and no change in
the system will occur if one submits a store-bought paper, then one will buy the term paper and submit
it as one’s own work.

Does P1 pass as a legitimate expression of the categorical imperative? It might seem to satisfy the
conditions, but students of Kant have pointed out that for a principle to be universalizable, or lawlike,
one must ensure that it is public.

However, if P1 were public and everyone was encouraged to live by it, then it would be exceedingly
difficult to prevent an erosion of the system. Teachers would take precautions against it. Would
cheaters have to announce themselves publicly? In sum, the attempt to universalize even this qualified
form of cheating would undermine the very institution that makes cheating possible. So, P1 may be a
thinly veiled oxymoron: Do what will undermine the educational process in such a way that it doesn’t
undermine the educational process.

Counterexample 3: Prohibiting Permissible Actions

Another type of counterexample might be used to show that the categorical imperative refuses to allow
us to do things that common sense permits. Suppose I need to flush the toilet, so I formulate my maxim
M:

M. At time t1, I will flush the toilet.

I universalize this maxim:

P. At time t1, everyone should flush their toilet.

But I cannot will this if I realize that the pressure of millions of toilets flushing at the same time would
destroy the nation’s plumbing systems, and so I could not then flush the toilet. The way out of this
problem is to qualify the original maxim M to read M1:

M1. Whenever I need to flush the toilet and have no reason to believe that it will set off the impairment
or destruction of the community’s plumbing system, I may do so.

From this we can universalize to P1:

P1. Whenever anyone needs to flush the toilet and has no reason to believe that it will set off the
destruction of the community’s plumbing system, he or she may do so.

Thus, Kant could plausibly respond to some of the objections to his theory.

Counterexample 4: Mandating Genocide

More serious is the fact that the categorical imperative appears to justify acts that we judge to be
horrendously immoral. Suppose I hate people of a certain race, religion, or ethnic group. Suppose it is
Americans that I hate and that I am not an American. My maxim is this:

M. Let me kill anyone who is American.

Universalizing M, we get P:


P. Always kill Americans.

Is there anything contradictory in this injunction? Could we make it into a universal law? Why not?
Americans might not like it, but there is no logical contradiction involved in such a principle. Had I been
an American when this command was in effect, I would not have been around to write this book, but the
world would have survived my loss without too much inconvenience. If I suddenly discover that I am an
American, I would have to commit suicide. But as long as I am willing to be consistent, there doesn’t
seem to be anything wrong with my principle, so far as its being based on the categorical imperative is
concerned.

As with the shoe-tying example, it would be possible to universalize the opposite—that no one should
kill innocent people. Nevertheless, we certainly wouldn’t want to say that it is permissible to adopt the
principle “Always kill Americans.”

We conclude, then, that even though the first version of the categorical imperative is an important
criterion for evaluating moral principles, it still needs supplementation. In itself, it is purely formal and
leaves out any understanding about the content or material aspect of morality. The categorical
imperative, with its universalizability test, constitutes a necessary condition for being a valid moral
principle, but it does not provide us with a sufficient condition. That is, if any principle is to count as
rational or moral, it must be universalizable; it must apply to everyone and to every case that is
relevantly similar. If I believe that it’s wrong for others to cheat on exams, then unless I can find a reason
to believe that I am relevantly different from these others, it is also wrong for me to cheat on exams. If
premarital heterosexual sex is prohibited for women, then it must also be prohibited for men
(otherwise, with whom would the men have sex—other men’s wives?). This formal consistency,
however, does not tell us whether cheating itself is right or wrong or whether premarital sex is right or
wrong. That decision has to do with the material content of morality, and we must use other
considerations to help us decide about that.

OTHER FORMULATIONS OF THE CATEGORICAL IMPERATIVE

We’ve discussed Kant’s first formulation of the categorical imperative; now we will consider the two
others: the principle of ends and the principle of autonomy.

The Principle of Ends

Again, the principle of ends is this: “So act as to treat humanity, whether in your own person or in that
of any other, in every case as an end and never as merely a means.” Each person as a rational being has
dignity and profound worth, which entails that he or she must never be exploited or manipulated or
merely used as a means to our idea of what is for the general good (or to any other end).

What is Kant’s argument for viewing rational beings as having ultimate value? It goes like this: In valuing
anything, I endow it with value; it can have no value apart from someone’s valuing it. As a valued object,
it has conditional worth, which is derived from my valuation. On the other hand, the person who values
the object is the ultimate source of its value and, as such, belongs to a different sphere of beings. We,
as valuers, must conceive of ourselves as having unconditioned worth. We cannot think of our
personhood as a mere thing because then we would have to judge it to be without any value except that
given to it by the estimation of someone else. But then that person would be the source of value, and
there is no reason to suppose that one person should have unconditional worth and not another who is
relevantly similar. Therefore, we are not mere objects. We have unconditional worth and so must treat
all such value-givers as valuable in themselves—as ends, not merely means. I leave it to you to evaluate
the validity of this argument, but most of us do hold that there is something exceedingly valuable about
human life.

Kant thought that this formulation, the principle of ends, was substantively identical to his first
formulation of the categorical imperative, but most scholars disagree with him. It seems better to treat
this principle as a supplement to the first, adding content to the purely formal CI principle of the law of
nature. In this way, Kant would limit the kinds of maxims that could be universalized. Egoism and the
principle regarding the killing of Americans would be ruled out at the very outset because they involve a
violation of the dignity of rational persons. The process would be as follows:

1. Formulate the maxim (M).

2. Apply the ends test. (Does the maxim involve violating the dignity of rational beings?)

3. Apply the principle of the law of nature universalization test. (Can the maxim be universalized?)

4. Successful moral principles survive both tests.
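
This two-stage procedure can be sketched as a simple filter. Everything in the sketch below is a stand-in for illustration: the maxims are reduced to two hand-supplied boolean verdicts, since the real work of the ends test and the universalization test is philosophical, not mechanical.

```python
from dataclasses import dataclass


@dataclass
class Maxim:
    description: str
    violates_dignity: bool   # verdict of the ends test (step 2)
    universalizable: bool    # verdict of the law-of-nature test (step 3)


def morally_acceptable(m: Maxim) -> bool:
    # Step 2: the ends test screens out maxims that treat rational
    # beings merely as means, before the formal test is applied.
    if m.violates_dignity:
        return False
    # Step 3: the universalization test.
    return m.universalizable


# "Always kill Americans" may be formally universalizable, but it
# violates the dignity of rational persons, so it is ruled out at step 2.
kill_americans = Maxim("Always kill Americans",
                       violates_dignity=True, universalizable=True)
sincere_promise = Maxim("Make sincere promises when borrowing",
                        violates_dignity=False, universalizable=True)

print(morally_acceptable(kill_americans))   # False
print(morally_acceptable(sincere_promise))  # True
```

The sketch makes visible why the ends test adds content: a maxim can pass the purely formal test at step 3 and still be rejected at step 2.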

In any event, we may ask whether the CI principle of ends fares better than the CI principle of the law of
nature. Three problems soon emerge. The first has to do with Kant’s setting such a high value on
rationality. Why does reason, and only reason, have intrinsic worth? Who gives this value to rational
beings, and how do we know that they have this value? What if we believe that reason has only
instrumental value?

Kant’s notion of the high inherent value of reason will be plausible to those who believe that humans
are made in the image of God and who interpret that as entailing that our rational capabilities are the
essence of being created in God’s image: We have value because God created us with worth, that is, with
reason. But, even nontheists may be persuaded that Kant is correct in seeing rationality as inherently
good. It is one of the things rational beings value more than virtually anything else, and it is a necessary
condition to whatever we judge to be a good life or an ideal life (a truly happy life).

Kant seems to be correct in valuing rationality. It does enable us to engage in deliberate and moral
reasoning, and it lifts us above lower animals. Where he may have gone wrong is in neglecting other
values or states of being that may have moral significance. For example, he believed that we have no
obligations to animals because they are not rational. But surely the utilitarians are correct when they
insist that the fact that animals can suffer should constrain our behavior toward them: We ought not
cause unnecessary harm. Perhaps Kantians can supplement their system to accommodate this
objection.

This brings us to our second problem with Kant’s formulation. If we agree that reason is an intrinsic
value, then does it not follow that those who have more of this quality should be respected and honored
more than those who have less?
(1) Reason is an intrinsic good.
(2) The more we have of an intrinsically good thing, the better.
(3) Therefore, those who have more reason than others are intrinsically better.

Thus, by Kantian logic, people should be treated in exact proportion to their ability to reason, so
geniuses and intellectuals should be given privileged status in society, as Plato and Aristotle might argue.
Kant could deny the second premise and argue that rationality is a threshold quality, but the objector
could come back and argue that there really are degrees in ability to use reason, ranging from gorillas
and chimpanzees all the way to the upper limits of human genius. Should we treat gorillas and chimps as
ends in themselves while still exploiting small babies and severely senile people because the former do
not yet act rationally and the latter have lost what ability they had? If we accept the Kantian principle of
ends, what should be our view on abortion, infanticide, and euthanasia?

Kant’s principle of ends says all humans have dignity by virtue of their rationality, so humans are permitted
to exploit animals (who are intelligent but not rational). But suppose Galacticans who visited our planet
were superrational, as superior to us as we are to other animals. Would we then be second-class citizens
whom the Galacticans could justifiably exploit for their purposes? Suppose they thought we tasted good
and were nutritious. Would morality permit them to eat us? Kantians would probably insist that minimal
rationality gives one status, but then wouldn’t some animals who deliberate (chimps, bonobos, gorillas,
and dolphins) gain status as persons? And don’t sheep, dogs, cats, pigs, and cows exhibit minimally
rational behavior? Should we eat them? Many people in China, for instance, see nothing wrong with eating dogs and cats.

There is a third problem with Kant’s view of the dignity of rational beings. Even if we should respect
them and treat them as ends, this does not tell us very much. It may tell us not to enslave them or not to
act cruelly toward them without a good reason, but it doesn’t tell us what to do in situations where two or more of our moral duties conflict.

For example, what does it tell us to do about a terminally ill woman who wants us to help her die? What
does it tell us to do in a war when we are about to aim our gun at an enemy soldier? What does it mean
to treat such a rational being as an end? What does it tell us to do with regard to the innocent, potential
victim and the gangsters who have just asked us the whereabouts of the victim? What does it tell us
about whether we should steal from the pharmacy to procure medicine we can’t afford in order to bring
healing to a loved one? It’s hard to see how the notion of ends helps us much in these situations. In
fairness to Kant, however, we must say that virtually every moral system has trouble with dilemmas and
that it might be possible to supplement Kantianism to solve some of them.

The Principle of Autonomy

The final formulation of the categorical imperative is the principle of autonomy: “So act that your will
can regard itself at the same time as making universal law through its maxims.” That is, we do not need
an external authority (be it God, the state, our culture, or anyone else) to determine the nature of the
moral law. We can discover this for ourselves. And, the Kantian faith proclaims, everyone who is ideally
rational will legislate exactly the same universal moral principles.

The opposite of autonomy is heteronomy: The heteronomous person is one whose actions are
motivated by the authority of others, whether it is religion, the state, his or her parents, or a peer group.
The following illustration may serve as an example of the difference between these two states of being.
In the early 1960s, Stanley Milgram of Yale University conducted a series of social psychological
experiments aimed at determining the degree to which the ordinary citizen was obedient to authority.
Volunteers from all walks of life were recruited to participate in “a study of memory and learning.” Two
people at a time were taken into the laboratory. The experimenter explained that one was to play the
role of the “teacher” and the other the role of the “learner.” The teacher was put in a separate room
from which he or she could see the learner through a window. The teacher was instructed to ask the
learner to choose the correct correlate to a given word, and the learner was to choose from a set of
options. If the learner got the correct word, they moved on to the next word. But, if the learner chose
the wrong word, he or she was punished with an electric shock. The teacher was given a sample shock of
45 volts just to get the feeling of the game. Each time that the learner made a mistake, the shock was
increased by 15 volts (starting at 15 volts and continuing to 450 volts). The meter was marked with
verbal designations: slight shock, moderate shock, strong shock, very strong shock, intense shock,
extreme-intensity shock, danger: severe shock, and XXX. As the experiment proceeded, the learner
would generally be heard grunting at the 75-volt shock, crying out at 120 volts, begging for release at
150 volts, and screaming in agony at 270 volts. At around 300 volts, there was usually dead silence.

Now, unbeknown to the teacher, the learner was not actually experiencing any shocks; the learners
were really trained actors simulating agony. The results of the experiment were astounding. Whereas
Milgram and associates had expected that only a small proportion of citizens would comply with the
instructions, 60 percent were completely obedient and carried out the experiment to the very end. Only
a handful refused to participate in the experiment at all once they discovered what it involved. Some 35
percent left at various stages. Milgram’s experiments were later replicated in Munich, Germany, where
85 percent of the subjects were found to be completely “obedient to authority.”

There are two ways in which the problems of autonomy and heteronomy are illustrated by this example.
In the first place, the experiment seems to show that the average citizen acts less autonomously than
we might expect. People are basically heteronomous, herd followers. In the second place, there is the
question about whether Milgram should have subjected people to these experiments. Was he violating
their autonomy and treating them as means (rather than ends) in deceiving them in the way he did?
Perhaps a utilitarian would have an easier time justifying these experiments than a Kantian.

In any case, for Kant, it is our ability to use reason in universalizing the maxims of our actions that sets
rational beings apart from nonrational beings. As such, rational beings belong to a kingdom of ends.
Kant thought that each of us—as a fully rational, autonomous legislator—would be able to reason
through to exactly the same set of moral principles, the ideal moral law.

THE PROBLEM OF EXCEPTIONLESS RULES

One of the problems that plague all formulations of Kant’s categorical imperative is that it yields
unqualified absolutes. The rules that the categorical imperative generates are universal and
exceptionless. He illustrates this point with regard to truth telling: Suppose an innocent man, Mr. Y,
comes to your door, begging for asylum, because a group of gangsters is hunting him down to kill him.
You take the man in and hide him in your third-floor attic. Moments later the gangsters arrive and
inquire after the innocent man: “Is Mr. Y in your house?” What should you do? Kant’s advice is to tell
them the truth: “Yes, he’s in my house.”8 What is Kant’s reasoning here? It is simply that the moral law
is exceptionless. It is your duty to obey its commands, not to reason about the likely consequences. You have done your
duty: hidden an innocent man and told the truth when asked a straightforward question. You are
absolved of any responsibility for the harm that comes to the innocent man. It’s not your fault that there
are gangsters in the world.

To many of us, this kind of absolutism seems counterintuitive. One way we might alter Kant here is
simply to write in qualifications to the universal principles, changing the sweeping generalization “Never
lie” to the more modest “Never lie, except to save an innocent person’s life.” The trouble with this way
of solving the problem is that there seem to be no limits on the qualifications that would need to be
attached to the original generalization—for example, “Never lie, except to save an innocent person’s life (unless trying to save that person’s life will undermine the entire social fabric),” or “Never lie, except to spare people great anguish (such as withholding a grim diagnosis from a cancer patient).” And so on. The process seems infinite and time-consuming and thus impractical.

However, another strategy is open for Kant—namely, following the prima facie duty approach
advocated by twentieth-century moral philosopher William D. Ross (1877–1971). Let’s first look at the
key features of Ross’s theory and then adapt it to Kant’s.

Ross and Prima Facie Duties

Today, Ross is perhaps the most important deontological theorist after Kant, and, like Pufendorf, Ross is
a rule-intuitionist. There are three components of Ross’s theory. The first of these is his notion of “moral
intuition,” internal perceptions that both discover the correct moral principles and apply them correctly.
Although they cannot be proved, the moral principles are self-evident to any normal person upon
reflection. Ross wrote,

That an act, qua fulfilling a promise, or qua effecting a just distribution of good ... is prima facie right, is
self-evident; not in the sense that it is evident ... as soon as we attend to the proposition for the first
time, but in the sense that when we have reached sufficient mental maturity and have given sufficient
attention to the proposition it is evident without any need of proof, or of evidence beyond itself. It is
evident just as a mathematical axiom, or the validity of a form of inference, is evident.... In our
confidence that these propositions are true there is involved the same confidence in our reason that is
involved in our confidence in mathematics.... In both cases we are dealing with propositions that cannot
be proved, but that just as certainly need no proof.

Just as some people are better perceivers than others, so the moral intuitions of more reflective people
count for more in evaluating our moral judgments. “The moral convictions of thoughtful and well-
educated people are the data of ethics, just as sense-perceptions are the data of a natural science.”10

The second component of his theory is that our intuitive duties constitute a plural set that cannot be
unified under a single overarching principle (such as Kant’s categorical imperative or the utilitarian
highest principle of “the greatest good for the greatest number”). As such, Ross echoes the intuitionism
of Pufendorf by presenting a list of several duties, specifically these seven:

1. Promise keeping
2. Fidelity
3. Gratitude for favors
4. Beneficence
5. Justice
6. Self-improvement
7. Nonmaleficence

The third component of Ross’s theory is that our intuitive duties are not absolute; every principle can be
overridden by another in a particular situation. He makes this point with the distinction between prima
facie duties and actual duties. The term prima facie is Latin for “at first glance,” and, according to Ross,
all seven of the above-listed moral duties are tentatively binding on us until one duty conflicts with
another. When that happens, the weaker one disappears, and the stronger one emerges as our actual
duty. Thus, although prima facie duties are not actual duties, they may become such, depending on the
circumstances. For example, if we make a promise, we put ourselves in a situation in which the duty to
keep promises is a moral consideration. It has presumptive force, and if no conflicting prima facie duty is
relevant, then the duty to keep our promises automatically becomes an actual duty.

What, for Ross, happens when two duties conflict? For an absolutist, an adequate moral system can
never produce moral conflict, nor can a basic moral principle be overridden by another moral principle.
But Ross is no absolutist. He allows for the overridability of principles. For example, suppose you have
promised your friend that you will help her with her homework at 3 p.m. While you are on your way to
meet her, you encounter a lost, crying child. There is no one else around to help the little boy, so you
help him find his way home. But, in doing so, you miss your appointment. Have you done the morally
right thing? Have you broken your promise?

It is possible to construe this situation as constituting a conflict between two moral principles:

1. We ought always to keep our promises.

2. We ought always to help people in need when it is not unreasonably inconvenient to do so.

In helping the child get home, you have decided that the second principle overrides the first. This does
not mean that the first is not a valid principle—only that the “ought” in it is not an absolute “ought.” The
principle has objective validity, but it is not always decisive, depending on which other principles may
apply to the situation. Although some duties are weightier than others—for example, nonmaleficence “is apprehended as a duty of a more stringent character ... than beneficence”—our intuition must decide each situation on its own merits.
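
Ross's mechanism of presumptive force can be pictured as a simple weighing. The numeric weights below are invented for the homework example and are no part of Ross's theory, which leaves the comparison to reflective intuition rather than to any calculation.

```python
def actual_duty(relevant: dict[str, int]) -> str:
    """Return the actual duty: the relevant prima facie duty that
    carries the greatest weight in this particular situation."""
    return max(relevant, key=relevant.get)


# The homework-promise example: helping a lost child (beneficence)
# outweighs keeping the 3 p.m. appointment (promise keeping).
situation = {
    "promise keeping": 2,
    "beneficence": 5,
}
print(actual_duty(situation))  # beneficence
```

The overridden duty does not vanish from the model; it simply carries less weight here, which matches Ross's point that promise keeping remains objectively valid even when it is not decisive.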

Kant and the Prima Facie Solution

Many moral philosophers—egoists, utilitarians, and deontologists—have adopted the prima facie
component of Ross’s theory as a convenient way of resolving moral dilemmas. In doing so, they typically
do not adopt Ross’s account of moral intuitions or his specific set of seven duties (that is, the first two
components of Ross’s theory). Rather, they just incorporate Ross’s concepts of prima facie duty and
actual duty as a mechanism for explaining how one duty might override another.

How might this approach work with Kant? Consider again Kant’s innocent man example. First, we have
the principle L: “Never lie.” Next, we ask whether any other principle is relevant in this situation and
discover principle P: “Always protect innocent life.” But we cannot obey both L and P (we
assume for the moment that silence will be a giveaway). We have two general principles; neither of
them is to be seen as absolute or nonoverridable but rather as prima facie. We have to decide which of
the two overrides the other, which has greater moral force. This is left up to our considered judgment
(or the considered judgment of the reflective moral community). Presumably, we will opt for P over L,
meaning that lying to the gangsters becomes our actual duty.

Will this maneuver save the Kantian system? Well, it changes it in a way that Kant might not have liked,
but it seems to make sense: It transforms Kant’s absolutism into a modest objectivist system (as
described in Chapter 3). But now we need to have a separate criterion to resolve the conflict between
two competing prima facie principles. For Ross, moral intuitions performed that function. Since Kant is
more of a rational intuitionist, it would be the task of reason to perform that function. Perhaps his
second formulation of the categorical imperative, the principle of ends, might be of service here. For
example, in the illustration of the inquiring killer, the agent is caught between two compelling prima
facie duties: “Never lie” and “Always protect innocent life.” When determining his actual duty, the agent
might reflect on which of these two duties best promotes the treatment of people as ends, that is, beings
with intrinsic value. This now becomes a contest between the dignity of the would-be killer who
deserves to hear the truth and the dignity of the would-be victim who deserves to live. In this case, the
dignity of the would-be victim is the more compelling value, and the agent’s actual duty would be to
always protect innocent life. Thus, the agent should lie to protect the life of the would-be victim.

CONCLUSION: A RECONCILIATION PROJECT

Utilitarianism and deontological systems such as Kant’s are radically different types of moral theories.
Some people seem to gravitate to the one and some to the other, but many people find themselves
dissatisfied with both positions. Although they see something valid in each type of theory, at the same
time there is something deeply troubling about each. Utilitarianism seems to catch the spirit of the
purpose of morality, such as human flourishing and the reduction of suffering, but undercuts justice in a
way that is counterintuitive. Deontological systems seem right in their emphasis on the importance of
rules and the principle of justice, but tend to become rigid or to lose focus on the central purposes of
morality.

One philosopher, William Frankena, has attempted to reduce this tension by reconciling the two types
of theories in an interesting way. He calls his position “mixed deontological ethics” because it is basically
rule-centered but in such a way as to take account of the teleological aspect of utilitarianism. Utilitarians
are right about the purpose of morality: All moral action involves doing good or alleviating evil.
However, utilitarians are wrong to think that they can measure these amounts or that they are always
obligated to bring about the “greatest balance of good over evil,” as articulated by the principle of
utility.

In place of the principle of utility, Frankena puts forth a near relative, the principle of beneficence, which
calls on us to strive to do good without demanding that we be able to measure or weigh good and evil.
Under his principle of beneficence, he lists four hierarchically arranged subprinciples:

1. One ought not to inflict evil or harm.

2. One ought to prevent evil or harm.

3. One ought to remove evil.

4. One ought to do or promote good.

In some sense, subprinciple 1 takes precedence over 2, 2 over 3, and 3 over 4, other things being equal.

The principle of justice is the second principle in Frankena’s system. It involves treating every person
with equal respect because that is what each is due. To quote John Rawls, “Each person possesses an
inviolability founded on justice that even the welfare of society as a whole cannot override.... The rights
secured by justice are not subject to political bargaining or to the calculus of social interests.” There is
always a presumption of equal treatment unless a strong case can be made for overriding this principle.
So even though both the principle of beneficence and the principle of justice are prima facie principles,
the principle of justice enjoys a certain priority. All other duties can be derived from these two
fundamental principles.

Of course, the problem with this kind of two-principle system is that we have no clear method for
deciding between them in cases of moral conflict. In such cases, Frankena opts for an intuitionist
approach similar to Ross’s: We need to use our intuition whenever the two rules conflict in such a way
as to leave us undecided on whether beneficence should override justice. Perhaps we cannot decisively
solve every moral problem, but we can solve most of our problems successfully and make progress
toward refining our subprinciples in a way that will allow us to reduce progressively the undecidable
areas. At least, we have improved on strict deontological ethics by outlining a system that takes into
account our intuitions in deciding complex moral issues.

A Contract Involves Cooperation

But social contract ethics tells us that in a state of nature and state of war, people do not really get what
they want. In a state of nature with constant conflict and strife caused by unlimited freedom, people do
not get the security, stability, and creature comforts they want. Hobbes observes that human beings are
rational creatures; they are clever problem solvers who know how to figure things out in order to get
what they want. As clever creatures, then, human beings will devise a way to escape from the state of
nature: they will enter into mutually beneficial contracts with others. Sometimes people contract with
rulers and give them absolute power, or sometimes people informally contract with the people they live
with.

For Hobbes, humans don’t enter into cooperative ventures like contracts because they are naturally
cooperative creatures with natural inclinations toward sociability. They enter into cooperative ventures
because they realize they have a better chance of getting what they want when they form contracts
with others.

Recall from Chapter 3 that Aquinas discusses eternal law, divine law, natural law, and human law. For
Hobbes, there are basically only two kinds of law: scientific laws of nature and human-made laws. These
are the only laws that really exist. What Hobbes is doing is developing a view of ethics from the
perspective of someone who believes in scientific laws of nature but does not believe in eternal law,
divine law, or natural law from God.

Our focus in this chapter and in this tradition, then, is on ethical standards that are derivable from
human laws and human-made contracts. Thinking in this way is not very difficult to do; we can easily
think of contracts from the perspective of the legal and business world. An obvious example is trade and
commerce. For every commercial transaction, there is some kind of trade agreement: I’ll give you this,
you give me that. Whether in the Stone Age or the information age, the dynamic is the same: I’ll trade
my axe for your stingray spear, my dollar bills for your cup of coffee, or an increased balance on my
credit card for your computer program. Reciprocity is part of a contractual arrangement: you do this,
and I’ll do that. The commonly used Latin phrase quid pro quo captures the idea. The Latin literally
means: “this for that.” Other common phrases that capture the basic idea are “you scratch my back and
I’ll scratch yours,” and “tit for tat.” Because Hobbes was writing in England in the 1640s, the language he
uses to make this point about social contract ethics is much more formal:

Whensoever a man transferreth his right, or renounceth it; it is either in consideration of some right
reciprocally transferred to himself; or for some other good he hopeth for thereby. For it is a voluntary
act: and of the voluntary acts of every man, the object is some good to himself. And therefore there be
some rights, which no man can be understood by any words, or other signs, to have abandoned, or
transferred. As first a man cannot lay down the right of resisting them, that assault him by force, to take
away his life; because he cannot be understood to aim thereby, at any good to himself. (Hobbes 1651:
105)

When one agrees to enter into a contract, one is giving up a degree of freedom: one agrees to behave in
certain ways, and agrees not to behave in other ways. Thus, the nature of a contract has to do with the
amount of freedom possessed by each individual in the state of nature. In a contract, then, there are
boundaries on what individuals are allowed to do and what they should do. When individuals are totally
free, there are no boundaries. But that ends up being a state of war where everyone is worse off.
Hobbes envisions a strategy of cooperation as a better bet for satisfying one’s desires for peace,
stability, security, and creature comforts.

The social contract tradition wants us to see that in addition to the many individual explicit and signed
contracts, there are also many implicit and unsigned contracts; there is a social contract for all social
contexts. This is the only sense in which Hobbes concedes that we are social creatures: we will seek to
benefit ourselves by participating in cooperative ventures. The main principle of social contract ethics
is:

Principle of the Social Contract: One ought to agree to participate in social contracts.

Hobbes shows how a straightforward ethical egoism will not work. For if we all straightforwardly
followed an ethical egoism then we would put ourselves in a state of nature, a condition that any
rationally self-interested being would want to avoid. Therefore, we need to follow an enlightened
egoism, or “rule-egoism,” i.e., a social contract, in order to escape from the unhappy and unprofitable
state of nature. With the thought experiment of a state of nature and state of war, Hobbes shows that
human beings, as rationally self-interested, will agree to enter into a social contract precisely because
they are naturally self-interested. They will agree to follow a set of rules (a contract) only if they believe
they stand a good chance of benefiting themselves by receiving security and other benefits.

4.5 A Contract Involves Rationality

Rationality is a prerequisite for entering into a contract. Can you make contracts with non-rational
creatures like lions, tigers, wolves, birds, or mollusks? There is a story of how St. Francis of Assisi (from
the thirteenth century) made an arrangement with a wolf that was terrorizing the little Italian village of
Gubbio. St. Francis negotiated an agreement between the wolf and the villagers: if the wolf would stop
attacking the villagers, then the villagers would provide food for the wolf.
Someone like Hobbes, who takes a scientific worldview seriously, would likely say that such a story is
simply legend, for people cannot make agreements with wild animals. Hobbes would more likely point
to incidents such as the man in Taipei, Taiwan who in 2003 leapt into a lion’s den at the Taipei Zoo and
tried to convert a lion to Christianity. Luckily, because the lion had already been fed that day, the man was
only bitten in the leg. Hobbes would also point to the example of Roy Horn of Siegfried and Roy, who
was unexpectedly attacked by one of his tigers during a performance. Wild animals are unpredictable
and we cannot rely on the contracts we make with them. We cannot talk to them and make agreements
that “we’ll do this, and they’ll do that.” Hobbes, like many moral theorists, maintains that for a being to
engage in morality and perform moral actions, the being has to possess rationality and have the ability
to reason.

Another ground for asserting that rationality is a prerequisite for morality is that beings must be able to
understand that there are rewards and punishments attached to their actions. Even when animals are
trained with methods using rewards and punishments, there is still a rather high degree of probability
that the animals will not live up to “agreements” made with their trainers. Although there may be
extended periods of training and the animals may have long relationships with their trainers, wild animals can
turn on their trainers.

Rational beings also are known to break their contracts. But with rational beings – as compared with
non-rational beings – the chances are much higher that the contract will be observed; this is because the
being is rational and conscious of the benefits and the punishments. In Hobbes’s view, “a covenant
needs a sword”; words alone will not ensure that people follow the rules of the contract: “Covenants,
without the sword, are but words, and of no strength to secure a man at all” (Hobbes 1651: 129). This is
why ethics and laws must overlap. Punishment must be in the offing, or else whenever
human beings get the chance, their selfish nature will prompt them to break the rules and the
agreements they have made with others.

As proof that rules must have teeth if they are to do the job we want them to do, Hobbes reminds us of
what happens when the structure of a society temporarily breaks down. Consider how people behave
when there is a catastrophe or disaster of some sort, cases where there are no authorities to enforce
the law. People behave lawlessly when laws are temporarily suspended. They loot and riot. This is the
kind of activity that also happens during war. The common expression “raping and pillaging” captures
the kind of activity that goes on during war.

What this means for Hobbes is that when the structure of society breaks down and people fall back into
the state of nature, they simultaneously fall back into a state of war. And according to traditional
wisdom, all is fair in love and war, i.e., anything goes. In sum, then, in order for a contract to work
effectively, the contract must: (1) be between individuals who are rational beings who have the capacity
to agree to the contract and who will understand the terms of the contract, and (2) have some
mechanism in place to penalize those individuals who violate the contract.
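These two conditions echo the standard game-theoretic reading of Hobbes. As a rough illustration (my own sketch, not in the text; the payoff numbers and strategy names are arbitrary), an iterated prisoner's dilemma shows why a "sword" (retaliation against defectors) can make cooperation the better long-run bet even for pure egoists:

```python
# Toy iterated prisoner's dilemma: "C" = keep the covenant, "D" = defect.
# Payoffs (hypothetical): mutual cooperation beats mutual defection, but a
# lone defector does best in a single round against a trusting cooperator.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game; each strategy sees only the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        last_a, last_b = move_a, move_b
    return score_a, score_b

def always_defect(opponent_last):
    """The free rider facing no sword: defect every round."""
    return "D"

def tit_for_tat(opponent_last):
    """Cooperate, but punish the opponent's last defection: a minimal sword."""
    return "C" if opponent_last in (None, "C") else "D"

print(play(always_defect, always_defect))  # the state of war: (100, 100)
print(play(tit_for_tat, tit_for_tat))      # the covenant kept: (300, 300)
print(play(always_defect, tit_for_tat))    # exploitation pays once: (104, 99)
```

Against a retaliator, the defector gains only a one-round advantage and then forfeits the surplus of cooperation, which is the Hobbesian point: covenants backed by a credible penalty can be rational for self-interested agents.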

4.6 Common-sense Morality (Properly Understood)

Even though social contract theorists build egoism into their background assumptions, they nevertheless
still endorse common-sense morality. From the perspective of social contract ethics, it is in our rational
self-interest to follow certain basic rules: we should tell the truth, keep our promises, not deceive, not
steal, not kill, etc. Social contract ethics is a practical ethic; it gives us a reason to act and a reason to
follow these basic rules. The clearest evidence that social contract ethics endorses common-sense
morality is that, according to Hobbes, the golden rule sums up social contract ethics. Hobbes realizes
that ethical theory can be subtle at times and not thoroughly understood by everyone; but, he says, it is
inexcusable for someone to claim that he or she does not understand social contract ethics, because it
has been summed up in a principle that anyone can understand: “Do not that to another, which thou
wouldest not have done to thyself” (1651: 122).

But social contract theorists will interpret the golden rule with an egoistic slant. Think of it this way: why
should I not lie to you? A social contract theorist will say that the reason is: I don’t want you to lie to me.
Thus, treat others as you wish to be treated (the golden rule). From this perspective, the golden rule has
egoism built right into it. Hobbes finds no trouble with incorporating a principle traditionally thought to
have religious roots into his non-religious ethic. Similarly, he makes use of the concept of “contract” (he
often uses the word “covenant” too, which has obvious roots in the Judeo-Christian tradition in which
he was immersed, a tradition that believes there is a covenant between God and his people).

The concept of karma, which has had an important role to play in Indian ethics since ancient times,
would also be seen by contractarians as an ethical principle having an egoistic element built right into it.
The notion of karma is that every single action an individual performs contributes to what will happen to
that individual in his or her next life. Doing bad actions stores up bad karma and then in the next life one
would be demoted in some way from one’s current position in society. Doing good actions, on the other
hand, stores up good karma and then in the next life one would improve one’s position over one’s
current position in society. What an egoist notices about the law of karma is that playing by the rules
and doing good actions helps oneself in the long run, while breaking the rules and performing bad
actions only serves to harm oneself in the long run. In fact, in this tradition, since everyone is
responsible for their current position in society (because of their own past actions), others don’t deserve
your help at all. Contractarians are offering similar reasoning in that they claim that in following the
rules of society (abiding by the contract) one will benefit, and that the reason for helping others is not
born of altruism but of enlightened egoism.

We need not venture into such metaphysical speculations about a future life after death in order to use
this reasoning. The social contract tradition puts its emphasis on living in today’s society. The questions
to ask are: “Will I prosper in today’s society if I participate in the social contract?”; “Will I prosper if I
break the social contract?”; and “Will I be better off if there were no social contract at all?”

Let’s look at a few easy examples of how social contract ethics claims to support common-sense
morality and claims to offer us the best understanding of what common-sense morality actually is. If I
need money right now, where should I get it? Will my needs best be served if I steal it from the nearest
convenience store? No, because shortly after I demand the money from the salesperson at the
convenience store, I will likely get caught by the police. Getting arrested will leave me in a worse
situation than the one I am in right now. Would I want to live in a society where people do not
get caught for stealing? No, because then I run more of a risk that people will steal from me when they
need money. Social contract ethics, then, provides a good explanation of why we need to have a rule
prohibiting stealing and a good explanation of why I ought to follow that rule.

If someone asks me a question, should I tell that person the truth? Or, will my interests best be served if
I lie to that person? When people say that honesty is the best policy, they are referring to the idea that if
I adopt a policy of lying to people, then my lies will eventually catch up with me and I will be in a worse-
off position than if I had made honesty my policy. From a social contract egoistic perspective, honesty is
the best policy for me; I will be better off in the long run if I do not get involved in lying to people, trying
to cover my tracks, starting to believe my own lies, etc. For example, if I want to do well in a job
interview, should I deceive my interviewer? Will my interests best be served if I deceive the interviewer,
or will my deceit likely become exposed when I am hired and asked to do something that I am not
capable or qualified to accomplish? Social contract ethics claims to put common-sense morality on a
solid footing and claims to give the best understanding of why we follow the ethical rules that we do,
and why we should follow them.

4.7 Social Contract Ethics Applied

Before we consider more applications of social contract ethics, let us consider social contract ethics and
the problem of relativism. Social contract theorists assert that all ethical standards everywhere are
aspects of social contracts. There are many individual contracts, but the nature of morality is a contract,
and a prerequisite for morality is rationality. Social contract ethics is rationally based and has an egoistic
aspect. Because social contract ethics says that ethical standards depend upon the standards enacted by
societies, social contract ethics may seem similar to cultural or ethical relativism. But in the face of the
wide variety of even contradictory ethical standards in different societies, social contract ethics will
respond that even though the terms of the various contracts are different (there is wide variety),
nevertheless, the framework of the ethical standards developed by various societies is still a contract.
Thus, social contract ethics is a form of ethical universalism, not a form of relativism. As such, it can
accept that there is cultural relativism, but it will disagree with ethical relativism’s claim that ethics is a
purely relative enterprise without any kind of universal dimensions. To see more clearly how social
contract ethics solves the problem of relativism, consider what ethical relativism implies about following
the rules and laws of one’s society. Cultural and ethical relativism, when taken together, would imply
that people ought to be obedient to the folkways and standards of their society; whatever is right is in
the folkways, because right and wrong are determined by the folkways. A robust ethical relativism says
that ethics is always relative to a society; there are no ethical standards outside a society’s standards. If
the rules and laws that are in the folkways are always right, then people ought to always follow the
rules and laws. But social contract theorists do not argue that people should give blind obedience to the
rules and laws of their society. In fact, the social contract tradition provides a strong argument justifying
civil disobedience in cases where the rules and laws of a society are unjust. In an ethical relativist view,
it would not make much sense to disobey the rules or laws of one’s society, but social contract theorists
have famously endorsed civil disobedience in situations where the terms of the contract are unjust, or
where all parties are not equally observing the terms of the contract.

In the social contract ethics tradition, a main issue is whether the contract is a good contract or a bad
contract. We have to check to see if the conditions of the contract are arranged correctly and whether
people are doing what they are supposed to be doing. Is the social contract indeed giving everyone who
agreed to participate in it the payoff they deserve, in return for giving up some of their unlimited
freedom? The likely situation is that the contract is beneficial in some way to some parties, for how else
would the contract be created in the first place?

But given the selfish nature of human beings, it is also a likely bet that some people in the contract are
benefiting more than others, because the ones in the position to take advantage of others and get away
with it are tempted to do so. The result? Some individuals or groups are being taken advantage of either
because the contract itself is rigged against them, or because the terms of the contract are not truly
being honored by all parties. An ethical concept that sums up whether the contract is a good contract or
a bad contract is justice. Is the social contract just or unjust? As a practical ethic, social contract theory
can help us justify certain moral rules (like “don’t lie,” “don’t steal,” “don’t kill”) and it can help us to
show how certain moral rules that may be endorsed by a certain group really do not have rational
backing.

For instance, in the United States there used to be so-called Jim Crow laws, which were segregation
laws to separate the races in public spaces. Blacks were forced to use separate facilities, whether in
hotels, restaurants, railway cars, restrooms, or water fountains; and there were even laws banning
interracial marriages. These Jim Crow laws were supported and upheld by many rulings of the United
States Supreme Court. Can social contract ethics help us to show that Jim Crow laws, although endorsed
by white Americans and the US Supreme Court, really do not have rational backing?

According to social contract ethics, people – as rational beings – will agree to follow a set of rules (a
contract) only if they believe they stand a good chance of benefiting themselves. Were adult black
Americans rational beings with the capacity to agree to a contract and understand the terms of the
contract? Yes. As rational beings in a state of nature, would they agree to give up their freedom in
following Jim Crow laws because they believe that they will receive security and benefits in this
arrangement? No, not at all. As rational beings, why should they accept different standards from what
everybody else must follow? Jim Crow laws did not recognize blacks as individuals with equal standing in
the wider community; these laws were designed to limit where blacks could carry out their normal tasks
of everyday living. Jim Crow laws were indications that an unjust social contract was functioning in the
United States.

Here is another example. Although the United States declared its independence from the British
government in 1776, it took the 15th Amendment to the Constitution of the United States, ratified in
1870, for blacks to gain the right to vote, and the 19th Amendment, ratified in 1920, for women to be granted the
right to vote. Social contract ethics helps to show that laws prohibiting blacks and women from voting,
although endorsed by white men, are really unjust. For again, the individuals we are talking about are
rational egoists who are capable of agreeing or disagreeing with a contract. In using a social contract
ethic, adult blacks and women would not accept this kind of arrangement: why would they give up the
unlimited freedom that they enjoy in the state of nature for the terms of this unjust contract?

Thus, it is clear that with social contract ethics we are not talking about following rules and laws simply
for the sake of following rules, or being obedient to the status quo for the sake of being obedient. No,
we are talking about following the rules if they are reasonable rules for rational and selfish beings to
follow. What makes them reasonable, according to this tradition, is determined in the light of rational
egoism: a rule is reasonable to me when I am benefiting from it, and not only benefiting in the short run.
We are talking about the rules that are necessary to have a reasonably organized and stable society
where people can exercise the most amount of freedom without that freedom bringing down the whole
framework. Managing the terms of the social contract will always be a balancing act. According to social
contract ethics, eternal vigilance is necessary, for we are all selfish, and there must be watchdogs and
gatekeepers who ensure that people are following the rules and not taking advantage of their positions
of power in society.
4.8 Conclusion

Even though the social contract tradition observes and endorses egoism, it realizes that the state of
nature is undesirable and the only way out of the state of nature is through cooperation. It realizes there
is no “i” in “team.” We even see examples of cooperation in nature, as when Canada geese fly in a V-
formation because it makes the flying easier for each individual goose. For humans, a social contract is
necessary for conditions of social cooperation.

Nevertheless, critics of social contract ethics will point out that there is something about common-sense
morality that does not sit right with social contract ethics. A commonsensical expression about ethics,
for instance, is that ethics and being ethical are about doing the right thing even when no one is
watching. Even though social contract ethics sounds convincing when it says that a social contract is
necessary for conditions of social cooperation, if that is what we truly believed ethics amounted to,
would we have good reason to act ethically when no one is looking? If Hobbes is correct about human
nature – that all people are self-interested – then wouldn’t they be motivated to cheat and take
advantage when they knew they were not going to get caught? True, social contract ethics will say we
need a general rule against cheating if we are to effectively and productively cooperate, but when it
comes to a particular situation, if no one knows I’m cheating, as an egoist I will want both to agree to
the general rule that cheating is wrong and to cheat anyway. As an egoist, won’t I try to have my cake and eat it too?
Discussions about social contract ethics call this kind of person a free rider: one who wishes to benefit
from the rules but who will violate the rules if he or she can get away with it. Because of issues like this,
some social contract theorists attempt to de-emphasize the egoistic dimensions of social contract ethics.

Social contract ethics does provide solutions to all four problems in ethics, though. With regard to
philosophical questions about human nature, Hobbes argues that all human beings are ultimately self-
interested: psychological egoism is his solution to the problem of human nature. This view is also known
as rational egoism, because in addition to being selfish creatures (like any animal), humans are rational.
So human beings have created rules for themselves in order to escape the state of nature. The solution
to the problem of the origins of ethics, then, is that ethical standards come from human beings who
have created these standards by creating contracts. Although social contract ethics says that ethical
standards depend upon the standards enacted by societies, and it grants that the terms of the various
contracts are different, it responds to the problem of relativism by saying that all ethical standards are
still contracts and all human beings are rational and self-interested. So it accepts cultural relativism and
universalism, but denies ethical relativism.

As a solution to the problem of what makes something right or wrong, social contract ethics holds that
something is right if it is benefiting you and wrong if it is harming you (aka, ethical egoism). Social
contract ethics realizes, though, that a near-sighted understanding of ethical egoism leads to a state of
war. So when asking the question about what makes something right or wrong, the focus should be on
the rules, laws, and contracts of one’s society (aka, rule-egoism). Individuals must ask themselves if the
ethical rules their society is asking them to follow yield benefits for them. A society’s particular contract
will often provide answers about right and wrong, but the rational participants in the society must
critically evaluate the contract to make sure it is a just contract, one where people give up certain of
their freedoms, but only in order to get the benefits of a stable and secure society.
In the next chapter we will look at another tradition of modern ethics, utilitarian ethics. In some ways
utilitarianism continues Hobbes’s project, in that it tries to ground ethics in a scientific rather than a
religious worldview. Utilitarianism also follows contractarianism in thinking of ethics in a
consequentialist way, so called because rules and laws are determined to be right or wrong depending
on the consequences that they will bring to people.

In other ways, though, utilitarianism is a departure from social contract ethics. Hobbes was thought of
as radical because he straight-facedly endorsed egoism, but Hobbes was still traditional because he
rested ethics on rationality. Utilitarians, as we will see, are regarded as radical not because of their views
on egoism, but because they broke with the longstanding rationalist tradition that sees ethics as resting
on rational foundations. The utilitarians regard ethics as not grounded in human rationality, but rather
in human feelings.

2.1 What Are Virtues?

A virtue is a trait of character that is good for a person to have. Consider the ethical
concepts of tolerance, generosity, integrity, honesty, and kindness. In their noun form they are traits of
a person. Even though perhaps most of them can be put into the form of an adjective and applied to
actions (a tolerant action; a generous action, an honest action; a kind action), the focus in virtue ethics is
on traits of character of a person. Thus in this ethical tradition, the focus will be on a person who has (or
lacks) tolerance, generosity, integrity, honesty, kindness, etc.

In addition to defining a virtue as an excellent trait of character, in 337 bce the ancient Greek
philosopher Aristotle offered another way to define a moral virtue. A moral virtue, he argued, is a mean
between two extremes. This is known as his principle of the golden mean.

Principle of the Golden Mean: A moral virtue is a mean between two extreme vices (the vice of excess
and the vice of deficiency).

In ancient China, Confucius recommended a similar principle regarding virtues; and in ancient India, the
Buddha called his philosophy of life the middle way. The central idea of the principle of the golden mean
is that a moral excellence – a moral virtue – consists in a mean state. Aristotle explains:

By virtue I mean virtue of character; for this [pursues the mean because] it is concerned with feelings
and actions, and these admit of excess, deficiency and an intermediate condition. We can be afraid, e.g.,
or be confident, or have appetites, or get angry, or feel pity, in general have pleasure or pain, both too
much and too little, and in both ways not well; but [having these feelings] at the right times, about the
right things, towards the right people, for the right end, and in the right way, is the intermediate and
best condition, and this is proper to virtue. Similarly, actions also admit of excess, deficiency and the
intermediate condition. Now virtue is concerned with feelings and actions, in which excess and
deficiency are in error and incur blame, while the intermediate condition is correct and wins praise,
which are both proper features of virtue. Virtue, then, is a mean, in so far as it aims at what is
intermediate. (Aristotle 337 bce: 44)
In Aristotle’s ethical theory, the moral virtues are concerned with both the feelings and the actions of a
person. Aristotle explains that how we handle our feelings, and the rational judgment we use in
developing our virtues, are important for human flourishing (i.e., important for ethics, for living an
ethical life). In Aristotle’s view, each human being possesses a soul, a rational soul. The rational soul provides
human beings with the capacity to control their feelings, either well or poorly. If feelings are controlled
well, then virtues develop; if feelings are controlled poorly, then vices develop and stand in the way of
flourishing. Aristotle writes:

We have found, then, that we wish for the end, and deliberate and decide about what promotes it;
hence the actions concerned with what promotes the end will express a decision and will be voluntary.
Now the activities of the virtues are concerned with [what promotes the end]; hence virtue is also up to
us, and so is vice. (Aristotle 337 bce: 66)

The virtue courage, Aristotle explains, is a mean between cowardice on the one extreme and rashness
or fearlessness at the other extreme. The issue is about how one handles fear. If one is overcome with
fear, then one will be cowardly. On the other hand, if one ignores fear altogether, that is the other
extreme: one is fearless or rash. Excellence is navigating between the two extremes. Since we are
rational creatures we are in a position to control our behavior. If we allow feelings to overcome us, we
are not in control. On the other hand, if we deny that we have certain feelings then we are denying our
own human nature, which is not a rational thing to do, but rather a foolish and irrational thing to do.

Besides courage, another example of a virtue is temperance. Here again, by using the principle of the
golden mean we can recognize what the virtue temperance consists in. Temperance is a mean that has
to do with our desires. If we let our desires control us, then we are intemperate. At the other extreme, if
we deny our desires then we are denying our human nature. The excellence is controlling one’s desires
to the proper degree. In virtue ethics, controlling one’s desires to the proper degree – developing the
virtue of temperance – is something we ought to do in order to bring about our own well-being.

There are two further clarifications to be made about the principle of the golden mean. First, it is not a
mathematical mean. A helpful analogy is with archery. Hitting the middle of the target is not average; on
the contrary, it is excellent. Developing virtues (hitting the bull’s eye of a target) requires effort. A
beginner might be lucky enough to hit the bull’s eye once, but to do it over and over again involves
practice and skill development. The Buddha once said that it is easier to conquer others than it is to
conquer oneself. What he is pointing to is the difficulty of controlling one’s feelings and actions
appropriately in a consistent manner. Developing moral virtues is a challenge, and when one achieves
success in developing a moral virtue, that is excellent. A contemporary example involving the inability
to control one’s feelings and actions is road rage. Should we allow anger to control us (as in road rage),
or should we control anger to the point where we don’t become angry at all? For Aristotle, the virtue
regarding anger is to feel it and express it to the proper degree, not too much and not too little.

The second point to notice about the golden mean is that it is not a precise mean but rather a mean
relative to us. Another sports analogy would be the sweet spot on a baseball bat. The best place to hit
the ball on the bat is not the exact center of the bat; the mean is relative because the excellent place for
the ball to meet the bat is toward the thick end of the bat. This is only an analogy. But the point is that
the golden mean is not always exactly in the center; it may be off-center if that is what will allow the
person to flourish. For Aristotle, ethics is not a precise science: ethics is about living a good life, and that
is not something one can do with absolute precision. Human lives, even excellent ones, often take the
form of a zigzag, not a straight line. A virtue, a golden mean, is not a one-size-fits-all concept; it may not
look the same in different people.

2.2 Aristotle, Happiness, and the Virtues

Over the centuries there have been slightly different ways of defining a virtue, but the basic idea has
been rather constant. The basic idea was given its first systematic treatment and analysis by Aristotle in
ancient Greece. Although Socrates and Plato also worked in virtue ethics, it was Aristotle who composed
the first book-length treatment of virtue ethics in 337 bce.

Aristotle was interested in many subject areas, ethics being only one of them. He studied humans in the
same way that he studied many other beings and organisms. In an attempt to do a systematic study he
wished to understand all facets of human beings, including their biology, their psychology, their social
aspects, their political aspects, their art, their logical abilities, etc. Aristotle begins his famous book on
ethics, not by discussing the virtues, but by discussing human nature and human happiness. He observes
that all humans seek happiness, and he uses logical argument to show that it is reasonable that all
human beings seek happiness, since it is the only thing valuable for its own sake.

He then turns his attention to finding a way to properly understand happiness. What is happiness? What
does the happy or good life consist in? His answer leads to a discussion of the virtues, which he
understands as “excellences.” A virtue is an excellent trait of character that is good to have. It is good to
have because it leads to an individual’s achievement of happiness, or flourishing. Happiness, Aristotle
argues, consists in the full development of one’s potential. (Rather than use the word “happiness,” it
may be more appropriate to translate the Greek term he uses, eudaimonia, with the terms “well-being”
or “flourishing.”)

Aristotle observes that all humans are seeking to achieve well-being and seeking to flourish. He is not
claiming, of course, that because humans seek after happiness, they will necessarily achieve it.
Developing one’s natural talents into virtues is challenging. It is deserving of praise when one does so,
since developing virtues is an excellent achievement. In modern English, Aristotle’s study of human
behavior and his inquiry into the elements that contribute to human well-being and flourishing is known
as the study of human nature. Inquiring into human nature, as we will see in all subsequent chapters of
this book, is an important concern for anyone who studies ethics. For, after all, if we are to discuss how
we ought to live (i.e., discuss ethics), isn’t it important to get clear for ourselves about what kind of
being we are, so that we can then determine what kind of life is appropriate for the kind of creature we
are?

Two easy phrases sum up significant aspects of Aristotle’s theory of human nature: (1) Humans are
rational animals, and (2) humans are social/political animals. Both phrases characterize human beings as
kinds of animals. This indicates the kind of inquiry Aristotle was conducting. He was in the habit of
studying all aspects of the world around him, from stars, to physical objects, to animals, to humans. He
is known for his system of classification, and in his view humans are classified as animals. But humans
are unique beings, since they have rational powers that no other creatures possess. The second major
characteristic, that humans are social/political animals, tells us that humans flourish in groups. They
have social origins (mother/father/child) and they succeed in social pursuits, such as having friends and
allies, and living in communities, towns, and nations. Living a good, happy life, then, will involve living a
life in accord with our rational and social natures, and developing all of the virtues associated with the
natural abilities of rational and social creatures.

2.3 A Developmental Model

Human beings are not born with moral virtues. A moral virtue is a trait that gets developed by habit. As
Aristotle describes it:

Virtue of character results from habit . . . Hence it is also clear that none of the virtues of character
arises in us naturally . . . Thus the virtues arise in us neither by nature nor against nature, but we are by
nature able to acquire them, and reach our complete perfection through habit. (Aristotle 337 bce: 33–4)

Good habits (virtues) are the building blocks of good moral character. Humans are born with the
potential for virtue, and they are also born with the power to control their own actions and to guide
their own moral, physical, and mental development (see Diagram 2.1). Because humans are rational creatures,
they are aware of what they are doing. They have the choice about which actions to perform. If we
stopped to reflect on our own actions once in a while, we should see that repeated actions become
habits, and these habits can be either good or bad for us. Although doing the right thing and doing what
is good for you can often be a struggle, virtue ethics teaches that repeated actions become habits, and
when one has developed a habit, it makes doing certain actions much easier. Good habits (virtues)
contribute to one’s growth and development as a person.

Diagram 2.1

Potential ⇒ Repeated actions ⇒ Formation of habits ⇒ Character

In thinking about virtues, there is a kind of input/output dynamic to consider. There is a time in one’s life
when one is attempting to develop virtues – this is the input phase. It is during the input phase that
one’s character is in the process of being developed. But then, once habits are established, and one’s
character is well formed, certain actions and the fruits of those good actions seem to flow from one’s
character effortlessly. When we have developed an ingrained habit, it makes doing the task at hand
much easier; we can perform difficult actions without even trying. This is the output phase, and the
dimension of character we are familiar with in common expressions such as, “she is acting out of
character today.” When you have a developed character, people will come to know what kinds of
actions to expect from you.

Just as bad habits are hard to break, proponents of virtue ethics point out that good habits are also
hard to break. Helpful analogies to use for this
input/output dynamic are learning to play a musical instrument and driving a car. At first, playing an
instrument is difficult – it can be painful, and one moves at a very slow pace. But when mastery of the
instrument is achieved, one can then do things with little effort. Similarly, in learning to drive a car, at
first it seems difficult to steer, to apply the right amount of pressure to the gas and to the brake, to see
the road and cars ahead, and at the same time see the cars to one’s side and in the rear. But when
driving a car is mastered, one seems to be driving effortlessly. The old saying about riding a bike also
applies here: once you learn how to ride a two-wheeler, you will never forget how to do it. Through
repetition, certain actions become second nature to us. In the words of St. Francis of Assisi: “First do
what is necessary, and then do what is possible; before you know it, you’ll be doing the impossible.” St.
Francis’s phrase sums up the input/output dynamic of developing virtues.

In Aristotle’s developmental model, the idea of imitating role models (mimesis is Greek for imitation) is
also seen as very important. This is because one of the natural ways that human beings learn is by
imitating others. Think
of the expression “monkey see, monkey do.” With the help of role models we learn which virtues are
important and we also learn what kinds of actions one engages in when one has a particular virtue. In
the Christian tradition, for example, we see that Jesus often reminds his followers to imitate the various
perfections of God, e.g., God is loving, God is forgiving, God cares for us, and so we too must behave in
like manner to the people we interact with: “Forgive us our trespasses, as we forgive those who trespass
against us.” Further, in St. Paul’s letters – which make up a large portion of the New Testament of the
Christian Bible – Paul continues this approach, advising that the good life is a life that imitates
the life of Christ. St. Francis, St. Dominic, and many other saints, using the same virtue ethics model,
took Paul’s advice. They attempted to live lives modeled after the life of Jesus. A phrase attributed to St.
Francis poetically sums up the notion of modeling: “Preach the gospel at all times, and sometimes use
words.”

Another way to make the point about role models in virtue ethics is to refer back to Aristotle’s
description of humans as social animals. This fact of our human nature again points to our being
affected by others. In addition, although some virtues seem more focused on us as individuals – for
example, courage is about how we handle fear, and temperance is about how we handle our desires – we
should notice that many of the virtues, such as honesty, generosity, tolerance, have to do with our
dealings with others. Many excellent traits are social traits.

One dimension of all of the virtues is the fact that not only are virtues good for the individual who
possesses them, but they are also good for those who have social contact with the virtuous person. An
obvious example of this that we have already mentioned is that one who has a virtue is modeling for
others, and others will benefit from that. But there are a number of different ways that others benefit
from our virtues. For example, if I am generous, I am generous to others; if I am honest, I am honest to
others; if I am tolerant, I am tolerant of others. People who interact with a virtuous person will benefit.
The flipside of this is also true: the people who deal with the individual who lacks virtues and has many
vices (i.e., vicious people) will be negatively affected. Thus, I will be harmed by dishonest people, by
unkind people, by people who lack integrity, and by those who cannot control their desires.

2.4 Universalism and Relativism Again

Under Aristotle’s virtue ethics theory, certain virtues are good for anyone to have, no matter what
culture one is from. The Egyptian text The Instruction of Ptahhotep, written more than 4,000 years ago,
states that the following virtues should be practiced toward everyone: self-control, moderation,
kindness, generosity, justice, truthfulness, and discretion. For Aristotle, because we have a shared
human nature, there are human traits that are important for any rational and social animal to possess.
All human beings share enough features so that some virtues are necessary and important no matter
what particular cultural circumstances we could imagine. Other virtue theories, such as the Christian
virtue theory, also link together virtue ethics and universalism.

But some virtue theorists maintain that the character traits a particular society chooses to regard as
virtues and the character traits it chooses to regard as vices are purely relative to that society. Virtue
relativists will point to the fact that each culture has its own catalogue or list of virtues it thinks is
appropriate and necessary for living a good life in its society. It is quite possible, then, to have a
relativistic interpretation of the virtues (see Diagram 2.2). While a universalist virtue ethics will
acknowledge that different cultures emphasize different virtues, it nevertheless asserts that there are at
least some virtues that are universally important. A relativist virtue ethics, by contrast, notes that
different cultures emphasize different virtues and then asserts that there are no universal ethical
standards or virtues – all ethics is relative.

Here we can review a point from Chapter 1. Although virtue ethics is compatible with both cultural and
ethical relativism, people who interpret the different inventories of virtues found in different eras and
different cultures (i.e., cultural relativism) as proof of ethical relativism are being too hasty with their
conclusion.

Diagram 2.2

Possible combinations:

Relativist virtue ethics: Cultural relativism & ethical relativism & virtue ethics

Universalist virtue ethics: Cultural relativism & ethical universalism & virtue ethics

As we saw in Chapter 1, universalists contend that there are at least some ethical values, standards,
principles that are not relative. In the context of virtue ethics, a universalist would contend that there
are at least some virtues that are not relative. So, even though Aristotle’s list of the virtues is different
from a Christian list of virtues, a universalist will point out that there are some virtues that appear on
both lists, for example, justice, courage, and honesty. A universalist will notice that even though Greeks
such as Aristotle, and Christians such as St. Paul, have different worldviews, nevertheless, there are
certain common elements in both traditions. For example, not only do both traditions agree on the
importance of some particular virtues, but they both conceive of ethics as having to do with the search
for happiness (though the Christian view of happiness involves eternal salvation rather than merely earthly happiness)
and they both use the concept of a role model.

As was mentioned in Chapter 1, the view that there are universal human traits having to do with our
language, our facial expressions, the way our minds work, our food preferences, etc. is very much alive
in some of the sciences today. If that view is correct, then there is a natural place for a universalist virtue ethics.
Evolutionists say these traits and the human potential Aristotle spoke of have been put into place by
natural selection, while theologians will say these traits and this potential have been put into place by
God. Nevertheless, the view that we can observe common human traits comes from many quarters and
fits in with a universalist virtue ethics.

Thus, we can use virtue ethics to think about the problem of relativism. To consider the issue a bit
further, we could ask ourselves: does it make sense to conclude that all human virtues are relative?
Does it make sense to say that if a culture values a certain trait as virtuous then that trait will come to
be looked upon as virtuous, instead of vicious? Does it make the most sense to say that in identifying
virtuous behavior, all human experience is utterly dependent on its own era, historical circumstance,
and the influence of one’s society?

It seems that the motives, actions, emotions, and relationships that give rise to virtues in other cultures
and societies will often have a counterpart in our own lives. This seems to tell us that there are shared
human values and virtues, if not apparent on the surface, then below the surface. Hence we can reach
the same conclusion about the problem of relativism as we did in Chapter 1: ethical relativism is an
exaggeration of cultural relativism.

2.5 Virtue Ethics: A Guide to Good Behavior

Although virtue ethics can be linked with relativism or universalism, in a very important respect virtue
ethics is unlike both of those theories. Ethical relativism and ethical universalism primarily address the
basic question “What is ethics?” in the sense of “Where do ethical standards come from?” As we saw
in Chapter 1, ethical relativism and ethical universalism are offered as solutions to the problems of
relativism and the origins of ethics. Virtue ethics, on the other hand, addresses more practical questions
such as, “How should I live my life?” and, “What kind of person should I try to become?” These
questions are central to the problem of what makes something morally right or wrong, which is a
practical ethical problem.

When compared to relative ethics and universal ethics, then, virtue ethics offers us more help and
guidance when we are trying to decide what we ought to do or what kind of person we ought to
become. Even though Aristotle’s basic virtue ethics framework is over 2,000 years old, we can fruitfully
use it in thinking about how to live our lives today.

But Aristotle’s virtue ethics is not a rigid formula for how to live the good life. In the first few pages of his
book on ethics, Aristotle qualifies his ethical theory:

Our discussion will be adequate if its degree of clarity fits the subject-matter; for we should not seek
the same degree of exactness in all sorts of arguments alike . . . Moreover, what is fine and what is just .
. . differ and vary so much that they seem to rest on convention only, not on nature. Goods, however,
also vary in the same sort of way . . . Since these, then, are the sorts of things we argue from and about,
it will be satisfactory if we can indicate the truth roughly and in outline; since we argue from and about
what holds good usually, it will be satisfactory if we can draw conclusions of the same sort . . . since the
educated person seeks exactness in each area to the extent that the nature of the subject allows.
(Aristotle 337 bce: 3–4)

Aristotle is doubtful that ethics admits of the same degree of precision as science and mathematics; he
offers an ethical theory that honors the complexity and richness of human experience.

One area of Aristotle’s ethical theory that I will underemphasize is his view that because we are
essentially rational, that is our ultimate purpose. If we only think of ourselves as rational beings, then
the good life would be the life of contemplation and thinking. But this overlooks the other sides of the
human person because, even by Aristotle’s own admission, we are not only rational beings, we are also
emotional, social, and political beings. This more controversial aspect of Aristotle’s ethical theory, in
which he recommends a life of contemplation over a life of action, can be safely set aside. One recent
interpretation has it that Aristotle meant that the life of contemplation is the ultimate purpose only for
people at the end of life, those in retirement; this is an intriguing suggestion.

A more modest aspect of Aristotle’s claim that humans are rational animals is simply that human beings
have cognitive abilities; they are aware of the choices they make and can control the choices they make.
This is why the actions they choose to take, the habits they form, and the characters they develop, are
all their responsibility. Human beings have help, of course, from their social surroundings and from their
role models but, according to virtue ethics, human beings are not fully conditioned by their
environment. There is an innate potential that all humans share (a universal human nature), and it is up
to us individually to have a hand in developing that potential.

Another interesting way in which we can see that virtue ethics has practical applications is by noticing
how it fits in with literature. There are some ethical theorists who insist that studying great literature
that touches on ethical themes is a highly effective way to study ethics, especially virtue ethics.
Universalists will emphasize how literature can help us to recognize universal human needs, and
universal human values and virtues. Relativists will emphasize how literature can introduce us to values,
ways, and customs that are totally foreign to us.

Character is a key ethical concept and a key literary concept. Think about the characters in books and
movies. What virtues and vices do they possess that animate the plot? Narratives provide true-to-life
illustrations of what the philosopher Bernard Williams calls “thick” ethical concepts (such as treachery
and promise and brutality and courage), and so narratives can help those moral philosophers who might
be preoccupied with “thin” ethical concepts like right, good, is, and ought. Virtue ethics focuses on the
whole person and on the concrete experiences of living a good life. Great writers can skillfully describe
characters, their choices, the lives they lead, and the virtues they either have or lack.

2.6 Pros and Cons of Virtue Ethics

Though virtue ethics is a rich and varied ethical tradition, this chapter has primarily focused on
Aristotle’s virtue theory. As an ethical theory generally, though, virtue theory has some good things
going for it. Based on the brief description of virtue ethics in this chapter, we are now in a position to
point to some of the attractive qualities of virtue ethics: it is reasonable, somewhat flexible, and it
focuses on the whole person and thick concepts, all of which contribute to its being easily
understandable. Furthermore, virtue ethics fits well with common sense, especially when it emphasizes that
good habits lead to good results such as happiness, well-being, and flourishing.

One of the challenges in applying virtue ethics to one’s own life is that one must choose which virtues to
aspire toward. Who are our role models? As relativists are quick to point out, we have many lists of
virtues to choose from (see Diagram 2.3). Here are two contemporary examples.

Diagram 2.3 Sample lists of virtues

Aristotle: courage, temperance, gentleness, modesty, righteous indignation, liberality, magnificence,
proper pride, honesty, wittiness, friendliness

Christian: faith, hope, charity, prudence, justice, courage, temperance

Benjamin Franklin: temperance, silence, order, resolution, frugality, industry, sincerity, justice,
moderation, cleanliness, tranquility, chastity, humility

Boy Scouts: trustworthy, loyal, helpful, friendly, courteous, kind, obedient, cheerful, thrifty, brave,
clean, reverent

Stephen Covey: be proactive; begin with the end in mind; put first things first; think win/win; seek first
to understand, then be understood; synergize; sharpen the saw

The Boy Scouts of America require each scout to memorize and live by a list of Boy Scout virtues, which
they call the Scout Law: a scout will be trustworthy, loyal, helpful, friendly, courteous, kind, obedient,
cheerful, thrifty, brave, clean, and reverent. The book The 7 Habits of Highly Effective People, which was
on the best-seller list in the late 1990s for several years, recommends seven habits for success and its
author, Stephen Covey, claims that he is interested in restoring the character ethic. Even though Covey
never uses the word “virtue,” it is clear that his approach fits well within a virtue ethics framework, for
he emphasizes good habits, living a good (effective) life, and developing good character.

When faced with many lists of virtues, we can simplify things by remembering that within a given
tradition there is often one animating virtue that sets the tone for the rest of the virtues valued by that
particular tradition. In religious ethical traditions we will notice that for the Christians, even though they
value many different virtues (faith, hope, charity, prudence, temperance, justice, courage, etc.), St. Paul
says it is charity – love – that is the most important Christian virtue: “So these three things continue
forever: faith, hope, and love. And the greatest of these is love” (1 Corinthians 13:13). St. Augustine
furthers the point by saying that all of the other virtues are simply forms of charity. A contemporary
writer recently claimed that the virtue hospitality sums up the Franciscan ethical tradition. The one most
important virtue emphasized by Confucius in ancient China is jen, which is translated as goodness,
benevolence, compassion, or simply, humanity. The one most important Islamic virtue is obedience, for
human beings are called to obey the will of God. In fact, the word “islam” actually means submission or
surrender.

In non-religious ethical traditions like Aristotle’s virtue ethics, some have said that moderation sums it
up, for, after all, doesn’t Aristotle endorse the principle of the golden mean? That is not quite right,
though, because this interpretation takes away from the notion that a virtue is an excellence, not a
modest, mellow, or average middle-of-the-road position. Think of a grading system where a grade of
A is excellent, B+ is very good, B is good, C is average, etc. A virtue is an excellence; it is not middle-of-
the-road, moderate, a C for average; it is earning an A! The word “virtuoso” means having exceptional
skill; we should use that word not only in music, where we most frequently hear it, but in ethics too. So,
saying we can sum up Aristotle’s ethic as emphasizing the one virtue “moderation” is a bit misleading.
Instead of trying to zero in on one virtue, perhaps we can focus on a few. The eighteenth-century moral
philosopher David Hume, for example, emphasizes two virtues: benevolence and justice.

Another challenge to virtue ethics that contemporary philosophers raise is that while virtue ethics gives
us good reason to try to develop virtues in ourselves as individuals, it does not overtly require us to have
concern for human beings generally. If we are looking for an ethical theory that tells us that we must
have concern for all human beings as a moral obligation, we do not seem to find it in virtue ethics.

2.7 Conclusion

This chapter has dealt with all four philosophical problems in ethics. With regard to the philosophical
problem of the origins of ethics, Aristotle’s virtue ethics claims that ethical standards come from a
combination of human nature and society; ethical standards do not come from God or religion. Ethical
standards are not solely derived from one’s society because there is a universal human nature that
cannot be totally ignored. Human flourishing cannot solely be determined by what a society decides
because human flourishing and well-being are tied to human nature. Societal standards that contradict
human nature would not lead to human happiness.

It is easy to see how Aristotle’s virtue ethics also provides a solution to the problem of human nature.
Not only does he argue that there is a universal human nature, but Aristotle goes some way toward
filling in some details about what that human nature consists in, beginning with the observation that all
human beings are striving after happiness. And, as we have seen, Aristotle describes human beings as
rational animals and as social/political animals. As rational beings, humans can control their feelings and
actions, and can choose what kinds of habits they will develop.

When faced with the philosophical problem of relativism and its main question, “Is ethics relative to
society?,” someone might simply respond that, yes, ethics and virtues are always relative. Aristotle,
though, when faced with the relativist question, might answer, “yes” and “no.” As an ethical universalist,
Aristotle will say that cultural relativism may be true because we do observe ethical diversity among
cultures, but ethical relativism couldn’t be true because there are some virtues that are important to
have, no matter what culture one belongs to.

Finally, as a solution to the problem of what makes something morally right or wrong, virtue ethics
answers questions about how to determine the right thing to do, how one should live a life, what
counts as a life lived well, and what kind of person one should become, in terms of virtues and universal
human nature. A trait is virtuous if it is a product of our developed natural potential and if it contributes
to our happiness, well-being, and flourishing.

3.1 What Is Natural Law and Where Does It Come From?

The best way to get to the meaning of the natural law is to put it in the context of other laws that we are
more familiar with. We are all familiar with law as something that legislators and governments are
involved with. These are what Aquinas calls human laws, because they are designed, proposed, passed,
and enacted by humans. The kind of law that humans design, however, is not the only kind of law there
is, because humans are not the only law givers. For Aquinas, the supreme law giver is God.

God’s plan for all of reality involves laws. Since Aquinas’s worldview includes a belief in God as the
creator, then everything that exists anywhere in nature has its ultimate source in God. Another
characteristic of God is that God is all-knowing (omniscient). God has a plan or blueprint of some kind
for all of reality, thus God knows why reality is designed the way it is. A big difference between God and
the natural world as we know it (creation), is that God is eternal, while the natural world as something
that God has created is finite. While it is possible for the natural world to go out of existence, it is not
possible for God to go out of existence. God was not created; God has always existed and always will
exist. Thus, in addition to believing there are human laws, Aquinas also believes there is an eternal law.
The eternal law is God’s plan as God understands it. Humans, as finite beings, can never understand
God’s plan as God understands it. In the following passage, Aquinas describes the eternal law:

Just as craftsmen must have in mind plans of what they are making (blueprints), so those who govern
must have in mind plans of what those subject to their government ought to do (laws). God’s wisdom,
thought of as the plan by which he created everything, is a blueprint or model; thought of as the plan by
which he directs everything to its goal, it is a law. The eternal law is indeed nothing else than God’s wise
plan for directing every movement and action in creation. (Aquinas 1270: 284)

So far, then, we have two kinds of law: human law and eternal law. The difference between these two
kinds of law is emphasized in the Bible, and St. Augustine memorably captures the difference between
the two of them by using the phrases “city of God” and “city of man.” While the city of man has human
laws to organize it, the city of God has the eternal law as its ultimate guide.

We are finally in a position to make a first approximation of what natural law is. In the words of Thomas
Aquinas, “natural law is the rational creature’s participation in the eternal law.” Even though everything
in creation bears the mark of its creator, only rational creatures are able to consciously become aware
of this and understand what the eternal law requires of them. Aquinas describes the natural law:

Reasoning creatures follow God’s plan in a more profound way, themselves sharing the planning,
making plans both for themselves and for others; so they share in the eternal reasoning itself that is
imprinting them with their natural tendencies to appropriate behaviour and goals. This distinctive
sharing in the eternal law we call the natural law, the law we have in us by nature. For the light of
natural reason by which we tell good from evil (the law that is in us by nature) is itself an imprint of
God’s light in us. (Aquinas 1270: 281)

But because the eternal law is intimately known only by God, human rational creatures must settle for a
somewhat second-best understanding of the eternal law. After all, it does not seem realistic that human
beings would be able to understand God’s plan in the way that God understands it. So, again, the
natural law is the rational creature’s understanding of the eternal law: the natural law is only a partial
glimpse into God’s plan for human beings. But even though the natural law is only a partial glimpse of
God’s plan, it is nonetheless a reliable guide for determining the basic outlines of an ethical life. We can
see how Aquinas is working out a solution to the problem of the origins of ethics. He is arguing that
ethical standards have their ultimate origin in God’s plan.

There is yet another type of law that is significant: divine law. Even though there is a natural law through
which all rational beings know the difference between right and wrong, what human beings can
figure out by their reason and reflection alone will not be sufficient for them to achieve eternal
salvation. In Aquinas’s religious worldview, it was necessary for God to have revealed more specific
guidance about how human beings ought to live their lives, because living according to the dictates of
the natural law would not be seen as sufficient guidance for people to reach eternal salvation.

Aquinas quotes St. Paul’s Letter to the Romans as an example of an ancient reference to the natural law.
Although poetic, Paul’s phrase “the natural law is written on the human heart,” is helpful in capturing
some of the elements of natural law ethics. “For though the pagans do not have God’s law,” Paul says,
“nevertheless they know the difference between right and wrong for they have the law written on their
hearts.” We can see how Aquinas’s analysis of the different kinds of law is important, because it helps us
to more clearly understand what Paul is getting at. Paul’s claim seems confusing at first because he uses
the same word, law, to refer to two different kinds of law. To clear up this ambiguity Aquinas has
distinguished between divine law and natural law.

3.2 The Natural Law and Universal Ethics

Because natural law ethics emphasizes that there is one natural law that all human beings ought to
follow and the ultimate source of the natural law is the one God, it is pretty obvious that natural law
ethics, like Aristotle’s virtue ethics, is another form of universalist ethics.

A question that we will immediately ask, though, is why, if everyone has the same natural law written on
their hearts, do we see such ethical variety in the world? This is a question that Aquinas addresses. In
other words, natural law theory attempts to solve the problem of relativism.

Aquinas’s answer is basically that everyone does indeed have the same moral law available to them as
long as they are rational beings. But when we are embroiled in the complex and complicated daily
affairs of countless individuals, then things begin to get messy. As we attempt to apply right and wrong
to our unique situations, our judgment can become clouded by bad habits or misguided passions. For
Aquinas, there are indeed universal moral standards and we come to know these universal moral
standards, not through human law, not through human feelings or emotions, not through our society’s
customs, but through human reason. Though we come to know these standards through reason, their
ultimate source is of divine origin. Aquinas, like Aristotle, holds that ethics is rooted in human nature
and that human nature is universal. When Aquinas talks about our natural inclinations to preserve life,
to propagate, and to seek knowledge, he is referring to every member of Homo sapiens.

3.3 Natural Law Ethics and Human Nature

The method that Aquinas suggests for moving from the abstract idea of a natural law to more specific
ethical duties or obligations is the following. If we observe human nature and human natural
inclinations, then we will recognize that humans are naturally directed toward basic and fundamental
values/goods. In saying that humans are naturally directed toward certain universal goods, Aquinas is
echoing Aristotle’s view of human nature. Aquinas says that the things to which human beings have
natural inclinations are naturally apprehended by human reason as good, and therefore are objects to
be pursued, while their opposites, as evils, are to be avoided. Centuries earlier, Aristotle advanced a
similar ethical approach. For Aquinas, we need to look at human natural inclinations (human nature), to
figure out what the natural law is and what the natural law requires us to do.

Aquinas identifies four categories of fundamental human goods: life, procreation, sociability, and
knowledge. The first fundamental human good is our own life. If we observe human behavior we will
notice that people have a natural inclination to preserve themselves. This natural inclination reveals
itself in many, many ways, from the most basic to the more complex. A simple way in which this natural
inclination reveals itself is in our very bodily actions. If you try to fall flat on your face, literally, you will
probably not be able to do it. You will likely put your hands out in front of you to break your fall. You
have a natural inclination to preserve yourself; it is instinctual. A more complex example of how the
natural inclination to preserve oneself operates in human beings is by having a job. One of the basic
objectives of work is to “make a living,” to preserve one’s life.
The second fundamental human good Aquinas identifies is the human natural inclination toward sexual
reproduction. Like the first inclination to self-preservation, this inclination toward sexual activity (and
hence reproduction) can be thought of as instinctual in human beings. Here it is important to recognize
that these natural inclinations are not necessarily conscious. One cannot say that he or she is inclined to
sex, but is not inclined toward reproduction. Sexuality naturally leads to reproduction.

The third natural inclination is toward sociability. Here again we can hear the echoes of Aristotle. In
Chapter 2, we observed that Aristotle’s solution to the problem of human nature included the notion
that humans are social animals. This is what Aquinas is getting at here. We have a natural inclination to
sociability in that we naturally have social relationships from the day we are born with our parents, our
siblings, our friends, our own children, etc. It is inescapable that all humans come from a social
environment, and humans seem to strive naturally to be in a community environment. Think of peer
pressure, for instance. We naturally want to be accepted by our peers, and so we often cave in to the
pressure they put on us.

The fourth natural inclination Aquinas identifies, our natural inclination toward knowledge, also has
echoes of Aristotle. As Aristotle said, we are rational animals. The opening line of Aristotle’s book
Metaphysics is that “all men by nature desire to know.” Since this is a natural inclination that all
human beings have, we should think broadly about what is claimed here. A very basic example of this
natural inclination is that we are curious creatures; we want to know things. Human beings ask
questions. We ask questions because we want to know things. And when we ask questions we want the
truth. We have a natural inclination to knowledge and the truth. And for Aquinas’s religious worldview,
we have a natural yearning to know the truth about God.

Our natural inclinations incline us toward certain goods. The words “incline” and “inclination” are
helpful. Think of their meaning in terms of an inclined plane, a slant. On an incline, a ball will naturally go
in a certain direction: down. We, too, have natural directions; our natural inclinations slant us toward
certain goods. Another way to describe this is to say that human beings naturally value life, sexuality,
social interaction, and knowledge. If human beings naturally value these things, then we can call them
values. Thus, natural law theory asserts that there are fundamental human values. Now that we have
looked at Aquinas’s solution to the problem of human nature, we can consider how he provides a
solution to the problem of what makes something morally right or wrong.

How should we behave toward these goods, these fundamental values? The natural law ethic says we
ought to preserve and promote these values, not destroy them or contradict them. This then is the main
principle of natural law ethics:

Principle of Natural Law: We ought to perform those actions that promote the values specified by the
natural inclinations of human beings.

What does that mean, practically speaking? Let’s take the first inclination, toward self-preservation.
Natural law ethics tells us that we ought to perform actions that promote our self-preservation and
avoid actions that will destroy or contradict our self-preservation. At the extreme, we should not kill
ourselves. To directly contradict our natural inclination to preserve ourselves is wrong, hence suicide is
immoral. Less extreme examples would include that we ought to take care of our health; we should not
engage in risky behavior that will harm ourselves, like drug addiction, self-mutilation, reckless driving,
etc.
Given the fact that we are rational beings, we have the capacity to realize that not only do we have
these natural inclinations, but other human beings have them too. Thus, we ought not to stand in the
way of others as they pursue their own self-preservation. This is precisely what the Golden Rule asks us
to do:

Principle of the Golden Rule: Do unto others as you would have them do unto you.

The principle of the golden rule is not only part of the Christian heritage, but also appears in many other
religious traditions such as Confucianism, Buddhism, Jainism, Zoroastrianism, Hinduism, Judaism, and
Sikhism (see Diagram 3.1).

Notice how a natural law argument against murder, for instance, differs from a divine law argument
against murder. A divine law argument against murder might go like this: premeditated murder is wrong
because, as it says in the Bible, “Thou shalt not kill.” Natural law ethical reasoning is different; it says
that each person has a natural inclination to preserve their own life, hence it is wrong to stand in the
way of, or go against, another person’s natural inclination to preserve their own life; hence murder is
immoral. Aquinas is claiming that we can reach ethical conclusions simply through natural law ethical
reasoning, without consulting divine law.

Now take the second inclination, toward procreation. Natural law ethics tells us that we ought to
perform actions that promote procreation and avoid actions that will destroy or contradict our
inclination toward sexual reproduction. Thus, we ought to allow for sexual unions that yield children,
and we ought to refrain from actions that stand directly opposed to procreation, like artificial
contraception, sterilization, homosexual activity, and masturbation. This is the kind of reasoning that
Catholic theologians have used in developing their sexual ethic.

With regard to the third inclination, toward sociability, we can think back to the social virtues that
Aristotle said we are naturally predisposed to develop because of our social nature: generosity, honesty,
and friendliness. Aquinas mentions that we ought to avoid offending the people we live and associate
with. Thus natural law ethics advises us to behave socially, to get along with others, and be cooperative.

The last natural inclination, toward knowledge, may not seem to be directly about ethics, but if we have
a natural inclination toward the truth and people feed us lies, then they are in violation of the natural
law. Also, if we ourselves are naturally inclined toward knowledge, yet we do not allow ourselves to gain
in knowledge and wisdom, then we are not living up to the obligation we have to ourselves and to
others to seek the truth. For Aquinas, we must shun ignorance.

Diagram 3.1

The Golden Rule

Confucianism

Never do to others what you would not like them to do to you. (5th century BCE)

Buddhism

Hurt not others with that which pains thyself. (5th century BCE)
Jainism

In happiness and suffering, in joy and grief, we should regard all creatures as we regard our own self,
and should therefore refrain from inflicting upon others such injury as would appear undesirable to us if
inflicted upon ourselves. (5th century BCE)

Zoroastrianism

Do not do unto others all that which is not well for oneself. (5th century BCE)

Classical Paganism

May I do to others as I would that they should do unto me. (Plato, 4th century BCE)

Hinduism

Do naught to others which if done to thee would cause thee pain. (Mahabharata, 3rd century BCE)

Judaism

What is hateful to yourself, don’t do to your fellow man. (Rabbi Hillel, 1st century BCE)

Christianity

So in everything, do to others what you would have them do to you. (Jesus of Nazareth, 1st century CE)

Sikhism

Treat others as thou wouldest be treated thyself. (16th century CE)

3.4 Natural Law Ethics and Virtue Ethics

Natural law ethics incorporates virtue ethics. In the same way that Aristotle sees a certain direction in
human needs and actions (what I have called the developmental aspect of his views on human nature),
Aquinas sees the same. Virtues are perfections; they are the natural outcome of following the
directionality that is built into human nature. In Aquinas’s view, our human nature was intentionally
created and designed by God, and our lives only reach their natural end when they take us closer to
God. The virtues are the fruits of performing actions toward a goal – our human good.

But the Christian list of virtues that Aquinas promotes differs from Aristotle’s list of virtues. The most
important difference is that Aquinas’s list, in addition to having moral virtues, also includes theological
virtues. While moral virtues are formed through repeated actions and habit, the theological virtues –
faith, hope, and charity – have their origin in God’s grace. This makes sense if we think of expressions
like “faith is a gift.” We cannot earn faith in God as we would develop a moral virtue, but rather, it is
through God’s grace that we are given the gifts of faith, hope, and charity. So, for Aquinas’s religious
version of virtue ethics, there are some virtues where we must rely heavily upon God’s grace. The
theological virtues have a similar place in natural law ethics as the divine law. Just as the divine law is
needed for people to achieve supernatural happiness, so too, are the theological virtues necessary for
supernatural happiness.
Overall then, the virtues have a place in Aquinas’s natural law ethics: when one is working to develop
one’s moral virtues, one is living in accord with the natural law. When one is putting oneself in a position
to receive God’s grace, one is preparing oneself for the theological virtues.

3.5 When Following the Natural Law Is Unclear: Use the Pauline Principle

It seems rather straightforward to say that when we are working toward developing virtues then we are
living in a way that is consistent with natural law ethics. Now, though, we must consider situations
where it is not clear what the natural law requires of us.

In his writing, Aquinas deals with ambiguous moral situations. He recognizes that even if we are trying to
follow a natural law ethic, there are still times when it is difficult to decide on the right thing to do. For
Aquinas, this is a very real aspect of living a human life. Just as we saw in Aristotle’s virtue ethics, living a
human life can be a zigzag. As human beings, we have many ethical responsibilities and we have a great
deal of potential that requires our efforts before it can manifest itself in virtues. There are times,
though, when these responsibilities pull us in different directions.

There is an important New Testament principle that Aquinas incorporates into his natural law ethic. It is
called the Pauline principle because we find it in Paul’s Letter to the Romans. A more popular phrasing
of the principle is: the end does not justify the means.

Pauline Principle: It is not morally permissible to do evil so that good may follow. (The end does not
justify the means.)

There may be situations where we are tempted to perform an action that we are not proud of, but we
consider doing it just for the purpose of bringing about some further goal we have. Is it OK, for example,
to turn the other way sometimes in order to bring about a greater good?

Take lying, for instance. Because of our natural inclination toward knowledge and toward sociability, we
ought to tell the truth. So lying is not in keeping with the natural law ethic. There are situations, though,
where we feel that we are justified in telling a lie because we are anticipating that there will be better
results if we told the lie than if we tell the truth. On occasions like these Aquinas advises us to remind
ourselves of the Pauline principle. It is not permissible to do evil so that good may come.

Take a more extreme example: a lifeboat situation. A group of people have survived a cruise disaster.
The lifeboat can only hold 20 people but right now there are 28 people in the boat. It appears that the
lifeboat is sinking and will not hold this many people. There are some survivors who are severely injured
and have now become comatose. One of the healthy survivors suggests throwing some of the injured
overboard in order to save the majority of survivors. Is such an action morally justifiable under natural
law ethics?

We know that this is a direct action of killing: throwing people overboard will certainly lead to their
death. The natural law would say that directly killing is wrong. But yet, if a few sick and comatose people
were sacrificed, then 20 healthy people would be saved. Isn’t that worth it? Here is where the Pauline
principle can remind us that we should not do evil in the hopes that good may come from it.

So would the natural law ethic really say that in this lifeboat situation we ought to do nothing? But then
everyone will die! This is where the religious worldview of natural law ethics can help us see how we
could possibly live with such a tragedy. By not doing an immoral action to try to bring about a good
result, we are leaving it in God’s hands. Perhaps God will help the whole group to be saved by sending a
fishing boat in its vicinity just in the nick of time. Or it could turn out that the sick are not thrown
overboard, and all 28 people die. But, by the ethical standard of the Pauline principle, it would still count
as the morally correct thing to do, because, according to the principle, the rightness or wrongness of an
action is not determined by the outcome, but by the principle of the thing. It is simply wrong, in
principle, to directly kill a few comatose people in the hope that a future good may come of it.

3.6 When Following the Natural Law Is Unclear: Use the Principle of Double Effect

Let’s look at another, more complicated, case. We know from above that it is in accord with natural law
to protect our lives. And we also know that it is against natural law to take another’s life. But how about
a situation where we are being physically attacked? Is it morally permissible to kill in self-defense? Is it
contrary to natural law to kill others who are attempting to preserve themselves (even if they are
attacking)?

Since Aquinas is working out of a biblical tradition, there is an obvious reference here to the fifth
commandment of the Hebrew Bible, “Thou shalt not kill.” As a Christian philosopher, Aquinas will want
to make his ethical theory consistent with the Bible’s commandments. But does that mean that natural
law ethics prescribes that we ought not to kill, even in cases of self-defense? If we apply the Pauline
principle we seem also to reach the same conclusion that killing in self-defense is morally wrong. For in
killing in self-defense wouldn’t we be doing evil (killing our attacker) so that good may come (our life will
be preserved)?

Aquinas, though, would say that we have not yet properly analyzed the situation from the perspective of
natural law ethics. If we look closely at the main principle of natural law ethics (that we ought to perform
those actions that promote the values specified by the natural inclinations of human beings), we should
notice that one of the key elements of natural law ethics is that we need to use our free will to perform
these actions. In assessing the morality of a particular action it is important to note where we are
putting the energies of our free will. What are we willing? What are we intending?

If we are intending to preserve our own life, then we are acting in accord with the natural law. If we are
intending to destroy a life then we are not acting in accord with the natural law. The question about the
morality of killing in self-defense should therefore center on our intentions.

In a situation when an attacker is threatening our life, if we struggle to protect ourselves and in that
process our attacker gets killed, we have not committed an action inconsistent with natural law ethics. If
our intention is genuinely to protect our lives, then we are acting in accord with the natural law. If an
accidental by-product of this morally good action involves the destruction of a human good, this is
unfortunate, but it does not render our action immoral (see Diagram 3.2). Here is how Aquinas describes
the situation:

An act of self-defence may have two effects: it may save one’s own life and cost the attacker his. Now
intending to save one’s own life can’t make an act illegitimate, since it is in the nature of all things to
want to preserve themselves in being as far as they can . . . Somebody who uses more force than
necessary to defend himself will be doing wrong, though moderate use of force can be legitimate . . .
However, it is not licit for a man actually to intend to kill another in self-defence. (Aquinas 1270: 390)
Diagram 3.2

Intention ⇒ Action ⇒ Consequence 1 (Save my own life)

⇒ Consequence 2 (Take attacker’s life)

Thus, Aquinas has given us a Principle of Double Effect:

Principle of Double Effect: It is morally permissible to perform an action that has two effects, one good
and the other bad, if certain conditions are met.

The first condition is that the act itself must be good; the second is that we must be intending the good
outcome, not the bad; the third is that the action must not violate the Pauline principle (the evil effect is
not pursued for the sake of a further good effect); and fourth, it must be a serious situation, for, after
all, a basic human value or good is being destroyed.

Take a case that is different from life and death. Consider the second natural inclination, toward
procreation. If we have a natural inclination to procreate, we ought to promote that good by performing
actions that preserve that value. To perform actions that destroy that value is inconsistent with the
natural law. Thus, to sterilize oneself would directly destroy that value. But what if I am ill and the only
way to cure my illness is to perform an operation that will cause me to become sterile? If I am intending
to undergo the operation because I am intending to protect my life (following from the natural
inclination to preserve one’s life), and the sterilization is only a side effect, then we have a case where
one action will have two effects, one good and one evil. If I am intending the good one, though, and
there is a serious reason for this operation, then, according to the principle of double effect, I am
performing an action that is consistent with the natural law, and is therefore, moral.

Let’s look at one example from each natural inclination. That takes us to the third one, the natural
inclination to sociability. Here is an example that relates to smoking. Every year during national
smoke-out week, people are told about the hazards of smoking and are given tips on how to break the habit.
One tip involves keeping yourself out of a situation where you will be tempted to smoke. Thus you
should avoid other smokers. So let’s say the action in question involves an invitation to go out with a
group of friends who, unfortunately, all smoke. You can go with them or not. If you go with them, you
will likely be tempted to smoke, both because smokers will surround you and because their second-hand
smoke will surround you. Say you choose not to go out with them. That chosen action has a good effect
and a bad effect. The bad effect is that you are going against your inclination to be social and be with
your friends. The good effect is that you are preserving your health. If you are intending to preserve
your health, then declining the social invitation from your friends is merely a negative side effect.

A fourth example involves the natural inclination to knowledge. One way to fulfill your natural
inclination to acquire knowledge is by reading. When you do a lot of reading, however, it causes eye
strain, and because you have eye problems your eye doctor has advised you to limit your daily reading.
The action we are considering is continual reading. The good effect is that you gain more knowledge; the
bad effect is that you damage your vision. When we apply the principle of double effect we must ask if
you are intending to damage your vision. If not, but you are intending the good effect only (more
knowledge), and the bad effect is not the means to the good effect, then your action, though it has
negative side effects, is morally permissible.

3.7 Conclusion

Although the virtues, which are more about one’s character than one’s actions, are incorporated into
the natural law framework, natural law ethics places much more emphasis on the analysis of moral
actions and the application of principles to determining the morality of actions. When applying natural
law ethics, the main element to focus on is one’s intention: are we intending to follow the natural law?
Another aspect that differentiates medieval natural law ethics from ancient virtue ethics is that natural
law ethics is cast into a religious framework. Thus, according to natural law ethics, when we as
individuals develop the virtues, we are following a law that ultimately stems from God’s will.

Natural law as a moral law does have limitations even from the perspective of Aquinas himself, because,
as he points out, the divine law and the theological virtues are aids from God that are necessary to
achieve supernatural happiness. Today, natural law ethics is thought to have even more limitations. One
major reason for this is that today many moral theorists are skeptical about any kind of reasoning that
proceeds from observations about human natural inclinations to moral conclusions. Individuals who
share the religious worldview that undergirds natural law ethics, however, will be less skeptical about
this kind of reasoning.

Natural law ethics offers solutions to all four philosophical problems in ethics. Its solution to the
philosophical problem of the origins of ethics is that ethical standards have their ultimate origin in God’s
plan for the world. Ethical standards are not solely derived from one’s society. Since God has created
human beings, Aquinas believes we can discern God’s plan for us by examining and reflecting upon the
natural inclinations of human beings. Natural law ethics therefore has a well-developed solution
to the problem of human nature. Human beings are rational and social beings that are naturally striving
toward basic goods. Ethical standards are importantly rooted in human nature, though human nature is
not their ultimate origin, since God is responsible for human nature being what it is.

With regard to the problem of relativism, Aquinas maintains that the apparent relativity of ethics does
not detract from the ultimate universal features of ethics, which are grounded in the universal features
of human beings. Like the other universalists we have looked at, Aquinas will accept the fact that there
is cultural diversity and disagreement in ethical standards, and will advise us not to take the ethical
disagreement and cultural relativity we observe as conclusive proof that there are no permanent and
universal standards in ethics. For Aquinas, the standards exist just as surely as God exists, and human
skepticism alone concerning these matters does not disprove their existence.

As a solution to the problem of what makes something morally right or wrong, natural law ethics
answers questions about how to determine the right thing to do, and how one should live, in terms
of the natural law. An action is right when it is consistent with the natural law. As a highly developed
solution to the problem, natural law ethics provides several ethical principles that guide such a
determination: the principle of natural law, the golden rule, the Pauline principle, and the
principle of double effect. As an ethical theory, it offers much practical guidance.
In this chapter we have sketched the basic outline of natural law ethics. Our next chapter will also use
the concept “laws of nature” but in a very different way, a way that is critical of religious natural law.
Our next chapter will address our first modern ethical theory: social contract ethics, a theory and
tradition that for the most part attempts to avoid religious and divine references. It is a view influenced
by modern science as it began to develop in the seventeenth century.