
Forms of Utilitarianism

[Ben Eggleston & Dale E. Miller, 2014]

Act Utilitarianism:
Definition: An act is right if and only if it results in at least as much overall well-being as any alternative act the agent could have performed.
Rightness is directly conceptually dependent on well-being.
All acts that are not optimific are morally wrong.
Narrow - ONLY well-being matters morally; other properties of the action (its being disloyal, its arising from selfish motives, etc.) have no independent bearing on the moral assessment of the action.
The optimific action available to the agent is obligatory.


An agent's duty at any given time, according to act utilitarianism, is not to act so that the resulting world has as much overall well-being as a world can have, but just to act so that the resulting world has at least as much well-being as any world that could have resulted from the acts that were among the agent's options at the time of acting.
Maximisation in terms of the options available to the agent, not the idea of maximising
in the sense of leaving no increases to be achieved subsequently.
What are we to make of a world's overall well-being? The sum of the well-being had by the entities capable of having well-being (sentient creatures). Morality is NOT concerned with achieving the greatest happiness for the greatest number - it is often the case that the most beneficial act is different from the act that will distribute the benefit most widely.
The moral value of an action does not depend, at all, on whether the act complies with any kind of moral rule. That said, act utilitarianism is not blind to the usefulness of moral rules as heuristic tools. Similarly, an understanding of customary morality is important for foreseeing probable consequences and the likelihood of harm being caused. However, rules have no right-making characteristics. Actions are not morally evaluated by reference to rules, even if rules are invoked as heuristic devices.

Important Distinctions:

NB: Act utilitarianism requires the maximisation of well-being but is compatible with various distinct conceptions of well-being.
Similarly, different conceptions of act utilitarianism differ over whether the moral value of an act depends on its actual consequences or on those the agent could reasonably have expected when the act was performed.
Important also to distinguish between the criterion of moral evaluation and an action-guiding principle. The majority of criticisms aimed at act utilitarianism are predicated on the belief that AU is offered as an action-guiding principle as well as a criterion of moral evaluation.


Strengths:

1. Maximisation of well-being as the rational aim. Compare with satisficing approaches, which could allow someone to consciously and deliberately favour a suboptimal act despite having full capability to perform the optimal one; to call such an action moral would be counterintuitive.
2. Sensitivity to context. Allows for different actions being right or wrong as determined
relative to the relevant contextual details. Avoids the overly schematic nature of certain
deontological approaches which identify certain classes of actions as wrong or right
regardless of situational context.


Criticisms:

1. Impracticality of implementing AU. At the deliberative level, AU seems too demanding. Humans are epistemically limited and cannot predict all the consequences - or even the likely consequences - that will result from an action. As many of an act's effects on future well-being are unforeseeable, AU demands an impossible amount of foresight.
- Counter: AU does not necessarily recommend AU as a decision-procedure!
2. Harms of implementing AU. Breakdown of coordination and cooperation if morality is so flexible as to allow the moral status of actions to change radically depending on context. People could not expect other agents to behave in predictable ways - every occasion being an opportunity for maximising well-being. [ideal decision procedure]
- Counter: AU does not necessarily recommend AU as a decision-procedure!
3. Immoral Consequences. Depending on circumstance, the right-making qualities of AU apply to intuitively unpalatable actions. This claim is often taken to indicate that AU misconstrues morality, or at least does not paint a sufficient picture of the moral landscape. We have more than a prima facie duty to keep promises (special obligations). Can AU make sense of desert? Is AU prepared to treat people unjustly in order to promote overall well-being? - T.M. Scanlon gives the example of Jones, who has fallen victim to an accident in a television station where he is trapped, experiencing painful electric shocks, and the only way to free him would involve turning off the TV transmitters for 15 minutes. However, a World Cup match is being broadcast to millions of people.

Rule Utilitarianism:
Definition: An action is right in so far as it conforms to an authoritative moral code or set of rules whose general acceptance value (in terms of promoting well-being) is at least as good as that of any available alternative code.
The authoritative moral code is NOT merely a decision-procedure. It provides the moral
standard according to which actions are morally evaluated.
NB: like act utilitarianism, RU can adopt different theories of the good, frame its theory in terms of actual or expected outcomes, and make either average or total utility the object of maximisation.

Unpack the idea of collective ideal-code rule utilitarianism:

- Collective: the same moral code is authoritative for all members of the group in question.
- Ideal-code: the authoritative moral code is the one whose general adoption would be utility-maximising, even if that code is not actually widely adopted.
What does general adoption entail?
- Adopting a moral rule as meaning perfect compliance with it? This interpretation is charged with making rule utilitarianism extensionally equivalent to act utilitarianism: it would yield an enormously long and enormously complex moral code whose prescriptions would be identical with the acts prescribed by the act-utilitarian principle (Brandt).
- Internalisation? An agent who internalises a moral code cultivates a psychological disposition of a sort that yields a motive to obey the rules of the code, e.g. pangs of conscience when one contemplates violating the rules. If one favours internalisation over perfect compliance, the resulting RU conception cannot be extensionally equivalent to AU. The utility calculation must also account for the disutilities involved in actually internalising the rules. We can only internalise a limited number of rules, but a rule utilitarian would contend that even a limited plurality of rules would be more conducive to general utility than internalising the single principle 'maximise the good'. Inevitably, the code whose general internalisation would be optimific will require some actions which are not optimific and forbid some that are. [Can never be extensionally equivalent to act utilitarianism]


Strengths:

1. John Harsanyi: other things being equal, a rule-utilitarian society would enjoy a much higher level of social utility than an act-utilitarian society would.
- If the moral norms of one's society allow for things like promises to be broken whenever this would return a marginal gain in utility, this will undermine people's trust in one another, and people will have less incentive to plan their future activities on the expectation that promises made to them will be kept.
- Co-ordination effect. Act utilitarians will be unable to produce desirable outcomes through collective action - voting being the example provided.
Issue with Harsanyi: he assumes that an act-utilitarian society will internalise only the AU principle, thereby using the standard of moral rightness as the decision-procedure. He does not demonstrate that a society of rule utilitarians would enjoy a higher level of utility than a society of sophisticated (multi-level) act utilitarians who rely on secondary principles as heuristic devices at the deliberative level.
2. Coherence with moral intuitions. Maintains the binding force of special obligations such as promises, and the partial concern owed to, say, one's children. Brad Hooker argues along these lines, using moral intuitions evidentially to support a theory of the good which assigns intrinsic value to things other than welfare (such as the general inviolability of promises), and arguing that morality does not demand the level of personal sacrifice that AU seems to [rule-consequentialism].


Criticisms:

1. Collapses into act utilitarianism? [See 'What does general adoption entail?' above]. This objection depends on the particular formulation of rule utilitarianism.
2. Rubber Duck objection. Named after the article by Frances Howard-Snyder, 'Rule Consequentialism is a Rubber Duck'. Argues that RU is not really a utilitarian theory, as it is not agent-neutral. Howard-Snyder maintains that consequentialism is by definition agent-neutral: its action prescriptions can be made without any essential reference to the agent. Rule utilitarianism, by contrast, would allow for agent-focussed features such as being more concerned with one's immediate family.
- Is this really convincing? Is not ethical egoism a consequentialist theory?
- Merely an issue of naming? What does this objection say about the viability of RU as a moral theory?
3. Incoherence objection. Also known as the rule-worship objection. One assumes that, for RU, maximising utility is a goal of ultimate importance - hence the authoritative moral code derives its authority in virtue of the code's ability to maximise utility. If maximising utility is of such paramount importance, however, then it seems one ought to break the code whenever one has a choice between obeying the ideal moral code and performing an action that contravenes the code but would produce more utility:
- J.J.C. Smart: I can understand 'it is optimific' as a reason for action, but why should 'it is a member of a class of actions which are usually optimific' or 'it is a member of a class of actions which as a class are more optimific than any alternative general class' be a good reason?
- Summary: RU insists that we should abide by a set of rules even when the same
considerations that recommend those rules in the first place count in favour of
breaking them.
- Challenge: Not all RU theories will endorse the idea that maximising utility is a goal of
overriding significance.

Satisficing Utilitarianism:
Definition: The right action is that which promotes a good enough outcome, where 'good enough' need not be optimal.
Accordingly, this position holds that there is some threshold of goodness such that, if the threshold is met, the action qualifies as right. An action which goes above the particular threshold will be considered supererogatory.
Advocated by Michael Slote (1984).


Developed as a response to the demandingness objection to consequentialism, which maintains that maximising forms of consequentialism are strongly counterintuitive in their:
(i) Denial of supererogation
(ii) Belief that all other actions available to the agent, other than the optimific one, are morally wrong and equally morally wrong.
[Appeal to distinguishing between the criterion of moral assessment and the decision-guiding principle.]
There are moral choices which are permitted to be identified as morally right actions even though they fail to maximise the good.


One main challenge facing proponents of satisficing utilitarianism is to explain just when an outcome is 'good enough' (Bradley 2006). Is there some absolute minimum of goodness that any act must promote in order to be good enough, or is the threshold always determined relative to the quality of the options available to you at the time?
Some have suggested that satisficing is merely a nuanced form of maximisation. Robert Goodin: maximisation under the constraints of time and information costs is the best sense I can make of satisficing utilitarianism (2012).
If satisficing utilitarianism is adopted, then one can justifiably fail to do as much good as one is capable of doing. Such arbitrary failures to maximise the good seem to warrant feelings of blame. If one can easily save two lives but saves only one, proposing that saving one life was 'good enough', one may say that the moral obligation in the relevant case is stronger than the threshold suggests.