
Jacob Ross

Responding to Fanaticism
1. The Role of Stakes in Practical Reasoning
Descriptive Stakes Principle: If you are uncertain which of two descriptive theories, T1
and T2, is true, and if you are faced with a choice situation in which the stakes are higher
according to T1 than to T2, and if these descriptive theories disagree concerning which
alternative is best, it will often be rational to follow the recommendation of T1 even if
you regard T2 as more probable.
(In particular, this will be rational so long as the ratio by which the stakes of T1 exceed
the stakes of T2 is at least as great as the ratio by which the subjective probability of T2
exceeds the subjective probability of T1.)
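In expected-value terms: writing S1 and S2 for the stakes according to each theory and p1 and p2 for your credences, the parenthetical condition says that S1/S2 ≥ p2/p1, i.e. that p1 · S1 ≥ p2 · S2, which is exactly the condition under which following T1's recommendation maximizes expected value.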
Example: Taking an umbrella when rain is unlikely but the cost of getting soaked is high.
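A minimal numerical sketch of the umbrella case (the probabilities and costs are illustrative assumptions, not fixed by the handout):

```python
# T1: "it will rain" -- high stakes (getting soaked, cost 10)
# T2: "it will stay dry" -- low stakes (carrying an umbrella needlessly, cost 1)
p_rain, p_dry = 0.2, 0.8   # T2 is four times as probable as T1

ev_take_umbrella = p_dry * 1     # you only lose if it stays dry
ev_leave_umbrella = p_rain * 10  # you only lose if it rains

# The stakes ratio (10:1) exceeds the probability ratio (4:1), so following
# T1's recommendation minimizes expected cost despite T1 being less probable.
assert ev_take_umbrella < ev_leave_umbrella   # 0.8 < 2.0
```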
Evaluative Stakes Principle: If you are uncertain which of two evaluative theories, T1
or T2, is true, and if you are faced with a choice situation in which the stakes are higher
according to T1 than to T2, and if these evaluative theories differ concerning which
alternative is best, it will often be rational to follow the recommendations of T1 even if
you regard T2 as more probable.
(Once again, this will be rational so long as the ratio by which the stakes of T1 exceed the
stakes of T2 is at least as great as the ratio by which the subjective probability of T2
exceeds the subjective probability of T1.)
2. Three Applications of the Evaluative Stakes Principle
A. Lobster Thermidor (ordinary higher vs lower stakes)
Suppose you are deciding whether to have lobster thermidor or vegetable soup. You
know that making the lobster thermidor would cause tremendous pain to an animal.
Suppose you have .49 credence in a moral theory according to which causing pain to
animals is just as bad as causing it to humans, and .51 credence in an alternative theory on
which you have no reason to avoid causing pain to animals but a weak reason to choose the
lobster thermidor in order to support the local fisheries. In this case, it is rational to
choose the vegetable soup even though you think you probably have more objective
moral reason to choose the lobster thermidor.
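A worked expected-value comparison (the cardinal value differences are illustrative assumptions; the handout fixes only the credences):

```python
p_animal = 0.49   # animal pain matters as much as human pain
p_alt = 0.51      # alternative: only a weak reason to support local fisheries

# Value of choosing the lobster rather than the soup, on each theory:
v_lobster_animal = -100   # high stakes: the animal's pain counts fully
v_lobster_alt = 1         # low stakes: a weak reason favoring the lobster

ev_lobster = p_animal * v_lobster_animal + p_alt * v_lobster_alt
ev_soup = 0.0             # baseline

# The stakes ratio (100:1) dwarfs the probability ratio (.51:.49), so the
# soup wins even though the lobster is more probably the better choice.
assert ev_soup > ev_lobster    # 0.0 > -48.49
```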
B. Lying to the Gestapo (finite vs infinite stakes)
Suppose the only way you can save many innocent lives is by telling a lie. Suppose
you have .9999 credence in a utilitarian theory according to which the value of an
outcome is proportional to the total amount of welfare it involves, and .0001 credence in
a Kantian theory according to which saving lives has only finite value whereas telling a
lie has infinite disvalue. In this case, it is rational to refrain from telling the lie even
though you believe that it is overwhelmingly likely that you have more moral reason to
tell the lie.
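A sketch of how any non-zero credence in infinite disvalue swamps every finite consideration (the finite values are illustrative assumptions):

```python
p_util = 0.9999
p_kant = 0.0001

v_lie_util = 1000           # many lives saved: large but finite
v_lie_kant = float('-inf')  # lying has infinite disvalue

ev_lie = p_util * v_lie_util + p_kant * v_lie_kant   # = -inf
ev_refrain = 0.0

assert ev_refrain > ev_lie
# No finite number of lives, and no credence short of zero in the Kantian
# theory, changes this verdict -- the seed of fanaticism.
```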
C. The Effective Repugnant Conclusion (bounded vs unbounded stakes)
Suppose you are choosing between an outcome in which there is a very large population,
each of whose members has a wonderful life, and an outcome in which the population is
much larger, but everyone has a life that is only barely worth living. And suppose you are
uncertain whether the true theory is one like Average Utilitarianism, according to which
you should choose the smaller, happier population, or a view like Total Utilitarianism,
according to which you should choose the larger, less happy population. In this case, no
matter how close you are to being certain that the average-style view is true, it is rational
to choose the outcome with the larger population of lives that are barely worth living, so
long as this population is sufficiently large.
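A sketch of how unbounded total-value stakes swamp any fixed credence (all cardinal values are illustrative assumptions):

```python
p_average = 0.999   # credence in the average-style view
p_total = 0.001     # credence in Total Utilitarianism

def ev_choose_large(n_large):
    # Value of the large population minus the small one, on each theory:
    # a fixed loss of average welfare vs. a total that grows with n_large.
    v_average = -100              # average welfare drops substantially
    v_total = 0.01 * n_large      # lives barely worth living, but many
    return p_average * v_average + p_total * v_total

assert ev_choose_large(10**4) < 0   # modest population: smaller world wins
assert ev_choose_large(10**8) > 0   # large enough population flips the verdict

# However close p_average is to 1, some population size reverses the verdict.
```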
3. Four Ways in which One Might Try to Reject All Stakes Arguments
Reject Intertheoretic Value Comparisons
Adopt Structuralism
Reject MEC (maximizing expected choiceworthiness) in favor of My Favorite Option
Reject MEC in favor of My Favorite Theory
3.1. Deny the intelligibility of Intertheoretic Value Comparisons
Intertheoretic Value Comparisons vs Interpersonal Utility Comparisons
Consider five cases:
1. Maximizing the sum of happiness of A and B: choosing between scratching A’s finger
and amputating B’s limbs.
2. Minimizing how angry A and B are at us (A is a vegan activist and B subscribes to a
traditional moral view): deciding between the lobster thermidor and the vegetable soup.
3. Minimizing how angry it would be fitting to be at me for my action, according to A
and B.
4. Minimizing how wrong it is, according to A and B.
5. Minimizing how wrong it is, according to the moral theory held by A and according to
the moral theory held by B.
Different attitudes corresponding to different moral terms
Deontic theories, wrongness: blame/indignation, guilt
Axiological theories, goodness: pleasure vs disappointment
Consider a case where you antecedently have .5 credence in each of two possible
outcomes, A and B. How much worse B is than A corresponds to how disappointed you
should be to learn that B obtains rather than A. (Twice the fitting disappointment
corresponds to twice the difference in value.)
3.2. Adopt a Structuralist view such as PEMT (the Principle of Equity among Moral Theories)
Two interpretations: Theory of intertheoretic value comparisons, or mere decision
procedure
Problem with the former interpretation:
First pair: Give to the more efficient vs less efficient charity
Second pair: Keep or break the deathbed promise
U1 = Difference between the first pair of options according to Utilitarianism
U2 = Difference between the second pair of options according to Utilitarianism
K1 = Difference between the first pair of options according to Kantianism
K2 = Difference between the second pair of options according to Kantianism
U1 > U2 (Utilitarianism: more is at stake between the charities than over the promise)
K2 > K1 (Kantianism: more is at stake over the promise than between the charities)
PEMT, applied to the first pair: U1 = K1
PEMT, applied to the second pair: K2 = U2
But then U1 = K1 < K2 = U2 < U1, so U1 > U1. Contradiction!
Problems with using PEMT as a decision procedure: violations of IIA (independence of irrelevant alternatives), money pumps
Note that these problems apply to any structuralist theory according to which the relevant
set of options is variable. And using the set of all possible options has other problems.
Further problem: Doesn’t allow for dominance reasoning
                     Utilitarianism   Kantianism
Do nothing                 0               0
Cultivate talents          0              +1
Make lying promise        +1              -∞
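Cultivating talents weakly dominates doing nothing: it is no worse on either theory and strictly better on Kantianism, so dominance reasoning favors it regardless of credences. A minimal sketch of how this verdict can get lost, assuming (as one structuralist proposal would have it) that each theory's values are normalized by their range within the option set:

```python
import math

# Payoff matrix from the handout; the range-normalization rule is an
# assumption used to make the worry concrete, not the handout's own.
util = {"do nothing": 0, "cultivate talents": 0, "make lying promise": 1}
kant = {"do nothing": 0, "cultivate talents": 1,
        "make lying promise": -math.inf}

def range_normalize(values):
    # Rescale a theory's values so that its range over the option set is 1.
    spread = max(values.values()) - min(values.values())
    return {option: v / spread for option, v in values.items()}

norm_kant = range_normalize(kant)

# Kantianism's range is infinite, so every finite value collapses to 0:
assert norm_kant["do nothing"] == norm_kant["cultivate talents"] == 0.0
# After normalization the two options are tied, erasing the finite Kantian
# difference on which the dominance verdict relies.
```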

3.3. Reject Expected Value Maximization and Adopt My Favorite Option


Familiar Problems: Money pumps, IIA
Gets the wrong result in the limiting case where there is no evaluative uncertainty: the
mineshaft case, where ten miners are all in shaft A or all in shaft B (credence .5 each),
blocking the right shaft saves all ten, blocking the wrong shaft saves none, and blocking
neither saves nine. Blocking neither is certain not to be objectively best, so My Favorite
Option rules it out, yet it is clearly the rational choice.
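In expected-value terms, on the standard version of the case (the numbers below are supplied here; the handout only names the case):

```python
# Mineshaft case: ten miners are all in shaft A or all in shaft B.
p_A = p_B = 0.5

ev_block_A = p_A * 10 + p_B * 0   # 5.0 lives saved in expectation
ev_block_B = p_A * 0 + p_B * 10   # 5.0
ev_block_neither = 9              # nine lives saved for certain

# Blocking neither maximizes expected value, yet it has zero probability
# of being objectively best, so My Favorite Option rules it out.
assert ev_block_neither > ev_block_A == ev_block_B
```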
3.4. Reject Expected Value Maximization and Adopt My Favorite Theory
Problem: cases where descriptive and normative uncertainty are not independent.
Suppose you have .5 credence in each of the following alternatives:
(i) Utilitarianism is true, there are more people on Island A, and your family members are
on Island B.
(ii) Common sense morality is true, your family members are on Island A, and there are
more people on Island B.
On either alternative, the objectively best act is to save the people on Island A. But
suppose you adopt Utilitarianism as your favorite theory and apply it with your
unconditional descriptive credences: the islands then look symmetric, and you may end
up saving the people on B.
Solution: Do what has maximal expected value relative to your favorite evaluative theory
together with your descriptive credences conditional on the truth of your favorite theory.
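A sketch contrasting the naive and conditionalized versions on the island case (the island populations are illustrative assumptions):

```python
# (i) Utilitarianism true, more people on A; (ii) common sense true, more on B.
p_i = p_ii = 0.5
more, fewer = 100, 10   # illustrative population sizes

# Naive version: apply utilitarianism with unconditional descriptive credences.
ev_save_A = p_i * more + p_ii * fewer   # 55.0
ev_save_B = p_i * fewer + p_ii * more   # 55.0
assert ev_save_A == ev_save_B           # a tie: saving B is permitted

# Conditionalized version: P(more people on A | utilitarianism) = 1.
ev_save_A_cond = 1.0 * more
ev_save_B_cond = 1.0 * fewer
assert ev_save_A_cond > ev_save_B_cond  # saving A is now required
```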
Problem: Lollipop Button
An evil demon, who knows which moral theory is true, has created a button such that:
If utilitarianism is true and you press it, you get a lollipop
If utilitarianism is not true and you press it, the world is destroyed.
The conditionalized version of My Favorite Theory says to press the button: conditional
on utilitarianism being true, pressing is guaranteed to yield a lollipop. But given your
credence that utilitarianism is false, pressing risks destroying the world, which is clearly
irrational.
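A sketch of why this verdict is fanatical (the credence and payoffs are illustrative stand-ins):

```python
p_util = 0.5                 # any substantial credence will do
v_lollipop = 1
v_world_destroyed = -10**9   # stand-in for a catastrophic outcome

# What the conditionalized view consults: given utilitarianism, pressing
# is guaranteed a lollipop.
ev_press_conditional = v_lollipop

# What your full credences say: pressing courts catastrophe.
ev_press = p_util * v_lollipop + (1 - p_util) * v_world_destroyed

assert ev_press < 0 < ev_press_conditional
```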
4. How to Accept Stakes Arguments While Avoiding Fanaticism
Solution: argue that it is irrational to have sufficiently high credence in fanatical theories.
Since infinite or unbounded stakes swamp any non-zero credence, this means arguing that
it is irrational to have any non-zero credence in infinite or unbounded theories.
4.1 The Incoherence of Unbounded Utilities
Vann McGee: 'An Airtight Dutch Book'
Controversial: the Dutch book involves a countably infinite number of bets.
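A schematic reconstruction in the spirit of the argument, not McGee's own construction (the coin lottery and prize values are illustrative assumptions):

```python
# A coin is flipped until it lands heads; N = number of flips, P(N = n) = 2**-n.
# Unbounded utility supplies a prize worth 3**n utils; ticket T_n pays that
# prize if N = n.
def eu_ticket(n):
    return 3**n * 2**-n   # expected utility of T_n: 1.5**n, unbounded

fee = 0.1
# Swapping T_n for T_(n+1) while paying a small sure fee raises expected
# utility at every step:
for n in range(1, 50):
    assert eu_ticket(n + 1) - fee > eu_ticket(n)

# Yet an agent who accepts every swap ends up holding a ticket that pays
# only if the coin never lands heads (probability 0), having paid a fee at
# each step: a guaranteed loss from individually attractive trades.
```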
4.2 Limitations on Fitting Attitudes
Strategy: Argue from limitations on fitting attitudes to limitations on ethical properties.

4.2.1 Argument from the Nature of Attitudes


For the attitudes that figure in the accounts of moral terms, there is such a thing as having
these attitudes fully/maximally.
Consider belief/confidence: there is such a thing as being fully confident.
Maybe the same is true of other attitudes, such as fear, anger, indignation, guilt,
pleasure, disappointment, etc.
4.2.2 Modal Argument
The truth of an ethical claim requires that it be fitting for a normal person to have some
attitude.
That it is fitting to be disappointed/guilty/indignant, etc. to degree x implies that you
have sufficient reason, of the right kind, to have these attitudes to degree x.
That you have reason to A implies that it is possible for you to A.
Therefore, it is only fitting for us to have an attitude to a given degree if it is possible for
us to have this attitude to that degree.
There are limits to the degrees of disappointment, guilt, indignation, etc. that a normal
person can have.
Therefore, there are limits to the degrees of these attitudes it can be fitting for a normal
person to have.
Therefore, there are limits to the degrees of the corresponding ethical properties.
4.2.3 Argument from the Nature of Fittingness
It is fitting to have attitude A toward X iff X has the property that it is the function of the
A-regulating mechanism to track. (e.g., it’s fitting to fear a snake because snakes are
dangerous to us, and it is the function of the fear-regulating mechanism to generate fear
toward things that are dangerous to us.)
Functions require evolutionary histories
Therefore, if it is fitting to have a given mental state under given circumstances, it must
be the function of the mechanism regulating this mental state to generate it under these
circumstances, and so at least some of our ancestors must have had this mental state.
But there are limits to the degrees of attitudes that our ancestors had.
Therefore, there are limits to the degrees of attitudes that it can be fitting for us to have.
