The Least Convenient Possible World
by Scott Alexander 14th Mar 2009
"If you’re interested in being on the right side of disputes, you will refute your
opponents’ arguments. But if you’re interested in producing truth, you will fix your
opponents’ arguments for them. To win, you must fight not only the creature you
encounter; you must fight the most horrible thing that can be constructed from its
corpse."
Yesterday John Maxwell's post wondered how much the average person would do to save
ten people from a ruthless tyrant. I remember asking some of my friends a vaguely related
question as part of an investigation of the Trolley Problems:
You are a doctor in a small rural hospital. You have ten patients, each of whom is
dying for the lack of a separate organ; that is, one person needs a heart transplant,
another needs a lung transplant, another needs a kidney transplant, and so on. A
traveller walks into the hospital, mentioning how he has no family and no one knows
that he's there. All of his organs seem healthy. You realize that by killing this traveller
and distributing his organs among your patients, you could save ten lives. Would this
be moral or not?
I don't want to discuss the answer to this problem today. I want to discuss the answer one
of my friends gave, because I think it illuminates a very interesting kind of defense
mechanism that rationalists need to be watching for. My friend said:
It wouldn't be moral. After all, people often reject organs from random donors. The
traveller would probably be a genetic mismatch for your patients, and the
transplantees would have to spend the rest of their lives on immunosuppressants,
only to die within a few years when the drugs failed.
On the one hand, I have to give my friend credit: his answer is biologically accurate, and
beyond a doubt the technically correct answer to the question I asked. On the other hand,
I don't have to give him very much credit: he completely missed the point and lost a
valuable opportunity to examine the nature of morality.
So I asked him, "In the least convenient possible world, the one where everyone was
genetically compatible with everyone else and this objection was invalid, what would you
do?"
1: Pascal's Wager. Upon being presented with Pascal's Wager, one of the first things
most atheists think of is this:
Perhaps God values intellectual integrity so highly that He is prepared to reward honest
atheists, but will punish anyone who practices a religion he does not truly believe simply
for personal gain. Or perhaps, as the Discordians claim, "Hell is reserved for people who
believe in it, and the hottest levels of Hell are reserved for people who believe in it on the
principle that they'll go there if they don't."
This is a good argument against Pascal's Wager, but it isn't the least convenient possible
world. The least convenient possible world is the one where Omega, the completely
trustworthy superintelligence who is always right, informs you that God definitely doesn't
value intellectual integrity that much. In fact (Omega tells you) either God does not exist
or the Catholics are right about absolutely everything.
Would you become a Catholic in this world? Or are you willing to admit that maybe your
rejection of Pascal's Wager has less to do with a hypothesized pro-atheism God, and more
to do with a belief that it's wrong to abandon your intellectual integrity on the off chance
that a crazy deity is playing a perverted game of blind poker with your eternal soul?
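Stripped of the pro-atheism-God escape hatch, the wager Omega leaves you with is a bare expected-utility comparison. A toy sketch of that comparison (the credence and all payoff numbers below are illustrative assumptions, not figures from the post):

```python
# Toy expected-utility table for the wager under Omega's dichotomy:
# either God does not exist, or Catholicism is true in full.
# All numbers are hypothetical, chosen only to show the structure.

def expected_utility(p_catholic, payoffs):
    """payoffs[action] = (utility if Catholicism is true, utility if God is absent)."""
    return {
        action: p_catholic * if_true + (1 - p_catholic) * if_false
        for action, (if_true, if_false) in payoffs.items()
    }

# Hypothetical payoffs: an enormous afterlife stake dwarfing the finite
# cost of practicing a religion one does not believe.
payoffs = {
    "convert":      (1e9, -10),  # salvation vs. wasted Sundays
    "stay_atheist": (-1e9,  0),  # damnation vs. intellectual integrity kept
}

eu = expected_utility(0.001, payoffs)  # even a tiny credence in Catholicism
print(eu)
```

On these naive numbers, any nonzero credence in Catholicism makes conversion dominate, which is exactly why the dilemma forces you to locate your real objection in intellectual integrity rather than in a conveniently pro-atheism God.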
2: The God-Shaped Hole. Christians claim there is one in every atheist, keeping him
from spiritual fulfillment.
Some commenters on Raising the Sanity Waterline don't deny the existence of such a hole,
if it is interpreted as a desire for purpose or connection to something greater than one's
self. But, some commenters say, science and rationality can fill this hole even better than
God can.
What luck! Evolution has by a wild coincidence created us with a big rationality-shaped
hole in our brains! Good thing we happen to be rationalists, so we can fill this hole in the
best possible way! I don't know - despite my sarcasm this may even be true. But in the
least convenient possible world, Omega comes along and tells you that sorry, the hole is
exactly God-shaped, and anyone without a religion will lead a less-than-optimally-happy
life. Do you head down to the nearest church for a baptism? Or do you admit that even if
believing something makes you happier, you still don't want to believe it unless it's true?
3: Extreme Altruism. John Maxwell mentions the utilitarian argument for donating
almost everything to charity.
Some commenters object that many forms of charity, especially the classic "give to
starving African orphans," are counterproductive, either because they enable dictators or
thwart the free market. This is quite true.
But in the least convenient possible world, here comes Omega again and tells you that
Charity X has been proven to do exactly what it claims: help the poor without any
counterproductive effects. So is your real objection the corruption, or do you just not
believe that you're morally obligated to give everything you own to starving Africans?
You may argue that this citing of convenient facts is at worst a venial sin. If you still get to
the correct answer, and you do it by a correct method, what does it matter if this method
isn't really the one that's convinced you personally?
One easy answer is that it saves you from embarrassment later. If some scientist does a
study and finds that people really do have a god-shaped hole that can't be filled by
anything else, no one can come up to you and say "Hey, didn't you say the reason you
didn't convert to religion was because rationality filled the god-shaped hole better than
God did? Well, I have some bad news for you..."
Another easy answer is that your real answer teaches you something about yourself. My
friend may have successfully avoided making a distasteful moral judgment, but he didn't
learn anything about morality. My refusal to take the easy way out on the transplant
question helped me develop the form of precedent-utilitarianism I use today.
But more than either of these, it matters because it seriously influences where you go
next.
Say "I accept the argument that I need to donate almost all my money to poor African
countries, but my only objection is that corrupt warlords might get it instead", and the
obvious next step is to see if there's a poor African country without corrupt warlords (see:
Ghana, Botswana, etc.) and donate almost all your money to them. Another acceptable
answer would be to donate to another warlord-free charitable cause like the Singularity
Institute.
If you just say "Nope, corrupt dictators might get it," you may go off and spend the money
on a new TV. Which is fine, if a new TV is what you really want. But if you're the sort of
person who would have been convinced by John Maxwell's argument, but you dismissed it
by saying "Nope, corrupt dictators," then you've lost an opportunity to change your mind.
So I recommend: limit yourself to responses of the form "I completely reject the entire
basis of your argument" or "I accept the basis of your argument, but it doesn't apply to the
real world because of contingent fact X." If you just say "Yeah, well, contingent fact X!" and
walk away, you've left yourself too much wiggle room.
In other words: always have a plan for what you would do in the least convenient possible
world.
202 comments, sorted by top scoring
[-] davidamann 14y 111
I think a better way to frame this issue would be the following method.
1. Present your philosophical thought-experiment.
2. Ask your subject for their response and their justification.
3. Ask your subject, what would need to change for them to change their belief?
For example, if I respond to your question of the solitary traveler with "You shouldn't do it because of biological
concerns." Accept the answer and then ask, what would need to change in this situation for you to accept the
killing of the traveler as moral?
I remember this method giving me deeper insight into the Happiness Box experiment.
Here is how the process works:
1. There is a happiness box. Once you enter it, you will be completely happy through living in a virtual world.
You will never leave the box. Would you enter it?
2. Initial response: Yes, I would enter the box. Since my world is only made up of my perceptions of reality,
there is no difference between the happiness box and the real world. Since I will be happier in the
happiness box, I would enter.
3. Reframing question: What would need to change so you would not enter the box?
4. My response: Well, if I had children or people depending on me, I could no
... (read more)
[-] pwno 14y 39
I find a similar strategy useful when I am trying to argue my point to a stubborn friend. I ask them, "What would
I have to prove in order for you to change your mind?" If they answer "nothing" you know they are probably
not truth-seekers.
[-] Vladimir_Nesov 14y 11
Namely, the point of reversal of your moral decision is that it helps to identify what this particular moral
position is really about. There are many factors to every decision, so it might help to try varying each of them,
and finding other conditions that compensate for the variation.
For example, you wouldn't enter the happiness box if you suspected that information about it giving the true
happiness is flawed, that it's some kind of lie or misunderstanding (on anyone's part), of which the situation of
leaving your family on the outside is a special case, and here is a new piece of information. Would you like your
copy to enter the happiness box if you left behind your original self? Would you like a new child to be born
within the happiness box? And so on.
2 abramdemski 11y This seems to nicely fix something which I felt was wrong in the "least convenient …
0 Rings_of_Saturn 14y Great, David! I love it.
-1 thrawnca 7y The happiness box is an interesting speculation, but it involves an assumption that, in my…
3 CynicalOptimist 7y Okay, well let's apply exactly the technique discussed above: If the hypothetical …
1 Jiro 7y What if we ignore the VR question? Omega tells you that killing and eating your children will…
-2 thrawnca 7y This would depend on my level of trust in Omega (why would I believe it? Because O…
1 TheOtherDave 7y For my part, it's difficult for me to imagine a set of observations I could make …
[-] MBlume 14y 62
I'm not sure if I'm evading the spirit of the post, but it seems to me that the answer to the opening problem is
this:
If you were willing to kill this man to save these ten others, then you should long ago have simply had all ten
patients agree to a 1/10 game of Russian Roulette, with the proviso that the nine winners get the organs of the
one loser.
[-] Scott Alexander 14y 25
While emphasizing that I don't want this post to turn into a discussion of trolley problems, I endorse that
solution.
[-] abramdemski 11y 14
In the least convenient possible world, only the random traveler has a blood type compatible with all ten
patients.
6 CynicalOptimist 7y This is fair, because you're using the technique to redirect us back to the origin…
0 abramdemski 7y Agreed.
3 DanielLC 9y I'd go with that he's the only one who has organs healthy enough to ensure the recipi…
-3 Rixie 11y MBlume knows this, he's just telling us what he was thinking.
2 Said Achmiz 10y What if one or more of the patients don't agree to do this?
7 DanielLC 9y Then you let him die, and repeat the question with a 1/9 chance of death.
1 Bruno Mailly 5y To me the logical answer is that it depends on how much value is attributed to "a" lif…
-1 [anonymous] 14y The technical creativity of this solution reveals the limits of rationality. This is a sol…
[-] Vladimir_Nesov 14y 14
Throwing a die is a way of avoiding bias in choosing a person to kill. If you choose a person to kill personally,
you run a risk of doing it in an unfair fashion, and thus being guilty of making an unfair choice. People value
fairness. Using dice frees you of this responsibility, unless there is a predictably better option. You are alleviating
additional technical moral issues involved in killing a person. This issue is separate from deciding whether to kill
a person at all, although the reduction in moral cost of killing a person achieved by using the fair roulette
technology may figure in the original decision.
7 Tasky 12y But as a doctor, probably you will have to choose non-randomly, if you want to stand by …
[-] bentarm 14y 49
There are real life examples where reality has turned out to be the "least convenient of possible worlds". I have
spent many hours arguing with people who insist that there are no significant gender differences (beyond the
obvious), and are convinced that to assert otherwise is morally reprehensible.
They have spent so long arguing that such differences do not exist, and that this is the reason that sexism is
wrong, that their morality just can't cope with a world in which this turns out not to be true. There are many
similar politically charged issues - Pinker discusses quite a few in The Blank Slate - where people aren't willing to
listen to arguments about factual issues because they believe they have moral consequences.
The problem, of course - and I realise this is the main point of this post - is that if your morality is contingent on
empirical issues where you might turn out to be wrong, you have to accept the consequences. If you believe that
sexism is wrong because there are no heritable gender differences, you have to be willing to accept that if these
differences do turn out to exist then you'll say sexism is ok.
This is probably a test you should apply to all of your moral beliefs - if it just so happens that the factual issue on
which I'm basing my belief is wrong, will I really be willing to change my mind?
2 Pr0methean 10y That raises an interesting question: is it possible to base a moral code only on what's…
5 Richard_Kennaway 10y To do that would require that "all possible worlds that contain me" be a co…
0 Jackercrack 9y I think that it is not. All possible worlds include worlds where every tuesday the first …
5 DanielLC 9y You could have a personal moral code of stabbing anyone who you're 90% certain wo…
0 [anonymous] 9y That doesn't follow from your logic. There could be multiple functions of maximal…
0 Jackercrack 9y I took "all possible worlds that contain me" to mean all worlds where history wen…
1 [anonymous] 9y Retract -- circle with a line through it.
0 Jackercrack 9y What do you mean by circle with a line through it? Is that some sort of code f…
5 Nornagest 9y There should be a button with that appearance in the lower right-hand corner…
8 wedrifid 9y The causality is unlikely. There was never strikethrough syntax here and the retr…
2 Jackercrack 9y Ah, thank you. I hadn't noticed that
-8 Rixie 11y
[-] bill 14y 34
One way to train this: in my number theory class, there was a type of problem called a PODASIP. This stood for
Prove Or Disprove And Salvage If Possible. The instructor would give us a theorem to prove, without telling us if
it was true or false. If it was true, we were to prove it. If it was false, then we had to disprove it and then come up
with the "most general" theorem similar to it (e.g. prove it for Zp after coming up with a counterexample in Zm).
This trained us to be on the lookout for problems with the theorem, but then to see the "least convenient
possible world" in which it was true.
[-] Nebu 14y 20
I voted up your post, Yvain, as you've presented some really good ideas here. Although it may seem like I'm
totally missing your point by my response to your 3 scenarios, I assure you that I am well aware that my
responses are of the "dodging the question" type which you are advocating against. I simply cannot resist
exploring these 3 scenarios on their own.
Pascal's Wager
In all 3 scenarios, I would ask Omega further questions. But these being "least convenient world" scenarios, I
suspect it'd be all "Sorry, can't answer that" and then fly away. And I'd call it a big jerk.
For the Pascal's Wager scenario specifically, I'd probably ask Omega "Really? Either God doesn't exist or everything
the Catholics say is correct? Even the self-contradicting stuff?" And of course, he'd decline to answer and fly away.
So then I'd be stuck trying to decide whether God doesn't exist, or logic is incorrect (i.e. reality can be logically
self inconsistent). I'm tempted to adopt Catholicism (for the same reason I would one-box on Newcomb: I want
the rewards), but I'm not sure how my brain could handle a non-logical reality. So I really don't know what would
happen ... (read more)
0 matteyas 6y The point is that in the least convenient world for you, Omega would say whatever it is t…
2 Jiro 6y The least convenient world is one where Omega answers his objections. The least convenient…
0 jknapka 11y This is a very good point, and I believe I'll point it out to my rather fundamentalist sibling …
-1 DanielLC 9y If I really, truly believed that every non-Christian was doomed to eternal damnation, I'd…
[-] Vladimir_Nesov 14y 20
Let's try something different.
Puts on the reviewer's hat.
Yvain's post presented a new method for dealing with the stopsign problem in reasoning about questions of
morality. The stopsign problem consists in following an invalid excuse to avoid thinking about the issue at hand,
instead of doing something constructive about resolving the issue.
The method presented by Yvain consists in putting in place the universal countermeasure against the stopsign
excuses: whenever a stopsign comes up, you move the discussed moral issue to a different, hypothetical setting,
where the stopsign no longer applies. The only valid excuse in this setting is that you shouldn't do something,
which also resolves the moral question.
However, the moral questions should be concerned with reality, not with fantasy. Whenever a hypothetical setting
is brought into the discussion of morality, it should be understood as a theoretical device for reasoning about the
underlying moral judgment applicable to the real world. There is a danger in fallaciously generalizing the moral
conclusion from fictional evidence, both because there might be factors in the fictional setting that change your
decision and which you ... (read more)
4 [anonymous] 14y I do agree. I think in many ways reality already is "the least convenient possible worl…
[-] freyley 14y 14
One difficulty with the least convenient possible world is where that least convenience is a significant change in
the makeup of the human brain. For example, I don't trust myself to make a decision about killing a traveler with
sufficient moral abstraction from the day-to-day concerns of being a human. I don't trust what I would become if I
did kill a human. Or, if that's insufficient, fill in a lack of trust in the decisionmaking in general for the moment.
(Another example would be the ability to trust Omega in his responses)
Because once that's a significant issue in the subject, then the least convenient possible world you're asking me
to imagine doesn't include me -- it includes some variant of me whose reactions I can predict, but not really
access. Porting them back to me is also nontrivial.
It is an interesting thought experiment, though.
[-] CronoDAS 14y 11
So I asked him, "In the least convenient possible world, the one where everyone was genetically
compatible with everyone else and this objection was invalid, what would you do?"
Obviously, you wait for one of the sick patients to die, and use that person's organs to save the others, letting the
healthy traveler go on his way. ;)
But that isn't the least convenient possible world - the least convenient one is actually the one in which the
traveler is compatible with all the sick people, but the sick people are not compatible with each other.
[-] Psy-Kosh 14y 10
Actually, you don't even need to add that additional complexity to make the world sufficiently inconvenient.
If the rest of the patients are sufficiently sick, their organs may not really be suitable for use as transplants, right?
My answer is based on a principle that I'm surprised no one else seems to use (then again, I rarely listen to
answers to the Fat Man/Train problem): ask the f**king traveler!
Explain to the traveler that he has the opportunity to save ten lives at the cost of his own. First they'll take a
kidney and a lung, then he'll get some time to say goodbye to his loved ones while he gets to see the two people
with the donated organs recover... and then when he's ready they'll take the re... (read more)
1 Marion Z. 7mo Only replying to a tiny slice of your post here, but the original (weak) Pascal's wager a…
[-] Thomas Eisen 3y 2
My answers:
1. No, because their belief doesn't make any sense. It even has logical contradictions, which makes it "super
impossible", meaning there's no possible world where it could be true (the omnipotence paradox proves that
omnipotence is logically inconsistent; a god which is nearly omnipotent, nearly omniscient and nearly
omnibenevolent wouldn't allow suffering, which, undoubtedly, exists; "God wants to allow free will" isn't a valid
defence, since there's a lot of suffering that isn't caused by other ... (read more)
[-] passive_fist 8y 2
either God does not exist or the Catholics are right about absolutely everything.
Then I would definitely and swiftly become an atheist, and I maintain that this is by far the most rational choice for
everybody else as well. My prior belief in God not existing is relatively high (let's say 50/50), but my prior belief in
all of Catholicism being the absolute truth is pretty much nil. And if you're using anything vaguely resembling
consistent priors, it has to be near-nil for you too, because the beliefs of Catholicism are just so incredibly specific.
They na... (read more)
[-] Nanashi 8y 2
I find this method to be intellectually dangerous.
We do not live in the LCPW, and constantly considering ethical problems as if we do is a mind-killer. It trains the
brain to stop looking for creative solutions to intractable real world problems and instead focus on rigid abstract
solutions to conceptual problems.
I agree that there is a small modicum of value to considering the LCPW. Just like there's a small modicum of value
to eating a pound of butter for dinner. It's just, there are a lot better ways to spend one's time. The proper
response to "We... (read more)
8 Nornagest 8y I don't think I could disagree more.The point of ethical thought experiments like the sic…
0 Nanashi 8y That's fair. I understand the value: it exposes the weakness of using overly rigid heuristics…
2 TheOtherDave 8y I agree that insisting on assuming the LCPW is a lousy strategic approach to most …
[-] MichaelHoward 14y 2
Yvain,
Do you have a blog or home page with more material you've written? Failing that, is there another site (apart
from OB) with contributions from you that might be interesting to LW readers?
2 Scott Alexander 14y Thanks for your interest. My blog is of no interest to anyone but my immediate …
0 michaelkeenan 14y Hey Yvain. I found your blog a little while ago (I think it was from an interesting …
1 badger 14y Ha, this was just enough information for my google-fu to finally succeed. Yvain, I have a f…
0 Scott Alexander 14y Thank you, Michael, for not linking to it here, and thank you, Badger, for the ki…
[-] nazgulnarsil 14y 2
with regards to the third question: what if I believe that any resources given simply allow the population to
expand and hence cause more suffering than letting people die?
[-] Scott Alexander 14y 15
If you don't really believe that, and it's just your excuse for not giving away lots of money, you should say loud
and clear "I don't believe I'm morally obligated to reduce suffering if it inconveniences me too much." And then
you've learned something useful about yourself.
But if you do really believe that, and you otherwise accept John's argument, you should say explicitly, "I accept
I'm morally obligated to reduce suffering as much as possible, even at the cost of great inconvenience to myself.
However, I am worried because of the contingent fact that giving people more resources will lead to more
population, causing more suffering."
And if you really do believe that and think it through, you'll end up spending almost all your income on condoms
for third world countries.
[-] Mr Valmonty 1mo 1
Is this not just an alternative way of describing a red herring argument? If not, I would be interested to see what
nuance I'm missing.
I find this classically in the abortion discussion. Pro-abortionists will bring up valid-at-face-value concerns
regarding rape and incest. But if you grant that victims of rape/incest can retain full access to abortions, the pro-
abortionist will not suddenly agree with criminalisation of abortion in the non-rape/incest group. Why? Because
the rape/incest point was a red herring argument.
[-] ouroborous 6mo 1
I am trying to imagine the least convenient possible world (LCPW) for the LCPW method.
Perhaps it is the world in which there is precisely one possible world. All 'possible' worlds turn out to be
impossible on closer scrutiny. Omega reveals that talking about a counterfactual possible world is as incoherent
as talking about a square triangle. There is exactly one way to have a world with anyone in it whatsoever, and
we're in it.
[-] NoriMori1992 2y 1
This is a good argument against Pascal's Wager, but it isn't the least convenient possible world.The least
convenient possible world is the one where Omega, the completely trustworthy superintelligence who
is always right, informs you that God definitely doesn't value intellectual integrity that much. In fact
(Omega tells you) either God does not exist or the Catholics are right about absolutely everything.
Would you become a Catholic in this world? Or are you willing to admit that maybe your rejection of
Pascal's Wager has less to do with a hypothesized p
... (read more)
[-] JohnBuridan 8y -1
I think Pascal's Wager and the God-Shaped Hole should get more play.
To your Pascal's Wager statement
Perhaps God values intellectual integrity so highly that He is prepared to reward honest atheists, but will
punish anyone who practices a religion he does not truly believe simply for personal gain.
I don't think what you say is incompatible with the Catholic position that what is most important to Omega is
that we pursue the best thing we know, i.e. intellectual integrity along with charity. But perhaps I am
wrong. You might know more about this th... (read more)
3 [anonymous] 8y I think the GSH is largely that our whole way of thinking, our terminology, our philos…
3 gjm 8y There's a nice exposition of roughly this idea [http://slatestarcodex.com/2013/06/17/the-what…
0 JohnBuridan 8y To Hollander:When we create models, they are models of something other than y…
[-] A1987dM 9y -1
In fact (Omega tells you) either God does not exist or the Catholics are right about absolutely
everything.
That sounds like it would decrease my probability that God exists by several dozen orders of magnitude.
0 DanielLC 9y Yes, but the important part is that it would mean that you know God won't punish you f…
0 hairyfigment 9y I should point out that - if for some reason we're taking absurdly low-probability hy…
0 DanielLC 9y Generally you use the probability times the utility. It would seem reasonable to take a…
0 hairyfigment 9y I know you've seen the Pascal's Mugging problem - that's what I meant to refer t…
[-] Dmytry 11y -2
In the good ol' days there was a concept of whose problem something is. It's those people's problem that their
organs have failed, and it is the traveller's problem that he needs to be quite careful because of the demand for his
organs (why is he not a resident, btw? The idea is that he will have zero utility to the village when he leaves?).
Society would normally side with the traveller, for the simple reason that if people start solving their problems at
other people's expense like this, those with the most guns and most money will end up taking organs from other
people to stay alive ... (read more)