
The Least Convenient Possible World

by Scott Alexander 14th Mar 2009

Steelmanning Hypotheticals Rationality Frontpage

Related to: Is That Your True Rejection?

"If you’re interested in being on the right side of disputes, you will refute your
opponents’ arguments. But if you’re interested in producing truth, you will fix your
opponents’ arguments for them. To win, you must fight not only the creature you
encounter; you must fight the most horrible thing that can be constructed from its
corpse."

-- Black Belt Bayesian, via Rationality Quotes 13

Yesterday John Maxwell's post wondered how much the average person would do to save
ten people from a ruthless tyrant. I remember asking some of my friends a vaguely related
question as part of an investigation of the Trolley Problems:

You are a doctor in a small rural hospital. You have ten patients, each of whom is
dying for the lack of a separate organ; that is, one person needs a heart transplant,
another needs a lung transplant, another needs a kidney transplant, and so on. A
traveller walks into the hospital, mentioning how he has no family and no one knows
that he's there. All of his organs seem healthy. You realize that by killing this traveller
and distributing his organs among your patients, you could save ten lives. Would this
be moral or not?

I don't want to discuss the answer to this problem today. I want to discuss the answer one
of my friends gave, because I think it illuminates a very interesting kind of defense
mechanism that rationalists need to be watching for. My friend said:

It wouldn't be moral. After all, people often reject organs from random donors. The
traveller would probably be a genetic mismatch for your patients, and the
transplantees would have to spend the rest of their lives on immunosuppressants,
only to die within a few years when the drugs failed.

On the one hand, I have to give my friend credit: his answer is biologically accurate, and
beyond a doubt the technically correct answer to the question I asked. On the other hand,
I don't have to give him very much credit: he completely missed the point and lost a
valuable opportunity to examine the nature of morality.

So I asked him, "In the least convenient possible world, the one where everyone was
genetically compatible with everyone else and this objection was invalid, what would you
do?"

He mumbled something about counterfactuals and refused to answer. But I learned
something very important from him, and that is to always ask this question of myself.
Sometimes the least convenient possible world is the only place where I can figure out my
true motivations, or which step to take next. I offer three examples:

1: Pascal's Wager. Upon being presented with Pascal's Wager, one of the first things
most atheists think of is this:

Perhaps God values intellectual integrity so highly that He is prepared to reward honest
atheists, but will punish anyone who practices a religion he does not truly believe simply
for personal gain. Or perhaps, as the Discordians claim, "Hell is reserved for people who
believe in it, and the hottest levels of Hell are reserved for people who believe in it on the
principle that they'll go there if they don't."

This is a good argument against Pascal's Wager, but it isn't the least convenient possible
world. The least convenient possible world is the one where Omega, the completely
trustworthy superintelligence who is always right, informs you that God definitely doesn't
value intellectual integrity that much. In fact (Omega tells you) either God does not exist
or the Catholics are right about absolutely everything.

Would you become a Catholic in this world? Or are you willing to admit that maybe your
rejection of Pascal's Wager has less to do with a hypothesized pro-atheism God, and more
to do with a belief that it's wrong to abandon your intellectual integrity on the off chance
that a crazy deity is playing a perverted game of blind poker with your eternal soul?
2: The God-Shaped Hole. Christians claim there is one in every atheist, keeping him
from spiritual fulfillment.

Some commenters on Raising the Sanity Waterline don't deny the existence of such a hole,
if it is interpreted as a desire for purpose or connection to something greater than one's
self. But, some commenters say, science and rationality can fill this hole even better than
God can.

What luck! Evolution has by a wild coincidence created us with a big rationality-shaped
hole in our brains! Good thing we happen to be rationalists, so we can fill this hole in the
best possible way! I don't know - despite my sarcasm this may even be true. But in the
least convenient possible world, Omega comes along and tells you that sorry, the hole is
exactly God-shaped, and anyone without a religion will lead a less-than-optimally-happy
life. Do you head down to the nearest church for a baptism? Or do you admit that even if
believing something makes you happier, you still don't want to believe it unless it's true?

3: Extreme Altruism. John Maxwell mentions the utilitarian argument for donating
almost everything to charity.

Some commenters object that many forms of charity, especially the classic "give to
starving African orphans," are counterproductive, either because they enable dictators or
thwart the free market. This is quite true.

But in the least convenient possible world, here comes Omega again and tells you that
Charity X has been proven to do exactly what it claims: help the poor without any
counterproductive effects. So is your real objection the corruption, or do you just not
believe that you're morally obligated to give everything you own to starving Africans?

You may argue that this citing of convenient facts is at worst a venial sin. If you still get to
the correct answer, and you do it by a correct method, what does it matter if this method
isn't really the one that's convinced you personally?

One easy answer is that it saves you from embarrassment later. If some scientist does a
study and finds that people really do have a god-shaped hole that can't be filled by
anything else, no one can come up to you and say "Hey, didn't you say the reason you
didn't convert to religion was because rationality filled the god-shaped hole better than
God did? Well, I have some bad news for you..."

Another easy answer is that your real answer teaches you something about yourself. My
friend may have successfully avoided making a distasteful moral judgment, but he didn't
learn anything about morality. My refusal to take the easy way out on the transplant
question helped me develop the form of precedent-utilitarianism I use today.

But more than either of these, it matters because it seriously influences where you go
next.

Say "I accept the argument that I need to donate almost all my money to poor African
countries, but my only objection is that corrupt warlords might get it instead", and the
obvious next step is to see if there's a poor African country without corrupt warlords (see:
Ghana, Botswana, etc.) and donate almost all your money to them. Another acceptable
answer would be to donate to another warlord-free charitable cause like the Singularity
Institute.

If you just say "Nope, corrupt dictators might get it," you may go off and spend the money
on a new TV. Which is fine, if a new TV is what you really want. But if you're the sort of
person who would have been convinced by John Maxwell's argument, but you dismissed it
by saying "Nope, corrupt dictators," then you've lost an opportunity to change your mind.

So I recommend: limit yourself to responses of the form "I completely reject the entire
basis of your argument" or "I accept the basis of your argument, but it doesn't apply to the
real world because of contingent fact X." If you just say "Yeah, well, contingent fact X!" and
walk away, you've left yourself too much wiggle room.

In other words: always have a plan for what you would do in the least convenient possible
world.


202 comments, sorted by top scoring
[-] davidamann 14y 111
I think a better way to frame this issue would be the following method.
1. Present your philosophical thought-experiment.
2. Ask your subject for their response and their justification.
3. Ask your subject, what would need to change for them to change their belief?
For example, if I respond to your question of the solitary traveler with "You shouldn't do it because of biological
concerns." Accept the answer and then ask, what would need to change in this situation for you to accept the
killing of the traveler as moral?
I remember this method giving me deeper insight into the Happiness Box experiment.
Here is how the process works:
1. There is a happiness box. Once you enter it, you will be completely happy through living in a virtual world.
You will never leave the box. Would you enter it?
2. Initial response: Yes, I would enter the box. Since my world is only made up of my perceptions of reality,
there is no difference between the happiness box and the real world. Since I will be happier in the
happiness box, I would enter.
3. Reframing question: What would need to change so you would not enter the box?
4. My response: Well, if I had children or people depending on me, I could no
... (read more)
[-] pwno 14y 39
I find a similar strategy useful when I am trying to argue my point to a stubborn friend. I ask them, "What would
I have to prove in order for you to change your mind?" If they answer "nothing" you know they are probably
not truth-seekers.
[-] Vladimir_Nesov 14y 11
Namely, the point of reversal of your moral decision is that it helps to identify what this particular moral
position is really about. There are many factors to every decision, so it might help to try varying each of them,
and finding other conditions that compensate for the variation.
For example, you wouldn't enter the happiness box if you suspected that information about it giving the true
happiness is flawed, that it's some kind of lie or misunderstanding (on anyone's part), of which the situation of
leaving your family on the outside is a special case, and here is a new piece of information. Would you like your
copy to enter the happiness box if you left behind your original self? Would you like a new child to be born
within the happiness box? And so on.
2 abramdemski 11y This seems to nicely fix something which I felt was wrong in the "least convenient …
0 Rings_of_Saturn 14y Great, David! I love it.
-1 thrawnca 7y The happiness box is an interesting speculation, but it involves an assumption that, in my…
3 CynicalOptimist 7y Okay, well let's apply exactly the technique discussed above: If the hypothetical …
1 Jiro 7y What if we ignore the VR question? Omega tells you that killing and eating your children will…
-2 thrawnca 7y This would depend on my level of trust in Omega (why would I believe it? Because O…
1 TheOtherDave 7y For my part, it's difficult for me to imagine a set of observations I could make …
[-] MBlume 14y 62
I'm not sure if I'm evading the spirit of the post, but it seems to me that the answer to the opening problem is
this:
If you were willing to kill this man to save these ten others, then you should long ago have simply had all ten
patients agree to a 1/10 game of Russian Roulette, with the proviso that the nine winners get the organs of the
one loser.
[-] Scott Alexander 14y 25
While emphasizing that I don't want this post to turn into a discussion of trolley problems, I endorse that
solution.
[-] abramdemski 11y 14
In the least convenient possible world, only the random traveler has a blood type compatible with all ten
patients.
6 CynicalOptimist 7y This is fair, because you're using the technique to redirect us back to the origin…
0 abramdemski 7y Agreed.
3 DanielLC 9y I'd go with that he's the only one who has organs healthy enough to ensure the recipi…
-3 Rixie 11y MBlume knows this, he's just telling us what he was thinking.
2 Said Achmiz 10y What if one or more of the patients don't agree to do this?
7 DanielLC 9y Then you let him die, and repeat the question with a 1/9 chance of death.
1 Bruno Mailly 5y To me the logical answer is that it depends on how much value is attributed to "a" lif…
-1 [anonymous] 14y The technical creativity of this solution reveals the limits of rationality.This is a sol…
[-] Vladimir_Nesov 14y 14
Throwing a die is a way of avoiding bias in choosing a person to kill. If you choose a person to kill personally,
you run a risk of doing it in an unfair fashion, and thus being guilty of making an unfair choice. People value
fairness. Using dice frees you of this responsibility, unless there is a predictably better option. You are alleviating
additional technical moral issues involved in killing a person. This issue is separate from deciding whether to kill
a person at all, although the reduction in moral cost of killing a person achieved by using the fair roulette
technology may figure in the original decision.
7 Tasky 12y But as a doctor, probably you will have to choose non-randomly, if you want to stand by …
[-] bentarm 14y 49
There are real life examples where reality has turned out to be the "least convenient of possible worlds". I have
spent many hours arguing with people who insist that there are no significant gender differences (beyond the
obvious), and are convinced that to assert otherwise is morally reprehensible.
They have spent so long arguing that such differences do not exist, and that this is the reason sexism is wrong,
that their morality just can't cope with a world in which this turns out not to be true. There are many similar
politically charged issues - Pinker discusses quite a few in The Blank Slate - where people aren't willing to listen to
arguments about factual issues because they believe they have moral consequences.
The problem, of course - and I realise this is the main point of this post - is that if your morality is contingent on
empirical issues where you might turn out to be wrong, you have to accept the consequences. If you believe that
sexism is wrong because there are no heritable gender differences, you have to be willing to accept that if these
differences do turn out to exist then you'll say sexism is ok.
This is probably a test you should apply to all of your moral beliefs - if it just so happens that I'm wrong about the
factual issue on which I'm basing my belief, will I really be willing to change my mind?
2 Pr0methean 10y That raises an interesting question: is it possible to base a moral code only on what's…
5 Richard_Kennaway 10y To do that would require that "all possible worlds that contain me" be a co…
0 Jackercrack 9y I think that it is not. All possible worlds include worlds where every tuesday the first …
5 DanielLC 9y You could have a personal moral code of stabbing anyone who you're 90% certain wo…
0 [anonymous] 9y That doesn't follow from your logic.There could be multiple functions of maximal…
0 Jackercrack 9y I took "all possible worlds that contain me" to mean all worlds where history wen…
1 [anonymous] 9y Retract -- circle with a line through it.
0 Jackercrack 9y What do you mean by circle with a line through it? Is that some sort of code f…
5 Nornagest 9y There should be a button with that appearance in the lower right-hand corner…
8 wedrifid 9y The causality is unlikely.There was never strikethrough syntax here and the retr…
2 Jackercrack 9y Ah, thank you. I hadn't noticed that
-8 Rixie 11y
[-] bill 14y 34
One way to train this: in my number theory class, there was a type of problem called a PODASIP. This stood for
Prove Or Disprove And Salvage If Possible. The instructor would give us a theorem to prove, without telling us if
it was true or false. If it was true, we were to prove it. If it was false, then we had to disprove it and then come up
with the "most general" theorem similar to it (e.g. prove it for Zp after coming up with a counterexample in Zm).
This trained us to be on the lookout for problems with the theorem, while also seeing the "least convenient
possible world" in which it was true.
[-] Nebu 14y 20
I voted up your post, Yvain, as you've presented some really good ideas here. Although it may seem like I'm
totally missing your point with my responses to your 3 scenarios, I assure you that I am well aware that my
responses are of the "dodging the question" type which you are advocating against. I simply cannot resist
exploring these 3 scenarios on their own.
Pascal's Wager
In all 3 scenarios, I would ask Omega further questions. But these being "least convenient world" scenarios, I
suspect it'd be all "Sorry, can't answer that" and then fly away. And I'd call it a big jerk.
For the Pascal's Wager scenario specifically, I'd probably ask Omega "Really? Either God doesn't exist or everything
the Catholics say is correct? Even the self-contradicting stuff?" And of course, he'd decline to answer and fly away.
So then I'd be stuck trying to decide whether God doesn't exist, or logic is incorrect (i.e. reality can be logically
self inconsistent). I'm tempted to adopt Catholicism (for the same reason I would one-box on Newcomb: I want
the rewards), but I'm not sure how my brain could handle a non-logical reality. So I really don't know what would
happen ... (read more)
0 matteyas 6y The point is that in the least convenient world for you, Omega would say whatever it is t…
2 Jiro 6y The least convenient world is one where Omega answers his objections.The least convenient…
0 jknapka 11y This is a very good point, and I believe I'll point it out to my rather fundamentalist sibling …
-1 DanielLC 9y If I really, truly believed that every non-Christian was doomed to eternal damnation, I'd…
[-] Vladimir_Nesov 14y 20
Let's try something different.
Puts on the reviewer's hat.
Yvain's post presented a new method for dealing with the stopsign problem in reasoning about questions of
morality. The stopsign problem consists in following an invalid excuse to avoid thinking about the issue at hand,
instead of doing something constructive about resolving the issue.
The method presented by Yvain consists in putting in place the universal countermeasure against the stopsign
excuses: whenever a stopsign comes up, you move the discussed moral issue to a different, hypothetical setting,
where the stopsign no longer applies. The only valid excuse in this setting is that you shouldn't do something,
which also resolves the moral question.
However, the moral questions should be concerned with reality, not with fantasy. Whenever a hypothetical setting
is brought into the discussion of morality, it should be understood as a theoretical device for reasoning about the
underlying moral judgment applicable to the real world. There is a danger in fallaciously generalizing the moral
conclusion from fictional evidence, both because there might be factors in the fictional setting that change your
decision and which you ... (read more)
4 [anonymous] 14y I do agree. I think in many ways reality already is "the least convenient possible worl…
[-] freyley 14y 14
One difficulty with the least convenient possible world is where that least convenience is a significant change in
the makeup of the human brain. For example, I don't trust myself to make a decision about killing a traveler with
sufficient moral abstraction from the day-to-day concerns of being a human. I don't trust what I would become if I
did kill a human. Or, if that's insufficient, fill in a lack of trust in the decisionmaking in general for the moment.
(Another example would be the ability to trust Omega in his responses)
Because once that's a significant issue in the subject, then the least convenient possible world you're asking me
to imagine doesn't include me -- it includes some variant of me whose reactions I can predict, but not really
access. Porting them back to me is also nontrivial.
It is an interesting thought experiment, though.
[-] CronoDAS 14y 11
So I asked him, "In the least convenient possible world, the one where everyone was genetically
compatible with everyone else and this objection was invalid, what would you do?"
Obviously, you wait for one of the sick patients to die, and use that person's organs to save the others, letting the
healthy traveler go on his way. ;)
But that isn't the least convenient possible world - the least convenient one is actually the one in which the
traveler is compatible with all the sick people, but the sick people are not compatible with each other.
[-] Psy-Kosh 14y 10
Actually, you don't even need to add that additional complexity to make the world sufficiently inconvenient.
If the rest of the patients are sufficiently sick, their organs may not really be suitable for use as transplants, right?

[-] alex_zag_al 12y 10
There's another benefit: you remove a motivation to lie to yourself. If you think that a contingent fact will get you
out of a hard choice, you might believe it. But you probably won't if it doesn't get you out of the hard choice
anyway.
1 Muhd 7y On the other hand, if you think that a contingent fact will get you out of a hard choice, perha…
[-] Dreaded_Anomaly 12y 8
Would you become a Catholic in this world? Or are you willing to admit that maybe your rejection of
Pascal's Wager has less to do with a hypothesized pro-atheism God, and more to do with a belief that
it's wrong to abandon your intellectual integrity on the off chance that a crazy deity is playing a
perverted game of blind poker with your eternal soul?
I don't think I would be able to bring myself to honestly worship a God who bestowed upon us the ability to
reason and then rewarded us for not using it.
8 Nick_Tarleton 12y Would you want to, if you could? If so, given the stakes, you should try damn hard …
4 Robert Miles 12y I don't follow your reasoning. Because God made us able to do a particular thing, w…
2 DanielLC 9y I certainly wouldn't like such a God. He'd be better than a God who bestowed upon us…
-1 Dreaded_Anomaly 12y My statement does not generalize in that way, and was not intended to do so.
6 Antonio 12y It does. It just doesn't if you accept the premise that intelligence is, in and of itself, goo…
[-] [anonymous] 14y 7
The problem with the 'god shaped hole' situation (and questions of happiness in general) is that if something
doesn't make you happy NOW, it becomes very difficult to believe that it will make you happy LATER.
For example, say some Soma-drug was invented that, once taken, would make you blissfully happy for the rest of
your life. Would you take it? Our immediate reaction is to say 'no', probably because we don't like the idea of
'fake', chemically-induced happiness. In other words, because the idea doesn't make us happy now, we don't really
believe it will ... (read more)
7 Swimmer963 (Miranda Dixon-Luinenburg) 12y I try my best to value other peoples' happiness equa…
2 Hul-Gil 12y I would definitely take the Soma, and don't see why anyone wouldn't. Odd, the differences …
3 Kingreaper 12y I wouldn't take it. I desire to help others, and it gives me pleasure to do so, it makes …
0 Swimmer963 (Miranda Dixon-Luinenburg) 12y I agree with you completely. I can understand wh…
2 Peter Wildeford 12y I'm reminded of Yudkowsky's Not For the Sake of Happiness Alone [http://less…
5 [anonymous] 12y I think one of the points underrepresented in these "Not For the Sake of XXX …
4 Hul-Gil 12y Agreed. I also think people tend to underestimate the goodness of pure bliss: I have e…
0 Hul-Gil 12y He makes good points, but note that there's nothing saying you couldn't take Soma and…
0 Peter Wildeford 12y The argument wasn't that you need the joy of scientific discovery; it was tha…
1 jhuffman 12y This is just wire-heading isn't it? At least, that is what you should search for if you want…
1 Hul-Gil 12y Same here.That is, I know I'd wirehead - I don't see any bothersome implications with …
3 jhuffman 12y It does not matter if you are immobilized. Once you are wire-heading there is no re…
0 [anonymous] 12y I think you're simply assuming that we're motivated primarily by happiness in that c…
[-] ChrisHibbert 14y 6
I like the phrase "precedent utilitarianism". It sounds to utilitarians like you're joining their camp, while actually
pointing out that you're taking a long-term view of utility, which they usually refuse to do. The important
ingredient is paying attention to incentives, which is really the rational response to most questions about morality.
Many choices which seem "fairer", "more just", or whose alternatives provoke a disgust response don't take the
long-term view into account. If we go around sacrificing every lonely s... (read more)
6 alex_zag_al 12y Actually, we would all be more safe, because we'd be in less danger from organ failure.…
-3 DanielLC 9y That would be true if they were hunting people down. As stated, people would becom…
5 theseus 8y This is an exact instance of the point of the post. It is important to assume they are hun…
4 [anonymous] 14y That would make a great movie! Lonely Stranger Jason Statham wakes up and realis…
3 Desrtopa 10y On what basis would you say it's the case that utilitarians usually refuse to take a long-t…
-1 ChrisHibbert 10y When I've argued with people who called themselves utilitarian, they seemed to …
6 Desrtopa 10y Well, in my experience people who self identify as utilitarians don't appear to be any …
7 christopherj 10y And that is an advantage of traditional moral systems -- because they have been …
5 Ford 7y I tend to agree, but it depends on how something was tested. In "Darwinian Agriculture"…
[-] Psy-Kosh 14y 5
Very good point, and it crystallizes some of my thinking on some of the discussion on the tyrant/charity thing.
As far as the specific problems you posed...
For your souped-up Pascal's Wager, I admit that one gives me pause. Taking into account the fact that Omega
singled out one out of the space of all possible religions, etc. etc... Well, the answer isn't obvious to me right now.
This flavor would seem to not admit any of the usual basic refutations of the wager. I think under these
circumstances, assuming Omega wasn't open to answering any further question... (read more)
8 astray 14y The souped up Pascal's Wager seems like the thousand door version of Monty Hall.
[-] [anonymous] 12y 4
I would act differently in the least convenient world than I do in the world that I do live in.
[-] JJ10DMAN 13y 4
Yes! I can't believe I don't see this repeated in one form or another more often. Fallacies are a bit like prions in
that they tend to force a cascade of fallacies to derive from them, and one of my favorite debate tactics is the
thought experiment, "Let's assume your entire premise is true. How might this contradict your position?"
Usually the list is longer than my own arguments.
[-] Epictetus 8y 3
The least convenient world is one where there's no traveler and the doctor debates whether to harvest organs
from another villager. I figure that if it's okay to kill the traveler for organs, then it should be okay to kill a villager.
Similarly, if it's against general principle to kill a villager for organs, then it shouldn't be okay to kill the traveler.
Perhaps someone can come up with a clever argument why the life of a villager is worth intrinsically more than
the life of the traveler, but let's keep things simple for now.
So, let us suppose that N sic... (read more)
1 Meni_Rosenfeld 8y The perverse incentive to become alcoholic or obese can be easily countered wit…
0 Lumifer 8y I think China used to have a similar system, except that instead of lottery they just picked …
1 Marion Z. 7mo That seems entirely reasonable, insofar as the death penalty is at all. I don't think we …
-1 Jiro 8y We have two such systems today, except 1.We call it "taxes". 2. People die on an overall statist…
[-] Irgy 12y 3
This might be better placed somewhere else, but I just thought I'd comment on Pascal's Wager here. To me, both
the convenient and inconvenient resolutions of Pascal's Wager given above are quite unsatisfactory.
To me, the resolution of this wager comes from the concept of sets of measure zero. The set of possible realities
in which belief in any given God is infinitely beneficial is an infinite set, but it is nonetheless like Cantor Dust in
the space of possible explanations of reality. The existence of sets of measure zero explains why it is reasonable
to a... (read more)

[-] corruptmemory 14y 3
Although I understand and appreciate your approach, the particular examples do not represent particularly good
ones:
1: Pascal's Wager:
For an atheist, the least convenient possible world is one where testable, reproducible scientific evidence strongly
suggests the existence of some "super-natural" (clearly no-longer super-natural) being that we might ascribe the
moniker of God to. In such a world any "principled atheist" would believe what the verifiable scientific evidence
supports as probably true. "Atheists" who did not do th... (read more)

[-] Annoyance 14y 3
"I believe that God’s existence or non-existence can not be rigorously proven."
Cannot be proven by us, with our limits on detection, or cannot be proven in principle?
Because if it's the latter, you're saying that the concept of 'God' has no meaning.
3 corruptmemory 14y Formalize this a bit: "I believe that X’s existence or non-existence can not be rig…
1 Nebu 14y I think just because something cannot be proven (even in principle) does not necessarily im…
-2 cleonid 14y It is the latter (I’m an agnostic). However, I don’t see why the concept has no meaning.W…
2 Baughn 14y It's possible to decide which axioms are in effect from the inside of a sufficiently comple…
1 cleonid 14y "It's possible to decide which axioms are in effect from the inside of a sufficiently comp…
3 Sebastian_Hagen 14y Assuming the entity in question is cooperative, try this: Ask it if P=NP [http…
8 John_Baez 13y Or: it says "This is undecidable in Zermelo-Fraenkel set theory plus the axiom of…
1 Tasky 12y If it really is undecidable, God must be able to prove that. However, I think an easier …
8 Eliezer Yudkowsky 14y It says "There is no elegant proof". Next?
4 Sebastian_Hagen 14y Ask again, with another famously unsolved math problem. Repeat until it…
-1 Dacyn 7y Depending on your definition of "elegant", there are probably no famous unsolved …
1 Vladimir_Nesov 14y It could give a formally checkable proof, that is far from being elegant, but…
0 Annoyance 14y "Would you say that axioms in math are meaningless?" They distinguish one hypothe…
7 Nebu 14y I think the words "true" and "false" have some connotation that you might not want to i…
4 anonym 14y They distinguish one hypothetical world from another. It's a subtle distinction, but I thi…
3 Johnicholas 14y Euclidean geometry isn't a theory about the world, and therefore cannot be falsifie…
-2 Annoyance 14y "Math is not physics." It's made out of physics. I think perhaps you mean that ma…
1 Dacyn 7y Riemannian geometry is not an axiomatic geometry in the same way that Euclidean geom…
0 SanguineEmpiricist 7y First i've heard of this, super interesting. Hmm. So what is the correct way…
0 Dacyn 7y Special relativity is good enough for most purposes, which means that (a time slice of) …
0 cleonid 14y "They distinguish one hypothetical world from another." Just like different religions. "Fu…
[-] DPiepgrass 2y 2
0: Should we kill the miraculously-compatible traveler and distribute his organs?

My answer is based on a principle that I'm surprised no one else seems to use (then again, I rarely listen to
answers to the Fat Man/Train problem): ask the f**king traveler!
Explain to the traveler that he has the opportunity to save ten lives at the cost of his own. First they'll take a
kidney and a lung, then he'll get some time to say goodbye to his loved ones while he gets to see the two people
with the donated organs recover... and then when he's ready they'll take the re... (read more)
1 Marion Z. 7mo Only replying to a tiny slice of your post here, but the original (weak) Pascal's wager a…
[-] Thomas Eisen 3y 2
My answers:
1. No, because their belief doesn't make any sense. It even has logical contradictions, which makes it "super
impossible", meaning there's no possible world where it could be true (the omnipotence paradox proves that
omnipotence is logically inconsistent; a god which is nearly omnipotent, nearly omniscient and nearly
omnibenevolent wouldn't allow suffering, which, undoubtedly, exists; "God wants to allow free will" isn't a valid
defence, since there's a lot of suffering that isn't caused by other ... (read more)
[-] passive_fist 8y 2
either God does not exist or the Catholics are right about absolutely everything.
Then I would definitely and swiftly become an atheist, and I maintain that this is by far the most rational choice for
everybody else as well. My prior belief in God not existing is relatively high (let's say 50/50), but my prior belief in
all of Catholicism being the absolute truth is pretty much nil. And if you're using anything vaguely resembling
consistent priors, it has to be near-nil for you too, because the beliefs of Catholicism are just so incredibly specific.
They na... (read more)
[-] Nanashi 8y 2
I find this method to be intellectually dangerous.
We do not live in the LCPW, and constantly considering ethical problems as if we do is a mind-killer. It trains the
brain to stop looking for creative solutions to intractable real world problems and instead focus on rigid abstract
solutions to conceptual problems.
I agree that there is a small modicum of value to considering the LCPW. Just like there's a small modicum of value
to eating a pound of butter for dinner. It's just, there are a lot better ways to spend one's time. The proper
response to "We... (read more)
8 Nornagest 8y I don't think I could disagree more. The point of ethical thought experiments like the sic…
0 Nanashi 8y That's fair. I understand the value: it exposes the weakness of using overly rigid heuristics…
2 TheOtherDave 8y I agree that insisting on assuming the LCPW is a lousy strategic approach to most …
[-] MichaelHoward 14y 2
Yvain,
Do you have a blog or home page with more material you've written? Failing that, is there another site (apart
from OB) with contributions from you that might be interesting to LW readers?
2 Scott Alexander 14y Thanks for your interest. My blog is of no interest to anyone but my immediate …
0 michaelkeenan 14y Hey Yvain. I found your blog a little while ago (I think it was from an interesting …
1 badger 14y Ha, this was just enough information for my google-fu to finally succeed. Yvain, I have a f…
0 Scott Alexander 14y Thank you, Michael, for not linking to it here, and thank you, Badger, for the ki…
[-] nazgulnarsil 14y 2
with regards to the third question: what if I believe that any resources given simply allow the population to
expand and hence cause more suffering than letting people die?
[-] Scott Alexander 14y 15
If you don't really believe that, and it's just your excuse for not giving away lots of money, you should say loud
and clear "I don't believe I'm morally obligated to reduce suffering if it inconveniences me too much." And then
you've learned something useful about yourself.
But if you do really believe that, and you otherwise accept John's argument, you should say explicitly, "I accept
I'm morally obligated to reduce suffering as much as possible, even at the cost of great inconvenience to myself.
However, I am worried because of the contingent fact that giving people more resources will lead to more
population, causing more suffering."
And if you really do believe that and think it through, you'll end up spending almost all your income on condoms
for third world countries.
[-] Mr Valmonty 1mo 1
Is this not just an alternative way of describing a red herring argument? If not, I would be interested to see what
nuance I'm missing.
I find this classically in the abortion discussion. Pro-abortionists will bring up valid-at-face-value concerns
regarding rape and incest. But if you grant that victims of rape/incest can retain full access to abortions, the
pro-abortionist will not suddenly agree with criminalisation of abortion in the non-rape/incest group. Why? Because
the rape/incest point was a red herring argument.
[-] ouroborous 6mo 1
I am trying to imagine the least convenient possible world (LCPW) for the LCPW method.
Perhaps it is the world in which there is precisely one possible world. All 'possible' worlds turn out to be
impossible on closer scrutiny. Omega reveals that talking about a counterfactual possible world is as incoherent
as talking about a square triangle. There is exactly one way to have a world with anyone in it whatsoever, and
we're in it.
[-] NoriMori1992 2y 1
This is a good argument against Pascal's Wager, but it isn't the least convenient possible world. The least
convenient possible world is the one where Omega, the completely trustworthy superintelligence who
is always right, informs you that God definitely doesn't value intellectual integrity that much. In fact
(Omega tells you) either God does not exist or the Catholics are right about absolutely everything.
Would you become a Catholic in this world? Or are you willing to admit that maybe your rejection of
Pascal's Wager has less to do with a hypothesized p
... (read more)

[-] Aurini 14y 1
I apologize for banging on about the railroad question, but I think the way you phrased it does an excellent job of
illustrating (and has helped me isolate) why I've always been vaguely uncomfortable with Utilitarianism. There is a sharp
moral contrast which the question doesn't innately recognize between the patients entering into a voluntary
lottery, and the forced-sacrifice of the wandering traveller.
Unbridled Utilitarianism, taken to the extreme, would mandate some form of forced Socialism. I think it was you
who commented on OvercomingBias, that one of t... (read more)
[-] John_Baez 13y 20
Unbridled Utilitarianism, taken to the extreme, would mandate some form of forced Socialism.
So maybe some form of forced socialism is right. But you don't seem interested in considering that possibility.
Why not?
While Utilitarianism is excellent for considering consequences, I think it's a mistake to try and raise it
as a moral principle.
Why not?
It seems like you have some pre-established moral principles which you are using in your arguments against
utilitarianism. Right?
I don't see how you can compromise on these principles. Either each person has full ownership of
themselves (so long as they don't infringe on others), or they have zero ownership.
To me it seems that most people making difficult moral decisions make complicated compromises between
competing principles.
3 JohnH 12y Utilitarianism itself requires the use of some pre-established moral principles.
6 Hroppa 12y Thought experiment: A dictator happens to own all the property on the planet. Until now,…
3 Aurini 12y Good god, Aurini (2009) sounds quite pompous. I can't even deal with reading his entire c…
3 CharlieSheen 11y Most LessWrong posters are still firmly in the Cathedral and may fail to appreci…
2 pedanterrific 12y What an interesting way of dodging the question. What's this supposed to mean?…
2 Aurini 12y Heh, I'm nothing if not Interesting. The quote is a typo, incidentally - I meant to write "…
3 handoflixue 12y "Giving something for 'free' is just another form of enslavement " Hmmmm, this actu…
1 TheAncientGeek 7y Or "mu". Ownership, self or otherwise, is the wrong frame entirely, for instance.
0 AspiringRationalist 11y There are important differences between moral principles and government p…
1 fubarobfusco 11y More generally, reaching the moral conclusion that agent A should do X (or even i…
0 Swimmer963 (Miranda Dixon-Luinenburg) 12y The Canadian government has socialist elements, an…
8 Aurini 12y The problem for me - speaking as a Canadian - is that there's no choice about it. To be ho…
0 MixedNuts 12y Anecdotal evidence: In France, the post office is much worse since they have comp…
[-] gmweinberg 14y 1
I don't see any problem with acknowledging that in a world very different from this one my beliefs and actions
would also be different. For example, I think the fact that there are and have been so many different religions with
significantly different beliefs as to what God wants is evidence that none of them are correct. It follows that if
there was just one religion with any significant number of adherents then that would be evidence (not proof) that
that religion was in fact correct.
Maybe if Omega tells me it's Catholicism or nothing I'll become a Cath... (read more)
[-] [anonymous] 14y 1
Yvain,
Do you have a blog or home page with more material, or is there another site (apart from OB) with
contributions from you that might be interesting to LW readers?
[-] [anonymous] 14y 1
Yvain, you frequently seem to have extra line breaks in your post, which I've been editing to fix. I'm leaving this
post as is because I'm wondering if you can't even see them, in which case are you using an unusual browser or
OS?
[-] Gunslinger 7y 0
So I asked him, "In the least convenient possible world, the one where everyone was genetically
compatible with everyone else and this objection was invalid, what would you do?"
That's a pretty damn convenient world. It's basically like saying "In a world where serious issue X isn't applicable,
what would you do?" which might as well be the better question instead of beating around the bush.
Sorry if this was posted before.
[-] aausch 8y 0
The acceleratingfuture domain's registration has expired (referenced in the starting quote)
(http://acceleratingfuture.com/?reqp=1&reqr=)
[-] matteyas 9y 0
I have a question related to the initial question about the lone traveler. When is it okay to initiate force against
any individual who has not initiated force against anyone?
Bonus: Here's a (very anal) cop out you could use against the least convenient possible world suggestion: Such a
world—as seen from the perspective of someone seeking a rational answer—has no rational answer for the
question posed.
Or a slightly different flavor for those who are more concerned with being rational than with rationality: In such a
world, I—who value rational answers above all other answers—will inevitably answer the question irrationally. :þ
0 DanielLC 9y I'm not sure what this means. There is a finite number of choices. Each of them has a spe…
[-] Varan 10y 0
I think that traveler's problem may pose two questions instead of one. First of all - is it a right thing to do just
once, and second - is it good enough to be a universal rule. We can conclude that's the same question,
because using it once means we should use it every time when a situation is the same. But using it as a universal
rule has an additional side effect - a world where you know you can be killed (deprived of all possessions, etc.) any
moment to help some number of strangers is not such a nice place to live in, though sometimes it's po... (read
more)

[-] brainoil 10y 0
Would this be moral or not?
Of course it is, if you live in this hypothetical world. The fact that in real life things are rarely this clear, or the fact
that in real life you will be jailed for doing this, or the fact that you'd feel guilty if you do this, or the fact that in
real life you won't have the courage to do this, doesn't mean that it's wrong.
But in real life I'd hardly ever violate the libertarian rights because of all the reasons mentioned above.
[-] rasthedestroyer 11y 0
The biological commentary is indeed accurate, but I question its relevance in the context of the question, which
seems to be one in favor of a utilitarian ethical discourse without the biological considerations. It might be better
to assume the biological factors involved are compatible, or assume all other factors are equal, and disregard the
biology.
The first answer that comes to mind for most I'm sure is that 10 is greater than 1, and that such a sacrifice would
return a net gain in lives saved. However, this question is complicated by what it is abou... (read more)
2 [anonymous] 8y Why, if the sick people are so close biologically, can't we sentence one of them instea…
[-] A1987dM 11y 0
But in the least convenient possible world, here comes Omega again and tells you that Charity X has
been proven to do exactly what it claims: help the poor without any counterproductive effects.
You don't need the least convenient possible world and Omega for that; for non-excessively-large values of
proven, this world and givewell.org suffice. I'm surprised that in three years nobody pointed that out before.

[-] lucidfox 12y 0
The least convenient possible world is the one where Omega, the completely trustworthy
superintelligence who is always right, informs you that God definitely doesn't value intellectual integrity
that much. In fact (Omega tells you) either God does not exist or the Catholics are right about
absolutely everything.
The problem with this specific formulation is that fundamentalist Christian beliefs are inconsistent, and thus it
trivially follows from Omega's wording that God does not exist.
A better wording would be to postulate that Omega asserts the poss... (read more)
[-] Bugle 13y 0
"first, do no harm"
It's remarkable that medical traditions predating transplants* already contain an injunction against butchering
passers-by for spare parts
*I thought this was part of the Hippocratic oath but apparently it's not
0 thomblake 13y An injunction to do no harm is part of the Hippocratic oath, and the actual text has m…
1 MrHen 13y Obligatory wikipedia link. [http://en.wikipedia.org/wiki/Hippocratic_Oath] On the other …
[-] [anonymous] 14y 0
3: Extreme Altruism.
I don't want to save starving Africans. In most circumstances I would not actively mass murder to cull
overpopulation but I wouldn't judge myself immoral for doing so.
[-] [anonymous] 14y 0
2:The God-Shaped Hole....Do you admit that even if believing something makes you happier, you still
don't want to believe it unless it's true?
I would, and do, admit that I don't want to believe it unless it's true. I watch myself make that decision more than
enough to be honest about it.
(I'll note that believing things that aren't true makes me miserable and stressed. My verbal beliefs go about
interfering with my behavior and my aversion to hypocrisy frustrates me. I'm usually better off believing the truth
and just going along with the lie. However, I've assumed that in the least convenient possible world my God
shaped hole was repaired to normal function.)
[-] [anonymous] 14y 0
You are a doctor in a small rural hospital. You have ten patients, each of whom is dying for the lack of a
separate organ; that is, one person needs a heart transplant, another needs a lung transplant, another
needs a kidney transplant, and so on. A traveller walks into the hospital, mentioning how he has no family
and no one knows that he's there. All of his organs seem healthy. You realize that by killing this traveller
and distributing his organs among your patients, you could save ten lives.
I wouldn't kill him. It isn't worth the risk for me. I al... (read more)
[-] [anonymous] 14y 0
1: Omega, the completely trustworthy superintelligence who is always right, informs you that God
definitely doesn't value intellectual integrity that much. In fact (Omega tells you) either God does not
exist or the Catholics are right about absolutely everything.
Would you become a Catholic in this world?
Yes, plus pay bribes/alms at whatever the going rate is for having doubts that have been updated to greater than
50% based on observations. Since faith is somewhat distinct from raw prediction I suspect God'd be cool with
that.
[-] vroman 14y 0
*kill traveler to save patients problem
assuming that
-the above solutions (patient roulette) were not viable
-upon receiving their new organs, the patients would be restored to full functionality, the equal of or better utility
generators than the traveler
then I would kill the traveler. however, if the traveler successfully defended himself, and turned the tables on me, I
would use my dying breath to happily congratulate his self preservation instinct and wish him no further
problems on the remainder of his journey. And of course I'd have left instructions w ... (read more)

[-] Jonnan 14y 0
The problem is the "least convenient world" seems to involve a premise that would, in and of itself, be
unverifiable.
The best example is the Pascal's wager issue - Omega tells me with absolute certainty that it's either a specific
version of God (Not, for instance Odin, but Catholicism), or no God.
But I'm not willing to believe in an omniscient deity called God, taking it back a step and saying "But we know it's
either or, because the omniscient de . . . errr . . . Omega tells you so" is just redefining an omniscient deity.
Well, if I do... (read more)
1 Nick_Tarleton 14y Yes, to make it work, you may have to imagine yourself in an unreachable epistemi…
0 Jonnan 14y No, to make it work you have to assume that you believe in omniscience in order to clar…
4 Cyan 14y You're right that the existence of Omega is information relevant to the existence of othe…
2 Jonnan 14y Not if omniscience is A) a necessary prerequisite to the existence of a deity, and B) by…
1 Cyan 14y The whole idea of an unreachable epistemic state seems to be tripping you up. In the l…
4 Dan_Moore 11y Argument #1 works in the least convenient imaginable world, in my opinion. …
0 Cyan 11y It's been more than two and a half years, dude! OK, here goes. I made a misstep by …
3 Dan_Moore 11y sorry - I was led there by a recent thread [http://lesswrong.com/r/discussi…
0 DanielLC 9y You could two-box. If you get the million and the thousand you prove that he's not omni…
[-] cleonid 14y 0
“Do you head down to the nearest church for a baptism? Or do you admit that even if believing something makes
you happier, you still don't want to believe it unless it's true?”
I believe that God’s existence or non-existence cannot be rigorously proven. Likewise there is no rigorous
protocol for estimating the chances. Therefore we are forced to rely on our internal heuristics, which are
extremely sensitive to our personal preferences for the desired answer. Consequently, people who would be
happier believing in God most likely already do so. The same p... (read more)

[-] JohnBuridan 8y -1
I think Pascal's Wager and the God-Shaped Hole should get more play.
To your Pascal's Wager statement
Perhaps God values intellectual integrity so highly that He is prepared to reward honest atheists, but will
punish anyone who practices a religion he does not truly believe simply for personal gain.
I don't think what you say is incommensurable with the Catholic position that what is most important to the
Omega is that we pursue the best thing we know, i.e. intellectual integrity along with charity. But perhaps I am
wrong. You might know more about this th... (read more)
3 [anonymous] 8y I think the GSH is largely that our whole way of thinking, our terminology, our philos…
3 gjm 8y There's a nice exposition of roughly this idea [http://slatestarcodex.com/2013/06/17/the-what…
0 JohnBuridan 8y To Hollander: When we create models, they are models of something other than y…
[-] A1987dM 9y -1
In fact (Omega tells you) either God does not exist or the Catholics are right about absolutely
everything.
That sounds like it would decrease my probability that God exists by several dozen orders of magnitude.
0 DanielLC 9y Yes, but the important part is that it would mean that you know God won't punish you f…
0 hairyfigment 9y I should point out that - if for some reason we're taking absurdly low-probability hy…
0 DanielLC 9y Generally you use the probability times the utility. It would seem reasonable to take a…
0 hairyfigment 9y I know you've seen the Pascal's Mugging problem - that's what I meant to refer t…
[-] Dmytry 11y -2
In the good ol' days there was a concept of whose problem something is. It's those people's problem that their
organs have failed, and it is the traveller's problem that he needs to be quite careful because of the demand for his
organs (why is he not a resident, btw? The idea is that he will have zero utility to the village when he leaves?). Society would
normally side with traveller for the simple reason that if people start solving their problems at other people's
expense like this those with most guns and most money will end up taking organs from other people to stay alive
... (read more)

[+] Lisawood 12y -5
[+] corruptmemory 14y -5
[+] Saladin 11y -10