Received: from MIT.EDU (SOUTH-STATION-ANNEX.MIT.EDU [18.72.1.2])
	by bloom-picayune.MIT.EDU (8.6.13/2.3JIK) with SMTP id OAA03566;
	Sat, 20 Apr 1996 14:53:50 -0400
Received: from [199.164.164.1] by MIT.EDU with SMTP
	id AA07808; Sat, 20 Apr 96 14:11:13 EDT
Received: by questrel.questrel.com (940816.SGI.8.6.9/940406.SGI)
	for news-answers-request@mit.edu id LAA25210;
	Sat, 20 Apr 1996 11:12:02 -0700
Newsgroups: rec.puzzles,news.answers,rec.answers
Path: senator-bedfellow.mit.edu!bloom-beacon.mit.edu!gatech!europa.eng.gtefsd.com!uunet!questrel!chris
From: chris@questrel.questrel.com (Chris Cole)
Subject: rec.puzzles Archive (decision), part 12 of 35
Message-Id: <puzzles/archive/decision_745653851@questrel.com>
Followup-To: rec.puzzles
Summary: This is part of an archive of questions
and answers that may be of interest to
puzzle enthusiasts.
Part 1 contains the index to the archive.
Read the rec.puzzles FAQ for more information.
Sender: chris@questrel.questrel.com (Chris Cole)
Reply-To: archive-comment@questrel.questrel.com
Organization: Questrel, Inc.
References: <puzzles/archive/Instructions_745653851@questrel.com>
Date: Wed, 18 Aug 1993 06:05:17 GMT
Approved: news-answers-request@MIT.Edu
Expires: Thu, 1 Sep 1994 06:04:11 GMT
Lines: 1243
Xref: senator-bedfellow.mit.edu rec.puzzles:25000 news.answers:11520 rec.answers:1920
Apparently-To: news-answers-request@mit.edu
Archive-name: puzzles/archive/decision
Last-modified: 17 Aug 1993
Version: 4
==> decision/allais.p <==
The Allais Paradox involves the choice between two alternatives:
A. 89% chance of an unknown amount
   10% chance of $1 million
   1% chance of $1 million
B. 89% chance of an unknown amount (the same amount as in A)
   10% chance of $2.5 million
   1% chance of nothing
What is the rational choice? Does this choice remain the same if the
unknown amount is $1 million? If it is nothing?
==> decision/allais.s <==
This is "Allais' Paradox".
Which choice is rational depends upon the subjective value of money.
Many people are risk averse, and prefer the better chance of $1
million of option A. This choice is firm when the unknown amount is
$1 million, but seems to waver as the amount falls to nothing. In the
latter case, the risk averse person favors B because there is not much
difference between 10% and 11%, but there is a big difference between
$1 million and $2.5 million.

Thus the choice between A and B depends upon the unknown amount, even
though it is the same unknown amount independent of the choice. This
violates the "independence axiom" that rational choice between two
alternatives should depend only upon how those two alternatives
differ.
However, if the amounts involved in the problem are reduced to tens of
dollars instead of millions of dollars, people's behavior tends to
fall back in line with the axioms of rational choice. People tend to
choose option B regardless of the unknown amount. Perhaps when
presented with such huge numbers, people begin to calculate
qualitatively. For example, if the unknown amount is $1 million the
options are:
A. a fortune, guaranteed
B. a fortune, almost guaranteed
   a tiny chance of nothing
Then the choice of A is rational. However, if the unknown amount is
nothing, the options are:
A. small chance of a fortune ($1 million)
   large chance of nothing
B. small chance of a larger fortune ($2.5 million)
   large chance of nothing
In this case, the choice of B is rational. The Allais Paradox then
results from the limited ability to rationally calculate with such
unusual quantities. The brain is not a calculator and rational
calculations may rely on things like training, experience, and
analogy, none of which would be of help in this case. This hypothesis
could be tested by studying the correlation between paradoxical
behavior and "unusualness" of the amounts involved.
If this explanation is correct, then the Paradox amounts to little
more than the observation that the brain is an imperfect rational
engine.
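As a quick check of the raw numbers (this exact-arithmetic framing is an illustration, not part of the archive): the expected-value gap between the two gambles never depends on the unknown amount, which is what the independence axiom formalizes.

```python
from fractions import Fraction

# Expected values of the Allais gambles as a function of the unknown amount u.
def ev_a(u):
    # 89% unknown amount, 10% + 1% chance of $1 million
    return Fraction(89, 100) * u + Fraction(11, 100) * 1_000_000

def ev_b(u):
    # 89% same unknown amount, 10% chance of $2.5 million, 1% nothing
    return Fraction(89, 100) * u + Fraction(10, 100) * 2_500_000

# The 0.89*u terms cancel: B beats A by a constant $140,000 in expectation,
# whatever the unknown amount is.
for u in (0, 1_000_000, 123_456):
    assert ev_b(u) - ev_a(u) == 140_000
```

On expected value alone, then, B wins by the same margin whether the unknown amount is nothing or $1 million; the paradox is that actual preferences flip anyway.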
==> decision/division.p <==
N-Person Fair Division
If two people want to divide a pie but do not trust each other, they can
still ensure that each gets a fair share by using the technique that one
person cuts and the other person chooses. Generalize this technique
to more than two people. Take care to ensure that no one can be cheated
by a coalition of the others.
==> decision/division.s <==
N-Person Fair Division
Number the people from 1 to N. Person 1 cuts off a piece of the pie.
Person 2 can either diminish the size of the cut off piece or pass.
The same for persons 3 through N. The last person to touch the piece
must take it and is removed from the process. Repeat this procedure
with the remaining N - 1 people, until everyone has a piece.
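The procedure above (often called the "last diminisher" method) can be sketched in code. The linear valuation densities and the bisection search below are illustrative assumptions of this sketch; any private measures on the pie would do.

```python
def make_valuation(c):
    # A player's private measure: density 1 + c*(x - 0.5) on [0, 1], |c| < 2,
    # so the whole pie is worth 1 to every player.
    def value(a, b):
        F = lambda x: x + c * (x * x / 2.0 - x / 2.0)  # integral of the density
        return F(b) - F(a)
    return value

def cut_point(value, a, target):
    # Bisect for x >= a with value(a, x) == target.
    lo, hi = a, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if value(a, mid) < target:
            lo = mid
        else:
            hi = mid
    return hi

def last_diminisher(values):
    n = len(values)
    players = list(range(n))
    a, shares = 0.0, {}
    while len(players) > 1:
        m = len(players)
        holder = players[0]                        # first remaining player cuts
        x = cut_point(values[holder], a, values[holder](a, 1.0) / m)
        for p in players[1:]:                      # others may diminish or pass
            fair = values[p](a, 1.0) / m
            if values[p](a, x) > fair:             # too big in p's eyes: trim it
                x = cut_point(values[p], a, fair)
                holder = p
        shares[holder] = (a, x)                    # last to touch it must take it
        players.remove(holder)
        a = x
    shares[players[0]] = (a, 1.0)
    return shares

# demo: four players with different (hypothetical) valuations
values = [make_valuation(c) for c in (-1.5, 0.0, 0.7, 1.9)]
shares = last_diminisher(values)
```

Each player ends up with a piece worth at least 1/N by their own measure, and no coalition can cheat a player out of that guarantee, since every player gets the chance to diminish any piece they consider too large.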
References:
Luce and Raiffa, "Games and Decisions", Wiley, 1957, p. 366

Kenneth Rebman, "How To Get (At Least) A Fair Share of the Cake", in
Mathematical Plums, Ross Honsberger, ed., Dolciani Mathematical
Expositions Number 4, published by the MAA.

There is a cute result in combinatorics called the Marriage Theorem.
A village has n men and n women, such that for all 0 < k <= n and for
any set of k men there are at least k women, each of whom is in love
with at least one of the k men.  All of the men are in love with all
of the women :-}.  The theorem asserts that there is a way to arrange
the village into n monogamous couplings.

The Marriage Theorem can be applied to the Fair Pie-Cutting Problem.

One player cuts the pie into n pieces.  Each of the players labels
some non-null subset of the pieces as acceptable to him.  For reasons
given below he should "accept" each piece of size > 1/n, not just the
best piece(s).  The pie-cutter is required to "accept" all of the
pieces.

Given a set S of players, let S' denote the set of pie-pieces
acceptable to at least one player in S.  Let t be the size of the
largest set (T) of players satisfying |T| > |T'|.  If there is no such
set, the Marriage Theorem can be applied directly.  Since the
pie-cutter accepts every piece, we know that t < n.

Choose |T| - |T'| pieces at random from outside T', glue them together
with the pieces in T', and let the players in T repeat the game with
this smaller (t/n)-size pie.  This is fair since they all rejected the
other n-t pieces, so they believe this pie is larger than t/n.

The remaining n-t players can each be assigned one of the remaining
n-t pie-pieces without further ado due to the Marriage Theorem.
(Otherwise the set T above was not maximal.)

The problem of getting not just a fair solution, but an envy-free
solution, is not solved.  A reference to this problem: David Gale,
"Dividing a Cake," in Mathematical Entertainments, Mathematical
Intelligencer, Vol. 15, No. 1, Winter 1993, p. 50, contains references
to work by Steven Brams and Alan Taylor.

==> decision/dowry.p <==
Sultan's Dowry

A sultan has granted a commoner a chance to marry one of his hundred
daughters.  The commoner will be presented the daughters one at a
time.  When a daughter is presented, the commoner will be told the
daughter's dowry.  The commoner has only one chance to accept or
reject each daughter; he cannot return to a previously rejected
daughter.  The sultan's catch is that the commoner may only marry the
daughter with the highest dowry.  What is the commoner's best strategy
assuming he knows nothing about the distribution of dowries?

==> decision/dowry.s <==
Solution

Since the commoner knows nothing about the distribution of the
dowries, the best strategy is to wait until a certain number of
daughters have been presented, then pick the highest dowry thereafter.
The exact number to skip is determined by the condition that the odds
that the highest dowry has already been seen are just greater than the
odds that it remains to be seen AND THAT IF IT IS SEEN IT WILL BE
PICKED.  This amounts to finding the smallest x such that:

    x/n > x/n * (1/(x+1) + ... + 1/(n-1)).

Working out the math for n=100 and calculating the probability gives:
the commoner should wait until he has seen 37 of the daughters, then
pick the first daughter with a dowry that is bigger than any preceding
dowry.  With this strategy, his odds of choosing the daughter with the
highest dowry are surprisingly high: about 37%.
(cf. F. Mosteller, "Fifty Challenging Problems in Probability with
Solutions", Addison-Wesley, 1965, #47; "Mathematical Plums", edited by
Ross Honsberger, pp. 104-110)

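The skip-37 rule is easy to check by simulation.  The independent uniform dowries below are an illustrative assumption; the rule only ever looks at the relative order of the dowries, so any continuous distribution gives the same answer.

```python
import random

def skip_rule_wins(n=100, skip=37, trials=20_000, seed=1):
    # Fraction of trials in which "skip the first 37, then take the first
    # dowry beating all of them" marries the daughter with the highest dowry.
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        dowries = [rng.random() for _ in range(n)]
        threshold = max(dowries[:skip])
        pick = next((d for d in dowries[skip:] if d > threshold), None)
        wins += pick == max(dowries)       # None if the best was among the 37
    return wins / trials
```

The simulated success rate comes out near the theoretical ~37% (the limit for large n is 1/e).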
Here's a twist on the sultan's dowry problem I hope hasn't been posted
yet.  I became interested in an iterated version of this problem,
which goes as follows:

There's a long line of suitors outside the sultan's palace, and one by
one they come in.  If a suitor gets the right girl, he marries her.
If he doesn't, he gets his head lopped off, and the next suitor comes
in.

Anyway, the first few dozen guys all know their probability theory, so
they know that the best strategy is to skip the first 37 girls, and
then pick the first girl who is the best up to that point.
Unfortunately (for the suitor, at least), each one assumes that he's
the only one who knows that strategy.  So one by one, these few dozen
guys get their heads lopped off.

After the 49th head has just rolled down the hill, and the sultan's
vizier has just cried out, "Next!", the next guy in line says, "This
isn't working out.  We might all be doing the same thing.  It doesn't
hurt any of us to tell the rest what strategy we'll be using, so that
none of us sets out to pick the same girl over and over again.  I
might as well just tell you.  Well, I'm going to get her!  I know this
great strategy where you skip the first 37 blah blah blah."
Naturally, a few moments later, head number 50 comes rolling down the
hill.

"Next!" cries the vizier, who is getting a little hoarse, and wishes
he didn't have this job.

Now suitor number 51 is in a quandary.  He's all set to skip 37,
except now he knows that's not the right strategy.  But he doesn't
know if the last guy skipped the right girl because she was in the
first 37, or if he didn't meet her yet because he stopped too early.

QUESTION 1: What is his best strategy?

ANSWER 1: His best strategy is: "Skip the first 14.  Take the first
girl in [15,37] who is better than the first 14.  If there isn't one,
take the SECOND girl in [38,100] who is the best up to that point."

Unfortunately, head number 51 rolls down the hill.  "Next!" cries the
vizier.

QUESTION 2: What is suitor number 52's best strategy?

ANSWER 2: His best strategy is: "Skip the first 5.  Take the first
girl in [6,14] who is better than the first 5.  If there isn't one,
take the SECOND girl in [15,37] who is better than the first 14.  If
there isn't one, take the THIRD girl in [38,100] who is the best up to
that point."

MORE QUESTIONS: If each suitor uses the best strategy at that point,
how many suitors will it take before the right girl is certain to be
found?  Does each succeeding suitor always have a better chance of
winning than the preceding one?

SPECULATION: The last strategy is "Pick the last girl."  Its
probability of success is 1.  And it is strategy number 100.  (The
corresponding conditions hold for 3, 4, and 5 daughters.)

Does anyone have any observations on this one?

byron elbows (mail to brian@cs.ucla.edu)

==> decision/envelope.p <==
Someone has prepared two envelopes containing money.  One contains
twice as much money as the other.  You have decided to pick one
envelope, but then the following argument occurs to you: Suppose my
chosen envelope contains $X; then the other envelope either contains
$X/2 or $2X.  Both cases are equally likely, so my expectation if I
take the other envelope is .5 * $X/2 + .5 * $2X = $1.25X, which is
higher than my current $X, so I should change my mind and take the
other envelope.  But then I can apply the argument all over again.
Something is wrong here!  Where did I go wrong?

In a variant of this problem, you are allowed to peek into the
envelope you chose before finally settling on it.  Suppose that when
you peek you see $100.  Should you switch now?

==> decision/envelope.s <==
Let's follow the argument carefully, substituting real numbers for
variables, to see where we went wrong.  In the following, we will
assume the envelopes contain $100 and $200.  We will consider the two
equally likely cases separately, then average the results.

First, take the case that X=$100.  "I have $100 in my hand.  If I
exchange I get $200.  The value of the exchange is $200.  The value
from not exchanging is $100.  Therefore, I gain $100 by exchanging."

Second, take the case that X=$200.  "I have $200 in my hand.  If I
exchange I get $100.  The value of the exchange is $100.  The value
from not exchanging is $200.  Therefore, I lose $100 by exchanging."

Now, averaging the two cases, I see that the expected gain is zero.

So where is the slip up?  In one case, switching gets 2X ($200); in
the other case, switching gets X/2 ($100); but X is different in the
two cases, and I can't simply average the two different X's to get
1.25X.  I can average the two numbers ($100 and $200) to get $150, the
expected value of switching, which is also the expected value of not
switching, but I cannot under any circumstances average X/2 and 2X.

This is a classic case of confusing variables with constants.

OK, so let's consider the case in which I looked into the envelope and
found that it contained $100.  This pins down what X is: a constant.
Now the argument is that the odds of $50 is .5 and the odds of $200 is
.5, so the expected value of switching is $125, so we should switch.
However, the only way the odds of $50 could be .5 and the odds of $200
could be .5 is if all integer values are equally likely.  But any
probability distribution that is finite and equal for all integers
would sum to infinity, not one as it must to be a probability
distribution.  Thus, the assumption of equal likelihood for all
integer values is self-contradictory, and leads to the invalid proof
that you should always switch.  This is reminiscent of the plethora of
proofs that 0=1; they always involve some illegitimate assumption,
such as the validity of division by zero.

Limiting the maximum value in the envelopes removes the
self-contradiction and the argument for switching.  Suppose all
amounts up to $1 trillion were equally likely to be found in the first
envelope, and all amounts beyond that would never appear.  Then for
small amounts one should indeed switch, but not for amounts above $500
billion.  The strategy of always switching would pay off for most
reasonable amounts but would lead to disastrous losses for large
amounts, and the two would balance each other out.

For those who would prefer to see this worked out in detail: assume
the smaller envelope is uniform on [$0,$M], for some value of $M.
What is the expectation value of always switching?  A quarter of the
time $100 >= $M (i.e. 50% chance $X is in [$M/2,$M] and 50% chance the
larger envelope is chosen).  In this case the expected switching gain
is -$50 (a loss).  Thus overall the always-switch policy has an
expected (relative to $100) gain of (3/4)*$50 + (1/4)*(-$50) = $25.
However the expected absolute gain (in terms of M) is:

     / M
     |    g f(g) dg  =  0,  where f(g) = (1/2)*Uniform[0,M)(g)
     /-M                               + (1/2)*Uniform(-M,0](g).

QED.

OK, so always switching is not the optimal switching strategy.  Surely
there must be some strategy that takes advantage of the fact that we
looked into the envelope and we know something we did not know before
we looked.  Well, if we know the maximum value $M that can be in the
smaller envelope, then the optimal decision criterion is to switch if
$100 < $M, otherwise stick.  The reason for the stick case is
straightforward.  The reason for the switch case is due to the pdf of
the smaller envelope being twice as high as that of the larger
envelope over the range [0,$M].  That is, the expected gain in
switching is (2/3)*$100 + (1/3)*(-$50) = $50.

What if we do not know the maximum value of the pdf?  You can exploit
the "test value" technique to improve your chances.  The trick here is
to pick a test value T.  If the amount in the envelope is less than
the test value, switch; if it is more, do not.  This works in that if
T happens to be in the range [M,2M] you will make the correct
decision.  Therefore, you are slightly better off with this technique.

Of course, the pdf may not even be uniform, so the "test value"
technique may not offer much of an advantage.  If you are allowed to
play the game repeatedly, you can estimate the pdf, but that is
another story.

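The switching policies for the envelope game can be compared numerically.  This sketch assumes the smaller amount is uniform on [$0, $M] with M = $100 and uses a test value T = $150 (i.e., T in [M, 2M]); both are illustrative choices.

```python
import random

def play(strategy, trials=200_000, M=100.0, seed=1):
    # Average amount you walk away with under a given peek-and-decide policy.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        small = rng.uniform(0.0, M)          # smaller amount: uniform on [0, M]
        envelopes = (small, 2.0 * small)
        pick = rng.randrange(2)
        if strategy(envelopes[pick]):        # peek, then maybe switch
            pick = 1 - pick
        total += envelopes[pick]
    return total / trials

keep = lambda seen: False
switch_always = lambda seen: True
switch_below_T = lambda seen, T=150.0: seen < T   # test value T in [M, 2M]
```

Keeping and always switching both average $75 (always switching has zero expected gain overall), while the test-value rule does measurably better, in line with the analysis above.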
==> decision/exchange.p <==
At one time, the Canadian and US dollars were discounted by 10 cents
on each side of the border (i.e., a Canadian dollar was worth 90 US
cents in the US, and a US dollar was worth 90 Canadian cents in
Canada).  A man walks into a bar on the US side of the border, orders
10 US cents worth of beer, pays with a US dollar, and receives a
Canadian dollar in change.  He then walks across the border to Canada,
orders 10 Canadian cents worth of beer, pays with a Canadian dollar,
and receives a US dollar in change.  He continues this throughout the
day, and ends up dead drunk with the original dollar in his pocket.

Who pays for the drinks?

==> decision/exchange.s <==
The man paid for all the drinks.  But, you say, he ended up with the
same amount of money that he started with!  However, as he transported
Canadian dollars into Canada and US dollars into the US, he performed
"economic work" by moving the currency to a location where it was in
greater demand (and thus valued higher).  The earnings from this work
were spent on the drinks.

Note that he can only continue to do this until the Canadian bar runs
out of US dollars, or the US bar runs out of Canadian dollars, i.e.,
until he runs out of "work" to do.

==> decision/high.or.low.p <==
I pick two numbers, randomly, and tell you one of them.  You are
supposed to guess whether this is the lower or higher one of the two
numbers I picked.  Can you come up with a method of guessing that does
better than picking the response "low" or "high" randomly (i.e.,
probability to guess right > .5)?

==> decision/high.or.low.s <==
Pick any cumulative probability function P(x) such that a > b ==>
P(a) > P(b).  Now if the number shown is y, guess "high" with
probability P(y) and "low" with probability 1-P(y).  If the two picked
numbers are a > b, this strategy yields a probability of winning of
1/2*( P(a) + (1-P(b)) ) = 1/2 + (P(a)-P(b))/2, which is > 1/2 by
assumption.

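The CDF-based guessing rule can be simulated.  The logistic function as P(x) and standard-normal draws for the two hidden numbers are illustrative assumptions of this sketch; any strictly increasing P works against any distribution.

```python
import math
import random

def p_high(y):
    # Any strictly increasing map onto (0, 1) works; the logistic is one choice.
    return 1.0 / (1.0 + math.exp(-y))

def play(trials=200_000, seed=1):
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        a, b = rng.gauss(0, 1), rng.gauss(0, 1)    # the two hidden numbers
        shown, hidden = (a, b) if rng.random() < 0.5 else (b, a)
        says_high = rng.random() < p_high(shown)
        correct += says_high == (shown > hidden)
    return correct / trials
```

The simulated win rate lands comfortably above 1/2, as the analysis predicts.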
==> decision/monty.hall.p <==
You are a participant on "Let's Make a Deal."  Monty Hall shows you
three closed doors.  He tells you that two of the closed doors have a
goat behind them and that one of the doors has a new car behind it.
You pick one door, but before you open it, Monty opens one of the two
remaining doors and shows that it hides a goat.  He then offers you a
chance to switch doors with the remaining closed door.  Is it to your
advantage to do so?

==> decision/monty.hall.s <==
Under reasonable assumptions about Monty Hall's motivation, your
chance of picking the car doubles when you switch.

The problem is confusing for two reasons: first, there are hidden
assumptions about Monty's motivation that cloud the issue, and second,
novice probability students do not see that the opening of the door
gave them any new information.

Monty can have one of three basic motives:
1.  He randomly opens doors.
2.  He always opens the door he knows contains nothing.
3.  He only opens a door when the contestant has picked the grand
    prize.

These result in very different strategies:
1.  No improvement when switching.
2.  Double your odds by switching.
3.  Don't switch!

Most people, myself included, think that (2) is the intended
interpretation of Monty's motive.

A good way to see that Monty is giving you information by opening
doors that he knows are valueless is to increase the number of doors
from three to 100.  If there are 100 doors, and Monty shows that 98 of
them are valueless, isn't it pretty clear that the chance the prize is
behind the remaining door is 99/100?

The original Monty Hall problem (and solution) appears to be due to
Steve Selvin, and appears in American Statistician, Feb 1975, V. 29,
No. 1, p. 67 under the title ``A Problem in Probability.''  It should
be of no surprise to readers of this group that he received several
letters contesting the accuracy of his solution, so he responded two
issues later (American Statistician, Aug 1975, V. 29, No. 3, p. 134).
At any rate, the principles that underlie the problem date back at
least to the fifties, and probably are timeless.

Interviews with Monty Hall indicate that he did indeed try to lure the
contestant who had picked the car with cash incentives to switch.
However, if Monty always adopted this strategy, contestants would soon
learn never to switch, so one presumes that occasionally Monty offered
another door even when the contestant had picked a goat.  Analyzing
the problem with this strategy is difficult, since it requires knowing
something about Monty's probability of bluffing.

References (too numerous to mention, but these contain
bibliographies):
    Leonard Gillman, "The Car and the Goats", AMM 99:1 (Jan 1992), p. 3
    Ed Barbeau, "The Problem of the Car and Goats", CMJ 24:2
    (Mar 1993), p. 149

The second reference contains a list of equivalent or related
problems.

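Motive (2), where Monty always opens a door he knows hides a goat, is easy to simulate; this sketch confirms the doubled odds.

```python
import random

def monty_trial(rng, switch):
    doors = [0, 0, 1]                      # one car (1), two goats (0)
    rng.shuffle(doors)
    pick = rng.randrange(3)
    # Motive (2): Monty opens a goat door other than the player's pick.
    # (When the player holds the car, which goat door he opens doesn't
    # affect the win rate, so we just take the first.)
    opened = next(d for d in range(3) if d != pick and doors[d] == 0)
    if switch:
        pick = next(d for d in range(3) if d not in (pick, opened))
    return doors[pick] == 1

def win_rate(switch, trials=100_000, seed=1):
    rng = random.Random(seed)
    return sum(monty_trial(rng, switch) for _ in range(trials)) / trials
```

Switching wins about 2/3 of the time and staying about 1/3, as claimed.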
==> decision/newcomb.p <==
Newcomb's Problem

A being put one thousand dollars in box A and either zero or one
million dollars in box B and presents you with two choices:
    (1) Open box B only.
    (2) Open both box A and B.
The being put money in box B only if it predicted you will choose
option (1).  The being put nothing in box B if it predicted you will
do anything other than choose option (1) (including choosing option
(2), flipping a coin, etc.).

Assuming that you have never known the being to be wrong in predicting
your actions, which option should you choose to maximize the amount of
money you get?

==> decision/newcomb.s <==
This is "Newcomb's Paradox".

You are presented with two boxes: one certainly contains $1000 and the
other might contain $1 million.  You can either take one box or both.
You cannot change what is in the boxes.  Therefore, to maximize your
gain you should take both boxes.

However, it might be argued that you can change the probability that
the $1 million is in the box.  Since there is no way to change whether
the million is in the box or not, what does it mean that you can
change the probability that the million is in the box?  It means that
your choice is correlated with the state of the box.

Events which proceed from a common cause are correlated.  My mental
states lead to my choice and, very probably, to the state of the box.
Therefore my choice and the state of the box are highly correlated.
In this sense, my choice changes the "probability" that the money is
in the box.  However, since your choice cannot change the state of the
box, this correlation is irrelevant.

The following argument might be made: your expected gain if you take
both boxes is (nearly) $1000, whereas your expected gain if you take
one box is (nearly) $1 million; therefore you should take one box.
However, this argument is fallacious.  In order to compute the
expected gain, one would use the formulas:

    E(take one)  = $0 * P(predict take both | take one) +
                   $1,000,000 * P(predict take one | take one)
    E(take both) = $1,000 * P(predict take both | take both) +
                   $1,001,000 * P(predict take one | take both)

While you are given that P(do X | predict X) is high, it is not given
that P(predict X | do X) is high.  Indeed, specifying that
P(predict X | do X) is high would be equivalent to specifying that the
being could use magic (or reverse causality) to fill the boxes.
Therefore, the expected gain from either action cannot be determined
from the information given.

==> decision/prisoners.p <==
Three prisoners on death row are told that one of them has been chosen
at random for execution the next day, but the other two are to be
freed.  One privately begs the warden to at least tell him the name of
one other prisoner who will be freed.  The warden relents: 'Susie will
go free.'  Horrified, the first prisoner says that because he is now
one of only two remaining prisoners at risk, his chances of execution
have risen from one-third to one-half!  Should the warden have kept
his mouth shut?

==> decision/prisoners.s <==
Each prisoner had an equal chance of being the one chosen to be
executed.  So we have three cases:

    Prisoner executed:         A    B    C
    Probability of this case: 1/3  1/3  1/3

Now, if A is to be executed, the warden will randomly choose either B
or C, and tell A that name.  When B or C is the one to be executed,
there is only one prisoner other than A who will not be executed, and
the warden will always give that name.  So now we have:

    Prisoner executed:  A    A    B    C
    Name given to A:    B    C    C    B
    Probability:       1/6  1/6  1/3  1/3

We can calculate all this without knowing the warden's answer.  When
he tells us B will not be executed, we eliminate the middle two
choices above.  Now, among the two remaining cases, C is twice as
likely as A to be the one executed.  Thus, the probability that A will
be executed is still 1/3, and C's chances are 2/3.

==> decision/red.p <==
I show you a shuffled deck of standard playing cards, one card at a
time.  At any point before I run out of cards, you must say "RED!".
If the next card I show is red (i.e. diamonds or hearts), you win.  We
assume I the "dealer" don't have any control over what the order of
cards is.

The question is, what's the best strategy, and what is your
probability of winning?

==> decision/red.s <==
If a deck has n cards, r red and b black, the best strategy wins with
a probability of r/n.  Thus, you can say "red" on the first card, the
last card, or any other card you wish.

Proof by induction on n.  The statement is clearly true for one-card
decks.  Suppose it is true for n-card decks, and add a red card.  I
will even allow a nondeterministic strategy, meaning you say "red" on
the first card with probability p.  With probability 1-p, you watch
the first card go by, and then apply the "optimal" strategy to the
remaining n-card deck, since you now know its composition.  The odds
of winning are therefore:

    p * (r+1)/(n+1) + (1-p) * ((r+1)/(n+1) * r/n + b/(n+1) * (r+1)/n).

After some algebra, this becomes (r+1)/(n+1) as expected.  Adding a
black card instead yields:

    p * r/(n+1) + (1-p) * (r/(n+1) * (r-1)/n + (b+1)/(n+1) * r/n).

This becomes r/(n+1) as expected.

==> decision/rotating.table.p <==
Four glasses are placed upside down in the four corners of a square
rotating table.  You wish to turn them all in the same direction,
either all up or all down.  You may do so by grasping any two glasses
and, optionally, turning either over.  There are two catches: you are
blindfolded and the table is spun after each time you touch the
glasses.  Assuming that a bell rings when you have all the glasses up,
how do you do it?

==> decision/rotating.table.s <==
1.  Turn two adjacent glasses up.
2.  Turn two diagonal glasses up.
3.  Pull out two diagonal glasses.  If one is down, turn it up and
    you're done.  If not, turn one down and replace.
4.  Take two adjacent glasses.  Invert them both.
5.  Take two diagonal glasses.  Invert them both.

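The five steps can be verified exhaustively against an adversarial spin.  One assumption of this sketch: success is taken to be all four glasses in the same orientation (all up or all down), matching the stated goal of turning them all the same direction.

```python
from itertools import product

ADJ, DIAG = (0, 1), (0, 2)

def uniform(s):
    # All glasses the same way round: the goal state.
    return all(s) or not any(s)

def spins(s):
    # The spun table: the blindfolded player could be facing any rotation.
    return {tuple(s[(i + r) % 4] for i in range(4)) for r in range(4)}

def act(s, pair, action):
    s = list(s)
    a, b = pair
    if action == "up":               # turn both grasped glasses up
        s[a] = s[b] = True
    elif action == "invert":         # invert both grasped glasses
        s[a], s[b] = not s[a], not s[b]
    elif action == "fix":            # step 3: up any down glass, else turn one down
        if s[a] and s[b]:
            s[a] = False
        else:
            s[a] = s[b] = True
    return tuple(s)

steps = [(ADJ, "up"), (DIAG, "up"), (DIAG, "fix"), (ADJ, "invert"), (DIAG, "invert")]

# Adversarial check: track every state the table could still be in.
possible = {s for s in product((False, True), repeat=4) if not uniform(s)}
for pair, action in steps:
    possible = {act(t, pair, action) for s in possible for t in spins(s)}
    possible = {s for s in possible if not uniform(s)}   # goal reached: done

assert not possible    # every branch has reached a uniform orientation
```

Because every rotation of every surviving state is considered at each step, the empty final set shows the procedure succeeds no matter how the table is spun.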
References:
    "Probing the Rotating Table", W. T. Laaser and L. Ramshaw, in
    _The Mathematical Gardner_, Wadsworth International, Belmont CA,
    1981.

The paper considers the generalized problem with n glasses and k
hands; we will see that such a procedure exists if and only if the
parameters k and n satisfy the inequality k >= (1-1/p)n, where p is
the largest prime factor of n.  The paper mentions (without
discussing) two other generalizations: more than two orientations of
the glasses (Graham and Diaconis) and more symmetries in the table,
e.g. those of a cube (Kim).

==> decision/stpetersburg.p <==
What should you be willing to pay to play a game in which the payoff
is calculated as follows: a coin is flipped until it comes up heads on
the nth toss, and the payoff is set at 2^n dollars?

==> decision/stpetersburg.s <==
Classical decision theory says that you should be willing to pay any
amount up to the expected value of the wager.  Let's calculate the
expected value: the probability of winning at step n is 2^-n, and the
payoff at step n is 2^n, so the sum of the products of the
probabilities and the payoffs is:

    E = sum over n (2^-n * 2^n) = sum over n (1) = infinity

So you should be willing to pay any amount to play this game.  This is
called the "St. Petersburg Paradox."

The classical solution to this problem was given by Bernoulli.  He
noted that people's desire for money is not linear in the amount of
money involved.  In other words, people do not desire $2 million twice
as much as they desire $1 million.  Suppose, for example, that
people's desire for money is a logarithmic function of the amount of
money.  Then the expected VALUE of the game is:

    E = sum over n (2^-n * C * log(2^n)) = sum over n (2^-n * C' * n) = C''

Here the C's are constants that depend upon the risk aversion of the
player.  However, it turns out that these constants are usually much
higher than people are really willing to pay to play, and in fact it
can be shown that any non-bounded utility function (map from amount of
money to value of money) is prey to a generalization of the St.
Petersburg paradox.  So the classical solution of Bernoulli is only
part of the story.

The rest of the story lies in the observation that bankrolls are
always finite, and this dramatically reduces the amount you are
willing to bet in the St. Petersburg game.

To figure out what would be a fair value to charge for playing the
game we must know the bank's resources.  Assume that the bank has 1
million dollars (1*K*K = 2^20).  I cannot possibly win more than $1
million whether I toss 20 tails in a row or 2000.

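With the bank capped at 2^20 dollars the sum becomes finite and can be computed exactly.  One detail worth noting: runs longer than 20 tosses still pay the capped 2^20, and that tail contributes exactly one more dollar on top of the $20 from the first 20 terms.

```python
from fractions import Fraction

def capped_expectation(cap=2 ** 20, terms=500):
    # E = sum over n of P(first head on toss n) * min(2^n, cap),
    # computed in exact rational arithmetic.
    return sum(Fraction(min(2 ** n, cap), 2 ** n) for n in range(1, terms + 1))
```

The first 20 terms alone give exactly $20, matching the figure used in the text; including the capped tail gives $21.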
Therefore my expected amount of winning is

    E = sum n up to 20 (2^-n * 2^n) = sum n up to 20 (1) = $20

and my expected value of winning is

    E = sum n up to 20 (2^-n * C * log(2^n)) = some small number

This is much more in keeping with what people would really pay to play
the game.

Incidentally, T. C. Fry suggested this change to the problem in 1928
(see W. W. R. Ball, Mathematical Recreations and Essays, N.Y.:
Macmillan, 1960, pp. 44-45).

The problem remains interesting when modified in this way.  For a
particular value of the bank's resources, let e denote the expected
value of the player's winnings, and let p denote the probability that
the player profits from the game, assuming the price of getting into
the game is 0.8e (a 20% discount).  Note that the expected value of
the player's profit is 0.2e.  Now let's vary the bank's resources and
observe how e and p change.  It will be seen that as e (and hence the
expected value of the profit) increases, p diminishes.  The more the
game is to the player's advantage in terms of expected value of
profit, the less likely it is that the player will come away with any
profit at all.  This is mildly counterintuitive.

==> decision/truel.p <==
A, B, and C are to fight a three-cornered pistol duel.  All know that
A's chance of hitting his target is 0.3, C's is 0.5, and B never
misses.  They are to fire at their choice of target in succession in
the order A, B, C, cyclically (but a hit man loses further turns and
is no longer shot at) until only one man is left.  What should A's
strategy be?

==> decision/truel.s <==
This is problem 20 in Mosteller _Fifty Challenging Problems in
Probability_ and it also appears (with an almost identical solution)
on page 82 in Larsen & Marx _An Introduction to Probability and Its
Applications_.

Here's Mosteller's solution:

A is naturally not feeling cheery about this enterprise.  Having the
first shot he sees that, if he hits C, B will then surely hit him, and
so he is not going to shoot at C.  If he shoots at B and misses him,
then B clearly {I disagree; this is not at all clear!} shoots the more
dangerous C first, and A gets one shot at B with probability 0.3 of
succeeding.  If he misses this time, B kills him.  On the other hand,
suppose A hits B.  Then C and A shoot alternately until one hits.  A's
chance of winning is

    (.5)(.3) + (.5)^2(.7)(.3) + (.5)^3(.7)^2(.3) + ...

Each term corresponds to a sequence of misses by both C and A ending
with a final hit by A.  Summing the geometric series we get 3/13, and
3/13 < 3/10.  Thus hitting B and finishing off with C has less
probability of winning for A than just missing the first shot.  So A
fires his first shot into the ground and then tries to hit B with his
next shot.  C is out of luck.

As much as I respect Mosteller, I have some serious problems with this solution. If we allow the option of firing into the ground, then if all fire into the ground with every shot, each will survive with probability 1. Thus, assuming that they are rational and value survival above killing their enemies, I would conclude that the ideal strategy for all three players would be to keep firing into the ground. However, the argument could be made that a certain strategy for X that both allows them to survive with probability 1 *and* gives a probability of survival of less than 1 for at least one of their foes would be preferred by X. If they don't value survival above killing their enemies (which is the only a priori assumption that I feel can be safely made in the absence of more information), then the problem can't be solved unless the function each player is trying to maximize is explicitly given.

-- clong@remus.rutgers.edu (Chris Long)

OK - I'll have a go at this. How about the payoff function being 1 if you win the "duel" (i.e. if at some point you are still standing and both the others have been shot) and 0 otherwise? This should ensure that an infinite sequence of deliberate misses is not to anyone's advantage. Furthermore, I don't think simple survival makes a realistic payoff function, since people with such a payoff function would not get involved in the fight in the first place! [I.e. I am presupposing a form of irrationality on the part of the fighters: they're only interested in survival if they win the duel. Come to think of it, this may be quite rational - spending the rest of my life firing a gun into the ground would be a very unattractive proposition to me :-)]

Now, denote each position in the game by the list of people left standing, in the order in which they get their turns (so the initial position is (A,B,C), and the position after A misses the first shot is (B,C,A)). We need to know the value of each possible position for each person. By definition:

  valA(A) = 1    valB(A) = 0    valC(A) = 0
  valA(B) = 0    valB(B) = 1    valC(B) = 0
  valA(C) = 0    valB(C) = 0    valC(C) = 1

Consider the two player position (X,Y). An infinite sequence of misses has value zero to both players, and each player can ensure a positive expected payoff by trying to shoot the other player, so both players deliberately missing is a sub-optimal result for both players. The question is then whether both players should try to shoot the other first, or whether one should let the other take the first shot. Given that some real shots are going to be fired, suppose X pulls the trigger and actually hits someone: what would the remaining person, say Y, do? If P(X hits)=1, then P(Y survives)<1 (since X could have hit Y), and firing at Y with intent to hit dominates any other strategy for X. If P(X hits)<1 and X fires at Y with intent to hit, Y must insure that X can not follow this strategy by shooting back at X (thus insuring that P(X survives)<1). Since having the first shot is always an advantage, both players should try to shoot the other first. It is then easy to establish that:

  valA(A,B) = 3/10    valB(A,B) = 7/10
  valA(B,A) = 0       valB(B,A) = 1
  valB(B,C) = 1       valC(B,C) = 0
  valB(C,B) = 5/10    valC(C,B) = 5/10
  valA(C,A) = 3/13    valC(C,A) = 10/13
  valA(A,C) = 6/13    valC(A,C) = 7/13

Now for the three player positions (A,B,C), (B,C,A) and (C,A,B). Again, the fact that an infinite sequence of misses is sub-optimal for all three players means that at least one player is going to decide to fire. However, it is less clear than in the 2 player case that any particular player is going to fire. In the 2 player case, each player knew that *if* it was sub-optimal for him to fire, then it was optimal for the other player to fire *at him*, and that he would be at a disadvantage in the ensuing duel because of not having got the first shot. This is not necessarily true in the 3 player case.

Consider the payoff to A in the position (A,B,C). We can immediately eliminate shooting at C as a strategy - it is strictly dominated by shooting at B. If he shoots at B, his expected payoff is:

  0.3*valA(C,A) + 0.7*valA(B,C,A) = 9/130 + 0.7*valA(B,C,A)

If he shoots at C, his expected payoff is:

  0.3*valA(B,A) + 0.7*valA(B,C,A) = 0.7*valA(B,C,A)

And if he deliberately misses, his expected payoff is valA(B,C,A). Since he tries to maximise his payoff:

  valA(A,B,C) = MAX(9/130 + 0.7*valA(B,C,A), valA(B,C,A))

A similar argument shows that B's expected payoffs in the (B,C,A) position are:

  For shooting at C: valB(A,B) = 7/10
  For shooting at A: valB(C,B) = 5/10
  For missing:       valB(C,A,B)

So B either shoots at C or deliberately misses, and:

  valB(B,C,A) = MAX(7/10, valB(C,A,B))

And C's expected payoffs in the (C,A,B) position are:

  For shooting at B: 0.5*valC(A,C) + 0.5*valC(A,B,C) = 35/130 + 0.5*valC(A,B,C)
  For shooting at A: 0.5*valC(B,C) + 0.5*valC(A,B,C) = 0.5*valC(A,B,C)
  For missing:       valC(A,B,C)

So C either shoots at B or deliberately misses, and:

  valC(C,A,B) = MAX(35/130 + 0.5*valC(A,B,C), valC(A,B,C))

Each player can obtain a positive expected payoff by shooting at one of the other players, and it is known that an infinite sequence of misses will result in a zero payoff for all players. So it is known that some player's strategy must involve shooting at another player rather than deliberately missing.

Now look at this from the point of view of player B. He knows that *if* it is sub-optimal for him to shoot at another player, then it is optimal for at least one of the other players to shoot, and that if the other players choose to shoot, they will shoot *at him*. If he deliberately misses, therefore, the best that he can hope for is that they miss him and he is presented with the same situation again. This is clearly less good for him than getting his shot in first. Therefore, he must shoot at another player rather than deliberately miss. So in position (B,C,A), B shoots at C for an expected payoff of 7/10. This gives us:

  valA(B,C,A) = 3/10
  valB(B,C,A) = 7/10
  valC(B,C,A) = 0

Now valA(A,B,C) = MAX(3/10, 9/130 + 21/100) = 3/10, since 9/130 + 21/100 = 363/1300 < 3/10, and A's best strategy in position (A,B,C) is to deliberately miss, giving us:

  valA(A,B,C) = 3/10
  valB(A,B,C) = 7/10
  valC(A,B,C) = 0

And finally valC(C,A,B) = MAX(35/130 + 0, 0) = 7/26, and C's best strategy in position (C,A,B) is to shoot at B, giving us:

  valA(C,A,B) = 99/260
  valB(C,A,B) = 91/260
  valC(C,A,B) = 7/26

So all positions with 3 players can be resolved. I suspect that, with this payoff function, we can establish that if a player's correct strategy is to fire at another player, then it is to fire at whichever of the other players is more dangerous. The most dangerous of the three players then finds that he has nothing to lose by firing at the second most dangerous.

Questions: (a) In the general case, what are the optimal strategies for the other two players, possibly as functions of the hit probabilities and the cyclic order of the three players? (b) What happens in the 4 or more player case?

-- David Seal <dseal@armltd.co.uk>
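Seal's MAX equations can be solved mechanically. The Python sketch below is an editorial addition (not from the original posts); it assumes the payoff function and move order described above, and iterates the value equations from zero until they stabilise:

```python
# Solve David Seal's value equations for the sequential truel by iterating
# them from zero (editorial sketch, not from the original posts).  Payoff
# is 1 for being the sole survivor, 0 otherwise.
from fractions import Fraction as F

HIT = {'A': F(3, 10), 'B': F(1), 'C': F(1, 2)}
POSITIONS = [('A', 'B', 'C'), ('B', 'C', 'A'), ('C', 'A', 'B')]

def duel(x, y):
    """Two-player position (x,y), x to shoot first; both try to hit.
    P(x wins) = px / (1 - (1-px)(1-py))."""
    px, py = HIT[x], HIT[y]
    wx = px / (1 - (1 - px) * (1 - py))
    return {x: wx, y: 1 - wx}

def iterate(vals):
    new = {}
    for shooter, second, third in POSITIONS:
        miss = vals[(second, third, shooter)]        # deliberate miss
        options = [miss]
        for target in (second, third):
            # After a hit, the next survivor in cyclic order fires first.
            rest = (third, shooter) if target == second else (second, shooter)
            hit = dict.fromkeys('ABC', F(0))         # the target scores 0
            hit.update(duel(*rest))
            p = HIT[shooter]
            options.append({q: p * hit[q] + (1 - p) * miss[q] for q in 'ABC'})
        # The player to move picks the option maximising his own payoff.
        new[(shooter, second, third)] = max(options, key=lambda v: v[shooter])
    return new

vals = {pos: dict.fromkeys('ABC', F(0)) for pos in POSITIONS}
for _ in range(20):
    vals = iterate(vals)

print(vals[('A', 'B', 'C')])   # valA = 3/10, valB = 7/10, valC = 0
print(vals[('C', 'A', 'B')])   # valA = 99/260, valB = 91/260, valC = 7/26
```

The fixed point reproduces Seal's conclusion: A deliberately misses in (A,B,C), B shoots C in (B,C,A), and C shoots B in (C,A,B).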
In article <1993Mar25.022459.10269@cs.cornell.edu>, karr@cs.cornell.edu (David Karr) writes:
> The Good, the Bad, and the Ugly are standing at three equidistant
> points around a very large circle, about to fight a three-way duel to
> see who gets the treasure. They all know that the Good hits with
> probability p=.9, the Bad hits with probability q=.7, and the Ugly
> hits with probability r=.5.
>
> Yes, I know this sounds like decision/truel from the rec.puzzles
> archive. But here's the difference:
>
> At some instant, all three will fire simultaneously, each at a target
> of his choice. Then any who survive that round fire simultaneously
> again, until at most one remains. Note that there are then four
> possible outcomes: the Good wins, the Bad wins, the Ugly wins, or all
> are killed.
>
> Now the questions:

These are not easy questions, even with the simplifying assumptions I've made. A multi-round multi-person game can get complicated if implicit alliances are formed or the players deduce each other's strategies. For simplicity let's disallow communication and assume the players forget who shot at whom after each round.

> 1. What is each shooter's strategy?

Each player has two possible strategies, so there are eight cases to consider ("P", "Q", "R" -- allow me these alternate names for the Good, the Bad and the Ugly):

  P aims at  Q aims at  R aims at    P survival  Q survival  R survival  Noone lives
  ---------  ---------  ---------    ----------  ----------  ----------  -----------
      Q          P          P          0.0649      0.0355      0.7991      0.1005
      Q          P          Q          0.1371      0.0146      0.6966      0.1517
      Q          R          P    *     0.3946      0.0444      0.1470      0.4140
      Q          R          Q          0.8221      0.0026      0.0152      0.1601
      R          P          P          0.0381      0.8221      0.0152      0.1246
      R          P          Q    *     0.1824      0.3443      0.0426      0.4307
      R          R          P          0.1371      0.5342      0.0027      0.3260
      R          R          Q          0.6367      0.0355      0.0008      0.3270

Unfortunately none of the players has a strictly dominant strategy. There are two equilibrium points (marked "*" above): Good aims at Bad, Bad aims at Ugly, Ugly aims at Good; and Good aims at Ugly, Bad aims at Good, Ugly aims at Bad. Here, unlike for zero-sum two-person games, the equilibria are *not* equivalent, and the "solution", if any, may lie elsewhere. Perhaps a game-theorist lurking in r.p can offer a better comment. Note that the probability all three shooters die is highest at the equilibria! This seems rather paradoxical and rather sad :-(

Incidentally, the 4th and 5th lines here look wrong: the intermediate expressions are quite different, and I can't explain *why* P's survival probability in one equals Q's in the other (0.8221, and similarly the two 0.0152 entries), but I *have* double-checked this result.

> 2. Who is most likely to survive?

Good, Bad, or Ugly, depending on the strategies.

> 3. Who is least likely to survive?

Bad or Ugly, depending on the strategies.

> 4. Can you change p, q, and r under the constraint p > q > r so that
> the answers to questions 2 and 3 are reversed? Which of the six
> possible permutations of the three shooters is a possible ordering
> of probability of survival under the constraint p > q > r?

Yes. Of the six possible survival-probability orderings, five can be obtained readily:

    p      q      r     P_surv  Q_surv  R_surv  Order
  -----  -----  -----   ------  ------  ------  -----
  [table of five (p,q,r) cases and the resulting survival probabilities]

Unlike the p=.9, q=.7, r=.5 case we are given, the five cases in this table *do* have simple pure solutions: in each case p shoots at q, while q and r each shoot at p. (I've found no case with a "simple pure" solution other than this "obvious" p aims at q, q aims at p, r aims at p choice.)

> 5. Are there any value of p, q, and r for which it is ever in the
> interest of one of the shooters to fire into the ground?

No. This is the easiest of the questions, but it's still not easy enough for me to construct an elegant proof in English: it can't hurt to shoot at one's stronger opponent.

Speaking of decision/truel, I recall a *very* interesting analysis (I *might* have seen it here in rec.puzzles) suggesting that the N-person "truel" (N-uel?) has a Cooperative Solution (ceasefire) if and only if N = 3. But I don't see this in the FAQL. Anyone care to repost it?

-- James Allen
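The eight-case table above can be reproduced from the stated rules. The Python sketch below is an editorial addition; it assumes each player keeps the same aim while all three stand (renormalising over the all-miss round) and that the two survivors of a first kill fight a repeated simultaneous two-player duel:

```python
# Exact survival probabilities for the simultaneous-fire truel, one row per
# stationary strategy profile (editorial sketch, not from the original posts).
from itertools import product

HIT = {'P': 0.9, 'Q': 0.7, 'R': 0.5}   # P=Good, Q=Bad, R=Ugly

def duel(x, y):
    """Repeated simultaneous two-player duel.
    Returns (P(x alone survives), P(y alone survives)); both may also die."""
    px, py = HIT[x], HIT[y]
    z = px + py - px * py              # P(someone is hit in a round)
    return px * (1 - py) / z, py * (1 - px) / z

def survival(targets):
    """targets maps each player to his victim in every three-way round."""
    die = {}
    for v in 'PQR':
        live = 1.0
        for s in 'PQR':
            if targets[s] == v:
                live *= 1.0 - HIT[s]
        die[v] = 1.0 - live
    repeat = (1 - die['P']) * (1 - die['Q']) * (1 - die['R'])  # all miss
    surv = {v: 0.0 for v in 'PQR'}
    for outcome in product([False, True], repeat=3):
        dead = dict(zip('PQR', outcome))
        if not any(outcome):
            continue                    # same position recurs; renormalise
        prob = 1.0
        for v in 'PQR':
            prob *= die[v] if dead[v] else 1.0 - die[v]
        prob /= 1.0 - repeat
        alive = [v for v in 'PQR' if not dead[v]]
        if len(alive) == 1:
            surv[alive[0]] += prob
        elif len(alive) == 2:
            wx, wy = duel(*alive)
            surv[alive[0]] += prob * wx
            surv[alive[1]] += prob * wy
    return surv

for tp, tq, tr in product('QR', 'PR', 'PQ'):
    s = survival({'P': tp, 'Q': tq, 'R': tr})
    noone = 1.0 - s['P'] - s['Q'] - s['R']
    print(tp, tq, tr,
          ' '.join(f'{x:.4f}' for x in (s['P'], s['Q'], s['R'], noone)))
# First line: Q P P 0.0649 0.0355 0.7991 0.1005
```

Under these assumptions the eight rows match the table to four decimal places, and checking unilateral deviations confirms the two starred profiles as the equilibria -- with "noone lives" indeed highest there.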

In article <1993Apr1.123404.18039@vax5.cit.cornell.edu> mkt@vax5.cit.cornell.edu writes:
>In article <1993Mar25.022459.10269@cs.cornell.edu> karr@cs.cornell.edu (David Karr) writes:
>[...]
>> 5. Are there any value of p, q, and r for which it is ever in the
>> interest of one of the shooters to fire into the ground?
>
> Yes. p=1, q=1, r=1. Obviously, shooting at anyone has no effect on ones
> personal survival. The only way for one to survive is to have the other
> two shoot at eachother. "If all follow the same logic, they will keep
> shooting into the ground and thus all live."

I assume by "has no effect on" you mean "does not improve." I was very pleased by this answer but I had to think about it. First of all, it assumes that continuing the fight forever has a positive value for each shooter. My preferred assumption is that it doesn't.

I also wonder about the "has no effect" statement. Suppose that in round 1 the Good fires into the ground and the Bad shoots at the Good. Then the Ugly lives if he shoots the Bad and dies if he does anything else. (The Bad will surely shoot at the Ugly if he can in round 2.) So it definitely makes a difference to the Ugly in this case to shoot at the Bad.

But even if each shooter is simply trying to maximize his probability of never being shot, the answer may well be right: at least I haven't yet thought of a case in which you improve your survival by shooting at anyone.

But all this is under the assumption that no shooter can tell what the others are about to do until after all have shot. This isn't entirely unreasonable--we can certainly set up a game that plays this way--but suppose we assume instead:

  All three start out with guns initially holstered.

  Each one is a blindingly fast shot: he can grab his holstered gun,
  aim, and fire in 0.6 second.

  A shooter can redirect his unholstered gun at a different target and
  fire in just 0.4 second.

  The reaction time of each shooter is just 0.2 second. That is, any
  decision he makes to act can be based only on the actions of the
  other two up to 0.2 second before he initiates his own action.

  The bullets travel between shooters in less than 0.1 second and stop
  any further action when they hit.

Then I *think* the conclusion holds for p=q=r=1: The best strategy is to wait for someone else to grab for their gun, then shoot that person. Of course this is only good if you don't mind waiting around the circle forever.

-- David Karr (karr@cs.cornell.edu)

In article <1993Apr5.210749.2657@cs.cornell.edu> karr@cs.cornell.edu (David Karr) writes:
> In article <1993Apr1.123404.18039@vax5.cit.cornell.edu> mkt@vax5.cit.cornell.edu writes:
>>[...]
> [...]
> Then I *think* the conclusion holds for p=q=r=1: The best strategy is
> to wait for someone else to grab for their gun, then shoot that person.
> Of course this is only good if you don't mind waiting around the
> circle forever.

Here's where the clincher comes in! If we "assume" the object of the game is to survive, then all the shooters will behave in the same fashion, and if there exists _one_ unique method for survival, then all the shooters will adopt it. If the command is "Shoot", then all will shoot and somebody is going to wind up lucky (prob that it is you is 1/3). If the command is "No Shoot", then all will withold, and we arrive at an optimal solution of "No Shooting": all will fire into the ground (or just give up and go home--no sense waitin' around wastin' time, ya know).

But wait, you cry! What if there exists _more than one_ solution for optimal survival? How do we distinguish between the good, the bad and the ugly? Will the Good, the Bad and the Ugly each randomly decide between "Shoot" and "No Shoot" with .5 probability? If this is true, then do we analyze this with another .5 probability between the chances of someone shooting or not shooting? If the answer to this is "Shoot", then we arrive back at square one: since we assume all shooters are geniuses, then all will shoot--arriving at an optimal solution of "Shooting". We already discussed this case and said "Shoot!". If the answer is "No Shooting", then all will fire into the ground. Obviously the above case will not hold.

Perhaps this would be easier to discuss if we let p=1, q=1, r=0. Obviously, shooting at the ugly would be wasting a shot, in terms of survival. Thus we have made a complex problem more simple while retaining the essence of the paradox: there are two gunmen who shoot and think with perfect logic, kept inside a room and allowed to shoot at discrete time intervals, without being able to "see" what the opponent will do. If there is no effect on your personal survival, then is it in your best interest to shoot someone? Let's say that you are one of the gunmen (the Good). You know that:

  1. You can survive the next round.
  2. You can shoot him if you wish on this round (if you like).

You reason that you can know your probability of shooting your opponent (either 1 or 0). You reason that your opponent has a variable probability of shooting you. You reason, "My probability to survive the next round is independent of whether or not I fire at him." So you say to yourself, "Fire at the opponent! I'll get to stop playing this blasted game." But you know the opponent thinks the same way, so if he reasons likewise, you're both dead. Is there a way of "knowing" what the opponent thinks? Of course not. But if your opponent thinks that way, you're dead--so you might think that you might as well not shoot. Obviously you can see the recursion of this process. Ay, there's the rub!

But wait, you cry! What if the opponent figures this out too: p<1, q=1, r=1? Sorry--'nuff said! This applies to the p=r=q=1 case as well.

> Each one is a blindingly fast shot: he can grab his holstered gun,
> aim, and fire in 0.6 second. [...]
> Then I *think* the conclusion holds for p=q=r=1: The best strategy is
> to wait for someone else to grab for their gun, then shoot that person.
> Of course this is only good if you don't mind waiting around the
> circle forever.

More ideas to consider. One alternate ploy:

  0.0  You begin to unholster your gun.
  0.2  Opponents react and begin unholstering their guns.
  0.4  Opponents' guns are unholstered; you are unholstered. They note
       you aren't aiming at them; they haven't aimed at anyone yet.

Or:

  You begin unholstering your gun but only for .1 sec and then stop.
  You aim into the ground for .1 sec and then stop aiming.
  You start to aim for .1 sec (you place it by .2) and then turn and
    aim at another.
  You start to aim for .09 sec (you place it by .19) and then stop
    aiming (or aim at another).

What happens now? I'll have to think about it, but I haven't seen anything fundamentally different between this and the above case yet. Hmmn.

-Greg

Looking at the answer for decision/truel, I came across the following:

>Each player can obtain a positive expected payoff by shooting at one of the
>other players, and it is known that an infinite sequence of misses will
>result in a zero payoff for all players. So it is known that some player's
>strategy must involve shooting at another player rather than deliberately
>missing.

This may be true but it's not obvious to me. For example, suppose A, B, and C are passengers in a lifeboat in a storm. If they all stay aboard, the lifeboat is certain to sink eventually, taking all three to the bottom with it. If anyone jumps overboard, the two remaining in the boat are guaranteed to survive, while the person who jumped has a 1% chance of survival.

It seems to me the lifeboat satisfies the quoted conditions, in the sense that if nobody jumps then the payoff for all is zero, and the payoff for jumping is 0.01, which is positive. But it is not clear to me that the three shouldn't just all sit still until someone goes nuts and jumps overboard despite everything. Even if there is a wave about to swamp the boat, you've got to adhere to your strategy and you've got to assume that others will adhere to theirs. I'd wonder if the situation wouldn't just reduce to a game of "chicken," with each person waiting until the last minute and jumping only if it seems the other two have decided to sink with the boat if you don't jump. That may be quite rational, for this strategy gives a 67% chance of survival (assuming everyone is equally likely to "crack" first) vs. only 1% for jumping by choice.

I.e. the situation is set up so it is always worse to be the first person to jump. In the truel I don't think this is true, but only because of the asymmetry of the odds. To see this, it is easiest to proceed directly to considering B's point of view. Whenever it is B's turn to shoot, B can divide the possible courses of action into four possibilities (actually there are seven, but three are ruled out a priori by obvious optimizations of each individual's strategy):

  Nobody ever shoots      (expected value 0)
  A shoots first (at B,   expected value <= .7)
  C shoots first (at B,   expected value <= .5)
  B shoots first (at C,   expected value .7)

In fact the value of "A shoots first" is strictly less than .7, because in case A misses, the same four possibilities recur, and all have expected payoff < 1. So the value of "B shoots first" uniquely maximizes B's value function, ergo B will always shoot as soon as possible. The rest of the analysis then follows as in the archive.

-- David Karr (karr@cs.cornell.edu)

> It seems to me the lifeboat satisfies the quoted conditions, in the
> sense that if nobody jumps then the payoff for all is zero, and the
> payoff for jumping is 0.01, which is positive. But it is not clear to
> me that the three shouldn't just all sit still until someone goes nuts
> and jumps overboard despite everything.

Yes and no. Yes in the sense that if you treat the game as a psychological one, the best strategy is as you say: wait, and jump only if it seems the other two have decided to sink with the boat, since this gives a 67% chance of survival (assuming everyone is equally likely to "crack" first) vs. only 1% for jumping by choice.

But treating it as a mathematical game, "Don't jump at all" and "Don't jump unless I crack" are different strategies, and the first one is often (not always) superior. E.g. if I take "Don't jump at all" and the others take "Don't jump unless I crack", I'm certain to survive and the others each have a 50.5% chance, which is better from my point of view than a 67% chance of survival for all of us.

What the argument above shows is that "Don't jump at all" is not a stable strategy, in the sense that if everyone takes it, it is in everyone's interest to change strategy. It shows that someone will jump eventually, even if it's only the result of someone actually having taken "Don't jump unless I crack". But I agree it allows for a lot of leeway about how and when the deadlock gets broken.

Applied to the truel, the argument *does* show that someone's strategy will involve shooting at another player: the strategy "Don't shoot at all" is unstable in exactly the same way as "Don't jump at all" was. And your argument showing that it is always in B's interest to shoot is more satisfactory.

As a psychological game, you cannot control what you will do if you crack, and some of the mathematical strategies may simply not be available - e.g. "Don't jump at all" is not an available strategy for most real humans, and so we commonly use "Don't jump" to mean "Don't jump unless I crack". For mathematical analysis, the problem has to tell you what strategies you are not allowed to take.

David Seal
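The 67% and 50.5% figures in this exchange follow from a line or two of arithmetic. A short Python sketch (an editorial addition, using the 1% jumper-survival figure from Karr's setup):

```python
# Lifeboat arithmetic (editorial sketch, not from the original posts).
# A jumper survives with probability 0.01; once someone jumps, the two
# left aboard survive with probability 1.

p_jump = 0.01

# All three play "don't jump unless I crack", each equally likely to
# crack first: you survive unless you are the one who cracks.
all_wait = (2 / 3) * 1.0 + (1 / 3) * p_jump

# You play "don't jump at all" while the other two play "don't jump
# unless I crack": one of them (equally likely) eventually cracks.
me = 1.0
each_other = (1 / 2) * 1.0 + (1 / 2) * p_jump

print(round(all_wait, 4), me, each_other)   # 0.67 1.0 0.505
```

So the deviator keeps certain survival while lowering the others to 50.5%, which is exactly why "Don't jump at all" dominates "Don't jump unless I crack" when the others stick to the latter.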