ABSTRACT. Aumann (1995) has shown that "common knowledge of substantive rationality"
implies the backwards induction (BI) outcome in any (generic) perfect information game. In
Aumann's framework, (a) knowledge is modeled by means of partitional information structures, (b)
it is assumed that at each state of the world, each player has a strategy, and (c) "substantive"
rationality requires a player to "plan" a move for any of his decision nodes on the supposition
that the node is reached - even if he knows that it will not be reached.
We show that none of these features is essential for the BI argument. Instead, (A) we work
with belief structures which allow for the possibility that players are mistaken, and that neither
positive nor negative introspection holds, (B) we do not assume that players must always have
strategies, and (C) we use what we call "relative" rationality conditions, which only require
that a player does not make a move whereof he knows that another one he considers possible
implies a higher payoff. Moreover, we use the assumption that, (D) for any reached node, the move
prescribed by the BI profile is considered possible by the player whose move it is.
Our Theorem 1 says that common belief (CB) of relative rationality (in the sense of (C)),
and of conditional possibility of the BI moves (in the sense of (D)) is both necessary and sufficient
for CB in the BI outcome. Theorem 2 says that there will be correct CB in the BI outcome, if and
only if, in addition to the condition of Theorem 1, all players are relatively rational. Next, we
consider the case where players have strategies, and reconsider Aumann’s condition for BI.
Theorem 3 says that even in general belief structures Aumann’s condition remains sufficient for CB
in the BI outcome. Theorem 4 says there will be correct CB in the BI outcome, if in addition to the
CB of Theorem 3, every player knows his own strategy, and plays according to it. Finally, we show
that the BI outcome is already implied by 'forward belief' in both relative rationality and
conditional possibility of moves, provided that these forward beliefs are correct.
This version: February 2002. Many thanks to Geir Asheim, Bart Lipman, and Dov Samet for
suggesting a translation of originally syntactic formulations into the state-space framework, and to
Yossi Feinberg for many helpful comments, in particular for suggesting Remark 4. Arnis Vilks
wants to express his gratitude to the Center for the Study of Language and Information at
Stanford University, where parts of the present paper were written.
1. Introduction
In a much discussed paper, Aumann (1995) has formulated a sufficient condition for the backwards
induction (BI) play of a (generic) perfect information (PI) game. While Aumann's condition can be
stated quite simply, his framework involves a number of background assumptions the role of which
is not entirely clear. Among others, Aumann's model has the following features:
(a) Knowledge is modeled by means of partitional information structures.
(b) At each state of the world, each player has a strategy for the game, i.e., a plan of action for all
his decision nodes - including nodes he knows will not be reached.
(c) "Substantive" rationality of a player is defined by the condition that "no matter where he finds
himself - at which vertex - he will not knowingly continue with a strategy that yields him less than
he could have gotten with a different strategy".
In the present paper we show that none of these features is essential for the backwards induction
argument. In fact, it seems intuitively clear that the BI argument nowhere depends on introspective
reasoning. It also seems to "go through" regardless of whether players' beliefs in unreached
successors' rationality are mistaken or not. Quite similarly, the assumption that players must have
strategies seems unnecessarily strong: for instance, an irrational player off the BI path who is
mistakenly believed to be rational by all the other players may well have no definite plan of action
at all - while the BI argument may still apply. Our approach has the following main features:
(A) We work with general belief structures, which are not required to be partitional. We do assume
that beliefs are always consistent, but they may be mistaken, and neither positive nor negative
introspection need hold.
(B) We assume that, at each state of the world, a particular path through the game-tree will be
followed, but that this need not result from players' having strategies.
(C) We use "relative" rationality conditions, which are defined in terms of moves (instead of
strategies), and which only require that a player does not make a move whereof he knows that
another one he considers possible implies a higher payoff.
(D) A further distinguishing feature of our approach is the assumption that, for any reached node,
the move prescribed by the BI profile is considered possible by the player whose move it is. In our
case of possibly mistaken beliefs some such assumption seems to be an unavoidable presupposition
of the BI argument. After all, even at a reached node a player might not consider possible a move
which "actually" - according to the rules of the given game - is a move he might take.
Our Theorem 1 says that common belief (CB) of relative rationality (in the sense of C) and of
conditional possibility of the BI moves (in the sense of D) is both necessary and sufficient for CB
in the BI outcome of the game. Our Theorem 2 says that this CB will be correct, if and only if, in
addition to the condition of Theorem 1, all players are relatively rational. We then reconsider Aumann's
condition for BI and show in Theorems 3 and 4 that it remains a sufficient condition for BI even
without the partitional information structure assumed by Aumann. Actually, Aumann's condition
turns out to be stronger than ours. Finally, we adapt the notion of "forward knowledge" introduced
in different versions by Balkenborg and Winter (1997), Rabinowicz (1998), and Sobel (1998), and
show in Theorem 5 that the BI play is already implied by forward belief in relative rationality and
conditional possibility of the BI moves at any given node, provided that these beliefs are correct.
The paper is organized as follows: The next section presents notation and the formal framework,
Section 3 explains the main ideas by means of a very simple example, while Sections 4, 5, and 6
respectively state the general results for the case of CB without strategies, for CB with strategies,
and forward belief. Section 7 discusses possibility of non-BI moves, the important, but special case
of the Centipede game, and the timing of beliefs, and relates our results to previous work.
2. The framework
For the given PI game we consider, the following notation will be used:
α(v) the set of immediate successors of node v, i.e. α(v) = {w ∈ X | v < w and for no u ∈ V: v < u < w}
i(v) the player who makes a move at node v, i.e. the i ∈ N for which v ∈ Vi
Throughout, we assume that the PI game under consideration is in general position, i.e., πi(t) ≠ πi(t′)
whenever t ≠ t′.
Si the set of player i's strategies, where each strategy is formally treated as a mapping from Vi
which assigns to each v ∈ Vi an element of α(v).
As usual, we set
S-i = ×j∈N\{i} Sj
s-i = (sj)j∈N\{i},
We will use é for the BI profile, and â(v) for the terminal node induced by the BI profile in the
subgame with origin v. For v ∈ Z, we set â(v) = v. It will be important in what follows to distinguish
carefully between moves on the BI path, i.e., moves preceding â(v0), and BI moves. The latter term
will be used for all moves assigned by é to some decision node, independently of whether this node
is on the BI path or not.
The intended interpretation is that o(ω) contains the terminal node actually reached in state ω; thus
[v] := {ω | v < o(ω)}.
Note that this event can alternatively be interpreted as the event that the player at the immediate
predecessor of v makes the move leading to v.
The intended interpretation of Ki(ω) is as the set of those states which i considers possible when the
actual state is ω. For any player i, and any event E ⊆ Ω, we can therefore define the event that i
considers E possible by
Pi(E) := {ω ∈ Ω | Ki(ω) ∩ E ≠ ∅}.
Writing ~E for Ω\E, we note that both Bi(E) = ~Pi(~E) and ~Bi(~E) = Pi(E) hold identically. Below,
we will mostly follow the tradition of expressing possibility in terms of belief, but we emphasize that
there is no reason whatsoever to view the notion of belief as more basic than the notion of
possibility.
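On a finite state space these operators are directly computable. The following sketch (with illustrative data of our own; the names Omega, K, B, P, C are not the paper's) assumes the standard definitions Bi(E) = {ω : Ki(ω) ⊆ E} and Pi(E) = {ω : Ki(ω) ∩ E ≠ ∅}, and checks both duality identities:

```python
# Sketch: belief and possibility operators on a finite state space.
# K maps each state to the set of states the player considers possible there.
Omega = {1, 2, 3}
K = {1: {1, 2}, 2: {2}, 3: {2, 3}}  # a non-partitional possibility correspondence

def B(E):  # belief: K(w) is contained in E
    return {w for w in Omega if K[w] <= E}

def P(E):  # possibility: K(w) meets E
    return {w for w in Omega if K[w] & E}

def C(E):  # complement ~E = Omega \ E
    return Omega - E

E = {2, 3}
assert B(E) == C(P(C(E)))   # B(E) = ~P(~E)
assert C(B(C(E))) == P(E)   # ~B(~E) = P(E)
print(B(E), P(E))
```

The same identities hold for every event of this structure, which a brute-force loop over all subsets of Omega confirms.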
E→F := (~E ∪ F),
which can be interpreted as the event that E holds only if F does. We adopt the bracketing
convention that E∪F→G and E∩F→G stand for (E∪F)→G and (E∩F)→G, respectively.
B1(E) := B(E), Bm+1(E) := B(Bm(E)) for m ≥ 1, and
CB(E) := ⋂m≥1 Bm(E).
Common belief among some subset M ⊆ N of players is defined analogously, and will be denoted
CBM(E).
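For finite structures, CB(E) can be computed by iterating the mutual-belief operator B(E) = ⋂i Bi(E) until the intersection stabilizes. A minimal sketch with an illustrative two-player structure of our own (not taken from the paper):

```python
# Sketch: common belief as the stabilized intersection of iterated mutual belief.
# Assumed definitions: Bi(E) = {w : Ki(w) subset of E}, B(E) = intersection of
# the Bi(E), CB(E) = intersection of B^m(E) over m >= 1.
Omega = frozenset({1, 2, 3})
K = {  # one possibility correspondence per player (illustrative values)
    'I':  {1: {1}, 2: {1, 2}, 3: {3}},
    'II': {1: {1}, 2: {2}, 3: {2, 3}},
}

def B(E):  # mutual belief: every player's possibility set lies inside E
    return {w for w in Omega if all(set(K[i][w]) <= set(E) for i in K)}

def CB(E):  # intersect B(E), B(B(E)), ... until nothing more is removed
    current, result = set(E), set(Omega)
    while True:
        current = B(current)
        if result & current == result:
            return result
        result &= current

print(CB({1, 2}))
```

On a finite space the loop must terminate, since the running intersection can only shrink finitely often.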
In order to compare our epistemic conditions for BI with Aumann's, we will also consider belief
structures with strategies, (Ω, o, (Ki)i∈N, s), where s assigns to each state a strategy profile.
As usual, s(ω) will also be written as (si(ω))i∈N, where si(ω) ∈ Si, and s-i(ω) will stand for (sj(ω))j∈N\{i}.
3. A Simple Example
Before turning to general definitions and results, we explain the main ideas for one of the simplest
PI games, depicted in Figure 1.
It should not matter for the BI argument whether I's belief in II's rationality is mistaken or not. If
(1) and (2) are satisfied, one should expect the BI outcome x regardless of whether II is in fact
rational. However, it does matter for the BI argument whether I believes that II, if reached,
considers y possible. If I believed that II does not consider y possible, it would be rational for I to play
"across". Quite similarly, the BI outcome x cannot be expected if I does not consider it possible.
After all, as we are considering beliefs which can be mistaken, it might be the case that Figure 1
shows the "objective" game, but that player I mistakenly believes that he is physically unable to
make the move leading to x. Thus, when beliefs may be mistaken, the BI argument seems to require
Given a belief structure for the game of Figure 1, it is straightforward to represent the latter two
assumptions by events:
(iii) PI([x])
Which events should be taken to represent assumptions (1) and (2)? When beliefs may be false, it
becomes problematic to define rationality as choice of a best move from those which are in fact
possible. Clearly, the best move for II, if she is reached, is the one leading to y. However, if we
observed outcome z, should we conclude that II is irrational? With respect to the "objective"
possibilities depicted in the game tree, the answer is "clearly yes", but if II believes that she cannot
carry out her BI move, it seems she is still rational relative to what she considers possible. This
motivates our notion of relative rationality. Relative rationality only requires that a player forbears
to make a move if he considers possible an alternative one which he believes to imply a higher
payoff. For player I in our example this implies that a state at which I is relatively rational belongs
Relative rationality of I at a state would also require that the state belongs to the following event:
For the general case, the event of relative rationality will in fact be defined as the intersection of all
such events, but for our simple example we will only need event (i). Relative rationality of player
II can be formulated in an even simpler way, as she can have no doubt about the outcomes of her
moves: if II is rational at a state (where rationality is construed as relative rationality), that state
belongs to the following event:
It is now easy to verify that the intersection of events (i), (ii), (iii), and (iv) is a subset of [x]. In
words: If I believes that II is relatively rational and, if reached, considers her BI move possible, then
I will make his BI move, provided that he considers this possible and is relatively rational.
Consider a belief structure (Ω, o, (Ki)i∈N) for a given PI game. In order to express our condition of
relative rationality, consider player i's decision node v, two immediate successors v+, v- ∈ α(v), and
two terminal nodes t+, t- such that πi(t+) > πi(t-). If i is relatively rational, it should not happen that he
believes both [v+]→[t+] and [v-]→[t-], considers [v+] possible, but still makes the move from v to v-.
Hence we define the event that player i=i(v) is relatively rational at his node v by
RRv := ⋂ ~(Bi([v+]→[t+]) ∩ Bi([v-]→[t-]) ∩ Pi([v+]) ∩ [v-]),
where the intersection is taken over all such v+, v-, t+, t-.
Relative rationality of all players at all their nodes is defined as the event
RR := ⋂v RRv.
Finally, we define I to be the event that the BI path is followed, and the BI outcome is thus reached:
I := {ω | o(ω) = â(v0)}
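The BI profile é and the induced terminal nodes â(v) are straightforward to compute by recursion on the game tree. A sketch with a hypothetical three-stage game (the node names follow the pattern of the paper's figures, but the payoffs are invented for illustration; by the generic-position assumption the maximizer at each node is unique):

```python
# Sketch: backwards induction on a finite PI game tree (hypothetical encoding).
# succ[v] lists the immediate successors alpha(v); player[v] = i(v);
# payoff[t][i] gives terminal payoffs.  beta(v) is the terminal node the BI
# profile induces from v.
succ = {'v0': ['d1', 'a1'], 'a1': ['d2', 'a2'], 'a2': ['d3', 'a3']}
player = {'v0': 'I', 'a1': 'II', 'a2': 'I'}
payoff = {'d1': {'I': 2, 'II': 1}, 'd2': {'I': 1, 'II': 4},
          'd3': {'I': 3, 'II': 6}, 'a3': {'I': 4, 'II': 3}}

bi_move = {}            # the BI profile: decision node -> chosen successor

def beta(v):
    if v not in succ:   # terminal node: beta(v) = v
        return v
    i = player[v]       # the mover at v picks the successor whose induced
    best = max(succ[v], key=lambda w: payoff[beta(w)][i])  # outcome he prefers
    bi_move[v] = best
    return beta(best)

bi_outcome = beta('v0')  # the event I holds at a state iff o(w) is this node
print(bi_outcome, bi_move)
```

With these payoffs the first mover ends the game immediately, although the BI moves at later (unreached) nodes are still well defined, which is exactly the distinction between moves on the BI path and BI moves drawn above.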
The first main result of the present contribution is that CB in both relative rationality and
conditional possibility of the BI moves is both necessary and sufficient for CB in the BI outcome.
Clearly, if there is CB in the BI outcome, and beliefs are correct, the BI outcome must result. Our
next Theorem shows that much less than full veridicality of beliefs needs to be added to derive the
BI outcome.
Intuitively, the derivation from Theorem 1 is simple: Once there is CB in the BI outcome, the
players must consider their BI moves at reached nodes possible, and believe that no moves off the
BI path will be made. Together with relative rationality, this implies that only BI moves will
actually be made. On the other hand, if players consider only the moves on the BI path possible, and
In order to relate our results to Aumann's, we now consider a belief structure with strategies, (Ω, o,
(Ki)i∈N, s), as defined in Section 2 above, which allows us to consider events defined in terms of
both outcomes and strategies. Aumann assumes that having a strategy in state ù means playing
according to that strategy. While certainly very natural, this assumption is not empty: If a strategy
is interpreted as specifying what a player plans, or has decided to do (at each of his nodes, if it were
reached), one may well imagine that a player does not stick to his plan, or that he revises his
decision (cf. Kramarz, 1993). Even if one follows Aumann in thinking "of the players as attaching
automata to their vertices before play starts" (Aumann, p. 12), it is at least conceivable that the
automata can be reprogrammed, or that they do not work as programmed. Be that as it may, in our
framework we can express the assumption of "play according to strategies" by the following event
PAS:
Following Aumann, the event that player i's strategy is si is defined as follows:
Aumann‘s assumption that each player knows his own strategy can be represented by the event:
Finally, to express Aumann's "substantive" rationality condition, we define, for each decision node
v, each player i, and profile s, the payoff which player i would get if v were reached and s were
followed from v on.
The event that i would get a higher payoff with strategy si than with his actual strategy if v were
reached:
Player i is substantively rational if no event of this sort is believed by him. Thus the event
We are now ready to formulate Aumann's condition for BI. It is represented by the event
Aumann (1995) has shown for partitional belief structures that this condition implies the BI
outcome. Our next two theorems show that the special properties of partitional belief structures are
As in the case of Theorem 1, we clearly can infer from this that CB in PAS, KOS, and SR implies the
BI outcome if all beliefs happen to be veridical. As in the case of Theorem 2, however, much less
than full veridicality of all beliefs is needed. It suffices to add the assumptions "play according to
strategies" and "knowledge of own strategy".
Clearly, the events considered in Theorems 1 and 2 remain well-defined for belief structures with
strategies. Thus, for belief structures with strategies, we have two different sufficient conditions for
CB in the BI outcome, and two different sufficient conditions for the BI outcome itself. The
conditions not requiring strategies are easily seen to be, respectively, weaker than the ones that do.
Remark 3. Unless Z is a singleton, there is a belief structure with strategies such that the inclusions
are proper.
6. Forward Belief
Finally, we show that the condition for BI play formulated in Theorem 2 can be further weakened
by using the notion of forward belief. The underlying observation is that the informal BI argument
nowhere depends on players' beliefs about their predecessors in the game tree, but only on beliefs
about their successors' beliefs about their successors' beliefs, and so on. This idea has been explored
by Balkenborg and Winter (1997) in Aumann's framework (i.e. with partitional belief structures
and strategies at all states), and in slightly different frameworks by Rabinowicz (1998) and Sobel
(1998).
To define forward belief for general belief structures, let (v0, v1, ..., vm) be the path from the origin
to the immediate predecessor of v, i.e., v ∈ α(vm), and vμ+1 ∈ α(vμ) for μ = 0,...,m-1. Let qv = (i0, i1, ..., im)
be the corresponding sequence of players, i.e., iμ = i(vμ) for all μ, and let Qv be the set of all (non-
empty) subsequences of qv. For example, if qv = (1,2,3), then Qv = {(1,2,3),(1,2),(1,3),(2,3),(1),(2),(3)}.
Forward belief prior to node v in event E is then taken to be the following event:
Theorem 5:
This is how it should be, as the BI argument for a given game yields the same outcome when some non-BI
moves are considered impossible by the players. Nevertheless, there are several questions that arise.
First of all, it might be suspected that identities analogous to Theorems 1 and 2 hold when the BI
profile é is replaced by some other profile s. Does CB in relative rationality and conditional
possibility of all moves of some arbitrary strategy profile s imply CB in the outcome induced by s?
For the game of Figure 2, with VI = {v0, a2}, VII = {a1}, and Z = {a3, d1, d2, d3}, consider the following
belief structure: Ω = {2,3}, o(2) = {d1}, o(3) = {a3}, KI(2) = KII(2) = {2}, KI(3) = KII(3) = {2,3}. Define the
profile s by s(v0) = d1, s(a1) = a2, s(a2) = a3, and conditional possibility of the moves specified by s as
follows:
It is easy to check that in this belief structure Ω = RR ∩ CB(RR ∩ CP(s)), but at state 3, the outcome
induced by s is not reached.
The example also shows that a relatively rational agent may well consider possible more than one
of his moves at a given node. It is also easy to show that his opponents can believe that he is both
relatively rational and considers all his moves (at reached nodes) possible. However, there is an
interesting asymmetry between beliefs about oneself, and beliefs about others: An agent cannot
(typically) believe of himself that he is both relatively rational and considers all his moves (at
reached nodes) possible. Prima facie it may seem natural to strengthen the condition of CB in
relative rationality and conditional possibility of BI moves by requiring conditional possibility of all
moves, that is, by replacing the event CPI by the event CP, defined as follows:
However, except for trivial games, the event CB(RR ∩ CP) is clearly empty: as soon as there is CB
in the BI outcome, there must also be CB that no move off the BI path will be made, so that no
agent can consistently believe that he considers his (single-move) deviations from the BI path
possible. For the condition of Theorem 5, which implies the BI outcome, but not CB in it, the
situation is different. We show that, for the simple Centipede game of Figure 3, there is a belief
The required belief structure can be defined as follows: Ω = {d1, d2, d3, a3}, o(ω) ≡ ω, and
KI(d1) = {d1, d2}, KII(d1) = {d1}, KI(d2) = {d2}, KII(d2) = {d2, d3}, KI(d3) = {d3, a3}, KII(d3) = {d3},
KI(a3) = KII(a3) = {a3}, as visualized in the diagram of Figure 4, where the circles are the states, and
the arrows indicate which states each player considers possible.
One can check that d1 ∈ RR ∩ CP ∩ ⋂v FBv(RRv ∩ CPv). At this state, player I ends the game
immediately with d1, but he also considers possible that he might be in a state where he moves
across to a1 and considers d2 as the only possible outcome. He also thinks that in that second state,
player II, while she actually plays down to d2, also considers it possible to move across, and so on. In
state a3, where no BI move is made, both players are relatively rational simply because, in that
hypothetical state, which no-one considers possible in d1, only playing across is considered possible
by both players.
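Incidentally, the structure just defined is veridical (ω ∈ Ki(ω) everywhere) and even satisfies positive introspection, yet negative introspection fails, e.g. for player I at d1 with E = {d2}. A brute-force check over all events (our own code; Bi(E) = {ω : Ki(ω) ⊆ E} assumed):

```python
# Sketch: introspection properties of the Centipede belief structure above.
from itertools import combinations

Omega = ['d1', 'd2', 'd3', 'a3']
K = {'I':  {'d1': {'d1', 'd2'}, 'd2': {'d2'}, 'd3': {'d3', 'a3'}, 'a3': {'a3'}},
     'II': {'d1': {'d1'}, 'd2': {'d2', 'd3'}, 'd3': {'d3'}, 'a3': {'a3'}}}

def B(i, E):
    return {w for w in Omega if K[i][w] <= set(E)}

events = [set(c) for r in range(len(Omega) + 1) for c in combinations(Omega, r)]
for i in K:
    assert all(w in K[i][w] for w in Omega)               # veridicality holds
    assert all(B(i, E) <= B(i, B(i, E)) for E in events)  # positive introspection holds
neg = all(set(Omega) - B(i, E) <= B(i, set(Omega) - B(i, E))
          for i in K for E in events)                     # negative introspection?
print('negative introspection holds:', neg)
```

At d1 player I does not believe {d2}, but he also does not believe that he does not believe it, which is exactly the failure of negative introspection allowed for under (A).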
Somewhat surprisingly, the situation is again different in a game where a player moves twice along
the BI path. To see this, consider the game of Figure 2 again. In a state where the condition of
Theorem 5 holds, player I believes [a1]→[a3], i.e., the only outcome he considers possible after a1 is
a3. As the latter is the actual outcome in the state considered, and a2 is therefore reached, conditional
possibility of d3 cannot be consistently required on top of the condition of Theorem 5. For any belief
structure for the game of Figure 2, the seemingly quite natural event RR ∩ CP ∩ ⋂v FBv(RRv ∩ CPv)
is empty.
There is a way out of this problem, if one distinguishes between a player's beliefs at his different
decision nodes. A player can then believe now that his beliefs at a later stage of the game would be
different from what he believes now. In order to adapt our framework accordingly, one could define
a modified belief structure that specifies a distinct possibility correspondence for each decision
node. As this is formally identical to a general belief structure for a game where each player has
only one decision node, we confine ourselves here to the following remark.
Remark 4. For any PI game where each player has only one decision node (such that we may set
N=V, and denote the only element of Vi by i), there is a belief structure such that
As the event in this remark is clearly a subset of the one in Theorem 5, we can formulate the
following condition for BI: If each player, at any of his nodes, is relatively rational, considers, if the
node is reached, all his moves at that node possible, and if this is CB among all his respective
opponents (including himself at his other nodes), then the BI outcome must result.
Aumann interprets the knowledge of his condition for BI as pertaining to a time before the game
starts. This is certainly a feasible interpretation for the conditions of the theorems in the present
contribution. However, the previous discussion indicates that it might be more natural to allow for
different beliefs at different nodes of a player, and to interpret a player's beliefs at a node as the
beliefs he would hold when the node is reached - if it were reached. After all, for the player at the
origin of the tree the relevant considerations are about later players‘ beliefs at the time when they
have to act - rather than what they believe before the game starts. Moreover, if a player's beliefs for
his different nodes are distinct, they cannot (all) be actual beliefs, but (some of them) must be
regarded as hypothetical. At least, the implicit assumption that players do not change their beliefs
seems to be required for Aumann's interpretation. As we have seen, this assumption may be
inconsistent with otherwise natural conditions such as the ones of the preceding discussion.
It is worth emphasizing that the time when beliefs are held must also be one where all relevant
reasoning has been completed. However, it is not unambiguously clear what should count as
"relevant" reasoning. In particular, reasoning about one's own rationality seems to be relevant only
insofar as it pertains to one's future selves or to others' beliefs about it. In order to act rationally, a
player has to think about his own rationality at possible future occasions, and also about what later
players will believe about his rationality, but he does not need to draw all conclusions from the
assumption that he is rational at the present occasion. Our Theorem 5 shows that the BI argument
is in fact valid for players who do not think about their own (present) rationality or about what they
(at present) consider possible. The beliefs in conditions such as those of Theorem 5 or Remark 4
can therefore be taken to be held at a time before players have thought about the implications of
their own respective (present) rationality - but after they have exploited all the other relevant
information.
By contrast, the conditions of Theorems 1 through 4 pertain to players after they have also thought
through all the implications of their own rationality, and have reached common full belief in what
is going to happen - including what their own respective actions are going to be.
Instead of the semantic state-space framework used here, conditions for BI can alternatively be analysed in a
syntactic framework. Thus the notions of relative rationality and conditional possibility of BI moves
have been introduced in Vilks (1999), where the epistemic logic KTn is used. (Cf. Fagin, Halpern,
Moses, and Vardi, 1995 for various systems of multi-agent epistemic logic.) Essentially, KTn
corresponds to requiring possibility correspondences to be reflexive, i.e., to satisfy ω ∈ Ki(ω) for all
ω, which implies the "veridicality axiom" Bi(E) ⊆ E. In the language of the present framework, the
results in Vilks (1999) say that for veridical beliefs the following holds:
In a formal epistemic logic with counterfactual conditionals, Clausing (1999) has proved a variant
of Theorem 4, thus showing that the veridicality axiom (which, in the by now standard terminology,
distinguishes knowledge from belief) is inessential for Aumann's version of the BI argument.
A further difference between the state-space approach of the present paper and the authors' previous
syntactic work is that the latter requires explicit conditions representing the players' knowledge or
beliefs about the structure of the game. In the present paper we follow the main strand of the
literature in considering only belief structures for a given game. By doing so one is implicitly
assuming common knowledge of the players' payoff-functions, and of the fact that some path
through the given game will be played in each and every possible state of the world.
Brandenburger (1999) notes that "there has been a long-standing intuition that, simply as a
theoretical matter, probabilities play an inessential role in [PI] games". In fact, probabilistic
considerations do not play any role in the conditions for BI considered above. Of course,
probabilistic beliefs may be added to a given belief structure, and if this is done, there are some
natural restrictions one would like to impose. If μi(ω) denotes player i's probability measure in state
ω, one would require that supp(μi(ω)) ⊆ Ki(ω). However, there is no need to assume that subjective
possibility implies positive probability. As in the case of continuous random variables, there may
well be an event E which is considered possible (E ∩ Ki(ω) ≠ ∅), but assigned probability zero
(μi(ω)(E) = 0). Conversely, there may well be events which are believed with probability one
although their complement is still considered possible.
As the literature nevertheless often identifies "full belief" with "belief with probability one", we
briefly indicate how to generate belief structures warranting this identification from given
probabilistic beliefs. If we start with a finite Ω, an outcome function o: Ω → Z, and for each player i
a mapping μi from Ω to the set of probability measures on Ω, we interpret μi(ω)(E) as the degree of
belief player i has in event E when the true state is ω. We can then define a belief structure by
taking Ki(ω) to be the support of μi(ω). The event Bi(E) is then the set of all states at which i
assigns probability 1 to E, and ~Bi(~E) the set of all states at which i assigns positive probability to
E.
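This construction is easy to make concrete. A sketch with invented numbers (the names mu, supp, K, B, prob are ours): taking Ki(ω) = supp(μi(ω)) makes Bi(E) coincide with the set of states where i assigns probability one to E:

```python
# Sketch: generating a belief structure from probabilistic beliefs by taking
# Ki(w) = supp(mu_i(w)); then Bi(E) is exactly the probability-one belief set.
Omega = [0, 1, 2]
mu = {  # mu[i][w][w'] = probability player i assigns to state w' at state w
    'I': {0: {0: 0.5, 1: 0.5, 2: 0.0},
          1: {1: 1.0},
          2: {1: 0.3, 2: 0.7}},
}

def supp(dist):  # support: states with positive probability
    return {w for w, p in dist.items() if p > 0}

K = {i: {w: supp(mu[i][w]) for w in Omega} for i in mu}

def B(i, E):  # belief derived from the possibility correspondence
    return {w for w in Omega if K[i][w] <= E}

def prob(i, w, E):  # probability i assigns to E at w
    return sum(p for wp, p in mu[i][w].items() if wp in E)

E = {1, 2}
assert B('I', E) == {w for w in Omega if prob('I', w, E) == 1.0}
print(K['I'], B('I', E))
```

Note that the state 2 with probability 0.0 at state 0 is excluded from the support, so in this construction subjective possibility and positive probability do coincide, as the text observes.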
In a framework of this kind, Ben-Porath's (1997) notion of common certainty CC of an event E can
be defined as
CC(E) := E ∩ CB(E).
8. Proofs
The following lemmas can be easily proved from the definitions of the belief and common belief
operators.
Lemma 3. Bi(E) ∩ Bi(F) = Bi(E ∩ F).
Lemma 4. Bi(E) ⊆ ~Bi(~E).
Lemma 5. CB(Ω) = Ω.
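These identities are easy to confirm mechanically on small structures; Lemma 4, in particular, uses the consistency assumption Ki(ω) ≠ ∅. A brute-force check over all events of an illustrative two-player structure of our own:

```python
# Sketch: brute-force check of Lemmas 3-5 on a small belief structure.
from itertools import combinations

Omega = frozenset({1, 2, 3})
K = {'I':  {1: {1, 2}, 2: {2}, 3: {3}},
     'II': {1: {1}, 2: {2, 3}, 3: {3}}}  # all Ki(w) nonempty (consistency)

def Bi(i, E):
    return {w for w in Omega if K[i][w] <= set(E)}

def B(E):  # mutual belief
    s = set(Omega)
    for i in K:
        s &= Bi(i, E)
    return s

def CB(E):  # common belief via stabilized intersection of iterated B
    current, result = set(E), set(Omega)
    while True:
        current = B(current)
        if result & current == result:
            return result
        result &= current

events = [set(c) for r in range(4) for c in combinations(Omega, r)]
for i in K:
    for E in events:
        for F in events:
            assert Bi(i, E) & Bi(i, F) == Bi(i, E & F)   # Lemma 3
        assert Bi(i, E) <= Omega - Bi(i, Omega - E)      # Lemma 4
assert CB(Omega) == set(Omega)                           # Lemma 5
print('Lemmas 3-5 verified on this structure')
```

If some Ki(ω) were empty, Lemma 4 would fail at ω, which is why consistency is assumed throughout.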
Proof of Lemma 10. Consider a decision node v ∈ Vi, and assume that the assertion of the lemma is
true for all v′ ∈ α(v). Note that this assumption must hold for any node where only terminal moves
can be made; hence, by Lemmas 7, 8 and 9, it follows for any v- ∈ α(v) with v- ≠ é(v) that
CPI ⊆ [v] → ~Bi~[é(v)].
Combining the latter two inclusions, and using the fact that (E∩F→G) ∩ (H→F) ⊆ H→(E→G), we get
Combining this with (1), using the fact that E ∩ (F→(E→G)) ⊆ F→G, and Lemma 6 again, we get
As this holds for all v- ∈ α(v) with v- ≠ é(v), and [v] → ⋃w∈α(v)[w] = Ω, it follows by Lemmas 5, 6, and 7
that
for any v+, v- ∈ α(v), with v+ ≠ v-, v ∈ Vi. If v- ≠ é(v), (#) follows from I ⊆ ~[v-] and Lemma 6. If v- = é(v),
then v+ ≠ é(v), and thus I ⊆ ~[v+], and by Lemmas 6 and 8, CB(I) ⊆ Bi~[v+], which yields (#). Because of
for any i, v+, v- ∈ α(v), t+, and t- with v+ ≠ v-, which yields the result. Q.E.D.
holds for any v ∈ Vi. If v does not belong to the BI path, I ⊆ ~[v] ⊆ ~[v] ∪ ~Bi~[é(v)], and (**)
follows by Lemma 6. If v belongs to the BI path, I ⊆ [é(v)], and by Lemmas 8, 2, 6, and 4, we get
Thus (**) holds for any v ∈ Vi, and applying Lemma 7 repeatedly yields CB(I) ⊆ CB(CPI). Q.E.D.
CB(RR ∩ CPI) ⊆ CB(I). To prove the converse, note that Lemmas 6, 7, 8, and 11 yield CB(I) ⊆
Proof of Theorem 2. Let v ∈ Vi, and v- ∈ α(v), v- ≠ é(v). As πi(â(v)) > πi(â(v-)), the definition of RR
implies:
Using the fact that (E∩F→G) ∩ E ⊆ F→G, the latter two inclusions yield
For any decision node v on the BI path, we get, from Theorem 1, and Lemmas 4, 6, and 8,
Thus, for any v- that can be reached by one deviation from the BI path, we have
Denote by D the set of all nodes reached by one deviation from the BI path. As can be shown by a
simple induction, ⋂v-∈D ~[v-] = I. We thus get RR ∩ CB(RR ∩ CPI) ⊆ I, and by Theorem 1,
RR ∩ CB(RR ∩ CPI) ⊆ CB(I) ∩ I. The converse follows from Theorem 1 and Lemma 11. Q.E.D.
Proof of Lemma 13. Consider a decision node v ∈ Vi, and assume that the assertion of the lemma is
true for all v′ ∈ α(v).
At this state, some v- ∈ α(v), v- ≠ é(v) must be reached. I.e., for some such v-,
Proof of Theorem 3. Let v = v0 in the statement of Lemma 13, and note [v0] → [â(v0)] = I. Q.E.D.
Proof of Theorem 4. Assume that, contrary to the assertion of the theorem, there is a state ω ∉ I. Note that
~I = ⋃v-∈D[v-]. Then ω ∈ [v-] must hold for some v- ∈ D. Let v ∈ Vi be such that v- ∈ α(v). Because of
ω ∈ PAS, si(ω)(v) = v-. Clearly, [si(v) = v-] ∩ PAS ⊆ [v]→[v-]. Thus, by the definition of KOS, and Lemmas
Proof of Remark 3. Consider a structure where Ω = {ω}, o(ω) = â(v0), Ki(ω) = Ω for all i, and s(ω) ≠ é. Then
RR ∩ CB(RR ∩ CPI) = CB(RR ∩ CPI) = Ω. Q.E.D.
To prove Theorem 5, we need the following generalization of the forward belief operator. Let (v1,
v2, ..., vm) be the path from node v1 (not necessarily the origin) to the immediate predecessor of v,
i.e., v ∈ α(vm), and vμ+1 ∈ α(vμ) for μ = 1,...,m-1. Let qv,v1 = (i1, i2, ..., im) be the corresponding sequence
of players, i.e., iμ = i(vμ) for all μ, and let Qv,v1 be the set of all (non-empty) subsequences of qv,v1. We
then define:
Proof of Lemma 16. Assume the assertion is true for all u′ > u, u′ ∈ V. (If α(u) ∩ V = ∅, the assertion
must be true, as w > u then implies w = â(w), and thus [w] → [â(w)] = Ω.) Thus, if u′ ∈ Vi′,
for all decision nodes w′ > u′, and in particular for w′ ∈ α(u′). Thus we get
As
we get
Moreover,
Proof of Theorem 5. First, observe that for any u ∈ V, the following holds:
In particular, for an arbitrary u ∈ Vi, we can take w ∈ α(u). Doing so, we infer
Q.E.D.
Proof of Remark 4. Let v ≤ w stand for v < w or v = w. Consider the following belief structure:
Ω := Z,
o(t) ≡ t, and
Let é(i) ≤ t. Then Ki(t) = {â(a): a ∈ α(i)}, which implies t ∈ CPi and t ∈ Bi([a] → [â(a)]) for all a ∈ α(i).
Because of πi(â(é(i))) > πi(â(a)) for all a ∈ α(i)\{é(i)}, the latter implies t ∈ RRi. Furthermore, if i < t does
not hold, t ∈ CPi and t ∈ RRi trivially hold. This gives â(v0) ∈ RRi ∩ CPi.
For any set of states X, define Kj(X) as the union of Kj(t) for all t ∈ X. As is well known,
t ∈ CBN\{i}(E) if and only if Kj(...Kk(t)...) ⊆ E for all sequences j,...,k in N\{i}. We will show for any
t ∈ Kj(...Kk(â(v0))...), where j,...,k is a sequence in N\{i}, that either é(i) ≤ t or not i < t holds. This is
clearly the case for t ∈ {â(v0)}. Now assume our claim has been shown for all sequences of length n,
and let t ∈ Kj(Kl(...Kk(â(v0))...)), where j,l,...,k is a sequence of length n+1. If not i < t, we are done.
Otherwise, note that there must be some t′ ∈ Kl(...Kk(â(v0))...) with t ∈ Kj(t′) and j < t′ or t = t′. In the latter
case, the induction hypothesis yields é(i) ≤ t. In the former case, we have j < t = â(a) for some a ∈ α(j).
Thus j < i implies é(i) ≤ t, and i < j implies i < t′, in which case the induction hypothesis yields é(i) ≤ j < t.
As shown above, we thus have Kj(...Kk(â(v0))...) ⊆ RRi ∩ CPi for any sequence j,...,k in N\{i}, which
gives â(v0) ∈ CBN\{i}(RRi ∩ CPi). Q.E.D.
References
Aumann, R. (1995), "Backward Induction and Common Knowledge of Rationality," Games and
Economic Behavior 8, 6-19.
Balkenborg, D. and Winter, E. (1997), "A necessary and sufficient epistemic condition for playing
backward induction," Journal of Mathematical Economics 27, 325-345.
Ben-Porath, E. (1997), "Rationality, Nash Equilibrium and Backwards Induction in Perfect-
Information Games," Review of Economic Studies 64, 23-46.
Brandenburger, A. (1998), "On the Existence of a 'Complete' Belief Model," HBS Working Paper
99-056.
Clausing, T. (1999), The Logical Modeling of Reasoning Processes in Games, doctoral thesis.
Fagin, R., Halpern, J., Moses, Y., and Vardi, M. (1995), Reasoning about Knowledge, Cambridge,
MA: MIT Press.
Kramarz, F. (1993), "How agents plan their actions in games: a model of players' reasoning with
Rabinowicz, W. (1997), "Grappling with the Centipede: Defence of Backward Induction for BI-
terminating Games," mimeo.
Vilks, A. (1999), "Knowledge of the Game, Relative Rationality, and Backwards Induction",