
Trandafira Andrei

Reasoning about Knowledge in the Real World

Abstract

The main aim of this paper is to devise an intuitive tool for representing how real-world
agents reason about knowledge, especially when they find themselves in interactive settings. For
this purpose, some basic concepts of epistemic logic will be introduced, including the classical
tool of possible-worlds semantics. Then, I will present the problems of logical omniscience that
the classical model faces. Building upon ideas taken from Formal Concept Analysis, I will
present a new semantic tool for representing knowledge, one which is adequate for representing
how real-world agents reason about knowledge. One of the core ideas of the new model is that
there are two different and very important types of ambiguity in reasoning about knowledge in
interactive contexts: ambiguity on the part of the representing agent and ambiguity on the part
of the represented agent. Another important point that will be touched upon is the interpretation
of the role played by common knowledge in the way agents reason about knowledge. In this
respect, common knowledge will be shown to have the behavior characteristic of the modal
operator of necessity.

I. Classical and bounded rationality

Leaving aside the question of whether our social world can be explained fully by starting
from individual actions, it is nonetheless obvious that individual actions play a determinant role
in shaping the social realm. Nowadays, one of the most established frameworks for modeling
human action, behavior and interaction is Rational Choice Theory (RCT). RCT spans from a
very abstract level, where its basic assumptions are nearly analytic, to more concrete levels
where substantial assumptions about the agents' structure are made, which, in turn, license
inferences concerning the consequences of actions and future states of the world. At its abstract
end, RCT states that individuals will always act towards maximizing their expected utility, or in
other words towards realizing their preferred alternatives. At the most abstract level, this
amounts to accepting that, ceteris paribus (with regard to both his mental states and the external
context), the agent will always choose the same alternative from the same set; or at least that he
will always (in respect of the ceteris paribus clause) employ the same probabilistic rule for
selecting his choice. This basic assumption is grounded in two ways.

First, if this type of ceteris paribus clause did not hold, then we would not be in a position
to establish epistemic correlations between different features of our world. That is, if we could
not abstract the set of invariant features pertaining to a certain context, then we could not
discover any co-dependence among features. It is only because we are in a position to abstract
such a set that we can acquire the type of knowledge expressed, for example, in causal
statements. And second, it seems quite absurd to imagine someone who would claim that, ceteris
paribus, he will select different alternatives, and that this behavior is rationally consistent. In
other words, if one accepts that he would proceed in a different manner were he given a second
chance to act in a past situation, then this can only be taken to mean that he admits his choice in
the first instance was not the best he could have made. In any situation, an agent has a limited
set of alternatives to choose from, based on his abilities and on the external context; and given
his preferences, the alternatives available to him are objectively ranked with respect to the
degree to which their accomplishment could satisfy his preferences. This still holds even if the
agent in question does not know which of the alternatives is best, although, if he is minimally
rational, he will know that some of his choices would be more appropriate than others.

This general point can be seen as the common ground from which classical rationality
and bounded rationality depart from one another. The difference arises at the level of the
substantial assumptions made concerning the information and the abilities of the agents. In the
classical model the agents have complete information regarding their situation, including the
fact that they have access to all the alternatives available and the fact that they can reliably
choose the one which best fulfills their preferences. In short, they are not only perfect reasoners,
but also have complete information regarding the facts relevant to the situation. The unrealistic
nature of these assumptions provided the rationale for building a new model, based on more
realistic assumptions concerning the resources of agents. In this new model, the agents are no
longer perfect, either in carrying out their reasoning or in obtaining all the relevant information.

Each of the models has its pros and cons. The classical model offers the advantage of
using formal methods, which are an invaluable tool for modeling with precision; its
disadvantage, though, lies in the fact that it deals with idealized agents, and thus it is at a loss in
explaining the behavior of real agents. On the other hand, the bounded model carries the promise
of explaining the behavior of real agents, not just of idealized ones; but, at the same time, it runs
the risk of making choices seem arbitrary rather than rational, and it also does not fare very well
in the company of formal models.

The line of this work lies in the middle ground between these two extremes of the
rationality spectrum. Its aim is to retain the advantages of both perspectives, while leaving
behind their disadvantages. Therefore, it aims at a model which is at the same time formal and
has as its target the real agent. The starting point will be epistemic logic, which can be seen as
opening up this type of middle ground.

II. Epistemic logic

Epistemic logic entered its modern stage with Jaakko Hintikka's book "Knowledge and
Belief" (Hintikka, 1962). Hintikka's idea, which practically opened the door to an entire field of
enquiry, was to model knowledge using the framework of possible worlds. One way of
understanding the framework of possible worlds is to see it as formalizing the idea that things
could have been different. Usually, we say that what happens is real, as opposed to what does
not happen, which is not real. Accordingly, the real world is the one formed by all the things that
happen. For example, it is true that George Bush is the president of the United States, and this is
a fact about the world in which we live. But we can understand that things could have been
different: George Bush could have lost the elections, or he could have even chosen not to run for
president. Another familiar idea which can be formalized with this framework is that of the
indeterminacy of the future. We usually conceptualize the future by thinking that it could unfold
in different ways. For example, we usually believe that it might rain tomorrow in our town, but
at the same time we also believe that it might not rain tomorrow in our town. Another way to
express this is to say that both future states are possible.

The specific notions of modal logic are those of necessity and possibility. A proposition p
is necessary if in every possible world it is true that p, and a proposition p is possible if there is
at least one possible world in which it is true that p. Hintikka used the notion of necessity to
explain what it is for an agent a to know that p. Namely, a knows that p if and only if p is true in
all alternative states that the agent a considers possible. Thus, knowledge that p is interpreted as
a kind of necessity:

I know that p just in case I think that p is necessarily true, that it holds in every one of
the alternative ways I think the world might actually be.

Perhaps it is best to start explaining this idea in detail by using an example taken from
Geanakoplos (1992), to show its intuitive appeal. Suppose that three children A, B and C are
playing in the courtyard and each of them gets his face dirty during play, without knowing it.
When they get inside, their teacher gathers all of them and tells them: 'At least one of you has a
dirty face'. Furthermore, every

child sees only the faces of the other two kids, but not his own face. After she makes the
announcement, the teacher asks A whether his face is dirty or not, then she asks B the same
question and in the end C is also asked whether his face is dirty or not. All the questioning takes
place publicly, and every child hears the answers of everybody else. What will they answer?

Applying the definition of knowledge, it follows that a kid will know that his face is dirty
just in case this is true in all alternatives he conceives to be possible, and he will know that his
face is clean just in case that is true in all alternatives he conceives to be possible. But what is
the set of alternatives conceived as possible by each kid? If they were asked privately, before
they could see each other, whether they think they are dirty or not, each would conceive of two
relevant alternatives: the one in which he is dirty, and the one in which he is clean.
Consequently, each would come to know either that he is dirty or that he is clean just in case he
managed to narrow down the range of alternatives to only one of them. In other words, if they
are to know that they are dirty (or clean), then it would have to be true in all alternative states
that they are dirty (or clean); and this comes down to regarding the alternatives in which the
opposite is true as impossible. A simple route to manage this would be if they could see
themselves and were perceptually reliable. Unfortunately for them, this move is not available in
their case. Nevertheless, they have to accomplish the same task (of disregarding one of the
alternatives as impossible) if they are to know that their face is one way or another. So, how
could they manage to do it?

Prior to seeing the others' faces and prior to hearing the public announcement that
there is at least one dirty face, each of the kids faces the full range of alternatives and does not
have any information to disregard any one of them [2]:

[2] Where A, B, C denote the child who conceives the alternatives in the respective table; a, b, c are the three
children in regard to the state of their faces; S1-S8 are the possible alternatives; a 1 in a cell expresses that that kid
has a dirty face in that state, and a 0 expresses that that kid has a clean face in that state. (In the original tables,
thick lines marked an alternative as disregarded by the representing agent, and interrupted lines marked the fact
that the representing agent knows that the corresponding represented agent disregards that alternative.)
A   S1  S2  S3  S4  S5  S6  S7  S8
a    1   1   1   1   0   0   0   0
b    1   0   1   0   1   0   1   0
c    0   0   1   1   0   0   1   1

B   S1  S2  S3  S4  S5  S6  S7  S8
a    1   1   1   1   0   0   0   0
b    1   0   1   0   1   0   1   0
c    0   0   1   1   0   0   1   1

C   S1  S2  S3  S4  S5  S6  S7  S8
a    1   1   1   1   0   0   0   0
b    1   0   1   0   1   0   1   0
c    0   0   1   1   0   0   1   1

Each one of the kids will come to know either that his face is dirty or that it is clean if he
manages to disregard some of the alternatives in his table as impossible, so as to finally remain
only with alternatives in which his face is dirty, or clean, respectively. Once the public
announcement stating that at least one of them is dirty is made, each of them is in a position to
dismiss the S6 alternative. Furthermore, each knows that the other two have also dismissed the
S6 alternative, because they heard the announcement together. Also, each one of them will be in
a position to dismiss six alternatives by means of perceiving the other two kids. A will dismiss
alternatives S1, S2, S4, S5, S6 and S8; B will dismiss S1, S2, S5, S6, S7 and S8; and C will
dismiss S2, S4, S5, S6, S7 and S8. In addition, each two kids know that they saw the third one.
It follows that
besides dismissing some alternatives after seeing the others’ faces, each kid knows some of the
alternatives the others have dismissed on similar (perceptual) grounds. A will know that B
dismissed the alternatives in which C has a clean face (S1, S2, S5 and S6) and also that C
dismissed the alternatives in which B has a clean face (S2, S4, S6 and S8); B will know that A
dismissed the alternatives in which C has a clean face (S1, S2, S5 and S6) and also that C
dismissed the alternatives in which A has a clean face (S5, S6, S7 and S8); and C will know that
A dismissed the alternatives in which B has a clean face (S2, S4, S6 and S8) and also that B
dismissed the alternatives in which A has a clean face (S5, S6, S7 and S8).

At this point, the kids are asked to say whether they think they are dirty or not. A, who is
first, does not have sufficient information to dismiss either S3 or S7 (the two alternatives that he
considers possible). Therefore, he neither knows that his face is dirty nor that his face is clean,
and he will say that he does not know whether his face is dirty or not. Does A’s answer give any
new information to B? Previously, B was in a position to know that the alternatives considered
possible by A belong to the set {S3, S4, S7, S8}, more precisely that A either considers possible
S3 and S7 (in case B has a dirty face) or S4 and S8 (in case B has a clean face); nonetheless A’s
answer is consistent with each of these four alternatives, so B does not learn any new
information. But what about what C learns? His situation is similar to B's; he already knew that
A considers possible either S1 and S5, or S3 and S7. However, C learns that B now knows that
S2 is not the case; for otherwise A would have answered that his face is dirty (having seen that
both B and C have clean faces, A would have known that he must be the dirty one). Therefore,
at this point C updates the set of states he takes B to consider impossible by adding S2 to it; the
new set is now {S2, S5, S6, S7, S8}.

B, who is second, does not have sufficient information to dismiss either S3 or S4 (the two
alternatives that he considers possible). Therefore, he neither knows that his face is dirty nor that
his face is clean, and he will say that he does not know whether his face is dirty or not. Does B's
answer give any new information to C? As we already saw, prior to B's answer, C had already
narrowed down the set of alternatives that he thinks B might consider possible to {S1, S3, S4}.
Of these three, S1 is the only alternative in which C's face is clean. This means (for C) that if B
saw that C's face is clean, then he would also dismiss S3 and S4 from the set of possible
alternatives. Thus, B would be left with only one state that he considers possible (S1) and he
would know that S1 is the actual state of affairs. In this case, B would know that his face is dirty
and would answer accordingly. On the contrary, B would answer that he does not know whether
his face is dirty or not just in case he saw that C's face is dirty; for then he would dismiss S1, but
could dismiss neither S3 nor S4. Therefore, as B answers that he does not know, C is in a
position to update the set of states he considers impossible by adding S1 to it. Thus, C is left to
consider only S3 as possible. He will know that his face is dirty (in accordance with S3).

III. Possible-worlds semantics and logical omniscience

Epistemic logic aims to model a world which consists of a set of agents embedded in an
objective reality ('nature'). The agents are named in the formal language of epistemic logic with
$1, \dots, m$, and nature is described by a set of primitive propositions $\Phi = \{P, Q, R, \dots\}$.
The language $\mathcal{L}(\Phi)$ is defined as the least set of formulas containing $\Phi$, closed
under $\neg$, $\wedge$ and the modal operators $K_1, \dots, K_m$. Thus, if $\varphi$ and $\psi$
are formulas of $\mathcal{L}(\Phi)$, then so are $\neg\varphi$ (not $\varphi$),
$\varphi \wedge \psi$ ($\varphi$ and $\psi$) and $K_i\varphi$ (agent $i$ knows that $\varphi$)
for $i = 1, \dots, m$. This syntactic part has the role of imposing well-formedness conditions
upon the formulas of the formal language that we use to codify epistemic relations. A formula
which is not syntactically correct cannot take any value and is not capable of expressing any
meaningful content; any formula which is syntactically correct, on the other hand, is capable of
expressing meaningful content.

Once the syntax of a logical language is set up, it is possible to set up its semantics as
well. The job of semantics is to attach content to the formulas and to provide a method for
evaluating them. For example, in classical logic, semantics is responsible for assigning truth
values to well-formed formulas. As was already pointed out, knowledge is commonly modeled
by using possible-worlds semantics. The main formal model for possible-worlds semantics,
Kripke structures, was introduced by Kripke (1963) and was originally intended for the modal
logic of necessity and possibility.

A Kripke structure $M$ is a tuple $(S, \pi, \mathcal{K}_1, \dots, \mathcal{K}_m)$, where $S$
is a set of states, $\pi(s)$ is a truth assignment to the primitive propositions of $\Phi$ for each
state $s \in S$, and $\mathcal{K}_i$ is a binary relation on the states of $S$, for
$i = 1, \dots, m$. A world is a pair $(M, s)$, where $M$ is a Kripke structure and $s \in S$.
Usually, $\mathcal{K}_i$ is called an accessibility relation and represents the set of worlds
considered possible by agent $i$ when he is situated in some world: $(s, t) \in \mathcal{K}_i$
if, in world $(M, s)$, agent $i$ considers $(M, t)$ a possible world. Another way of expressing
the same thing is $t \in \mathcal{K}_i(s)$, which holds if agent $i$ in world $s$ considers world
$t$ as being possible. The notion that a formula is true at a world is formally defined via the
relation $\models$, a binary relation between worlds and formulas, where
$(M, w) \models \varphi$ is read "$\varphi$ is true at $w$" or "$w$ satisfies $\varphi$". One
example of a clause defined using $\models$ is:

$(M, s) \models K_i\varphi$ iff $(M, t) \models \varphi$ for all $t$ such that $(s, t) \in \mathcal{K}_i$

which expresses Hintikka's intuition that an agent $i$ knows $\varphi$ in some world $(M, s)$
exactly if $\varphi$ is true in all worlds that $i$ considers possible.
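To make the clause concrete, here is a minimal sketch of a Kripke structure and of the truth clause for $K_i$; the encoding (dictionaries for $\pi$ and for the relations $\mathcal{K}_i$) is mine and is only meant to mirror the definitions above:

```python
# A minimal sketch (my own encoding, not the paper's) of a Kripke structure
# and of the truth clause for the knowledge operator K_i.

from typing import Dict, Set, Tuple

State = str
# A model is (S, pi, K): states, a truth assignment per state, and one
# accessibility relation per agent.
Model = Tuple[Set[State],
              Dict[State, Dict[str, bool]],
              Dict[str, Set[Tuple[State, State]]]]

def knows(model: Model, agent: str, s: State, prop: str) -> bool:
    """(M, s) |= K_i(p) iff p is true at every state t with (s, t) in K_i."""
    _, pi, K = model
    return all(pi[t][prop] for (u, t) in K[agent] if u == s)

# Two states; p holds at s1 but not at s2, and agent i cannot tell them apart.
M: Model = ({"s1", "s2"},
            {"s1": {"p": True}, "s2": {"p": False}},
            {"i": {("s1", "s1"), ("s1", "s2"), ("s2", "s1"), ("s2", "s2")}})

print(knows(M, "i", "s1", "p"))   # False: the p-less state s2 is accessible
```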

As shown by Halpern and Moses (1992), if we want to model knowledge using Kripke
structures, then we have to accept certain constraints on the types of knowledge that we can
model. These constraints are captured by the following axiom system (K), which consists of two
axiom schemas:

A1. All tautologies of the propositional calculus

A2. $(K_i\varphi \wedge K_i(\varphi \Rightarrow \psi)) \Rightarrow K_i\psi$

and two rules of inference:

R1. From $\varphi$ and $\varphi \Rightarrow \psi$ infer $\psi$ (modus ponens)

R2. From $\varphi$ infer $K_i\varphi$ (generalization)

Other axioms commonly considered to adequately represent properties of knowledge
are:

A3. $K_i\varphi \Rightarrow \varphi$

A4. $K_i\varphi \Rightarrow K_iK_i\varphi$

A5. $\neg K_i\varphi \Rightarrow K_i\neg K_i\varphi$

These axioms are obtained by imposing different restrictions upon the accessibility
relation, which is unrestricted in the K system. Requiring that the accessibility relation is
reflexive leads to A3; requiring that it is transitive leads to A4; and requiring that it is Euclidean
leads to A5. Adding A3 to K leads to the system known as T; adding A4 to T leads to the system
known as S4; and adding A5 to S4 leads to the system known as S5.

As noticed by Hintikka (1975), although possible-worlds semantics can be adapted to
model knowledge (as presented in the previous section), this strategy suffers from a serious
drawback; namely, it implies logical omniscience on the part of the reasoners. The various
problems associated with logical omniscience, as summarized by Sim (1998), are:

- Consequential closure: if an agent knows a set of formulas Γ, then he also knows every
formula α which is logically implied by Γ. This problem is due to the fact that consequential
closure does not have as its scope what an agent directly believes, but what propositions would
be true if the ones believed by the agent (Γ) were true;
- Irrelevant beliefs: if an agent's knowing that p is understood as p being the case in all
possible worlds, then it follows that the agent will believe all the tautologies (because they are
true in all possible worlds). In other words, if the condition of truth in all possible worlds is
taken to be sufficient (and not just necessary) for knowledge, then we have to accept this type of
logical omniscience;
- Inconsistent beliefs: an agent cannot believe both a proposition and its negation without
believing every other proposition. This follows because there is no world in which a proposition
and its negation are both true;
- Computational intractability: this is due to the fact that agents are required to compute
all the logical consequences of their beliefs, a task which is usually out of the reach of bounded
real-world agents.

These consequences are clearly undesirable if we want to represent the knowledge of
real-world agents, and they therefore make classical possible-worlds semantics unfit for this
task. In what follows I will search for a different interpretation which does not suffer from the
problem of logical omniscience, and which is thus suited for representing the knowledge of
real-world agents.

IV. Worlds in the eyes of the beholder

A's perspective in the actual world is constituted by the set of propositions which A
assumes to be true in the actual world. It is typical of our epistemic condition that in many
situations we do not have a sufficiently wide perspective to distinguish the actual world from
every other possible world; to arrive at a point at which we conceive of only one world as
possible; and consequently to regard that world as actual. In our example, A's perspective on the
actual world is constituted by what he sees and by what he heard from the teacher. In other
words, A assumes that the information obtained from these two sources is correct about the
actual world. Thus every world that he conceives as possibly being the actual world must be
consistent with that information. If we keep in mind that the worlds which an agent conceives as
possibly actual in a certain situation are conceived from the agent's perspective in the actual
world, not from the actual world itself, then we can represent the epistemic accessibility relation
in a different manner. The idea is to take the accessibility relation not as holding between the
actual world and the worlds that the agent considers possible, but as holding between the agent's
perspective in the actual world and the worlds he considers as possibly actual.

In our example each kid considers the same set of objects (i.e. the three faces) and also
considers the same set of attributes that those objects could have (i.e. dirty or clean); they also
know that the others consider the same sets. Also, they know that each object has only one of the
attributes, and for some objects they know which attribute it possesses. But, for each kid, there
is some object in the set for which he does not know whether it has one or the other attribute
(i.e. his own face). Another way to put this is to say that their knowledge is complete with
respect to the relevant sets of objects and attributes (i.e. they know how many kids there are and
what attributes a face might have); and that their knowledge is incomplete with respect to which
attributes some of the objects have. For every object about which they do not know which
attribute it has, they nevertheless know that it possesses exactly one of the two.

They do know which worlds might be actual, but they do not know which one of them
really is the actual one. Indeed, the point of the example is to show how someone can come to
know which world is the actual one. As long as the sets of objects and attributes are considered
to be complete, the actual world will be identified when it is settled, for every object of the set,
which attributes it has. In this case an alternative is a unique assignment of attributes to objects,
with the condition that each object must be assigned exactly one attribute. Thus, we obtain the
eight possible alternatives S1-S8. The process of settling upon an alternative as the actual one is
carried out by successively eliminating alternatives as possible candidates for being actual, until
there is only one alternative left [3].

Because, relative to an agent, the actual world is arrived at only at the end of the
epistemic process and is not available as a starting point, this casts serious doubt upon the
adequacy of modeling knowledge (in these cases) with the classical tool of Kripke structures. In
a Kripke structure the actual world (indexed to an agent) is the one from which the epistemic
alternatives are accessible. Although considering accessibility relations from the actual world
might make sense in the context for which they were originally devised (i.e. the metaphysics of
possible worlds), in many epistemic contexts it is not known which world is actual, and
therefore it does not make much sense to consider accessibility relations starting from the actual
world. In these cases, if we take the actual world as one of the conceived alternatives, then the
agent will not be in a position to identify the accessibility relations starting from the actual
world.

[3] Of course, it is possible to eliminate all alternatives but one and, nonetheless, to be in a position to eliminate
this last alternative as well. This would mean either that the set of alternatives was not correctly assumed to be
complete (i.e. some alternatives were left out in the process, or some relevant facts were not considered, etc.), or
that some error occurred during the process. But, if the set of alternatives is correctly assumed to be complete and
the process is error-free, then the last remaining alternative is necessarily the one which is actual.

For example, Aumann (1999) starts with a set Ω whose members are called states of the
world, and an event is defined as a subset of Ω. In our example, Ω is formed by the states S1 to
S8, and the event that A's face is dirty is identified with the set of states in which it is true.
Aumann assumes as given a function $k$ on Ω, called the knowledge function; $k$ ranges over
an abstract set whose members represent the different states of knowledge of a certain individual
$i$ (e.g. $k_i(\omega)$ represents the knowledge that $i$ has in world $\omega$). Further, he
defines the set of states that an agent cannot distinguish from the true state of the world $\omega$:

$I(\omega) = \{\omega' \in \Omega : k(\omega') = k(\omega)\}$

Thus $I(\omega)$ is the set of states that an agent considers possible (e.g. for C in our example,
$I(\omega) = \{S1, S3\}$). Aumann assumes that $\omega$, which is the true state of the world
(or the actual world), is also a member of the set $I(\omega)$. However, this is a very strong and
hard-to-justify assumption. It presupposes that even if we are not infallible in our ability to
identify outright which is the actual state, we are nevertheless infallible in our ability to include
$\omega$ in the set $I(\omega)$. After all, if we do not always know which the actual world is,
why should we always be able to include the actual world in the set of worlds that we consider
possible? Although there are many cases in which the set of relevant facts is completely
established (e.g. our example), and as a consequence the actual world must be one in which one
of the possible combinations of relevant facts holds, there are also cases in which, for example,
an agent might acquire false information which will lead him to disregard the (objectively)
actual world as a candidate for the (subjectively) actual world. This means that, indeed, at any
stage of an agent's reasoning about identifying the actual world from the set of possible worlds
compatible with his information, as long as he is not aware that he has made some mistake, he
will regard some (yet unidentified) possible world as the actual world.
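Aumann's construction can be made concrete for the dirty-faces example. In the following sketch (my own encoding; the helper names signal, I_of and K_of are not Aumann's notation) the agent's signal is what he perceives, the information set collects the states yielding the same signal, and the knowledge operator maps an event E to the set of states in which the agent knows E:

```python
# A concrete rendering of Aumann's setup for the dirty-faces example.

from itertools import product

OMEGA = list(product([0, 1], repeat=3))      # states (a, b, c); 1 = dirty

def signal(i, omega):
    """What agent i observes in state omega: the two other faces."""
    return tuple(v for j, v in enumerate(omega) if j != i)

def I_of(i, omega):
    """I(omega): the states agent i cannot distinguish from omega."""
    return [w for w in OMEGA if signal(i, w) == signal(i, omega)]

def K_of(i, event):
    """The event 'agent i knows E': states whose information set lies in E."""
    return [w for w in OMEGA if set(I_of(i, w)) <= set(event)]

actual = (1, 1, 1)
print(I_of(2, actual))       # C's information set: [(1, 1, 0), (1, 1, 1)],
                             # i.e. S1 and S3 in the numbering used above
dirty_c = [w for w in OMEGA if w[2] == 1]
print(actual in K_of(2, dirty_c))   # False: C does not yet know he is dirty
# Note that omega is in I_of(i, omega) by construction: this encoding builds
# in exactly the assumption questioned in the text.
```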

A more promising strategy seems to be to consider accessibility relations which start
from an incompletely specified world, which the agent assumes to be a correct, although not a
complete, description of the actual world. First I will describe such a formal model (known by
the name of Formal Concept Analysis [4]), and then I will discuss its advantages over the
classical one.

[4] The presentation below is taken from (Obiedkov, 2002).

A formal context is defined as a triple of sets $(G, M, I)$, where $G$ is called a set of
objects, $M$ is called a set of attributes and $I \subseteq G \times M$ is called the incidence of
the context. A many-valued context is a quadruple $(G, M, W, I)$, where $G$ and $M$ are
respectively the sets of objects and attributes, $W$ is a set of attribute values, and $I$ is a
mapping $I: G \times M \to W$. A special case of many-valued contexts is that of incomplete
contexts, where $W = \{+, -, ?\}$. For incomplete contexts, $I(g, m) = +$ if it is known that the
object $g$ has the attribute $m$, $I(g, m) = -$ if it is known that $g$ does not have $m$, and
$I(g, m) = ?$ otherwise. Practically, the '?' value is just a placeholder for one of the two other
values; when '?' is assigned to an object/attribute pair it means that it is not settled/known
whether that pair has the '+' value or the '-' value.

If in an incomplete context $K$ we have $I(g, m) \neq ?$ for all $g \in G$ and
$m \in M$, then we say that $K$ is equivalent to the formal context $(G, M, J)$ where
$J = \{(g, m) : I(g, m) = +\}$. This is to say that an incomplete context, once it has every value
settled as + or -, will express the same thing as a formal context which assigns + values to the
same object/attribute combinations. Also, an incomplete context $K_2$ is called an extension of
an incomplete context $K_1$ if: wherever a pair in $K_1$ has the value '+' or '-', that pair has
the same value also in $K_2$; and there is at least one pair which has the value '?' in $K_1$ but
which in $K_2$ has one of the '+' or '-' values. In simpler terms, an extension of an incomplete
context is a context in which all that was settled is preserved, but some things that were
previously not settled are now settled. An extension which is equivalent to a formal context is
called a completion. That is, one arrives at a completion by settling all the '?' values of an
incomplete context. All completions of an incomplete context form its completion set.

Let us apply this formal model to our example, to see how well we can represent the
knowledge of the three kids with it. The states of knowledge of the three kids after they see each
other, but before they get to answer whether they think they are dirty or not, can be represented
by the following tables:

A   dirty        B   dirty        C   dirty
a     ?          a     +          a     +
b     +          b     ?          b     +
c     +          c     +          c     ?

It seems pretty clear that representing their knowledge by using incomplete contexts is at
best a starting point, because by itself it does not provide us with a way to represent how our
kids reason from that starting point. The next straightforward step is to unfold these incomplete
contexts (which represent the information available from our kids' perspectives) into their
completion sets. The completion set of the incomplete context which represents A's actual state
of knowledge is formed by:

A   dirty         A   dirty
a     +    and    a     -
b     +           b     +
c     +           c     +

It seems natural to interpret the completion set of an incomplete context as the set of
worlds conceived as possible by an agent whose actual state of knowledge is represented by that
incomplete context. Also, the definition of knowledge as truth in all worlds conceived as
possible by an agent holds in this interpretation: any object/attribute pair which takes one of the
'+' or '-' values in the incomplete context representing an agent's state of knowledge is one
which has the same value in every member of the completion set. Further, it seems promising to
define an accessibility relation which holds between incomplete possible worlds (which
represent an agent's state of knowledge, or his perspective) and their completion set (which
represents the possible worlds which the agent considers to be possible) [5]. Let us now take
into account the way in which agents represent the states of knowledge of others.

[5] Such a modal logic will be developed in further work. The approach will build upon the work of Miroiu (1999)
on mirroring worlds. The idea is to start with two distinct sets of possible worlds, one formed by the agents'
perspectives, and one formed by the alternatives of the situation. Then, the worlds from the first set (agents'
perspectives) will be taken to mirror (partially or completely) the worlds from the second set. Consequently, it will
be shown that agent A knows p if p is the case in all the worlds mirrored from his perspective; and it will be shown
that agent A knows which world w from the second set is actual just in case w is completely mirrored from his
perspective.

Although in the case of one agent there can be only one type of ambiguity (i.e. not
knowing whether an object has an attribute or not), and therefore one value (?) is enough to
express it, in the case of more than one agent we can distinguish two types of ambiguity. The
first is the situation in which the represented agent does not know whether some object has
some attribute or not, and the second is the situation in which the representing agent knows that
the represented agent has assigned a substantive value (i.e. + or -) to a pair, but does not know
which value is in fact assigned by the represented agent. To exemplify the difference: the first
ambiguity appears when B represents A's knowledge and has to represent the fact that A does
not know whether he (A) is dirty or not; the second ambiguity appears when B represents A's
knowledge and has to represent the fact that A knows the state of his (B's) face, without B
knowing which fact it is that A knows. Put in other words, the first ambiguity is due to
shortsightedness on the part of the represented agent, and the second is due to shortsightedness
on the part of the representing agent. Yet another way to make the distinction is in terms of
probabilities: the first ambiguity amounts to the representing agent attributing probabilities to
the represented agent, and the second amounts to the representing agent attributing probabilities
to himself.

How should we represent these ambiguities in such a way as to permit us to keep track of
the alternatives that are left open in each case? The first type of ambiguity is the one that can be
found also in the case of one agent (because in that case the representing agent is also the one
being represented). To mark it we will use the symbol '?'. Whenever we attribute a '?'
concerning some bit of information (i.e. an object/attribute pair), it will mean that the agent
whose knowledge is being represented does not know which of the extensions of that
incomplete context (obtained by replacing that specific '?') correctly represents how things
actually are. On the other hand, the second type of ambiguity, which will be marked by the
symbol '!', means that the representing agent is not in a position to know which completion set
the represented agent considers. In these cases, the best the representing agent can do is to know
the set of the possible completion sets (that the represented agent might consider).

Using these two values along with the two substantive ones, we can easily represent the
knowledge one agent has about the situation, the knowledge that he has about the knowledge
that another agent has about the situation, and so on, in the following way (a sketch of the
translation is given after the list):

- For every atom (i.e. object/attribute pair) such that the representing agent knows either
that the object has the respective attribute or that it does not have it, we write the value '+', or
respectively the value '-';
- For every atom such that the representing agent knows that the represented agent does
not know whether the object has that attribute or not, we write the value '?';
- For every atom such that the representing agent knows that the represented agent
knows whether the object has that attribute or not, but the representing agent does not know
which of the two is known by the represented agent, we write the value '!';
- Then, we build the set of alternatives which expresses the state of knowledge of the
representing agent by using the following translation rules from the value-assigned atoms to sets
of possible worlds:
  - R(?): for every atom with the value '?' we will have a set of two possible worlds, one
in which that atom has the value 1 and one in which it has the value 0. Each of these sets
represents a situation (of knowledge) in which the represented agent might find himself;
  - R(!): for every atom with the value '!' we split each situation (set of possible worlds)
formed by R(?) into two further sets of possible worlds; in one set the atom gets the value 1 (in
every world of that set), and in the other it gets the value 0. The sets formed by R(!) represent
the alternatives considered by the representing agent for representing the situation of the
represented agent. The relation between the sets formed by R(!) is one of exclusive disjunction
(they are regarded as second-order possible worlds, one of them being actual for the represented
agent);
  - R(+): in every possible world that atom will have the value 1;
  - R(-): in every possible world that atom will have the value 0.
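As promised, here is a sketch of the translation rules (my own implementation of the method just described; the encoding is illustrative only). The '!' atoms generate the mutually exclusive second-order sets, the '?' atoms generate the worlds inside each set, and the '+'/'-' atoms stay fixed throughout:

```python
# A sketch of the translation rules R(+), R(-), R(?), R(!).

from itertools import product

def translate(context):
    """Map a {+, -, ?, !}-valued context to a list of alternatives, each
    alternative being a set (here a list) of possible worlds."""
    bangs  = [atom for atom, v in context.items() if v == "!"]
    quests = [atom for atom, v in context.items() if v == "?"]
    fixed  = {atom: (1 if v == "+" else 0)
              for atom, v in context.items() if v in "+-"}     # R(+), R(-)
    alternatives = []
    for bang_vals in product([1, 0], repeat=len(bangs)):        # R(!)
        worlds = []
        for quest_vals in product([1, 0], repeat=len(quests)):  # R(?)
            world = dict(fixed)
            world.update(zip(bangs, bang_vals))
            world.update(zip(quests, quest_vals))
            worlds.append(world)
        alternatives.append(worlds)
    return alternatives

# C's representation of A's knowledge (the CA table below):
CA = {("a", "dirty"): "?", ("b", "dirty"): "+", ("c", "dirty"): "!"}
for n, alt in enumerate(translate(CA), 1):
    print(f"Sa{n}:", [(w[("a", "dirty")], w[("b", "dirty")], w[("c", "dirty")])
                      for w in alt])
# Sa1: [(1, 1, 1), (0, 1, 1)]    Sa2: [(1, 1, 0), (0, 1, 0)]
```

Applied to the CA context of the example below, it produces the two sets Sa1 and Sa2 of two worlds each, matching the CA table.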

Let us employ this method of representing knowledge on our familiar example. The
relevance of the example lies in the way C comes to know that his face is dirty. How does he
reason to arrive at this conclusion? Initially, he starts from the same position as the other two
(they know that at least one of them is dirty). Afterwards, he is in a position to obtain additional
information, first from A's answer and from his representation of A's knowledge when A gives
his answer; and second from B's answer and from his representation of B's knowledge when B
gives his answer. Furthermore, when representing B's knowledge, C will be in a position to use
what he knows about A's knowledge and what he knows that B knows about A's knowledge.
Therefore, C's knowledge can be divided into C's knowledge about A's knowledge about the
situation (CA); C's knowledge about B's knowledge about A's knowledge about the situation
(CBA); C's knowledge about B's knowledge about the situation (CB); and, finally, C's
knowledge about the situation itself (C). We have C's representation of A's knowledge:

CA  dirty      Sa1        Sa2
a     ?       1   0      1   0
b     +       1   1      1   1
c     !       1   1      0   0

C here represents A's knowledge after A's answer. He knows that A does not discern
either between S(1,1,1) and S(0,1,1), or between S(1,1,0) and S(0,1,0). Unfortunately, A's
answer is compatible with each of these alternatives, and thus it does not give C any additional
information about A's knowledge. However, A's answer does give C some new information
about B's knowledge. We have C's knowledge about B's knowledge about A's knowledge about
the situation:

CBA  dirty      Sa1        Sa2        Sa3        Sa4
a      ?       1   0      1   0      1   0      1   0
b      !       1   1      1   1      0   0      0   0
c      !       1   1      0   0      1   1      0   0

C knows that B knows that A does not know his own condition; C knows that B does not
know what his own condition is, but that B knows that A knows about it; and C does not know
his own condition, but knows that B knows that A knows about it [6]. Therefore, to the best of
C's knowledge, B knows that A finds himself in one of the situations Sa1-Sa4 (and that A cannot
distinguish inside them). Nevertheless, C knows that B knows that A knows that S(0,0,0) is not
the case. Therefore, he knows that B knows that A did not regard Sa4 as the set of possible
alternatives (because otherwise A would have answered that he is dirty). So, in the end, C knows
that B knows that A cannot distinguish inside one of the sets Sa1-Sa3. Now, for C's
representation of B's knowledge:

CB  dirty      Sb1        Sb2
a     +       1   1      1   1
b     ?       1   0      1   0
c     !       1   1      0   0

[6] When there are more than two agents, those who are caught in the middle of a chain of reasoning (e.g. CBA)
can be seen both as representers of the ones further along in the chain, and as represented by the ones earlier in
the chain. Nevertheless, the agent who really does the representing task is the first one, and thus any ambiguity on
his part will be valued as '!', even if other representers in the chain know that value more precisely.
C knows that B knows that A is dirty; he knows that B does not know whether he himself
(B) is dirty; and he knows that B knows whether he (C) is dirty, though he does not know what B
knows in this respect. C knows that B cannot distinguish inside either Sb1 or Sb2, but C knows
that B knows that S(1,0,0) is not the case (as reasoned above). Therefore, C knows that if B had
regarded Sb2 as the set of possible alternatives, he would have said that his face is dirty. As this
is not the case, C knows that B regards Sb1 as the set of possible alternatives. Finally, we have
what C himself knows about the situation:

C   dirty      Sc1
a     +       1   1
b     +       1   1
c     ?       1   0

He knows that both A and B are dirty, but he does not know whether he himself is dirty.
C cannot distinguish inside Sc1, but C knows that B did not consider S(1,1,0) possible.
Therefore, C is left with only one possible alternative, S(1,1,1); accordingly, he knows that
S(1,1,1) is the actual world and that his face is dirty.

In the standard analysis of this example it is held that the third kid comes to know that he
is dirty only because it was common knowledge among the three kids that at least one of them is
dirty. Indeed, if each of them had received this information privately, then C would not have
been in a position to know that his face is dirty. Using our method of representing knowledge, it
is straightforward to explain why this is so. First, C would not be in a position to discard Sa4 in
CBA, because he would not be in a position to discard S(0,0,0) from the alternatives B might
think that A might consider possible; therefore, A's answer would not convey to C the
information that B knows that Sa4 is not the set of possible alternatives according to A. In turn,
C would not be in a position to discard Sb2 in CB, because he would not be in a position to
discard S(1,0,0) from the alternatives B might consider possible; therefore, B's answer would
not convey to C the information that B does not take S(1,1,0) to belong to the set of possible
alternatives. Finally, without this, C would not be in a position to discard S(1,1,0) from Sc1 and,
consequently, he would not be in a position to know that S(1,1,1) is the case.

But what additional information would C need in order to be in a position to know that
his face is dirty? He would need to know that B knows that A knows that there is at least one
dirty face. This would allow him to know that B knows that A disregards Sa4; which in turn
would allow C to know that B disregards Sb2; which in turn would allow C to know that his
face is dirty. In conclusion, it is not necessary for it to be common knowledge that at least one of
the kids is dirty in order for C to be in a position to know that his face is dirty (although common
knowledge of that fact is sufficient for C to know it). But, before discussing the role played by
common knowledge in these cases, let us first further extend and explain the proposed method
for representing knowledge in interactive contexts. For this, let us analyze another example, also
taken from (Geanakoplos, 1992).

A mischievous rich father gives each of his two sons a sealed envelope, and tells them
that one of them contains $10^n$ dollars and the other contains $10^{n+1}$ dollars, with n
being a random natural number between 1 and 6 (i.e. the minimum amount is 10$ and the
maximum amount is 10.000.000$). Each of the sons looks privately into his own envelope. The
first son, A, finds that he has 10.000$, and the second, B, finds that he has 1.000$. Then, the
father asks each of them privately if he would like to pay 1$ to exchange the envelopes. At this
point, A anticipates an expected payoff of 50.500$ if he agrees, and B anticipates an expected
payoff of 5.050$ if he agrees; therefore both agree to pay 1$ for the exchange. After he finds out
that they both agree to exchange, the father calls both of them and tells them to shake hands if
they still want to make the exchange. At this point each of them finds out that the other one also
wanted to make the exchange; and the question is whether they will still want to make it.

Now, let us see if we can successfully employ the same method in this case. First, a
simplification is in order. Namely, it is not necessary to use only the two substantive values (+
and -) and thus be compelled to write each possible sum in the envelope as a distinct attribute.
Much more simply, the set of substantive values can be taken to comprise the seven possible
amounts which could be found in the envelopes. Thus, every envelope will have only one
attribute (the amount of money), which can take any of the seven values, but only one at a time.

Also, to skip the unimportant details, we should note that only A could come to the
conclusion that he does not want to make the exchange after all (because he would lose 9.000$
if he made it). Therefore, we are interested in representing A's knowledge in this situation,
aiming to see whether he has sufficient information to call off the exchange. First, let us
represent what A knows when he opens his envelope:

A   amount        Sa
a   10.000    10.000   10.000
b      ?       1.000  100.000

A knows that he has 10.000$, but he does not know whether B has 1.000$ or 100.000$.
Now, after A finds out that B also wants to make the exchange, he is in a position to represent
what B knows as follows:

AB  amount        Sb1                  Sb2
a      ?        100     10.000     10.000    1.000.000
b      !      1.000      1.000    100.000      100.000

A knows that B does not know how much money he (A) has; and A also knows that B knows
how much money he (B) has, even if he (A) does not know how much money B has. Therefore,
according to A, for each possible sum B might have (i.e. 1.000 or 100.000), B is not in a position
to discern between two different possible worlds (which differ with respect to the amount A
has). Further, A knows that his own agreement gave new information to B concerning his (A's)
situation. So, A also has to represent how B represents what A knows:

ABA  amount        Sa2                   Sa1                      Sa3
a       !        100      100       10.000     10.000      1.000.000    1.000.000
b       ?         10    1.000        1.000    100.000        100.000   10.000.000

A knows that B knows that A knows his own sum, and A knows that B does not know
what A's sum is (!). Also, A knows that B knows that A does not know what sum B has (?). So,
according to A's representation of B's representation of A, for each sum that B might think A
has (i.e. 100, 10.000 or 1.000.000: according to A's representation of B, B might have either
1.000 or 100.000, and thus B might think that A has either 100 or 10.000, or else 10.000 or
1.000.000), we have two possible worlds corresponding to the sums B might have given A's
sum.

Now that A's representation of the relevant states of knowledge is complete, he can
proceed with the backwards eliminative stage. First, A knows that B knows that A knows that B
does not have 10.000.000$, because otherwise B would not have agreed to the exchange in the
first place; therefore A knows that B knows that if A were in Sa3, then A would know that B has
100.000$ and would have rejected the bet at the first opportunity. As A does not have
1.000.000$, he does not immediately reject the bet, because he thinks it is still possible that B
has 100.000$, in which case he should accept it. At this point, A knows that B knows that A does
not have 1.000.000$, and thus A knows that if B were in Sb2 (i.e. if B had 100.000$) he would
reject the bet at the next moment, because he would know that A has only 10.000$. As B does
not reject the bet, A comes to know that B is not in Sb2 but in Sb1 (the only possible situation
left for B to be in, according to A). At this point A has enough information to call off the bet,
because he knows that B has only 1.000$ in both alternatives of Sb1, and therefore he would
lose 9.000$ if he accepted the bet.
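The backwards eliminative stage generalizes to an unravelling argument that can be simulated. The sketch below (my own formalization, not Geanakoplos's; how one counts 'confirmations' is a modeling choice) computes whether a son holding a given amount is still willing to trade after a number of public confirmations that the other is willing:

```python
# A sketch of how the envelope bet unravels.  The envelopes hold 10**n and
# 10**(n+1) dollars, n = 1..6, so the possible amounts are 10 ... 10.000.000.

AMOUNTS = [10 ** n for n in range(1, 8)]

def willing(amount, depth):
    """Is a son holding `amount` still willing to trade after `depth`
    public confirmations that his brother is willing?"""
    others = [a for a in (amount // 10, amount * 10) if a in AMOUNTS]
    if depth > 0:
        # Keep only brother-amounts that were themselves still willing
        # one confirmation earlier.
        others = [a for a in others if willing(a, depth - 1)]
    # Trade only if the surviving possibilities give an expected gain.
    return bool(others) and sum(others) / len(others) > amount

a, b = 10_000, 1_000
for round_ in range(8):
    wa, wb = willing(a, round_), willing(b, round_)
    print(f"after {round_} confirmation(s): A willing: {wa}, B willing: {wb}")
    if not (wa and wb):
        break
```

With A holding 10.000$ and B holding 1.000$, both are willing at first, and A withdraws once enough confirmations of B's willingness have accumulated, mirroring the elimination carried out above.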

In analyzing these examples by using our method for representing knowledge, some
parts of the process remained merely implicit. The method described above for representing the
knowledge that agents have about other agents' knowledge consists in a way of structuring the
general framework in which reasoning about knowledge takes off. It is widely accepted
nowadays that, with regard to some selected details of a situation, an agent can conceive a set of
possible worlds describing every possible way in which those details might fit together. At the
most basic level, when an agent knows only which details are relevant in a situation, he is in a
position to consider all alternatives as being possibly actual. From this perspective, reasoning
about knowledge comes down to finding information (i.e. that some detail is one way rather
than another) which reduces the set of alternatives considered.

This process of narrowing down the set of alternatives takes place in time and thus
sequentially. Further, in interactive contexts, an agent can obtain new information from other
agents' actions, as long as he can represent those agents' knowledge just before they acted. This
representation will of course be incomplete with respect to the representation those agents
themselves have of their own knowledge. But there are many situations (like our examples) in
which some other agent's action gives the agent in question sufficient information to further
refine his representation of the other's knowledge. And as long as he can transfer information
from the representation of another's knowledge into information about his own knowledge, he
acquires a tool for furthering his knowledge. This is precisely the point at which discussing the
concept of common knowledge is in place.

V. Common knowledge

The introduction and also the first rigorous analysis of the concept of common
knowledge is due to David Lewis' book 'Convention' (1969). As his analysis still touches many
important points regarding common knowledge, briefly presenting his ideas on the issue might
be a good place to start.

The type of problem that paved the way for the introduction of common knowledge was
that of coordination problems. A coordination problem appears in situations in which the agents
involved have the same preferred outcomes, in the sense that they happen to prefer (not
necessarily for similar reasons) that the same state of affairs obtain, rather than others. Further,
in a coordination problem there are multiple possible equilibrium points. Every agent is better
off if one of the equilibrium points is reached. The problem is that the agents need to match their
actions toward a particular equilibrium. In a coordination problem, every agent already believes
that the others also want to reach an equilibrium point; otherwise it would not be a coordination
problem, but a competitive one, in which the better courses of action for one agent are among
the worse for the other. So, granted that the agents already believe that everyone thinks of the
situation as a coordination problem and, furthermore, that they identify the same equilibrium
points, the problem is how to effectively reach one of them. For example, if the coordination
problem consists in restoring a phone call which has been cut off, there are two possible
equilibria: the original caller calls again and the original receiver waits to be called, or the
original receiver calls back and the original caller waits to be called. The other two possible
options are that both try to call back, or that both wait for the other one to do it; these are not
equilibria because in neither of them is the conversation restored as quickly as possible.

In situations where there is only one agent who tries to achieve the better outcome, he
only needs to correctly predict the natural context; and this kind of prediction does not involve a
regress of the type we find in multi-agent settings. There we need to form expectations not only
about the behavior of the natural context, but also about the behaviors of the other agents: 'if I
know what you believe of the matters of fact that determine the likely effects of your alternative
actions, and if I know your preferences among possible outcomes and I know that you possess a
modicum of practical rationality, then I can replicate your practical reasoning to figure out what
you will probably do, so that I can act appropriately.' (Lewis, 1969, p. 27)

But in this case the behavior of the others might depend partially also on what
expectations they have concerning my behavior. So, if I am to correctly predict the behavior of
the other, I might need to correctly anticipate what he expects me to do. And in turn, I might
reasonably expect that the other will try to replicate my reasoning concerning what I expect him
to do, so I might need to replicate the way in which he replicates my reasoning. This chain of
replicated reasoning could, in principle, go on for an indefinitely long number of steps.
Nevertheless, the function of expectations is to provide reasons for my actions. By forming
expectations concerning what you are going to do, I settle some uncertainties of the situation
and, therefore, I can better evaluate my alternatives. According to Lewis, higher-order
expectations (e.g. my expectations about your expectations that I behave in a certain way)
provide additional reasons (beyond the ones already provided by the first-order expectations) for
acting in a certain way. Practically, this happens because an expectation of a higher order can be
traced back to a first-order expectation (e.g. if I form an expectation about your expectation of
my behavior, then I can also form an expectation about your behavior). And, provided that our
initial first-order expectations converge with our higher-order expectations, our selected action
(based only on the initial first-order expectation) gets even more support for being the correct
one. On the other hand, if they do not converge, it means that we have acquired at least one false
expectation, and we have to recheck our reasons.

The reader might have noticed that the method for representing knowledge that I
presented above is practically a method for representing the dynamics of expectations that
Lewis talks about; and, indeed, they both share the same perspective. Also, now it can be seen
how the reasoning proceeded when I explained the examples. To take the first example, the links
between the CBA, CB and C matrices are similar to the links between a higher-order expectation
and its first-order correspondent. The C matrix is used to represent C's expectations about the
outside world; in other words, his first-order expectations. The CBA and CB matrices, instead,
are used to represent higher-order expectations. And C answered the question because he
managed to supplement his first-order expectation with his higher-order expectations in such a
way that he was in a position to form a more precise first-order expectation (i.e. he now knew
that he was dirty, while at the beginning he did not know it).

We are now in a position to explain what difference it made that it was common
knowledge that at least one of the kids was dirty. At the first two levels of expectations, the
same representation is obtained with or without common knowledge that at least one is dirty.
This happens because C independently (perceptually) knows that at least one is dirty, and C also
independently knows that B knows that at least one is dirty (because he knows that B sees A).
However, at the third-level expectation (i.e. CBA), C no longer knows that B knows that A
knows that there is at least one dirty face, because C knows that this would be the case only if he
himself were dirty (so that B would know that A sees C's dirty face); in case C were not dirty, B
would not know that A knows that at least one face is dirty. So, as C does not know that he is
dirty, he cannot know whether B knows that A knows there is at least one dirty face. This is
precisely the new information obtained by C from taking it to be common knowledge that at
least one face is dirty; this information, which supplements his third-level expectation, licenses
an inference (as presented in the discussion of the example) to the first-level expectation that C
himself has a dirty face. But, before explaining the role of common knowledge in the dynamics
of reasoning about knowledge, we should make a brief overview of the main conceptions of
common knowledge.

In an overview of the approaches to the concept of common knowledge, Barwise (1987)
distinguishes between three main ways of accounting for common knowledge:

1. The iterative approach;
2. The fixed-point approach;
3. The shared-environment approach.

Suppose that we have two agents a and b, between which it is common knowledge that p;
the fact that it is common knowledge that p will be expressed by Cp. The task is to characterize
what amounts to being the case that Cp in terms of a, b, p and ordinary private knowledge.

According to the iterative approach (which is also the most influential one), Cp is to be understood in terms of iterated knowledge of p. Namely, it is the case that Cp if a knows that p, b knows that p, a knows that b knows that p, b knows that a knows that p, and so forth.
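As a rough computational gloss (a sketch over an assumed toy model, not part of this paper's apparatus): writing Ep for "everyone knows that p", the iterative approach demands that Ep, EEp, EEEp, and so on all hold. On a finite model the successive levels can be computed directly, and they may shrink strictly, which is why the hierarchy is genuinely infinite:

```python
# Toy Kripke model (assumed data): R[agent][w] is the set of worlds
# the agent considers possible at world w.
WORLDS = {"w1", "w2", "w3"}
R = {
    "a": {"w1": {"w1"}, "w2": {"w2", "w3"}, "w3": {"w3"}},
    "b": {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w3"}},
}

def E(S):
    """'Everyone knows' operator: worlds at which every agent
    considers only S-worlds possible."""
    return {w for w in WORLDS if all(R[a][w] <= S for a in R)}

p = {"w1", "w2"}  # worlds where p holds
level = p
for n in (1, 2, 3):
    level = E(level)
    print(f"E^{n} p holds at: {sorted(level)}")
# E^1 p = {'w1'}, E^2 p = {}: Ep holds at w1, yet Cp holds nowhere.
```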

According to the fixed-point approach, it is the case that Cp if:

a and b know (p and Cp).

According to the shared-environment approach, a and b have common knowledge that p just in case there is a situation s such that:

- s includes the fact that p,
- s includes the fact that a knows s,
- s includes the fact that b knows s.

While the iterative approach models common knowledge by means of an infinite hierarchy, the last two appeal to some sort of circularity. The circularity of the fixed-point approach might seem rather vicious, since knowledge that p is already embedded in knowledge that Cp; this practically turns the definition into saying that it is the case that Cp between a and b if they both know that Cp. The shared-environment approach states that common knowledge amounts to awareness of a situation s, which includes p as well as the agents' awareness of the situation itself.

Barwise (1988) also distinguishes between three different questions about common knowledge:

i. What is the correct analysis of common knowledge?
ii. Where does it come from?
iii. How is it used?

The standard analysis of common knowledge proceeds by showing that the first two approaches are logically equivalent. Against this stands, for example, Barwise (1988), who shows that this equivalence can be obtained only by making some idealized assumptions. More specifically, the iterative approach is equivalent to the fixed-point one just in case it is assumed that the agents are perfect reasoners, that it is common knowledge between them that they are so, and that attention is restricted to finite situations.
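The restriction to finite situations admits a computational gloss as well (again a sketch over an assumed toy model): on a finite Kripke model the infinite hierarchy collapses into a reachability check, which is exactly the fixed-point reading. Cp holds at a world w just in case p holds at every world reachable, in one or more steps, through the union of the agents' accessibility relations:

```python
from collections import deque

# Toy model (assumed data): R[agent][w] = worlds considered possible.
R = {
    "a": {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w3"}},
    "b": {"w1": {"w1"}, "w2": {"w2", "w3"}, "w3": {"w3"}},
}

def common_knowledge(w, p):
    """Cp at w: p holds at every world reachable, in one or more
    steps, through the union of the agents' relations."""
    seen, frontier = set(), deque([w])
    while frontier:
        v = frontier.popleft()
        for agent in R:
            for u in R[agent].get(v, ()):
                if u not in seen:
                    seen.add(u)
                    frontier.append(u)
    return seen <= p

print(common_knowledge("w1", {"w1", "w2", "w3"}))  # True
print(common_knowledge("w1", {"w1", "w2"}))        # False: w3 is reachable
```

On infinite models, or with imperfect reasoners, this collapse fails, which is precisely Barwise's point.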
Discussing common knowledge with respect to how agents reason about knowledge, to how they represent their own and others' knowledge, requires an important distinction: namely, between common knowledge seen from an internal perspective and common knowledge seen from an external perspective. What is relevant from an agent's point of view is which propositions he takes to be common knowledge, irrespective of whether they also happen to be common knowledge in fact. In other words, the error of mistakenly taking something to be common knowledge can only be recognized as such a posteriori. But this is true of any incorrect information: if one takes information to be correct when in fact it is not, then he will discover this when he fails to arrive at the expected results. Thus, the second question, regarding the origin of common knowledge, is one concerned with the external view on common knowledge. It aims at showing how some individuals might come to hold something as common knowledge, and its answer is sought along the lines of describing what has to be the case for those agents to have common knowledge of something (e.g. the shared-environment approach).

However, for our present concerns, the important question is rather this: in what specific way does the fact that an agent holds something to be common knowledge contribute to his reasoning? As we have already seen, the first step in reasoning about expectations is to represent as adequately as possible the field of alternatives that might be considered possible by the represented agent. This representation involves an appeal to two different types of uncertainty, and structures that field into disjunctive sets of possible worlds. A certain expectation (e.g. CBA) is represented by taking into account the information available concerning that expectation. At this level, the fact that something is common knowledge does not have any impact on the way a certain expectation is represented. If C knows that B knows that A knows that C is dirty, this knowledge will be represented in the CBA matrix in the same way, regardless of whether it was acquired privately by C from someone else or held by C to be common knowledge.

The difference is rather that private knowledge is bound to certain matrices. If C has private information that B knows that at least one is dirty, then C can use this information only in representing the CB matrix. By contrast, if C thinks that B knows that at least one is dirty because he takes it to be common knowledge that at least one is dirty, then he can embed the information that at least one is dirty in any matrix he wants. Therefore, holding p to be common knowledge amounts to the agent's being allowed to represent the information that p in every matrix (e.g. CBA or CBABAC and so on).
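A minimal sketch of this difference (the dictionary encoding and the function names are my own illustrative assumptions, not the paper's formal apparatus): private information may be written into one designated matrix only, while information held as common knowledge may be written into all of them:

```python
# Matrices are keyed by strings of agents; each maps a proposition
# to its value in that matrix.
matrices = {key: {} for key in ("C", "CB", "CBA")}

def add_private(key, prop, value="+"):
    """Private information is bound to one specific matrix."""
    matrices[key][prop] = value

def add_common(prop, value="+"):
    """Information held as common knowledge may be embedded in every
    matrix the agent builds, at any level of nesting."""
    for key in matrices:
        matrices[key][prop] = value

add_private("CB", "at_least_one_dirty")          # usable only in CB
print({k: m for k, m in matrices.items() if m})  # {'CB': {...}}

add_common("at_least_one_dirty")                 # now usable everywhere
print({k: m for k, m in matrices.items() if m})  # all three matrices
```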

At this point, we can also offer an analysis of the iterative definition of common knowledge. Basically, it focuses on the consequences derived from the fact that something is held as common knowledge. Indeed, if something is held as common knowledge by an agent, then he will accept as prima facie correct any reiterated knowledge (or higher-order expectation) of that fact. Further, if we employ the apparatus of possible worlds once more, we can interpret common knowledge as some kind of necessity operator. If we take each expectation formed by the same agent as a possible world, then common knowledge that p amounts to the fact that p is the case (i.e. has the value +) in all possible worlds. Also, if something is taken to be common knowledge only between some of the agents involved in a situation, then it will hold only in expectations regarding those agents. At the other end, simple private information is just information about some particular world (expectation).
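Continuing the previous sketch (same assumed encoding), the necessity reading then becomes a box-operator check over the agent's expectation-worlds:

```python
# Each expectation formed by the agent counts as a possible world; a
# proposition is common knowledge just in case it has the value '+'
# in all of them, exactly like a necessity (box) operator.
expectations = {
    "CB":     {"at_least_one_dirty": "+"},
    "CBA":    {"at_least_one_dirty": "+"},
    "CBABAC": {"at_least_one_dirty": "+"},
}

def holds_as_common_knowledge(prop):
    return all(world.get(prop) == "+" for world in expectations.values())

print(holds_as_common_knowledge("at_least_one_dirty"))  # True
```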

To conclude, common knowledge is not special in any obscure way, as the superficial eye might take it to be after seeing how it can make a difference in cases like the one with the three kids. It is just that, in many cases, the information offered by the fact that p is common knowledge has a greater scope than the sum of the individual expectations regarding the knowledge that p that are independently obtained (as was the case in our example with the CBA expectation that at least one is dirty). Moreover, common knowledge does not add any unique information that could not be obtained in other ways. Given any finite number of expectations of some agents, that p is the case in every one of them can be obtained either by taking p to be common knowledge or by individually taking, for each expectation, that p is the case. But even if it does not provide special information, it is nevertheless easier to obtain this same information by way of common knowledge than by way of private knowledge.

VI. Conclusion

The main aim of this paper was to devise an intuitive tool for representing how real-world agents reason about knowledge, especially when they find themselves in interactive settings. For this purpose, some basic concepts of epistemic logic were introduced, including the classical tool of possible-worlds semantics. Then, departing from the classical model due to its problems of logical omniscience, which make it unfit for representing real-world resource-bounded agents, and building upon ideas taken from Formal Concept Analysis, we presented a new semantic tool for representing knowledge. One of the core concepts of the new model is that there are two different and very important types of ambiguities when reasoning about knowledge in interactive contexts: ambiguity on the part of the representing agent and ambiguity on the part of the represented agent. Still, further work is needed to devise a formal system of modal logic to support the hitherto rather intuitive method of representing knowledge. Another important point touched upon was the interpretation of the role played by common knowledge in the way agents reason about knowledge. In this respect, common knowledge can be seen as having the behavior characteristic of the modal operator of necessity. It remains for further work to apply this new tool and to see whether it can be used to derive interesting conclusions on a variety of topics.

Bibliography:

Aumann, R. "Interactive epistemology I: Knowledge," International Journal of Game Theory 28, 263-300 (1999).

Barwise, J. "Three views of common knowledge," In: Proceedings of the Second Conference on Theoretical Aspects of Reasoning about Knowledge (TARK 1988), Morgan Kaufmann, Los Altos, 365-397 (1988).

Geanakoplos, J. "Common knowledge," Journal of Economic Perspectives 6, 53-82 (1992).

Halpern, J.Y. and Moses, Y. "A guide to completeness and complexity for modal logics of knowledge and belief," Artificial Intelligence 54, 319-379 (1992).

Hintikka, J. Knowledge and Belief, Cornell University Press (1962).

Hintikka, J. "Impossible possible worlds vindicated," Journal of Philosophical Logic 4, 475-484 (1975).

Kripke, S.A. "Semantical analysis of modal logic I: Normal modal propositional calculi," Zeitschrift für mathematische Logik und Grundlagen der Mathematik 9, 67-96 (1963).

Lewis, D. Convention: A Philosophical Study, Harvard University Press, Cambridge, Mass. (1969).

Miroiu, A. "Actuality and World-Indexed Sentences," Studia Logica 63, 311-330 (1999).

Obiedkov, S. "Modal Logic for Evaluating Formulas in Incomplete Contexts," In: U. Priss, D. Corbett, G. Angelova (Eds.): Conceptual Structures: Integration and Interfaces, Springer, Heidelberg, 314-325 (2002).

Sim, K.M. "Epistemic logic and logical omniscience: A survey," International Journal of Intelligent Systems 12, 57-81 (1998).
