

Identity of Minds

This chapter tackles the question of what constitutes the identity of the mind or, what comes
to the same thing, what the mind's distinguishing features are. This question is raised on two
levels: on a general and on a particular level. On the general level, the question inquires into
what distinguishes mental phenomena such as beliefs and desires from non-mental ones such
as tables, chairs, and sunsets. On the particular level, the question inquires into what
distinguishes one particular mental phenomenon such as the belief that it is raining from other
mental phenomena such as the belief that it is not raining and the desire not to bring an
umbrella. These two levels of inquiry can also be explained in terms of the class-member
dichotomy. On the class level, which corresponds to the general level, the question is: What
feature or set of features do the members of the class of mental phenomena have in common
or share with one another that makes them all members of the same class? And on the
member level, which corresponds to the particular level, the question is: What feature or set of
features does a particular member of the class of mental phenomena possess that differentiates
it from the other members of the same class?

Consequently, the answers to these questions have been used as bases for evaluating
the plausibility of certain materialist theories. The general idea is that a plausible theory of the
mind must be able to account for and must be consistent with the distinguishing features of
the mind both on the general and particular levels. This chapter is thus divided into two
sections. The first section discusses the distinguishing features of the mind both on the general
and particular levels. And the second discusses the various arguments that have been
constructed on the basis of these distinguishing features of the mind, and which seek to
challenge certain materialist theories.

1. The Distinguishing Features of the Mind

1.1. Defining Mentality

Descartes defines mentality in terms of consciousness, in contrast to physicality, which he
defines in terms of extension. Accordingly, mental states are conscious states while non-mental
states are non-conscious states. There are, however, two reasons why this definition of
mentality is not attractive to some philosophers. The first is the existence of unconscious
mental states. Examples of such states include the unconscious states that Freudian
psychology talks about, beliefs and desires that we are at present not conscious of, and
emotions that we allegedly have but only come to recognize when other people point them
out to us, as when we accept it as true when someone tells
us that we are actually jealous or in love. We generally consider these states to be mental
though they are not conscious, so mentality cannot be defined simply in terms of
consciousness. The second is the metaphysics that seems to go naturally with the definition of
mentality in terms of consciousness. As we will recall, Descartes holds that consciousness is
the essential property of minds taken as metaphysical substances, in contrast to the extension
of matter or physical substances. This would make consciousness a property of something that
lies outside the scientific domain, and hence to which no scientific explanation would be
possible. For these two reasons, some have come to regard consciousness as something that is
illusory and hence can be explained away, or as something that is irrelevant to mentality and
hence can be ignored.

The difficulties that these two reasons present for defining mentality in terms of
consciousness, however, are not insurmountable. First of all, one can accept Descartes's
concept of mentality while rejecting his ontology; or, more precisely, one can accept the reality
of consciousness without believing that it is a property of some metaphysical substance. We
have already seen how this was done by realist physicalists, who regard consciousness as a
property of higher-level physical states. Secondly, there is a way to handle the case of the
unconscious mental states that preserves consciousness as the defining feature of mentality.
David Rosenthal, for instance, does this by distinguishing between dispositional, or
potentially conscious, mental states, under which unconscious mental states fall, and occurrent,
or presently conscious, mental states. As he (1991, 463) explains:

People do, of course, have many more beliefs and preferences, at any given time, than
occur in their stream of consciousness. And the nonconscious beliefs and preferences
must always have intentional properties. But this need not be decisive against taking
consciousness as our mark of the mental. For we can construe beliefs and preferences
as actual mental states only when they are conscious. On other occasions we could
regard them to be merely dispositions for actual mental states to occur; in those cases,
we can say, one is simply disposed to have occurrent thoughts and desires.

John Searle takes the same approach. Like Rosenthal, Searle understands
unconscious mental states as mental states that are not actually conscious but which, in
principle, can be conscious or be brought to consciousness. As Searle (1999a, 86) explains:

Even when unconscious, the unconscious mental state is the sort of thing that could be
conscious. I have to say "in principle" because we need to recognize that there are all
sorts of states that the person cannot bring to consciousness because of repression,
brain injuries, and so on. But if a state is a genuine unconscious mental state, then it
must be at least the sort of state that could be conscious.

But to further sharpen the distinction between mental and non-mental states, Searle makes a
distinction between unconscious and non-conscious states. Accordingly, Searle refers to
unconscious mental states simply as unconscious states. So understood, unconscious states
include the many beliefs and desires that we have that we are at the moment not conscious of
but which we can bring to consciousness. The Freudian unconscious states are no exception,
for they can likewise be brought to consciousness though by means of some types of
intervention, like hypnosis. On the other hand, Searle refers to those states that could not be
conscious in any way as non-conscious states. Such states include the physical states of
machines such as cars and clocks. On this account, conscious and unconscious states are
mental states, while non-conscious states are non-mental states.

We need, however, to clear up a confusion that may arise here. While we are
conscious of our conscious states, as when we are aware of our beliefs and desires, it is not in
virtue of our consciousness of them that they are conscious states. For otherwise all our
objects of consciousness would be conscious phenomena, such that the tables and chairs that
we may happen to perceive would not be physical objects but mental ones. Here we need to
distinguish between the conscious state and the object of this state. For instance, if we
perceive a mountain, the perceiving is the conscious experience; while the mountain is the
object of the conscious experience, which happens not to be a conscious state. Our perception
of the mountain, however, can also be an object of another, or a higher, conscious state, such
as when we reflect on this perception. Here the object of a conscious state is another
conscious state. Thus being conscious of a tree is different from being conscious of a belief or
a pain. The object of the former is not a conscious state while the object of the latter is. But
now it may be asked: in virtue of what, then, are our beliefs, desires, and pains conscious
states? This calls for a clarification of the meaning of consciousness.

There are three ways by which the meaning of the term "consciousness" can be
clarified. One is by correlating the term with a more familiar synonym. In this regard, the
term "consciousness" is defined as referring to what is called "awareness." This is what Searle
(1999b, 41) does when he defines consciousness as follows: "By consciousness I mean
states of sentience or awareness that typically begin when we wake up in the morning from a
dreamless sleep and continue throughout the day until we fall asleep again." Another is by
listing the items that make up the extension of the term. In this regard, consciousness is
defined as consisting of the following: cognitions, which include knowing, believing,
understanding, thinking, and reasoning; emotions, such as envy, anger, fear, and joy;
sensations, such as pains, tickles, and itches; perceptions, which include seeing, hearing,
tasting, touching, and smelling; quasi-perceptions, such as hallucinations, dreaming,
imagining; and conations, which include acting, trying, wanting, and intending (see Maslin
2003). And still another is by examining the properties of consciousness. And in this regard,
philosophers of mind talk about qualia, subjectivity, privacy, intentionality, and others. In
what follows, we shall examine these properties of consciousness.

a. Qualia

The first comprises the phenomenal properties of consciousness, which refer to the subjective
qualities of our conscious experiences. These properties or qualities correspond to the so-called
qualia (the singular of which is quale), which are also described as the "raw
feels," "phenomenal feels," and "qualitative feels" that accompany our conscious experiences.
Thomas Nagel (1991) refers to these qualities as the what-it-is-like properties of our conscious
experiences. To be in the state of pain, for instance, is to know what it is like to be in pain. For
instance, if we ask a person who is experiencing a toothache the question "What is it like to
have a toothache?", what we are asking about is the particular way in which he or she
experiences the toothache, that is, the subjective quality of his or her own
experience of it. More specifically, we are asking how painful his or her toothache is.

Daniel Dennett (1993, 381), who is critical of the relevance of qualia to mentality,
describes these phenomenal properties simply as "the ways things seem to us."
Dennett (1993, 381) gives the following examples: "Look at a glass of milk at sunset; the way
it looks to you—the particular, personal, subjective visual quality of the glass of milk is the
quale of your visual experience at the moment. The way the milk tastes to you then is another,
gustatory quale, and how it sounds to you as you swallow is an auditory quale." There is a
subjective quality or quale that comes with each of the conscious experiences that we have or
go through. And Chalmers (1996, 6-11) lists the following catalog of such conscious
experiences: visual experiences (e.g. color sensations), auditory experiences (e.g. musical
experience), tactile experiences (e.g. the feel of velvet, cold metal, and another person's lips),
olfactory experiences (e.g. the stench of rotting garbage and the warm aroma of freshly baked
bread), taste experiences (e.g. the taste of sugar and salt), experiences of hot and cold, pain,
other bodily sensations (such as headaches, hunger pangs, itches, and tickles), mental imagery
(e.g. mental image of a loved one), conscious thought (e.g. reflecting on ones actions),
emotions (e.g. happiness and depression), and the sense of self (a sense that there is
something behind conscious thoughts).

b. Privacy and Subjectivity

When we said earlier that qualia are subjective, we meant that qualia are private in terms of our
knowledge about them. More specifically, we mean that only the bearer or the subject of
conscious states can have direct knowledge of the qualia of his or her conscious states. Such
subjectivity, however, is true not only of the quality with which we experience our
conscious states but also of their existence. Not only am I the only
one who has direct knowledge of the quality of my toothache, for instance, but I also happen
to be the only one who has direct knowledge of the existence of my toothache or the fact that I
have a toothache. In this regard, mental or conscious states are states whose existence can
only be known privately by those who have them.

There is another sense, however, in which conscious states are subjective; and this
concerns their mode of existence. Accordingly, conscious states are subjective in the sense that
they exist only insofar as some subject is conscious of them. Pains, for instance, exist only
insofar as someone experiences them, in contrast to chairs, tables, and mountains, which exist even if no
one perceives them. John Searle (1999a, 44-45) calls this type of subjectivity "ontological
subjectivity" to distinguish it from "epistemic subjectivity." On the one hand, conscious states are
ontologically subjective in that they exist only in so far as some subject is conscious of them;
while physical states and objects are ontologically objective in that they exist regardless of
whether some subject is conscious of them. Something that is ontologically subjective is also
described as possessing a first-person ontology, while something that is ontologically
objective is also described as possessing a third-person ontology. It will be recalled that
subjective or first-person existence is what we described in Chapter 1 as mental existence.

On the other hand, something is epistemically subjective if one's knowledge of it is
dependent on or is significantly affected by one's attitudes and preferences. Examples are
judgments contained in evaluative statements. If I judge, for instance, that Baroque music is
better than pop music, I do so because of my attitudes and preferences. In contrast, something
is epistemically objective if one's knowledge of it is not dependent on one's attitudes and
preferences. Examples are judgments contained in descriptive statements. If I say, for
instance, that the Toccata and Fugue in D Minor was composed by Johann Sebastian Bach,
I do so independent of my attitudes and preferences, independent, in particular, of my
preference for Baroque music over pop music. As it were, whether I like it or not, that musical
piece was composed by that composer.

Based on these considerations, it is thus important not to confuse the following senses
of subjectivity. The first is the privacy concerning one's knowledge of the subjective quality
or quale of one's experiences of one's conscious states. The second is the privacy
concerning one's knowledge of the existence of one's conscious states. The third, which
Searle calls "ontological subjectivity," is the dependence of the existence of conscious states on
a subject. And the fourth, which Searle calls "epistemic subjectivity," is the dependence of our
judgments on our attitudes and preferences.1

c. Intentionality

The third is intentionality, which refers to the property of mental or conscious states to have
contents, be about, or be directed at some objects or states of affairs. To better understand the
nature of intentionality, we need to consider two points. One is that while some mental
states are indeed intentional, others are not. Examples of mental states that are
intentional are beliefs, desires, hopes, and fears: these mental states are always about some
objects or states of affairs. One, for instance, cannot have a belief without this belief being
about something. In this connection, an interesting but puzzling feature of intentional mental
states is that they remain intentional or directed even if the objects they are about do not exist.
For instance, the belief that Santa Claus exists remains directed at, or about, something even if
there is no Santa Claus. Another way of putting this is that a belief obtains even if the object it
relates to does not exist. This is unlike ordinary relations such as sitting next to. If I say that I
am sitting next to Maria, and Maria happens not to exist, then the relation of sitting next to does
not obtain. Better yet, since there is no Santa Claus, I cannot actually sit next to Santa Claus,
but I can believe that I am sitting next to him. Here the sitting next to does not obtain but the
belief does. On the other hand, examples of non-intentional mental states are pain, undirected
anxiety, and some forms of boredom: these mental states are not about any objects or states
of affairs. Being non-intentional, it does not make sense to ask what these states are about.
Thus, intentionality is not a universal feature of all mental or conscious states. But though this
is the case, intentionality is considered to be a very important feature of the mind because it is
what enables the mind to relate to the world, and thus cope with it.

1 Incidentally, on the basis of his distinction between ontological and epistemic types of subjectivity, Searle
would eventually argue that it is possible to have epistemically objective knowledge of something
ontologically subjective (namely, mental states), in order to make his case that it is possible to have a
science of the mind without having to reject the subjectivity of consciousness. Offhand, Searle seems to sidestep
the problem concerning privacy. In any case, we shall return to this claim of Searle's in later chapters.

Another consideration is the fact that there are non-mental phenomena that are also
intentional. For instance, linguistic expressions, maps, and signs are also about some objects
or states of affairs. These phenomena, though non-mental, are likewise intentional. In this
regard, Searle (1999a, 92-94) distinguishes among original, derived, and as-if/metaphorical
types of intentionality. Original intentionality refers to the kind of intentionality that mental
states possess. Such intentionality is regarded as original for it is not derived from or
conferred by some other intentional phenomena. Derived intentionality, on the other hand, is
the kind of intentionality that certain objects possess because humans, through the
intentionality of their minds, have conferred such intentionality on such objects. For instance,
linguistic expressions, maps, and signs refer to the things that they refer to only because
humans have decided to use them in that way. Lastly, as-if intentionality is the kind of
intentionality that we metaphorically ascribe to machines, as when we say that the television
set is tired and therefore needs rest.

On a historical note, Franz Brentano (1973) once proposed intentionality as the
essential mark of the mental. His proposal was part of his project of putting psychology on
firm scientific grounds, which requires a clear criterion for distinguishing mental from
physical phenomena. The idea is that if psychology were to be a science of the mental, it
should first be clear what differentiates the mental from the physical. Consequently, Brentano
(1973, 88) argues that mental phenomena are intentional while physical phenomena are not.
While this criterion works well when one speaks of mental states such as beliefs, hopes, and
desires, a difficulty, however, arises when one speaks of sensations such as pains. Anticipating
such difficulty, Brentano (1973, 82-83) tries, in the following passage, to accommodate
sensations such as pains and pleasures within the intentional in order to account for their mentality:

Yet it may still be the case that with respect to some kinds of sensory pleasure and
pain feelings, someone may really be of the opinion that there are no presentations
involved, even in our sense. At least we cannot deny that there is a certain temptation
to do this. This is true, for example, with regard to the feelings present when one is cut
or burned. When someone is cut he has no presentation of touch, and someone who is
burned has no feeling of warmth, but in both cases there is only the feeling of pain.
Nevertheless there is no doubt that even here the feeling is based upon a presentation.
In cases such as this we always have a presentation of a definite spatial location which
we usually characterize in relation to some visible and touchable part of our body. We
say that our foot or our hand hurts, that this or that part of the body is in pain.

Brentano claims that it is not true that the sensation of pain is not intentional, for the
presentation of such sensation (the object the sensation is allegedly about) is its spatial
location in the body, like the head, hand, or foot. In other words, pain is intentional in that it is
directed at the particular part of the body in which it is spatially located. For instance, the
sensation of pain in a headache is directed at the head. It is not difficult to discern, however,
that Brentano is equivocating here. The sense in which pain points to a particular part of the
body is not the same sense in which mental states are said to be directed at some objects. That a
belief is intentional has nothing to do with its spatial location, like that it occurs in the brain or
that it is a belief of some particular person. Consequently, Brentano's defense of his claim
that intentionality is the essential mark of the mental fails. There simply are mental states that
are not intentional but which are mental in virtue of their other properties, namely their qualia or
phenomenal properties.

d. Some Other Features and the Critical Ones

Aside from the ones discussed above, there are still other features of consciousness identified
by some philosophers. Searle (2004, 134-145), for instance, includes the following in his list:
(1) that consciousness always comes in discrete units of unified conscious fields (p. 136);
(2) that "all of our conscious states come to us in some sort of mood or other" (p. 139); (3) that
"Within the conscious field, one is always paying more attention to some things than others"
(p. 140); (4) that for any conscious state there is some degree of pleasure or unpleasure (p.
141); (5) that "All of our conscious experiences come to us with a sense of what one might
call the background situation in which one experiences the conscious field" (p. 141); (6) that
there is an obvious distinction between the experience of voluntary intentional activity on the
one hand and the experience of passive perception on the other (p. 142); (7) that "Our
conscious experiences do not just come to us as a disorganized mess; rather they typically
come to us with well-defined, and sometimes even precise, structures" (p. 143) (the Gestalt
structure); and (8) that "It is typical of normal conscious experiences that I have a certain
sense of who I am, a sense of myself as a self" (p. 144).

We have seen that consciousness is what essentially defines mentality.2 Mental states
are states that are either actually or potentially conscious. Conscious states, in turn, have
several features, as discussed above. Among such features, however,
qualia and intentionality are widely considered to be the critical ones; as such, much of the
philosophical investigation of the nature of the mind has been focused on the phenomenal
and intentional properties of consciousness. Rosenthal (1991, 463), for instance, writes in his
essay "Two Concepts of Consciousness": "All mental states, of whatever sort, exhibit
properties of one of two types: intentional properties and phenomenal, or sensory, properties."

There is a general consensus among philosophers of mind that not all conscious states
possessing qualia or phenomenal properties possess intentionality or intentional properties as
well. As we have seen in our investigations, pains and other sensations, for instance, have
subjective qualities but they are not directed at some objects in the world. There are, however,
some disagreements on whether all conscious states possessing intentional properties possess
phenomenal properties as well. Searle (2004, 134), for instance, believes that all intentional
states have phenomenal properties as well: "Apparently the idea is that some conscious states,
such as feeling a pain or tasting ice cream, are qualitative but some others, such as thinking
about arithmetic problems, have no special qualitative feel. I think this is a mistake. If you
think there is no qualitative feel to thinking two plus two equals four, try thinking it in French
or German." But Rosenthal (1991, 464) believes otherwise: "it seems unlikely that pure
phenomenal states, such as pains, have anything interesting in common with pure intentional
states, such as beliefs." Kim (159-60), on the other hand, believes that some intentional
states have phenomenal properties while others do not: "For it certainly
seems that there is something it is like to believe something, to suspend judgment about
something, to wonder about something, to want something, and so on. But as we saw, at least
many instances of these states don't have any phenomenal, sensory quality."

2 Chalmers (1996, 11-23) has a different take on the matter. He believes that consciousness, which he equates
with qualia or the phenomenal properties of the mind, is just one of the two defining characteristics of mentality;
the other is what he refers to as functionality, or the functional or psychological properties of the mind.
Phenomenally speaking, mental states are conscious states, while functionally or psychologically speaking,
mental states are causes of behavior. One problem here is that while it is clear that mental states can be
conscious without causing behavior, it is not clear how mental states could cause behavior without being
conscious. If we grant the latter possibility, what then would distinguish mental causes from non-mental
causes of behavior?

It is possible, however, to sidestep these disagreements by understanding "intentional
states" as referring simply to mental states that are strikingly intentional, that is to say, mental
states whose intentional properties stand out, regardless of whether or not they also possess
phenomenal properties. Likewise, we may understand "phenomenal states" as referring
simply to mental states that are strikingly phenomenal, that is to say, mental states whose
phenomenal properties stand out, regardless of whether or not they also possess intentional
properties.

1.2. Individuating Mental States

In the previous section we examined how mental states are distinguished from non-mental
states. Now we shall examine how mental states are individuated or distinguished from one
another. To begin with, as we have already seen, mental states are generally distinguished in
terms of their phenomenal and intentional properties. As a result, philosophers of mind
generally divide mental states into phenomenal and intentional states. Paradigm examples of
phenomenal states are various types of pain, while those of intentional states are beliefs and
desires. Given this general classification, let us now examine how mental states in each
general type, the phenomenal and the intentional, are individuated.

On the one hand, phenomenal states are individuated in terms of their quality and
intensity. For instance, pains can be differentiated from tickles in terms of the quality of their
phenomenal feel: pains, say, carry with them hurting sensations while tickles carry ticklish ones.
Different kinds of pain, in turn, can be differentiated in terms of the degree or intensity of the hurting
sensation that they carry with them. On the other hand, intentional states are individuated in
terms of their quality and content. At this point, it is appropriate to introduce the term
"propositional attitude" (believed to have been coined by Bertrand Russell), which is actually
just another term for intentional states but which better captures the fact that intentional states
consist of a quality and a content. A propositional attitude refers to any mental state that is
by nature intentional and whose content can in principle be expressed in a propositional form.
For instance, a belief is always a belief in something or that something is the case, and that
something is always expressible in the form of a proposition, as the belief in God is
expressible as the belief that God exists.

The quality of a propositional attitude refers to the kind of attitude directed at a
proposition, while its content is the proposition at which the attitude is directed. Consequently,
propositional attitudes are distinguished from one another in terms of their quality and
content. Let us compare and contrast the following propositional attitudes: (1) "I believe that
p"; (2) "I believe that q"; and (3) "I hope that p." Here, propositional attitudes (1) and (2) are

the same in quality but different in content; propositional attitudes (1) and (3) are the same in
content but different in quality; and propositional attitudes (2) and (3) are different both in
content and quality.
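
The quality/content analysis of propositional attitudes lends itself to a simple structural sketch. The following is purely illustrative (the class and field names are my own, not drawn from the philosophical literature): each propositional attitude is modeled as a pair of an attitude type (its quality) and a proposition (its content), and the three comparisons made in the text become straightforward checks on the two components.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PropositionalAttitude:
    quality: str  # the kind of attitude, e.g. "believes" or "hopes"
    content: str  # the proposition the attitude is directed at

# The three attitudes compared in the text:
a1 = PropositionalAttitude("believes", "p")  # (1) I believe that p
a2 = PropositionalAttitude("believes", "q")  # (2) I believe that q
a3 = PropositionalAttitude("hopes", "p")     # (3) I hope that p

# (1) and (2): same quality, different content
assert a1.quality == a2.quality and a1.content != a2.content
# (1) and (3): same content, different quality
assert a1.content == a3.content and a1.quality != a3.quality
# (2) and (3): different in both quality and content
assert a2.quality != a3.quality and a2.content != a3.content
```

On this sketch, two propositional attitudes are identical just in case they agree in both components, which is all the individuation claim in the text requires.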

The controversial issue about propositional attitudes concerns the individuation of
their contents; that is, whether or not the contents of propositional attitudes are individuated
without any regard to factors external to the mind or brain, such as its natural environment and
social context. There are two views that are in contention on this issue, namely, internalism
and externalism. Internalism claims that the content of a propositional attitude is individuated
without any regard to such external factors. On this view, the propositional attitude is said to
have a narrow content. On the other hand, externalism claims that the content of a
propositional attitude is individuated only by likewise regarding such external factors. And on
this view, the propositional attitude is said to have a wide content (see Kim 1998, 193-207).
Consider the following two propositional attitudes: "I believe that p" and "I believe that q." I
know that p and q are different, but do I know their difference without regard to external
factors, such as what p and q refer to in the external world? Internalists claim that what they
refer to in the world does not matter in how I distinguish p from q, while externalists claim
that it does.
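
The contrast between narrow and wide individuation can also be put schematically. The sketch below is a toy illustration of my own (the names and strings are not from the literature); it models a Twin-Earth-style pair of belief tokens, in the spirit of Putnam's thought experiment discussed later, which are internally indistinguishable but differ in what they refer to externally. Internalist individuation counts them as the same state; externalist individuation counts them as different.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BeliefToken:
    internal_state: str     # how things are "from the inside" (narrow factors)
    external_referent: str  # what the belief is about in the world (wide factors)

def same_narrow(b1: BeliefToken, b2: BeliefToken) -> bool:
    # Internalism: individuate content by internal factors alone
    return b1.internal_state == b2.internal_state

def same_wide(b1: BeliefToken, b2: BeliefToken) -> bool:
    # Externalism: individuate content by internal AND external factors
    return (b1.internal_state == b2.internal_state
            and b1.external_referent == b2.external_referent)

# Internally identical "water" beliefs directed at chemically different stuffs
earth_belief = BeliefToken("water is wet", "H2O")
twin_belief = BeliefToken("water is wet", "XYZ")

assert same_narrow(earth_belief, twin_belief)    # internalist: same content
assert not same_wide(earth_belief, twin_belief)  # externalist: different contents
```

The design choice worth noting is that wide individuation strictly refines narrow individuation: any two tokens with the same wide content also share narrow content, but not conversely, which is exactly the asymmetry the externalist objection exploits.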

In most discussions concerning internalism and externalism, however, externalism is
presented as an argument that challenges the coherence of certain theories of the mind whose
claims lead to internalism. As such, the polemics between internalism and externalism come
down to how to answer the externalist objection. We shall later on examine the arguments that
constitute the externalist objection (Putnam's twin-earth and Burge's arthritis thought
experiments). Cartesian dualism is widely regarded as a paradigm example of an internalist
theory of mind. Accordingly, if mental states are ontologically independent from the physical
states of the body and of the physical world in general, then physical states of whatever sort
make no contribution to the way mental states are individuated. How the physical world is
makes no difference to one's mental states.3

In the contemporary scene, functionalist and computationalist theories of mind are
representative examples of internalist views on the mind. As functional states, mental states
are recognized solely in terms of their causal roles in an input-output system; as such, factors
external to the system play no role in the individuation of mental states (see Burge 1991, 556-
558). On the other hand, as computational states, mental states are recognized solely in terms
of their syntax or formal properties; and as such external factors play no role in the
individuation of mental states. In this connection, it will be recalled that for Fodor
computational processes are carried out using a system of representation that is internal to the
brain, which is called the language of thought. On this view, mental states are recognized as

Some regard externalism as part of commonsense or folk psychology (see Kim 1998, 205; and Stich 1994, 357-
58). As Stich (1994, 357), for instance, while explaining the viewpoint of Cummins, writes: following
Putnam, Burge, and others, Cummins also maintains that the taxonomy of intentional states exploited by folk
psychology is anti-individualisticbeliefs and desires cannot be specified in a way that is independent of
environment or history (p. 140). This seems to conflict, however, with the fact that Cartesian dualism, which is
also regarded as part of commonsense or folk psychology, is widely recognized as an internalist theory of the

symbols in this internal language; and as an internal language, the recognition of these
symbols cannot admit of external factors such as the semantics or what the symbols represent
in the outside world. The recognition of these symbols would have to be based solely on their
formal or syntactical properties. As Fodor (1991, 486) writes:

I take it that computational processes are both symbolic and formal. They are symbolic
because they are defined over representations, and they are formal because they apply
to representations, in virtue (roughly) the syntax of the representationsWhat makes
syntactic operations a species of formal operations is that being syntactic is a way of
not being semantic. Formal operations are the ones that are specified without reference
to such semantic properties of representations as, for example, truth, reference and
meaningBut now, if the computational theory of the mind is true (and if, as we may
assume, content is a semantic notion par excellence) it follows that content alone
cannot distinguish thoughts. More exactly, the computational theory of the mind
requires that two thoughts can be distinct in content only if they can be identified with
relations to formally distinct representations.
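The syntactic individuation Fodor describes can be caricatured in a short programming sketch. The class, the method, and the example symbols below are our own invented illustrations, not anything from Fodor: a toy system counts two "thoughts" as distinct exactly when their internal symbol strings are formally distinct, regardless of what those symbols denote.

```python
# A toy illustration of purely syntactic individuation: two internal
# "thoughts" count as the same if and only if their symbol strings are
# formally (typographically) identical, whatever they denote.

class Thought:
    def __init__(self, symbols):
        self.symbols = tuple(symbols)  # the formal, syntactic shape

    def same_thought_as(self, other):
        # Individuation looks only at syntax, never at reference.
        return self.symbols == other.symbols

# "Hesperus" and "Phosphorus" both denote Venus, but as symbol strings
# they are formally distinct, so the toy system treats the thoughts as
# different even though the referent is one and the same.
t1 = Thought(["Hesperus", "is", "bright"])
t2 = Thought(["Phosphorus", "is", "bright"])
t3 = Thought(["Hesperus", "is", "bright"])

print(t1.same_thought_as(t2))  # False: formally distinct symbols
print(t1.same_thought_as(t3))  # True: same syntactic shape
```

Nothing in the comparison consults what the symbols stand for in the world, which is precisely the internalist feature the externalist arguments below will target.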

2. Some Anti-Materialist Arguments

On the basis of these features of the mind, certain arguments have been constructed that
challenge certain materialist views on the nature of the mind. The basic reasoning is that any
materialist explanation of the mind that eventually leaves out one or some of these features of
the mind is bound to be incomplete and hence mistaken. These arguments, however, are not
intended to show that materialism in general is mistaken and thus to advance metaphysical
views, dualist or idealist, but simply to show that certain types of materialist views are
mistaken in order to advance an alternative materialist view. The issue, in short, is not whether
materialism in general is mistaken but whether certain types of materialism are mistaken.
Corresponding to the general and particular levels at which the identity of the mind can be
investigated, these arguments can be grouped into two broad types: those concerning
mentality in general and those concerning particular mental states.

2.1. Arguments Concerning Mentality in General

As the mind's defining feature is consciousness, whose critical features are qualia and
intentionality, the arguments falling under this category are those that point out the failure of
some materialist views to account for consciousness in general or for the critical properties of
consciousness, namely qualia and intentionality.

[a. Arguments Involving Qualia]

China Brain

The China brain argument or thought experiment was conceived by Ned Block (1991) to
challenge the functionalist view of the mind, in particular, the view that two or more systems
consisting of different materials but whose functional organizations are the same would have
the same functional states. Block simply asks us to imagine a situation where every Chinese
citizen represents a particular neuron in the brain, and where all these Chinese citizens are
organized in the same way as the neurons in the brain are organized. Furthermore, let us
suppose that there is a system whereby the Chinese citizens communicate with one another,
say via radios or cellphones, in the same way that neurons interact with one another. The
question now is: Can we meaningfully say that the Chinese citizens taken as one organized
system has the same consciousness or mental states as the human brain? Functionalist theories
will have to answer in the affirmative, but this will run counter to common intuitions, which
will give us a negative answer. This argument is also called by Block and some other
philosophers the absent-qualia argument, because these philosophers regard consciousness
and qualia as nearly identical, given that all conscious states have qualia.
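The functionalist thesis that Block targets, namely that sameness of functional organization suffices for sameness of mental states, can be put as a small sketch. The state table and the two "substrates" below are invented for illustration; nothing hangs on the particular states or outputs.

```python
# Functionalism, caricatured: a mental state is a node in an
# input-output transition table. Any two systems that realize the same
# table, whether made of neurons or of citizens with radios, count as
# functionally identical.

TABLE = {
    # (current state, stimulus) -> (next state, behavioral output)
    ("calm", "pinprick"): ("pain", "wince"),
    ("pain", "aspirin"):  ("calm", "relief"),
}

class NeuralSystem:
    """Realizes TABLE by direct lookup (the organic brain)."""
    def __init__(self):
        self.state = "calm"

    def step(self, stimulus):
        self.state, output = TABLE[(self.state, stimulus)]
        return output

class CitizenSystem:
    """Realizes the very same TABLE by explicit search, standing in for
    a differently built substrate (Block's nation of citizens)."""
    def __init__(self):
        self.state = "calm"

    def step(self, stimulus):
        for (state, inp), (nxt, out) in TABLE.items():
            if state == self.state and inp == stimulus:
                self.state = nxt
                return out
        raise ValueError("no transition defined")

stimuli = ["pinprick", "aspirin", "pinprick"]
brain, nation = NeuralSystem(), CitizenSystem()
print([brain.step(s) for s in stimuli])   # ['wince', 'relief', 'wince']
print([nation.step(s) for s in stimuli])  # ['wince', 'relief', 'wince']
```

On the functionalist view the two systems are in the same "pain" state when they wince; Block's intuition is that the second system plainly lacks the accompanying quale.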

Of Bats and Mary

Thomas Nagel (1991) argues that we can never know what it is like, or what it feels like, to be
bats even if we have all the relevant knowledge concerning bats, which includes their
physiology and behavioral patterns. What this implies is that any physical explanation of bats
will leave out the phenomenal quality of being bats, thereby rendering such an explanation
incomplete, or not exhaustive, of what bats are. A related argument is put forward by Frank
Jackson (1991), which concerns what Mary doesn't know. Jackson imagines Mary to be an
expert on neurobiology and the mechanics of colors. Mary can perfectly explain what happens
to the brain as we perceive colors of different wavelengths. The only problem with Mary is
that she has lived all her life in a room where the only colors are black, white, and shades of
gray. Mary is not color blind but due to the circumstances she is in she has not perceived
colors other than the ones mentioned. The question now is: Does Mary, who has complete
knowledge of the physics involved in seeing colors but lacks the actual experience of seeing
some colors, already know everything there is to know about seeing colors? According to
Jackson, Mary will never know the phenomenal features of seeing the other colors: what it is
like to see red or blue, for instance.

In both arguments, Nagel's and Jackson's, it is shown that knowing the physics involved in
acts of consciousness will not capture something integral about such acts, namely, their
phenomenal properties. Bats are conscious beings, and just knowing their physiology and
behavioral patterns will not tell us what it is like to be bats. Perceiving colors is a conscious
act, and just knowing the physics involved in such an act, without the actual experience of it,
will not tell us what it is like to perceive colors. The consequence to be drawn from these
considerations is that consciousness or mentality cannot be equated with physical processes
or properties.

Inverted Qualia

The inverted qualia argument, also known as the inverted spectrum argument and believed to
have been first thought of by John Locke (see Chalmers 1996, 263), is another argument that
challenges functionalist theories of mind. To explain this argument let us use the illustration
given by Searle (2004, 85). Accordingly, suppose we ask Mark and Joseph to pick out the blue
pens from a set of blue and red pens, and both have successfully done so. Suppose, however,
that Mark and Joseph have inverted internal experiences attached to the words "red" and
"blue". Mark sees red and blue pens in the usual way; that is, he sees red pens as red and blue
pens as blue. Joseph, however, sees them differently. Joseph sees red pens in the same way
that Mark sees blue pens, that is, as blue, while he sees blue pens in the same way that Mark
sees red pens, that is, as red. In short, Mark's and Joseph's qualia in seeing red and blue pens
are inverted. Now the point of the argument is this: since Mark and Joseph exhibit the same
reaction to the same input, that is, both pick out the blue pens in response to the same verbal
command, the functionalist explanation of how the mind works, where mental states are
defined in terms of their causal or functional roles, must count their mental states as identical
and thus fails to capture the phenomenon of inverted qualia. Consequently, functionalist
theories of the mind are deficient and hence mistaken.
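The structure of the objection can be sketched in a few lines. The "qualia codes" below are invented labels, and the sketch models only the behavioral equivalence, not experience itself: two inverted inner mappings yield exactly the same pen-picking behavior.

```python
# Mark and Joseph map pen colors to inner "qualia" codes in inverted
# ways, yet both pass the very same behavioral test (picking out the
# blue pens on command).

PENS = [("pen1", "blue"), ("pen2", "red"), ("pen3", "blue")]

MARK_QUALIA   = {"red": "Q_RED",  "blue": "Q_BLUE"}   # the usual mapping
JOSEPH_QUALIA = {"red": "Q_BLUE", "blue": "Q_RED"}    # the inverted mapping

def pick_blue(qualia_map, pens):
    # Each subject picks whichever pens produce the inner state he has
    # learned to associate with the word "blue". For Joseph that inner
    # state is Q_RED, but his outward behavior is indistinguishable.
    target = qualia_map["blue"]
    return [name for name, color in pens if qualia_map[color] == target]

print(pick_blue(MARK_QUALIA, PENS))    # ['pen1', 'pen3']
print(pick_blue(JOSEPH_QUALIA, PENS))  # ['pen1', 'pen3']
```

Since the input-output profile is identical, a purely functional description assigns Mark and Joseph the same states, which is just what the argument denies.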

Fading and Dancing Qualia

The absent-qualia and inverted qualia arguments are intended to show that the functionalist
idea that conscious states are determined by functional organization is mistaken, for it is
possible that two systems with the same functional organization do not share the same
conscious states: either one has conscious states while the other does not, or one has
conscious states different from those of the other. David Chalmers (1996, 246-275), however,
comes to the defense of the said functionalist idea. Chalmers argues that the two
anti-functionalist arguments fail because they lead to impossible consequences. More
specifically, he shows that the absent-qualia phenomenon leads to the fading-qualia
phenomenon, while the inverted-qualia phenomenon leads to the dancing-qualia
phenomenon. The
phenomena of fading qualia and dancing qualia are both dismissed by Chalmers as
impossible. Let us examine his reasons.

Against the absent-qualia argument, Chalmers imagines that our neurons in the brain
are gradually being replaced by silicon chips. There are thus two extreme sides to the
spectrum, the normal person with an organic brain and a robot whose synthetic brain consists
of silicon chips, and in between are intermediary entities with varying combinations of
neurons and silicon chips in their brains. According to the absent-qualia argument, the
normal person is
fully conscious while the robot is fully unconscious. Now Chalmers asks, what about the
intermediary entities? There are two possibilities: either at a certain point in the series of
replacing neurons with silicon chips consciousness or qualia suddenly disappears, or the full
qualia of the normal person fades as the neurons get gradually replaced by silicon chips until
it totally disappears when the replacement is complete. Chalmers first dismisses the
possibility of suddenly disappearing qualia, saying that it would lead to "brute
discontinuities in the laws of nature" (Chalmers 1996, 255), or gaps in the laws of nature.
Chalmers, however, also dismisses the possibility of fading qualia, on account of the
absurdity that an entity with fading qualia would always be wrong about its own inner states.
Such an entity would always say "red" whenever it is in fact seeing pink.

Against the inverted-qualia argument, Chalmers imagines that two persons, say Ray and
John, have inverted qualia (say, what they both call "red" Ray sees as red while John sees as
blue), and that what accounts for this is a slight difference in their brain structures: in some
small region of their brains, Ray has a neural circuit while John has a silicon circuit. Suppose
that a
similar silicon circuit is attached to Ray's head as a backup circuit for his neural circuit (in
that small region of his brain), and suppose too that there is a switch that can switch directly
between the neural and silicon circuits. Thus if Ray sees red, and we flip the switch, he will
then see blue. If we flip the switch several times, the change from red to blue experiences
and back will be "dancing" before his eyes. Chalmers dismisses this possibility as absurd: the
color experiences would dance while Ray continued, noticing nothing, to call "red" the
objects that he and John used to call red before the silicon circuit was attached to his head.
Chalmers (1996, 268) puts it this way: "My experiences are switching from red to blue, but I
do not notice any change. Even as we flip the switch a number of times and my qualia dance
back and forth, I will simply go about my business, noticing nothing unusual."

Chalmers later on claims that the absent-qualia phenomenon also leads to the dancing-qualia
phenomenon. Chalmers (1996, 274) thus summarizes his arguments in the following way:
To summarize: We have established that if absent qualia are possible, then fading
qualia are possible; if inverted qualia are possible, then dancing qualia are possible;
and if absent qualia are possible, then dancing qualia are possible. But it is implausible
that fading qualia are possible, and it is extremely implausible that dancing qualia are
possible. It is therefore extremely implausible that absent qualia and inverted qualia
are possible. It follows that we have good reason to believe that the principle of
organizational invariance is true, and that functional organization fully determines
conscious experiences.

[b. An Argument Involving Intentionality]

Chinese Room

The Chinese room argument or thought experiment, advanced by John Searle (1980),
specifically challenges the view of strong artificial intelligence, or the computational theory
of mind. More specifically, it challenges the claims that all there is to having a mind is the
implementation of computer programs and that, as a consequence, the mental states of
humans are no different in kind from the computational states of a running computer
program.
The Chinese room argument challenges these claims by showing that, unlike humans,
computers do not know what the contents of their computational states (or the symbols that
they manipulate) are about or represent in the world. All that computers know of these
symbols are their shapes and the ways in which they should be combined according to the
rules of their programs.

An important thought experiment that is used to defend the views of strong artificial
intelligence is the Turing test as discussed in the previous chapter. It will be recalled that
according to this test, if after a series of questions and answers, the human interrogator could
not tell, on the basis of the respondents answers, which is the human and which is the
machine, then the machine is said to have passed the test, and is consequently considered to
be intelligent. But Searle's Chinese room argument shows that even if a computing machine
passes the Turing test, intelligence still cannot be attributed to it. The reason is that attributing
intelligence means attributing consciousness, which in turn means attributing intentionality.
But a computing machine can never have intrinsic intentionality, for it does its computations
in a way that considers only the syntactic features of symbols or representations.

The Chinese room argument, in its simple form, goes this way. Imagine that a native English
speaker who does not understand Chinese is locked in a room with only two outlets.
Outside of this room are native Chinese speakers who do not know who or what is inside the
room. Through one outlet, the Chinese speakers give the person inside the room several
manuscripts bearing Chinese symbols, together with a manual of English instructions for
manipulating
the Chinese symbols. The person inside the room does not even know that the symbols are
Chinese; he only recognizes and individuates the symbols according to their shapes or formal
properties. Now imagine that the manual, which the person has immediately mastered, says
that if he recognizes certain combinations of symbols in the manuscripts given to him in one
outlet, then he should arrange certain combinations of symbols and send them to the persons
outside the room through the other outlet. Suppose now that what the person inside the room
sends to the persons outside the room are correct answers to the questions that the persons
outside the room ask him through the manuscripts that they send him. In this case, in so far as
the persons outside the room are concerned, the person inside the room understands Chinese.
But the fact is that the person inside the room does not understand the symbols: he does not even
know that they are Chinese; he does not know what they represent; and he simply manipulates
them according to the instructions in the manual. Technically speaking, he does not know the
semantics of those symbols; he only knows their syntax.
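The room's manual can be sketched as a pure pattern-to-pattern rulebook. The romanized strings below are invented placeholders standing in for the Chinese characters; the point is only that nothing in the procedure involves what the symbols mean.

```python
# A caricature of the Chinese room's rulebook: the operator matches
# input shapes against rules and copies out the listed reply. There is
# no translation step, and no meanings anywhere in the procedure.

MANUAL = {
    # input symbol string -> output symbol string (shapes only)
    "ni hao ma?":          "wo hen hao.",
    "ni hui zhongwen ma?": "hui.",
}

def room(input_symbols):
    # Purely formal matching: look up the shape, emit the listed shape.
    # An unrecognized shape gets a noncommittal placeholder reply.
    return MANUAL.get(input_symbols, "...")

print(room("ni hao ma?"))  # 'wo hen hao.': fluent-looking output, zero understanding
```

To the questioners outside, the replies look competent; inside, only syntax is being processed, which is Searle's point against equating mind with program.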

2.2. Arguments Concerning Particular Mental States

There are two thought experiments that challenge the thesis of internalism, or which argue
that propositional attitudes or intentional states have wide contents. One is the twin-earth
thought experiment by Hilary Putnam (1994), while the other is the arthritis thought
experiment by Tyler Burge (1991). Putnam's thought experiment argues for the influence of
the natural environment on the individuation of the contents of propositional attitudes, and as
such is described as espousing natural-kind externalism; while Burge's argues for the
influence of the social environment, and as such is described as espousing social externalism.
Both thought experiments argue that internalism fails because it does not capture the
difference in the contents of propositional attitudes that is brought about by a difference in the
conditions external to these states.

Twin Earth

Imagine that our earth, which we shall call "normal earth", has a duplicate, which we shall
call "twin earth". Normal earth and twin earth have identical features except for the
following: in normal earth, the substance called "water" has the chemical composition H2O,
while that in twin earth has the composition XYZ. From our point of view, let us call the
substance called "water" in normal earth "water" and the substance called "water" in twin
earth "twater". Though water and twater differ in chemical composition (water is H2O while
twater is XYZ), they,
however, have the same properties, which include liquidity, transparency, and wetness. As

such, whatever can be done using water, like quenching thirst and washing the laundry, can
also be done using twater. Now imagine also that David in normal earth has a physical
duplicate, molecule for molecule, in twin earth. So as not to confuse him with David in
normal earth, we will refer to David's physical duplicate in twin earth as "David2". Let us
next suppose that David and David2 are both thirsty and as a result desire to drink the
substance that they both call "water". From our point of view, this means that David desires
to drink water while David2 desires to drink twater. The critical question here is whether
David and David2 desire to drink the same thing, that is, whether the contents of their desires
are the same.
The answer, according to Putnam, is in the negative, precisely because water and twater have
different chemical compositions. What this means is that the identity of their desires, or the
individuation of their desires, is partly constituted by some external factor, namely, the
chemical composition of what they refer to. And what is true for desire in this context is true
for all intentional
states or propositional attitudes.
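The narrow/wide contrast that the thought experiment turns on can be sketched as follows. The two functions and the environment dictionaries are invented illustrations: "narrow content" depends only on the inner symbol, "wide content" also on what the environment supplies as its referent.

```python
# David and David2 are internally identical: the same inner symbol,
# "water", drives the same behavior. Wide content differs because each
# environment supplies a different referent for that symbol.

NORMAL_EARTH = {"water": "H2O"}
TWIN_EARTH   = {"water": "XYZ"}

def narrow_content(inner_symbol):
    # Internalist individuation: only the inner symbol matters.
    return inner_symbol

def wide_content(inner_symbol, environment):
    # Externalist individuation: the environment fixes the referent.
    return (inner_symbol, environment[inner_symbol])

# Same narrow content...
print(narrow_content("water") == narrow_content("water"))  # True
# ...but different wide content.
print(wide_content("water", NORMAL_EARTH) ==
      wide_content("water", TWIN_EARTH))                   # False
```

Putnam's conclusion, in these terms, is that the desires of David and David2 are individuated by their wide, not merely their narrow, content.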


Arthritis

Suppose that John goes to see a doctor and tells the doctor that he has arthritis in his thigh, to
which the doctor replies that John is mistaken, for "arthritis" refers to the inflammation of the
joints and not of bones in general. Let us grant this situation as the actual situation. Now
let us suppose a counterfactual situation whose only difference from the actual one is that
"arthritis" is defined as the inflammation of the bones in general and not just the inflammation
of the joints. In this counterfactual situation, John goes through the same motion of seeing a
doctor and telling the doctor that he has arthritis in his thigh. Now, unlike in the actual
situation where the doctor tells John that he is mistaken, in this counterfactual situation the
doctor agrees with what John tells him. The critical question here is whether John's belief
that he has arthritis in the actual situation is the same as his belief that he has arthritis in the
counterfactual situation, that is, whether the contents of John's belief in the actual and
counterfactual situations are the same. The answer, according to Burge, is in
the negative. They cannot be the same, since John's belief in the actual situation is false while
his belief in the counterfactual one is true. Now what accounts for this difference in mental
content is the difference in the linguistic uses of the word "arthritis" in the actual and
counterfactual situations, or more precisely, in how the word "arthritis" relates to the events
or states of affairs in the world assumed in such situations. What this means is that the
contents of our beliefs, or of any other propositional attitudes, can only be individuated by
considering their environmental conditions, such as the conventions or practices of one's
speech community.
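Burge's point can be rendered as a toy sketch: the same inner state ("arthritis in my thigh") receives a different truth value depending on how the surrounding linguistic community uses the word. The dictionaries below are invented stand-ins for the two speech communities, not medical fact.

```python
# Two speech communities that fix the extension of "arthritis"
# differently. John's inner state is the same in both situations; only
# the community's usage of the word differs.

ACTUAL = {"arthritis": {"joints"}}                    # joints only
COUNTERFACTUAL = {"arthritis": {"joints", "thigh"}}   # bones in general

def belief_true(ailment_site, community):
    # The belief "I have arthritis in my <site>" is true just in case
    # the community's use of "arthritis" covers that site.
    return ailment_site in community["arthritis"]

print(belief_true("thigh", ACTUAL))          # False: John is mistaken
print(belief_true("thigh", COUNTERFACTUAL))  # True: the same belief is correct
```

Since truth value tracks the community's usage rather than anything inside John, the belief's content is individuated partly by the social environment.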

3. Concluding Remarks

The following table summarizes our discussion on the distinguishing features of the mind.

The Mind's Distinguishing Features

Defining Mentality
    Consciousness
        Qualia (Ontological Subjectivity, Privacy, and Others)
        Intentionality

Individuating Mental States
    Phenomenal States: Quality and Intensity
    Intentional States: Quality and Content (Narrow Content or Wide Content)

The anti-materialist arguments discussed here merely concern the distinguishing features of
the mind. There are other arguments, however, which take other forms, such as those that
appeal to inconsistency and to explanatory power. We shall revisit the arguments discussed
here, together with these other ones, in Chapter 7, where they are presented as difficulties or
stumbling blocks for the project of assimilating the mind into the scientific worldview.
Needless to say, there are counter-arguments to these arguments, and arguments against these
counter-arguments, and so on. It is simply impossible for us to discuss all of them in one
book; we can only include in our discussion those that serve our purposes.