
Prolegomena to a Complete Explanatory Theory of Consciousness


Jeffrey J. Steinberg 12.23.09

William Mailliw & So Publishers New York, New York


! Table of Contents !
1. Consciousness in Broad Strokes ........................ 3

2. The Form .............................................. 5
An analysis of consciousness that uncovers the form that any theory of consciousness must take if it is to fully explain the phenomenon.
    Philosophical Zombies ................................ 5
    The Materialist Assumption ........................... 5
    Philosophical Golems ................................. 6
    The Explanatory Gap .................................. 8
    The Good Reason ...................................... 10

3. Brain Forms ........................................... 12
Analysis of several brain components in light of the form.
    Computation and Emergence ............................ 12
    Neurotransmitters .................................... 15
    Coalitions of Neurons ................................ 15
    Action Potential Patterns ............................ 17
    Electromagnetic Fields ............................... 18
    Microtubules & The Orch OR Model ..................... 20

4. Theory Forms .......................................... 23
Analysis of a few theories in light of the form.
    HOT Theories ......................................... 23
    Information Integration .............................. 26
    Global Workspace Model ............................... 29
    Comparison Table ..................................... 37

5. A Formulation of The Form ............................. 38
A synthesis of the reviewed material formulated according to the form.

6. References and Sources ................................ 41

7. Supplementary Materials ............................... 47
    Science, Worlds, and Reality ......................... 47
    Scientific Realism ................................... 49
    Basic Problems in Addressing Consciousness Scientifically: A Critique of Dehaene's Approach ... 52


! Consciousness in Broad Strokes !


In the fullest and most sweeping sense of the term, consciousness is the only thing one can and does know, for everything as one has come to know it is merely another moment of consciousness. Kant made a variant of this point when he claimed that one can only know phenomena. However, this definition of consciousness, while motivated by an interesting insight into the nature of one's reality, is generally useless. The problem of how to define consciousness has been plaguing man since time immemorial. Yet, it seems we can all agree on at least one aspect of consciousness: that for anything that is conscious, there is something it is like to be it, to use the famous words of Thomas Nagel [62]. That is, there is nothing it is like to be a rock, but there is something it is like to be you. Clearly, this formulation can quickly border on gibberish, but it nonetheless gets at the most profound fact of consciousness: that it is the substance of experience, and that things that don't have consciousness don't have experience.

The problem with experience is that it is subjective: my experience is mine and yours yours, and to each his own. That is, any individual experience, e.g. yours or mine, is not objectively available to be another's experience. Thus, consciousness is a particularly bizarre phenomenon1 in the world insofar as one knows it exists because one experiences it from one side, i.e. subjectively, but cannot point to it out there in the world in the way one can with light and gravity. Unlike those other phenomena, which are available for everyone to witness, i.e. are objective, consciousness, at least in everyday life, is totally inaccessible in others, and in that way appears metaphysical, i.e. beyond the purview of empiricism.
1 While it is often a faux pas to cite a dictionary, I do believe Webster's Unabridged Second is of particular aid in clarifying my usage of "phenomenon." Webster's reads:

Phenomenon: any observable fact or event; as: (a) in the broadest sense, any fact or event whatever; any item of experience or reality; (c) an object of sense perception as distinguished from an ultimate reality. This meaning is due to Kant's absolute separation of the thing-in-itself from the object of experience, or phenomenon. It is more thoroughgoing than the ancient distinction, since Kant asserts the utter unknowability of the thing-in-itself, while the ancients conceived essences to be knowable. (d) in positivistic and scientific usage, any fact or event of scientific interest susceptible of scientific description and explanation [77].

The combined meanings of (a) and (d) specify my use of the word, which is thus distinct from phenomenology, the study of experience, and from phenomenal consciousness, a term coined by Ned Block, who writes, "Phenomenal consciousness is experience; the phenomenally conscious aspect of a state is what it is like to be in that state" [5]. These similarities within the terminology are an unfortunate idiosyncrasy that is rather unavoidable if one wishes to stay within the parlance.


Thus, unlike the latter entities, the exact nature or relationship of consciousness to its surroundings is unspecified, because there is no easy empirical theory. But more fundamentally, this means consciousness can always be construed as an end, that is, as that which has no causal properties, the existence or nonexistence of which has no discernible consequences due to the inherent and fundamental separation of subjectivities.

In everyday practice, we use a number of quick and dirty tests for consciousness, such as the ability to speak coherently or to respond to complex problems reasonably, to get around these epistemic issues. How exactly we do this is still controversial, as exhibited by the robust theory-of-mind discourse. However, whether and how we attribute consciousness to others is not my concern here. I raise the specter of theory of mind only to say I wish to avoid its tangles, and instead be concerned with consciousness as a scientific object. That is, I am not concerned with how one attributes consciousness to another, but with what consciousness is and how we come to such knowledge.

My approach is to set out on a philosophical trajectory, trying to figure out not what consciousness is in particular, but rather what form an explanatory theory of it must take. Then, having found the outline of this necessary form, I will delve into the brain, analyzing whether any of the known components fit the bill. In doing so, I will review the relevant literature, discussing why some approaches or conclusions are necessarily wrongheaded, ultimately trying to extract those elements of each that will be useful in constructing a complete explanatory theory of consciousness. This paper is intended to be more exploratory than conclusive. If anything, I would like it to serve as something like notes on a sketch of what a final theory should look like.


! The Form !
Any robust scientific theory of consciousness must somehow address two problems that are in fact sides of the same coin: the explanatory gap, a.k.a. Leibniz's Gap [16,58,62], and the Hard Problem [9]. The explanatory gap exists between experience and the material world in that no one has even the whiff of a notion as to how to explain why and how some specific physicality gives rise to phenomenal experience as we know it rather than being phenomenologically void, as we assume of rocks. The hard problem is to close that gap.

David Chalmers, who famously coined the latter term, likes to illustrate the thrust of the hard problem with his philosophical zombies, creatures that, atom for atom, behavior for behavior, are indistinguishable from regular humans but lack consciousness, i.e. there is nothing it is like to be a zombie [9]. The zombic hunch, as Daniel Dennett likes to call it [22], is that if zombies are conceivable, there is a real difference between a person and a zombie. If one grants the zombic hunch, then one inevitably comes to the conclusion that any physical description of a person will necessarily be leaving something out, leaving out qualia, subjectivity; that is, one inevitably runs up against the explanatory gap. While it is an amusing thought experiment, I believe, like Dennett and others, that it is misleading and fundamentally impossible. However, I do not side with Dennett in totally dismissing the hard problem as an illusion. Dennett likes to write off zombies by writing off qualia, and what-it-is-likeness, and sometimes even consciousness altogether [21,22]. While he is right in calling such terms squishy and ill-defined, that's ultimately his only case against them, and thus no case at all. To introduce many of my positions, I'd like to take a few stabs at the zombies using the same weapons as Dennett, namely the materialist assumption and counter thought experiments, but with a little less bloodlust than him, trying to avoid the nuclear option of out-and-out denialism.

My argument against zombies stems from one assumption, and that is the materialist assumption. The materialist assumption as I conceive it is that every phenomenon has a material substrate, e.g. light is photons, air is atoms, magnetism is a physical force.

Consciousness is a particularly strange phenomenon in that it can be taken to be the only phenomenon one encounters, as explained in the introduction. However, if we are to be reasonable social beings, i.e. not metaphysical solipsists, then we must give credence to an external, objective reality; that is, we have to assume it2. Thus, we have the irrefutable je ne sais quoi to which science appeals in its investigations and describes in its equations and theories, the completeness of which will necessarily remain unknown because it fundamentally is an assumption. The je ne sais quoi of the materialist assumption is not the color red nor the electromagnetic wave of a particular frequency, but the referent of the latter3.

Further, if we are to be reasonable social beings, then we must not only assume an external reality, but also attribute consciousness to other beings. Otherwise, one is again a solipsist, albeit a less radical one. But if one attributes consciousness to others, one must treat consciousness as a phenomenon in the world, and thereby assume consciousness has a material substrate in the je ne sais quoi, i.e. one must place it in the same category as the je ne sais quoi, equally describable by science as fire is. To do anything else is to be a spiritist, or dualist, i.e. one who posits two fundamentally different realities, the je ne sais quoi and a spiritual one; and dualists have no place, no word, no nothing in science.

Now, if we are to take the zombie thought experiment at its word, namely that the zombies are the same as humans bit for bit, then they must have the same je ne sais quoi as humans and therefore the same material substrate of consciousness as we do, thereby having consciousness, and consequently violating the very definition of a zombie. Hence, zombies are impossible, Q.E.D. If we are to be reasonable people, people worth engaging, then we must cede that zombies are inconceivable, nothing more than linguistic illusions.

But that was an easy punch, and I am certainly not the first to throw it. So let me try another attack, not so much against the zombies themselves as against the reason behind their genesis, i.e. the hard problem and its objection, "But where's the experience?" I have a counter thought experiment: imagine I am God, the omnipotent, all-knowing creator down to the very fabric of the universe. I have created conscious human-like beings; call them philosophical golems.
2 I'm essentially just making the Kantian argument of the division between noumena and phenomena.
3 For a more thorough discussion of what I mean by the je ne sais quoi and science's relation to it, see the supplementary essays.


They are similar to humans, only the consciousness in them is literally a little biological switch in the brain that all the neurons run through, and that explicitly codes for any content that could be conscious, or something like that. The circuitry doesn't matter; let us not quibble about the exact nature of the setup. The only important factor is that there is a switch, and when it is on the being is conscious, and when it is not the being is not. This switch doesn't cause an electromagnetic field or a change in neural activity or anything else that one might want to point to as being consciousness. I am God, and I have made a switch such that when it is on, the neural computations/representations running through it are conscious, fundamentally, metaphysically, always; that is just the way it is because I am God and I said so, end of story. (I'm trying to avoid any quibbling or nitpicking of the sort that Mary the neuroscientist in the colorless lab has been subjected to, with the blue bananas and such [ibid].)

The situation I am creating is one in which we know what consciousness is fundamentally, because I have created it and told you so. It should resemble the situation at the proposed end of neuroscience [10], when every detail of the brain's physical functioning is known, the only difference being that here we also know what consciousness is physically. It is the exact opposite of the zombie experiment in that we know that the golems can be conscious and we know what makes them conscious, i.e. we know what consciousness is. Now that we are in this situation, knowing fundamentally what consciousness is, a zombist, someone of the zombic persuasion [22], is still going to protest: but how does it make the computations conscious? Why does it make those representations conscious? They are still going to insist that I am leaving experience out. These questions and their likes are obviously inane given the situation created by the thought experiment. The questions are equivalent to asking "why does crushing a grape flatten it?" or "what makes calcium carbonate chalk?" or other ridiculous questions of identity. We know what consciousness, experience, and qualia are physically, making asking after them senseless. Thus, in the best possible world concerning our understanding of consciousness, i.e. the world of philosophical golems, the hard question remains unanswered, not because it is hard, but because it is formulated such that it cannot be answered: it asks for an identity which it will nevertheless reject. In this light, we can see it for what it is, not a hard question, but an impossible one, one that cannot be answered in any world.

If we now compare this to the situation at the end of neuroscience, when we know all there is to know about the brain physically, all the bosons and fermions or whatever the elementary particles of the day are, and, as materialists, we assume that that is all there is to know scientifically, then we see the zombic questioning is similarly fruitless. In a complete neuroscience, we may not know what consciousness is, i.e. we may not be able to point to it, but we can be sure it has been described somewhere in our description of the brain. But to any proposal that consciousness is this or that part, the zombist will still insist, just as he ever has, "Where is the experience? All I see is atoms and bosons and such." Why do we think that science can answer their questions here, in a potential future of our world, if the question can't be answered in the best possible world? The hard question is all the more the impossible question. Science has no way of answering it, which is to say science has no way to close the explanatory gap. Instead, we see that at some point we are just going to have to accept that consciousness is such and such physical process/phenomenon as a brute fact of the universe.

The consequence of this situation is that we must discover what said physical process is. But how will we know when we have it? We won't. We in fact never will. That is because science, our only means to be sure of anything, cannot, as stated, close the explanatory gap. However, I don't believe the explanatory gap is a fundamental divide. While I don't believe it can be closed, I do believe it can be bridged. To understand what I mean by this, a more thorough exploration of the gap is necessary. The explanatory gap has its origin in Leibniz, who employed the analogy of a mill to explicate it.
It must be confessed that perception and that which depends on it are inexplicable in mechanical terms, that is, in terms of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, one could imagine it increased in size, while keeping the same proportions, so that one could go into it as into a mill. In that case, we should, on examining its interior, find only parts that work upon one another, and never anything by which to explain a perception. Thus, perception must be sought in a simple substance, and not in a composite or machine. Further, nothing but this (namely, perceptions and their changes) can be found in a simple substance. It is in this alone also that all the internal actions of simple substances can consist (Monadology 17).

Leibniz points out the problem of the explanatory gap as being between the physical (the parts that work upon one another) and the subjective (perception).

From this he concludes that perceptions must be embodied in a free-standing substance, a monad. That is, to place perception in the world, he posits a metaphysically distinct ontological entity, separate from the other ontological entities that compose the world. Implicit in this is that those other ontological entities, the ones that are the physical, are also posited4. Which is to say, the materialist assumption is here, offering a stable background for the perceived world5. There is perception, which is given, and the posited entities, which are assumed. To create fluidity between them, Leibniz joins them conceptually in a third, the monad, as a union of their two essences, subjectivity and physicality respectively.

What one should notice is that all the players here, monads, perception, and posited ontological entities, are in Leibniz's mind. They are all at root types of psychological entities. Thus, the explanatory gap is not between actual things, but between his conceptions of them. It exists between one type of psychological entity (perception, which is a known) and another (an assumption about those perceptions), and in that way between two different ideas. These psychological entities are of two different categories that have no psychospatial contiguity. Thus, no path across the mental landscape will lead from one to the next. That is, no line of thought can directly lead from one to the next. Nagel also explains this, saying, "We do not at present possess the conceptual equipment to understand how subjective and physical features could both be essential aspects of a single entity or process" [63]. While Leibniz tries to flout this fact by conjuring an entity that by definition encapsulates both, this is of no use to science.

However, he does offer us a guiding hand. Leibniz did not try to bring the two sides of the gap, the noncontiguous psychological categories of given perception and assumed physicality, together directly, for that is impossible, but instead created a conceptual bridge, albeit one that was structurally unsound, with his monad. What is important is that he tries to bridge the gap, not close it. What a theory of consciousness needs is a psychological entity that serves as a liaison between two distinct others. And fortunately, we have such entities; they are called metaphors.

4 Essentially, I am speaking of noumena, but given the baggage that comes with that term, I try to avoid it.
5 I.e., it is positing the je ne sais quoi, which is the same as the ontological entities spoken of here.


Any theory of consciousness must be metaphorical in part if it is to address the explanatory gap. We cannot hope for a direct, understandable reduction of experience to the physical, but we can hope to understand the relationship by proxy. The mark of a good theory will be a good metaphor, and a good metaphor is highly isomorphic. In the case of the philosophical golems, there can be no good theory of their consciousness because there is no good metaphor between the switch and consciousness other than the sole isomorphism that one can be conscious or unconscious like a switch. However, we need not expect that the identity between consciousness and its physicality be so incongruous in our world. We cannot know that it is not incongruous, but we should proceed with the hope that it is not, for that is the only hope available for a theory of consciousness.

Because science can only give us physical descriptions, what we will need is some good reason to believe that such and such part of the description of the brain is fundamentally one and the same as consciousness. This good reason will not be of the scientific sort but of the philosophical sort, and will be metaphorical in nature. It will read something like this: this physical phenomenon has similar relational properties within itself as those we know in this aspect of consciousness, and the way these two parts of the brain relate accounts for the relation between these two phenomenological relations. To construct this good reason, we will need a thorough understanding of both the physical landscape of the brain and the phenomenological landscape of consciousness, such that we can try to discover the isomorphisms. The good reason will be a metaphorical bridge across the explanatory gap. The hard problem will remain the impossible problem, thus unsolved, but the explanatory gap will be dealt with as best as it possibly can be.

Currently, our understanding of the phenomenological landscape is rather impoverished. However, we do know its general topography, which is to say we know at least six essential features of consciousness. It is:

Intentional: consciousness is always about something.

Temporal: consciousness is always a continuous process in time6.
Unified: every percept is bound into a perceived whole7.

Diverse: there are innumerable percepts, from the color red to the smell of coffee to the tantalizing experience of a tip-of-the-tongue moment.

Quantal: one can be conscious, as in waking, or unconscious, as in a coma; a thought can suddenly jump from unconsciousness into consciousness, as when one suddenly remembers he left the door unlocked; sensory information moves from unconscious processing to consciousness, where it is further processed, at some definite point, e.g. one doesn't perceive the ratios of red to green to blue, only a distinct color.

Graded: one can be minimally conscious of something, as when not paying attention to an object in one's peripheral vision, or very conscious of it, as when attention is directed upon it (though not necessarily one's gaze).

Thus, if we are to imagine the brain as Leibniz's mill, we must walk into it and search for the particular cog or axle that has properties that resemble these, that is, one that is isomorphic to consciousness in these respects. We should be careful not to confuse cognitive effects that involve consciousness, such as the attentional blink, with the essential features of consciousness, for then we will be mistaking consciousness's workings with other parts of the brain for consciousness itself. While the six features above are phenomenologically essential, there are also basic physiological facts of the brain that must be accounted for when considering consciousness, namely that:

Consciousness is not localized to any region of the brain.

Not every part of the brain is involved in consciousness.

The brain is still active when one is unconscious.

Just given these nine facts about consciousness, all of which need to be accounted for, and knowing at least in part what form a theory must take, one can begin to make significant headway in developing a complete theory.

6 Dennett's contention that brain time is not continuous only holds for individual perceptual events, which may very well not occur in a smooth temporal fashion. Nonetheless, there is the overall fact that one is continuously conscious, which cannot be an illusion of memory nor accounted for by his multiple-drafts model [21].
7 Similarly, Dennett's claim that perceptual space is not continuous or all there does not negate the fact that all the percepts are part of one consciousness, a fact that is totally unaddressed by his multiplex alternative to the Cartesian theater [21].
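The sections that follow repeatedly score candidate correlates and theories against this list of nine features. As a purely illustrative aid, and not part of the argument itself, the following minimal Python sketch shows that bookkeeping as a simple checklist. The function name and structure are invented here, and the example scoring of an electromagnetic-field candidate only loosely anticipates the discussion in the next section rather than stating a settled verdict.

# Toy checklist for scoring a candidate correlate or theory against the
# nine features listed above. The scoring shown is illustrative only.
FEATURES = [
    # phenomenological
    "intentional", "temporal", "unified", "diverse", "quantal", "graded",
    # physiological
    "non-localized", "non-universal", "distinct from constant activity",
]

def coverage(candidate, accounted_for):
    """Report which of the nine features a candidate accounts for."""
    unknown = set(accounted_for) - set(FEATURES)
    if unknown:
        raise ValueError(f"not among the nine features: {unknown}")
    hits = [f for f in FEATURES if f in accounted_for]
    misses = [f for f in FEATURES if f not in accounted_for]
    print(f"{candidate}: accounts for {len(hits)}/9 features")
    print("  covered:", ", ".join(hits) or "none")
    print("  missing:", ", ".join(misses) or "none")

# Illustrative only: roughly how the electromagnetic-field candidate fares below.
coverage("electromagnetic field", {
    "temporal", "unified", "quantal", "graded",
    "non-localized", "non-universal", "distinct from constant activity",
})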


! Brain Forms !
I would now like to take the method detailed in the previous section and begin applying it to the brain. I will first go through some basic arguments in relation to the brain itself, further narrowing down what the neural correlate of consciousness (NCC) could be. In the next section, I will review a number of theories of consciousness, highlighting how each accounts for certain features of consciousness and not others, thus failing to be anything but a partial theory of consciousness. Having pulled the best aspects from each, I will try to assemble my own theory of consciousness in the concluding section, one that hopefully accounts for all of the nine features listed above. My theory will certainly not be correct, but it will offer an embryonic vision of a final theory.

First, I would like to lay out why any purely computational theory of consciousness will be no theory of consciousness at all. There exists the powerful line of reasoning that if consciousness is purely computational, then in theory a system of fluid-filled pipes could be conscious, a conclusion most people reject as absurd. However, this thought experiment doesn't actually dismiss the claim. To actually dismiss it, one must unpack exactly what computation is, along with the implicit assumption that consciousness emerges out of it. If one looks at anything that is computing without knowing that it is computing, all one would see in every case is a bunch of stuff moving around in a systematic way. Modern computers are at base electrons moving around doped silicon and circuits in a particular pattern. The first computers were series of gears, cams, axles, and cranks, all moving in a particular pattern. What computation is is a description of that pattern of movement of actual things. Just as probability isn't substantive, but merely a description of substantive action8, so computation is not substantive but merely a description. Which is to say, computation is not an ontological entity. Recognizing this, one sees that any hard computational claim is left with one of two options. One is to reject that consciousness is an ontological entity, i.e. to reject that it is a phenomenon, which is essentially what Dennett does to support his functionalism.
8 Think of probability in quantum mechanics: probability is not itself anything, not an ontological entity, but merely our best description of the je ne sais quoi.


But this is the nuclear option, and not a tenable route for science. The second option is to claim that consciousness emerges out of that which is performing the computations, i.e. that which is moving in a particular pattern. Philosophers and scientists alike love this option of appealing to emergence [3,4,8,18,25,47,48,59,72,79,80,81]. Indeed, it is almost hackneyed to say that consciousness is an emergent phenomenon. And in a way that is obvious: one has a brain, evidence indicates no single locus produces consciousness, therefore consciousness must be an emergent phenomenon of the brain. An emergent property or phenomenon is approximately defined as something that is found in a system but cannot be reduced to any given component of that system (while many will take issue with even this, I want to avoid semantics and just move past this point, assuming we all share a common idea about what emergence is).

There are many classes of things we refer to as emergent. Examples of emergent phenomena in nature are ant colonies, where the actions of any individual ant cannot explain how the colony on the whole is constructed or functions coherently; the movement of shoals of fish and flocks of birds, where no individual determines the direction, but they nonetheless move coherently as if there were unified control; or the gastric chewing of crustaceans, where the rhythmic motion of the teeth is produced by the irreducibly complex interactions of roughly ten neurons. If one looks to these or any other examples, there is one important difference between the emergence in them and that of consciousness, namely that everything we call emergent is not a phenomenon separate from that which it emerges from. Instead, what we call emergent is merely a description of a behavior of the components of the emergent system. The movement of a shoal or flock is not anything more than the movement of all the individual fish or birds. The colony is our categorization of the behavior of many ants, but it is not fundamentally different from ants; it is not its own ontological entity.

Consciousness, on the other hand, cannot be just a description of many discrete parts acting in a way that isn't explained by the individual interactions of any two parts. It is some thing. We know this because we experience it as unified. It is a phenomenon in its own right, while coherent movement is not. If one is to say that consciousness emerges in some way and is its own phenomenon, then one has two options.

One can say that when the brain acts in such and such a way, consciousness just pops into existence. But that position pretty much translates into being a dualist, for it is essentially positing that a phenomenon separate from what is already in the brain is consciousness. One could maintain that whatever it is that just pops into existence is something known in science, such as a photon, but that appears a fairly difficult position to maintain, given that one is saying matter just pops in and out of existence on a macro scale or in unison. The other option, which appears the only tenable one, is to maintain that consciousness is one and the same as one material thing in the brain, the other side of the coin that is both consciousness and its physicality, but that consciousness only emerges out of this matter when the matter is in a specific conformation or state (indeed, this is what I will argue for). In this case, consciousness is like surface tension, in which all the water molecules already have charges, but only when those charges take the specific conformation they do when forming a surface do they combine in such a way as to produce the phenomenon of surface tension, the congealing of the separate forces into one that binds the molecules together.9

In terms of consciousness, this option dictates that it is necessary for the brain to compute in some certain way to be conscious, that its neurons or neurotransmitters or electrical fields must be in some particular arrangement, performing some set of actions that can be described as computation. However, the computation, the action/arrangement itself, isn't enough, isn't sufficient for consciousness; only the substance performing those actions can give rise to consciousness. To return to my original point, we see that a purely computational theory will be no theory of consciousness at all, never describing what it is that consciousness emerges from, why it can so emerge, nor even what it would mean for consciousness to emerge whatsoever. For such a theory merely tells one about directions, motions, steps, never actual phenomena, which is what any scientific, i.e. materialist, theory of consciousness is interested in10.
9 One should pay special attention to how the emergence of surface tension is quite different from the emergence of an ant colony or of shoal movement.
10 One obvious conclusion of this line of reasoning is that computers can never be conscious unless they produce the exact same arrangement of whatever it is that is the NCC as that which is found in the brain. The human brain computes using the build-up of electrical fields in an analog fashion, while computers compute using discrete electrons; thus any simulation of a brain by a computer will never be a physical simulation of the electrical fields, but only the numerical quantification of their state, i.e. a digital description of their state but not an actual simulation of those fields. Because the computer simulation takes place in a different substantive form than that of the brain, it cannot be the same type of consciousness that animals have. While I cannot reject that it might produce some sort of consciousness, I hold that whatever consciousness it is, it will not be of the same fundamental type as our own.
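To make the contrast drawn above concrete (the merely descriptive emergence of a shoal's movement versus the substantive emergence of surface tension), here is a minimal Python sketch of the descriptive kind; the alignment rule and all the numbers are invented for illustration. Each fish turns partway toward the group's average heading, and the "shoal's movement" is nothing over and above the individual headings, merely a statistic computed from them.

import random

# Descriptive emergence: each fish adjusts its heading toward the local average.
# The "shoal's movement" is not a further entity; it is just a summary statistic
# computed over the individual headings.
random.seed(0)
headings = [random.uniform(0, 360) for _ in range(50)]  # one heading per fish, in degrees

def step(headings, noise=5.0):
    """Each fish turns partway toward the mean heading of the group, plus noise."""
    mean = sum(headings) / len(headings)
    return [h + 0.3 * (mean - h) + random.uniform(-noise, noise) for h in headings]

for _ in range(30):
    headings = step(headings)

shoal_heading = sum(headings) / len(headings)   # a description, not a new thing
spread = max(headings) - min(headings)
print(f"shoal heading ~{shoal_heading:.1f} deg, spread ~{spread:.1f} deg")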


Given that consciousness must emerge from the brain in a manner similar to the way surface tension emerges from water, and that it has the nine essential features discussed above, the next logical step is to deduce, or at least narrow down, the possibilities for what the NCC, by which I mean the physical correlate of consciousness (PCC) located in the brain, could be. The literature offers a few different possible physical correlates of consciousness: neurotransmitters [22,49], ensembles of whole neurons themselves [49,50,51,52], electrical firing patterns as encoded in action potentials [3,18,19,23,25,26,30,49,50,52], electromagnetic fields [47,48,55,61,64], and microtubules [39,40,41,42].

Neurotransmitters can easily be ruled out: they are stable, only changing location; they are located not just all over the brain but all over the body; they themselves are not graded, only their concentration is; they are not unified in any way; and there is no conceivable metaphorical relationship between neurotransmitters and consciousness. Indeed, there is nothing intrinsic to neurotransmitters; they are merely molecules that happened to be used by the body to ferry electrical messages. Anything that is sufficiently small and that can be used to activate certain ion channels or G-proteins could be a neurotransmitter. Thus, the networks they are a part of and the specific effects they have upon neurons may be crucial to the emergence of consciousness, but they themselves must be entirely unrelated to the PCC.

On similar grounds, one can easily rule out ensembles of whole neurons themselves, for any number of reasons: they are composed of innumerable components, not all of which can be the other side of the coin from consciousness; they are relatively stable over the time periods in which one can go from conscious to unconscious; the neurons themselves are all nearly identical within any given type, the number of which is limited, thus within and between types there is not enough diversity; the neurons themselves are neither graded nor quantal; and they are found in all parts of the brain.



Finally, there is little to no conceivable metaphorical relationship between neurons themselves and consciousness. This rather blunt claim, that ensembles of whole neurons are themselves the NCC, is almost patently wrong.

Christof Koch makes a slightly more sophisticated claim, but it ultimately offers little more than the obtuse one above. His claim that some particular coalition of firing neurons in some pathway is the NCC [49,50,51,52] is almost no more revealing than simply stating that consciousness happens in your brain rather than in your entire body. While it offers a direction for science to explore, it fails to illuminate what consciousness is at a basic explanatory level. To baldly state that the firing of certain neurons is the NCC is to embody the explanatory gap in its most full-blooded, chasmal form. He qualifies this claim by adding that the coalitions must be in synchrony11, which offers some insight, for synchrony can be taken as a type of binding together and therefore unity, offering an isomorphism with consciousness. However, the binding Koch stipulates is only within any given coalition, and therefore doesn't actually account for the unity across percepts, or at least across types of percepts, e.g. sound and sight, which is the unity a complete explanatory theory of consciousness is interested in.

Sometimes, Koch uses Ray Jackendoff's intermediate theory of consciousness to postulate that the conscious part of the ensemble is a summary of the lower processing of the primary sensory cortices, which is observed by the higher processing of the frontal cortices, which he calls the unconscious homunculus [49]. This latter qualification offers a second isomorphism, with intentionality or aboutness, the frontal cortices being about the intermediate ones, thus offering a neurological analog of higher order thought (HOT) theories. However, he doesn't go so far as to make any claims about aboutness, nor explain how observing would make the intermediate coalitions conscious, nor explain how that observing is different from the intermediate coalitions' summarizing of primary processing. While he hasn't gone quite so far, he is about to be caught between a rock and a hard place [35], just as HOT theories are12. Thus Koch's theory adequately addresses the three facts about consciousness in the brain (its non-locality, non-universality, and lack of relation to the constant activity) and cursorily accounts for two of the essential features of consciousness (intentionality and unification), but fails to address the remaining four meaningfully.

11 What exactly is meant by synchrony is addressed on pg. 18.
12 HOT theories are discussed below, pg. 21.


It is a skimpy, fragmentary theory of consciousness at best. While it is milestone work for advancing the scientific end of the theory, it is certainly not a freestanding explanation.

If Koch were more willing to hazard a more precise claim for the NCC, he would, like most others in the field, say that consciousness is the electrical firing patterns that arise from action potentials. And indeed, action potentials appear to be the major currency of the nervous system. Information is coded in action potentials, but not in any given action potential, for that is merely a depolarization, i.e. an electromagnetic field, traveling down an axon. Instead, it is encoded in the pattern of action potentials, which is called the firing pattern. Thus, the time span for any given piece of information cannot be shorter than the time it takes to produce that which codes for it. Clearly, the brain does not use serial processing as computers do, driving all the information through a bottleneck, albeit an incredibly fast one. Instead, the processing is distributed, and consequently information is not just encoded in the temporal pattern of action potentials, but also in the spatial pattern of their connections. It is only in the pattern of action potentials that we see anything that even remotely comes close to accounting for the diversity of the contents of consciousness. Indeed, consciousness must in some way be about the information encoded in the action potential patterns, for that is the only place one sees information at all. No single neuron holds anything meaningful; only through each neuron being connected in a specific way to others that are similarly individually connected can differentiation, and therefore diversity and meaning, arise.

Most nonbiological theories of consciousness, whether HOT [65] or information integration [72] or global workspace [2,3,4,23,24,30], take the fact that the brain distributively processes information encoded in action potentials as the implementation, to use Marr's terminology [60], of some computation/representation that is consciousness [42]. Thus, in all these theories, and possibly the one Koch would advance if pressed, the PCC is action potentials en masse. While action potentials do, as detailed above, account for the diversity of consciousness, they fail to account for nearly all the other features. Action potentials are by nature discrete and short-lived, and therefore cannot be unified or temporal.

Claiming that the pattern, the action potentials en masse, is unified and temporal is mistaken, because that would mean consciousness is like the movement of a shoal, i.e. a pattern that one can see in a number of discrete units that never actually directly interact, and is not itself anything substantive. While action potentials individually and en masse can be present or not present, this cannot explain unconsciousness versus consciousness, because action potential patterns are found in all parts of the central nervous system, and not all parts of the brain are conscious or involved in consciousness. And lastly, while action potential patterns contain informational content, there is no apparent intentionality within the pattern. However, one could argue that there could be intentionality through one pattern representing or being about another, which is essentially the HOT theory claim. Thus we see in action potential patterns much promise, and indeed something requisite for the NCC, but they in themselves cannot be the PCC.

This leaves electrical fields as the last reasonable candidate for the PCC. Depending on the theory, the PCC could be either the smaller local field potentials (LFPs) or the larger electrical fields detected by EEGs, which are essentially globalized LFPs. LFPs are the summation of all the smaller electromagnetic fields generated by the electrical potential differences across the membranes of dendrites. It is important to note that the summation here is physical and not computational. The potentials across the membrane vary spatially within and between dendrites and in a very real way amalgamate. The potentials are manipulated by the post-synaptic currents generated at synapses by the opening or closing of various ion channels; they spread out across the membrane but generally remain below the threshold for initiating an action potential, and are thus called subthreshold. These potential differences due to differences in ion flow can literally merge on the membrane, just as two gusts of air across a plain can merge, combining their effects if they push the same way or canceling if they oppose. Then there is the amalgamation of the electromagnetic fields produced by the potential differences both within and between the dendrites: if we imagine the potential differences across the membrane to be like vibrating surfaces that produce sound, then, like sounds, the fields can interfere or resonate, thus summing. Further, the potentials, and therefore the electromagnetic fields, fluctuate at varying frequencies. When many potentials oscillate at the same frequency, they synchronize. Much of the observed synchrony in the brain is thought to be mediated by gap junctions between interneurons. Gap junctions, unlike synapses, are connections that effectively create electrical continuity between adjoining neurons [42]. LFPs are therefore electromagnetic fields that are the summations of the fluctuations of the electromagnetic fields produced by membrane potentials, and these fluctuations can synchronize, or not, at a range of frequencies. Importantly, action potentials are too quick and ephemeral to have any significant effect on LFPs. And while the dendritic potentials that produce LFPs must in some way produce axonal firing, there appears to be little correlation between action potential patterns and LFPs [42,55,67].
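A minimal numerical sketch of the physical summation just described, with invented amplitudes and a nominal 40 Hz frequency: many oscillating potentials are added sample by sample, once in phase and once with random phases. The only point is that synchronized oscillations reinforce into a large aggregate field while desynchronized ones largely cancel; this is an illustration, not a model of real dendritic membranes, and the function names are my own.

import math, random

# Toy illustration of field summation: N oscillating potentials at ~40 Hz are
# summed sample by sample. In-phase (synchronized) contributions reinforce;
# random-phase contributions largely cancel.
random.seed(1)
N, freq, dt, steps = 100, 40.0, 0.0005, 2000   # 100 sources, 40 Hz, 1 s of samples

def summed_peak(phases):
    """Peak absolute amplitude of the summed oscillations over one second."""
    peak = 0.0
    for i in range(steps):
        t = i * dt
        total = sum(math.sin(2 * math.pi * freq * t + p) for p in phases)
        peak = max(peak, abs(total))
    return peak

in_phase = [0.0] * N
random_phase = [random.uniform(0, 2 * math.pi) for _ in range(N)]

print(f"synchronized peak   ~{summed_peak(in_phase):.1f}  (grows like N)")
print(f"desynchronized peak ~{summed_peak(random_phase):.1f}  (grows roughly like sqrt(N))")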

There are mountains of data correlating synchronous activity of γ-band oscillations in EEGs (i.e. 25-100 Hz, though 40 Hz is prototypical) with conscious activity [42]. Correspondingly, the loss of consciousness during general anesthesia is characterized by a decrease in γ EEG activity, which returns when patients awaken [47]. Accordingly, many have proposed the electromagnetic field produced by the brain to be the PCC [47,48,55,64], even Koch and Francis Crick at one point [50], though they later dropped the claim. A synchronized electromagnetic field across the brain is a very alluring candidate because it is isomorphic with many of the essential features of consciousness: it is temporal, existing continuously over a period of time, though it can totally dissolve, thus being quantal. It is graded through greater or lesser synchrony and/or higher or lower frequency. Only those neurons involved in the synchrony are involved in the field, thus it is non-localized while being non-universal, and it is a very particular type of activity, different from the constant buzz of the brain. Most important and most attractive is that it is the only physical phenomenon in the brain that offers a ready solution to the problem of unity: every neuron involved physically contributes to the field, and the amalgamated field affects all the neurons within its range.

However, the exciting power of this unity isomorphism is tempered by the total and utter lack of diversity of the field. Indeed, Michael Shadlen and J. A. Movshon make a devastating case against synchronous activity, proving it incapable of encoding information. A corollary of this is that there is no intentionality; how could there be, when the field has nothing to be about? Some, like Susan Pockett, try to argue that the innumerable possibilities for the geometrical conformations of the field across the brain can account for diversity [64]. But this seems a weak and thoroughly explanatorily opaque response.

Why would one shape give a different experience from another, and further, why shape? There seems to be no metaphorical bridge between shape in the brain and experience. Further, even if the field could encode anything, Koch points out that the electromagnetic field is a crude and inefficient means of communication. Thus, the very feature that makes electromagnetic fields so attractive, their uniformity, also disqualifies them from being the PCC by themselves.

Stuart Hameroff and his partner in crime, Roger Penrose, make an intriguing but unreasonable case for microtubules being the basis, or rather the medium, through which the brain has consciousness. While Hameroff's theory is largely implausible, it is nonetheless very instructive, because it appears to be the only theory that approaches a complete explanatory theory of consciousness. The theory is so outlandish that I must quote Hameroff at length to avoid inadvertent misconstrual.
Quantum theory describes the bizarre properties of matter and energy at near-atomic scales. These properties include: (1) Quantum Coherence, in which individual particles yield identity to a collective, unifying wave function (exemplified in Bose-Einstein condensates); (2) non-local Quantum Entanglement, in which spatially separated particle states are nonetheless connected or related; (3) Quantum Superposition, in which particles exist in two or more states or locations simultaneously; and (4) Quantum State Reduction or collapse of the wave function, in which superpositioned particles reduce or collapse to specific choices. All four quantum properties can be applied to the seemingly inexplicable features of consciousness. First, quantum coherence (e.g. Bose-Einstein condensation) is a possible physical basis for binding or unity of consciousness. Second, non-local entanglements (e.g. Einstein-Podolsky-Rosen correlations) serve as a potential basis for associative memory and non-local emotional interpersonal connection. Third, quantum superposition of information provides a basis for preconscious and subconscious processes, dreams and altered states. Lastly, quantum state reduction (quantum computation) serves as a possible physical mechanism for the transition from preconscious processes to consciousness. What is quantum computation? In classical computing, binary information is commonly represented as bits of either 1 or 0. In quantum computation, information can exist in quantum superposition, for example, as quantum bits or qubits of both 1 and 0. Qubits interact or compute by entanglement and then reduce or collapse to a solution expressed in classical bits (either 1 or 0). In the [Orchestrated Objective Reduction] model, quantum computation occurs in microtubules within the brain's neurons. Microtubules are polymers of the protein tubulin, which in the Orch OR model transiently exist in quantum superposition of two or more conformational states. Following periods of preconscious quantum computation (e.g. on the order of tens to hundreds of milliseconds) tubulin superpositions reduce or self-collapse at an objective threshold due to a quantum gravity mechanism proposed by Penrose. Microtubule-associated protein (MAP-2) connections provide input during classical phases, thus tuning or orchestrating the quantum computations…


Each Orch OR quantum computation determines classical output states of tubulin, which govern neurophysiological events, such as initiating spikes at the axon hillock, regulating synaptic strengths, forming new MAP-2 attachment sites and gap-junction connections, and establishing starting condition for the next conscious event. These events are suggested to have subjective phenomenal experience (what philosophy calls qualia) because in the Penrose formulation superpositions are separations in fundamental spacetime geometry. In a pan-protopsychist philosophical view, qualia are embedded in fundamental spacetime geometry and Orch OR processes access and select specific sets of qualia for each conscious event [40].
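Setting the microtubule biology aside, the classical-bit versus qubit contrast invoked in the quoted passage can at least be made concrete. The toy sketch below (plain Python, with invented names, and no relation to any Orch OR formalism) represents a qubit by two amplitudes whose squared magnitudes sum to one, and simulates "collapse" by sampling an outcome with those Born-rule probabilities.

import random

# A toy qubit: complex amplitudes (a, b) for the basis states |0> and |1>,
# with |a|^2 + |b|^2 = 1. "Collapse" is simulated by sampling an outcome with
# probability |a|^2 for 0 and |b|^2 for 1. A textbook illustration only.
def collapse(a: complex, b: complex) -> int:
    p0 = abs(a) ** 2
    return 0 if random.random() < p0 else 1

a = b = complex(1 / 2 ** 0.5)          # equal superposition of 0 and 1
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[collapse(a, b)] += 1
print(counts)                           # roughly 50/50 on average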

The theory is entirely untenable, because quantum properties like Bose-Einstein condensates and entanglement are restricted to quantum scales and time spans except when supported in the highly artificial environments of physics laboratories. Hameroff contends that the cytoplasm surrounding the microtubules gelatinizes in such a way as to shield them during computation, but this is an implausible, ugly, ad hoc solution. His claim that anesthetics block the quantum computational abilities of dendritic microtubules and therefore consciousness [39,42] is strongly contradicted by Alkire et al.'s work on potassium channels [1]. Moreover, the pan-protopsychist philosophy is undesirable insofar as it would entail that every quantum collapse, which happens all the time in everything, would have qualia and therefore experience, though not necessarily one that we could understand. It also totally fails to explain why certain parts of the spacetime-qualia fabric feel different ways, or why the quantum computations would select experiences at all. And while it supposedly explains the quantal difference between nonconscious and conscious processing as the difference between superposed and collapsed states13, it totally fails to explain gradation in consciousness. Lastly, while temporal issues such as backward time referral are explained, the overall temporal nature of consciousness, that it is a continuous phenomenon over time, is not addressed. Thus, it only accounts for unity, quantality and, for reasons not worth delving into, the three brain-related features of consciousness.

However, the Orch OR model is valuable insofar as it truly attempts to relate features of consciousness to physicalities, albeit ones from the quantum world, through the use of isomorphisms. Further, it coherently weaves all the features together functionally: superpositions are torn qualia fabric, OR yields the correct qualia, microtubules quantum-compute this, yielding microconsciousness [42,79,80,81], these computations manifest themselves classically in γ synchrony, and γ synchrony enables binding into a unified macroconsciousness [ibid].
13 This doesn't exactly make sense, given that there is much nonconscious processing that yields results, which under the Orch OR model entails collapse, but that never becomes conscious, even though consciousness is supposed to be the consequence of collapse.


In terms of what it explains (unity, quantality, the correlation of γ synchrony with consciousness, and the explanatory gap, insofar as pan-protopsychism dispels it altogether), it is a complete explanatory theory of consciousness, albeit an entirely implausible one. I applaud it for actually going all the way, for its seamless account from the fundamental physical features of the brain up to consciousness, metaphorically relating physicalities with features of consciousness. We should look to its form, rather than its contents, for inspiration.


! Theory Forms !
It appears all the biological possibilities for consciousness fail in one way or another to account for all its features. Some believe this fact indicates that consciousness must fundamentally not be biological in nature, insisting instead that it is computational. While I have already shown such theories to be no theories of consciousness at all, I will run through a few (HOT, information integration, and global workspace) because they contain elements that are both insightful and, I believe, ultimately requisite for a complete explanatory theory of consciousness.

The HOT theory roughly states that consciousness occurs when a higher order (HO) thought/representation/mental state represents or perceives a lesser one. Whatever set of terms you prefer, the important thing is that the HO thought/representation/mental state is about the lower order (LO) one. That is essentially all there is to it. The claim is that this aboutness, or intentionality, captures the phenomenal awareness inherent in consciousness, because the concept of consciousness arising out of thought awareness is fundamental and apparent. It is fundamental in that it deals with relatively basic units, thoughts/representations, and a basic relation between the two, i.e. an indissoluble intentionality. It is apparent in that this intentional relationship has awareness inherent in it, however diaphanous or minimal that awareness is. And with awareness, one has the essence of consciousness, for what would consciousness be without an awareness of some sort?

The main criticisms of HO theories have been humorously stated as placing the theory between a rock and a hard place. The rock half of this criticism is best presented by Alvin Goldman:
The idea here is puzzling. How could possession of a meta-state confer subjectivity or feeling on a lower-order state that did not otherwise possess it? Why would being an intentional object or referent of a meta-state confer consciousness on a first-order state? A rock does not become conscious when someone has a belief about it. Why should a first-order psychological state become conscious simply by having a belief about it? [37]

Under this attack, HO theories prove too much, to the point of an obvious reductio. The hard place criticism represents the complete opposite, saying that HO theories haven't explained anything about consciousness, or at least not why consciousness feels the way it does.

HO theories do answer this in part with the intentionality, the diaphanous awareness. Pushing any harder on the issue, that is, asking why awareness makes one feel, is futile; we have reached the indissoluble essence of consciousness. If there is going to be any feel (that is, feel in general, not particular feelings), there is necessarily awareness. Thus, continuing to ask HO theories why there is feel at all is simply nagging and unproductive. However, HO theories have no good answer to why red is different from blue. The usual response is to say the LO representation contains different informational content, encoding blue in one case and red in the other. While this is certainly an adequate response in principle, it is entirely unsatisfactory as formulated in the HOT literature.

The rock problem is a bit hairier, because it gets at the nature of the intentional relationship posited by HOT theory. HOT theorists have tried to deflect this argument by saying that a rock isn't mental, and the lower-order state needs to be mental for there to be consciousness. Many, including myself, find this argument weak, and see it as an ugly and largely ad hoc caveat to a very clean theory. Regardless, even if we accept this argument about necessary mentality, there remains a much more damaging problem of levels of representation.

HOT theory and its sibling, first order (FO) theory, approach the thoughts/representations they are theorizing about as if they are distinct and only exist after some point in processing. FO theory states nearly the same thing as HO theories, just that instead of one representation representing another of a lower order, consciousness arises from representing the contents of experience. HO theorists say that FO theory may account for the experience, but it doesn't allow for the possibility of an experience being experienced as one's own, i.e. it doesn't allow for self-consciousness, while HO theory does. However, the real problem is that they both assume that the contents of experience are represented at only one level of the mind and that all the processing below that totally lacks content and hierarchy, by which I mean one state representing another. If one takes any time to inspect the processing pathways of the brain, he will see that there are numerous levels of representation: different aspects of an object get processed and represented and reprocessed and re-represented, fed back down to lower processing, split and sent to different modalities and processing areas, etc. The complexity of the intermingling of representations is astounding.

To assume all the layers of processing are devoid of representation or informational content is almost patently absurd. If all that is requisite for consciousness is one state being higher than and about another, then one should be conscious of almost all layers of processing, from the retina on up. And FO theory's assumption of a first order representation that somehow sums up all the content of whatever one is perceiving seems unduly arbitrary, implausible, and totally against the grain of all the literature on sensory processing.

Further, HOT theory cannot explain unity or binding without somehow positing a grand-incorporator thought/representation. To believe that all the different representations somehow suddenly come together into one is highly suspect. This grand-incorporator thought/representation reeks of a Trojan horse harboring some abstract, fugacious homunculus. Then to claim that it is the representation of this cumulative representation that is consciousness seems entirely arbitrary after all those hundreds of thousands of different representations that led up to just those two, i.e. the HO and LO representations. HO theory might try to explain temporality by saying it comes from continuously representing something, and that the unity comes from the fact that it all takes place in one system, i.e. the brain. But again, that seems a weak explanation. And while quantality is easily explained, gradation is not, especially not in FO theory. HOT theory may say gradation comes from a HOT having its own HOT and so on, allowing gradation, but this brings us back to the problem of the retinal patterns being conscious.

It appears the only valuable feature of HOT theories is the readily apparent and fundamental nature of awareness bound within intentionality. Intentionality, understood as one thing being about another, offers the beginnings of a bridge across the explanatory gap insofar as it incorporates both vaguely objective components (the two things) and a more metaphorical one (aboutness) that is isomorphic with our intuitive understanding of awareness, a subjective psychological entity. Everything else is either incoherent, problematic, ugly, and/or poorly handled. Nonetheless, one shouldn't throw the baby out with the bathwater: its formulation of intentionality seems essential to any theory that seeks to bridge the explanatory gap.
two essential features of consciousness: diversity and unity. The theory argues that subjective experience is one and the same thing as a system's capacity to integrate information. In this view, experience, that is, information integration, is a fundamental quantity, just as mass, charge or energy are [72]. Before I delve in, let us note how radically unexplanatory this theory is. The claim that consciousness is a fundamental quantity may be the most opaque, impenetrable formulation yet proposed. Further, I moot whether the extent of a process, in this case integration, can qualify as being a quantity of the same ontological category as quantities like mass, charge, and energy. The extent of any process just doesn't ring of fundamentality to me. Even less comprehensible is saying consciousness, a phenomenon in the world, a thing, is a capacity; for capacity is not a thing, not a phenomenon, but an ability. This formulation doesn't just avoid the explanatory gap, it explodes its proportions to ever greater distances. At least in the standard version of the gap, both sides in some way are categorized as things, as psychologically tangible. Under the information integration theory, the psychological entity on the worldly side, i.e. the side that gives rise to consciousness, is not an assumed ontological entity, but a linguistically mediated understanding of the nature of any potential ontological entity that is assumed. It is largely the fact that capacity, and therefore ability, are necessarily linguistic concepts that is responsible for this widening. This problem is particularly acute with information integration theory, but will be true in some form or another for any computational theory of consciousness. Despite these issues, there are a few interesting features to recommend information integration theory. It wonderfully describes the nature of diversity and of the integration into a unity. Unfortunately, understanding them requires delving into the computational terminology of the theory. Thus, let us unpack the statement that the
capacity [to integrate information], corresponding to the quantity of consciousness, is given by the Φ value of a complex. Φ is the amount of effective information that can be exchanged across the minimum information bipartition of a complex. A complex is a subset of elements with Φ > 0 and with no inclusive subset of higher Φ [72].

This essentially states that the extent of consciousness corresponds to the extent to which any two informationally distinct parts of a system (the elements) can talk to each other, thus making a conversation (the complex).
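To make the bipartition idea concrete, the sketch below is a deliberately crude toy of my own, not Tononi's actual measure (which uses effective information under maximum-entropy perturbations and a normalization step): it scores a small system by the minimum, over all bipartitions of its elements, of the mutual information between the two halves, estimated from sampled binary states. All function names and the example data are illustrative assumptions.

```python
# Toy illustration only: "integration" of a small system measured as the minimum,
# over all bipartitions of its elements, of the mutual information between the two
# halves, estimated from sampled binary states. This is NOT Tononi's phi.
import itertools
import numpy as np

def mutual_information(x, y):
    """Estimate mutual information (bits) between two arrays of discrete row-vectors."""
    n = len(x)
    joint, px, py = {}, {}, {}
    for a, b in zip(map(tuple, x), map(tuple, y)):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * np.log2(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

def toy_integration(states):
    """states: (samples, elements) binary array; return the weakest-link bipartition MI."""
    n_elem = states.shape[1]
    weakest = float("inf")
    for k in range(1, n_elem // 2 + 1):
        for part in itertools.combinations(range(n_elem), k):
            rest = [e for e in range(n_elem) if e not in part]
            weakest = min(weakest, mutual_information(states[:, list(part)], states[:, rest]))
    return weakest

rng = np.random.default_rng(0)
coupled = np.repeat(rng.integers(0, 2, size=(2000, 1)), 4, axis=1)  # four elements echoing one bit
independent = rng.integers(0, 2, size=(2000, 4))                    # four unrelated elements
print("coupled:", round(toy_integration(coupled), 3))        # close to 1 bit across any cut
print("independent:", round(toy_integration(independent), 3))  # close to 0 bits
```

In this toy, a tightly coupled system scores about one bit across any cut, while independent elements score near zero, which is the sense in which a complex is only as integrated as its weakest informational link.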

It is important to notice that there is no third party mediating the exchange of effective information; thus integration only requires connections between elements, which themselves could be, for instance, a group of locally interconnected neurons that share inputs and outputs, such as a cortical minicolumn [72]. Each informationally distinct part, i.e. each element, represents a quale. Why and how this is so is probably the most innovative and valuable part of the theory. Tononi writes,
The elements of a complex constitute the dimensions of an abstract relational space, the qualia space. The values of effective information among the elements of a complex ... is sufficient to specify the quality of conscious experience. Thus, the reason why certain cortical areas contribute to conscious experience of color and other parts to that of visual motion has to do with differences in the informational relationships both within each area and between each area and the rest of the main complex ... If a group of neurons that is normally part of the main complex [for say, the total experience of seeing a blue wall] becomes informationally disconnected from it ... the same group of neurons, firing in exactly the same way, would not contribute to consciousness. Moreover, according to the theory, the other groups of neurons within the main complex are essential to our conscious experience of blue even if, as in this example, they are not activated. This is not difficult to see. Imagine that, starting from an intact main complex, we were to remove one element after another, except for the active, blue-selective one. If an inactive element contributing to "seeing red" were removed, blue would not be experienced as blue anymore, but as some less differentiated color, perhaps not unlike those experienced by certain dichromats. If further elements of the main complex were removed, including those contributing to shapes, to sounds, to thoughts and so forth, one would soon drop to such a low level of consciousness that "seeing blue" would become meaningless: the "feeling" (and meaning) of the quale "blue" would have been eroded down to nothing [72, emphasis added].

While the theory provides no explanation of why a quale can arise from a particular set of neurons, it does provide a robust explanation of how and why one quale differs from another, why red looks different from blue. According to the theory, there is nothing inherent to the element itself that makes it encode a quale. It is only through its relationships that it gains identity. In this way, qualia are like words, which, as Wittgenstein observed, derive their meaning not from their referent, but from how they are used in relation to other words. Interestingly, while HOT theory begins to explain why there is feel at all but totally fails to explain why qualia feel different, information integration theory does just the opposite, failing on the first account but succeeding on the second. While the theory doesn't go this route at all, it seems that one could almost understand the informational exchange between elements as each element being about the other and itself at the same time, thus incorporating intentionality, which could begin to explain why integration gives rise to consciousness. While this is just a speculative thought, it may be worth pursuing.

However, as the theory stands, each element on its own isn't conscious, but can become so by integrating with another, which also becomes conscious. A two-element complex would be conscious to an infinitesimally small extent that is totally unimaginable. The unity of this consciousness is apparent insofar as the informational binding between the elements is consciousness. Further, it is easy to see how adding more elements could account for the gradation of consciousness and how adding an element to a greater complex accounts for it suddenly becoming conscious, hence quantality. Indeed, as I fall asleep, it does seem that the amount and meaning of qualia decreases in a manner consistent with the theory. However, the theory doesn't account for the gradation of consciousness associated with attention. Try this: stare directly at this letter: 0. Do not move your gaze from it; now try to move the so-called attentional spotlight around your visual field. I can attend to the slight visible sliver of my nose or direct my attention up to the arc of the field. At no point do I take in different information; the visual field is composed of exactly the same qualia, yet I can become more conscious of some qualia in the field over others. This form of gradation, which is arguably the more interesting and difficult one, cannot be explained by information integration theory. One could say the theory can explain the gradation of the breadth of consciousness but not of its depth. There remains a more damaging problem that information integration theory doesn't adequately handle: why stimuli only become conscious after a certain amount of processing. It perfectly handles why motor outputs, primary sensory inputs and the cerebellum do not contribute to consciousness, but much informational integration of any particular percept occurs before one is conscious of it. Tononi says only the thalamocortical system has the proper circuitry for integration, but the thalamocortical system includes most of the cortex, from the primary sensory cortices on up. Indeed, the integrated circuitry that Tononi describes is indistinguishable between V2, V3, & V4, only the last of which is arguably directly involved in consciousness [79,80,81]. And surely information integration is occurring well before V4; how else would the raw data of the sensory modalities transform into the qualia we know? Thus information integration theory handles the quantal nature of consciousness in relation to how any particular thought can suddenly pop into consciousness (it joins the complex) but not in relation to stimulus processing.

Lastly, while integration is a process and consequently temporal, it isn't necessarily continuous. Indeed, sharing of information seems like it must occur discretely, or at least continuity seems irrelevant. Tononi says, The spatial and temporal scales defining the elements of a complex and the time course of their interactions are those that jointly maximize Φ [72]. But time scale only determines rate, not continuity. This is probably my weakest criticism of information integration theory, but that is because it is more of a suspicion. To review, information integration theory doesn't explain consciousness at all but does offer a robust theory of consciousness's diversity and how its qualia theoretically can be unified informationally. Though why this occurs (why hearing would share its information with sight rather than both feeding separately into higher processing) remains unclear. It explains intentionality insofar as it is not an issue, and poorly and/or partially accounts for the remaining features of consciousness. Bernard Baars has advanced the so-called global workspace model, which in many ways is just a reformulation of the Cartesian theatre, only now the audience is unconscious higher processing modalities. The global workspace model is cognitive in nature and tries to place consciousness in its processing schema. Being cognitive, it explains why some but not all stimuli can be processed, why that processing seems to have a serious bottleneck in that only a few stimuli at a time can be attended, and why, once attended, they become available to almost all higher processing. The theory claims that consciousness is the global workspace, which is the stage of attention and a bit of the working memory surrounding it. Only a few stimuli at a time can occupy the stage, just as one can only pay attention to a few things at once. But once they are paid attention to, the audience (higher processing centers such as those for reporting, for remembering, for doing complex calculations, for making decisions, etc.) has complete access to them. In the global workspace model, consciousness seems to be the publicity organ of the brain. It is a facility for accessing, disseminating and exchanging information, and for exercising global coordination and control [3]. The implications of this theory are: that all higher processing is unconscious, as it is the audience and not the stage, and thus the reductio of a conscious audience is avoided; that one can only be conscious of a few stimuli at a time; and that

attention is a prerequisite of consciousness; and that everything that is conscious can be reported and remembered. The problem with this theory is that it is in fact not a theory of consciousness, but of conscious processing. This is due to its total and utter failure to distinguish between phenomenal consciousness (P-consciousness) and access consciousness. Ned Block, who originally made the distinction, describes phenomenal consciousness as,
experience. P-conscious properties are experiential properties. P-conscious states are experiential states, that is, a state is P-conscious if it has experiential properties. The totality of the experiential properties of a state are "what it is like" to have it. Moving from synonyms to examples, we have P-conscious states when we see, hear, smell, taste and have pains. P-conscious properties include the experiential properties of sensations, feelings and perceptions, but I would also include thoughts, wants and emotions. But what is it about thoughts that makes them P-conscious? One possibility is that it is just a series of mental images or subvocalizations that make thoughts P-conscious. Another possibility is that the contents themselves have a P-conscious aspect independently of their vehicles. See Lormand, forthcoming. A feature of P-consciousness that is often missed is that differences in intentional content often make a P-conscious difference. What it is like to hear a sound as coming from the left differs from what it is like to hear a sound as coming from the right [5]

and access consciousness by saying,


A state is access-conscious (A-conscious) if, in virtue of one's having the state, a representation of its content is (1) inferentially promiscuous (Stich, 1978), i.e. poised to be used as a premise in reasoning, and (2) poised for [rational] control of action and (3) poised for rational control of speech. (I will speak of both states and their contents as A-conscious) [5].

In these descriptions, one sees that access consciousness is a functional notion, while phenomenal consciousness is not. Phenomenal consciousness, Block admits, may have something to do with information processing, but is nonetheless still an actual phenomenon that is phenomenal14 in constitution. In his lucid paper, On a confusion about a function of consciousness, Block details all the awful confounding consequences of not distinguishing the two types of consciousness [5]. Given this distinction, let us ask as Baars does, What is a theory of consciousness a theory of? [2]. He responds,
as far as we are concerned, it is a theory of the nature of experience. The reader's private experience of this word, his or her mental image of yesterday's

breakfast, or the feeling of a toothache -- these are all contents of consciousness [2, emphasis added].

14 See first footnote on usage of these terms for clarification.

Baars is unequivocal that a theory of consciousness should be about experience and therefore about phenomenal consciousness. Yet, his theory is anything but a theory of phenomenal consciousness. Being cognitive, it is only concerned with how stimuli are accessed and processed. Listen to his account of a given moment:
At this instant you and I are conscious of some aspects of the act of reading the shape of these letters against the white texture of the page, and the inner sound of these words. But we are probably not aware of the touch of the chair, of a certain background taste, the subtle balancing of our body against gravity, a flow of conversation in the background, or the delicately guided eye fixations needed to see this phrase; nor are we now aware of the fleeting present of only a few seconds ago, of our affection for a friend, and some of our major life goals [2].

Clearly, he is conflating attention with consciousness and the unattended with the unconscious. Yes, I am not conscious of my life goals or my saccades, but I am surely conscious of the chair and the surrounding noise, as both are part of what Koch calls the gist, and the gist, while not always remembered, is always minimally conscious [49,53]. However, my case here is weak because the terms of the argument are so ill-defined and the subject, consciousness, so slippery. So let me turn to the material from which Baars gathers his evidence, which mostly comes from the research of Stanislas Dehaene and his colleagues, who also advocate the global workspace model. In doing so, it will become apparent that the global workspace model is utterly confounded. Through his neuroimaging studies and computational simulations of the attentional blink and inattentional blindness, Dehaene arrives at the conclusion that consciousness is characterized by two properties: (1) the [stimulus-evoked] activation can reverberate, thus holding information on-line for a long duration essentially unrelated to the initial stimulus duration; (2) Stimulus information can be rapidly propagated to many brain systems [29]. These are essentially formalized statements of the workspace and its accessibility respectively. If one takes the first property and translates it into subjective terms, it sounds not like plain consciousness but thinking. In thinking, one can hold a thought for a duration essentially unrelated to the initial stimulus duration, and surely thinking is part of consciousness. But is this true of consciousness on the whole? I think not. Consider your foot;
you are visually conscious of it when looking at it, but that visual consciousness of your foot evaporates as soon as you look away. The idea or the thought of it can persist, albeit without the full gestalt of actually looking at it, but not the experience of it. Clearly this conclusion confounds thinking or cognition, a very complex high-level process that requires consciousness, with plain, old, experiencing-the-world phenomenal consciousness. Inevitably, Dehaene came to this conclusion because to report properly one must follow directions during the presentation and tasks of his experiments, and to follow directions, which by necessity are linguistic, one must think. Moreover, while thinking is almost always conscious, some serious semantic contortions are needed to make thinking the primary referent of consciousness over the more commonly understood phenomenal experience. This problem can be cast in two different lights: that Dehaene is confounding attention and extended stimulus processing with consciousness, thus claiming consciousness is more than it is. Or, that by only looking for attention and extended stimulus processing he is limiting his purview, thus entirely missing what consciousness is because of a methodological bent for false negatives. Consider the second property, Stimulus information can be rapidly propagated to many brain systems, which later Dehaene also describes as, information can be shared across a broad variety of processes including evaluation, verbal report, planning and long-term memory or elsewhere as the broadcasting of accessed information to many bilateral cortical regions through long-distance cortico-cortical connections including those of the corpus callosum [ibid,31]. This seems reasonable enough; that which is conscious seems to be available for evaluation, verbal report, planning and long-term memory. However, this intuition comes from a false, or at least unjustified assumption, namely perfect fidelity between consciousness and reportability. As discussed in the first section, to science, consciousness is practically metaphysical. Because of this, reportability is essential to probing consciousness, for if we totally rule out report, then we relegate ourselves to the purely empirical, thus severing the connection to consciousness understood as subjectivity and hence the only bridge across the pragmatic metaphysical divide. Indeed Dehaene rightly

states, conscious perception must [...] be evaluated by subjective report [30]. However, let us analyze a similar but different statement he makes:
Consciousness is systematically associated with the potential ability for the subject to report on his/her mental state. This property of reportability is so exclusive to conscious information that it is commonly used as an empirical criterion to assess the conscious or unconscious status of an information or a mental state (Gazzaniga et al., 1977; Weiskrantz, 1997) [25].

While Dehaene doesnt exactly make the crude verbal report mistake of Gazzaniga and the likes15, though the reference in the quote above begs to differ, he nonetheless makes a deeper, less obvious mistake about reportability in general. Overtly, this statement implies the mistake of converting the conditional: that because reportability is exclusive to consciousness, consciousness is exclusive to reportability. Obviously that is not logically true, and because it is only implied, we must give Dehaene the benefit of the doubt. Nonetheless, we and Dehaene often believe both sides of the statement, i.e. both that reportability is exclusive to consciousness and consciousness is exclusive to reportability. This is driven by a deep but mistaken intuition about oneself. Dehaene, like most people, assumes that what one thinks one is conscious of, and therefore reports as conscious, is what is conscious. While he rightly assumes fidelity between reports and self-knowledge (for if he didnt assume that then he would quickly revert to brutish behaviorism), he wrongly assumes fidelity between self-knowledge, i.e. what one thinks he is conscious of, and actual consciousness. To help illustrate this latter mistake, let me turn quickly to Martin Heidegger. Heidegger notes in Being and Time that when one attempts to contemplate oneself, one is not ones normal self, but in a very peculiar and rare state of self, i.e. a highly-reflective and highly self-conscious state, and thus one is not actually contemplating the self that he thinks he is contemplating, i.e. his normal self, but this particularized state of self. Using similar reasoning, when one thinks about what consciousness is, one will not be thinking about consciousness directly but only about what he thinks consciousness is, i.e. those parts of consciousness that are accessible to the parts of his brain that think, remember, report. He is therefore contemplating only a segment of consciousness, which he takes to be the whole of consciousness, for the
15 Michael Gazzaniga interprets the lack of verbal report by the right hemisphere in split-brain patients to mean it is unconscious, an interpretation that clearly begs the question given that the language faculties are found in the left hemisphere [34,69].

rest is inaccessible to thought. Again, this is the illusion of the false negative; just because we cannot consciously think about something doesn't mean it isn't conscious. Let me come clean; I am urging the position that your subjectivity, that is, your consciousness, may be a much stranger phenomenon than you know it to be. That what you think of when you consider consciousness may be only the part of your consciousness that is accessible for higher manipulation and storage, and that what actually happens moment-to-moment, the so-called snapshot of your subjectivity, is a more diverse and unwieldy experience than what you may remember. That is, there are phenomenal parts of any given conscious state that are not actively considered, i.e. accessed by higher processing, but are nonetheless part of consciousness. In this respect, I am in total agreement with Daniel Dennett [20,21,22] and am rephrasing his argument, though I believe we differ on what consciousness is fundamentally. Regardless, just because consciousness is stranger than you know it to be doesn't mean the science of consciousness should be fooled into thinking it is merely as one knows it. The science of consciousness is about experience, about subjectivity, about what it is like to be something at any given moment. It is not about cognition or thinking. It would be a disservice both to experience and to the science of it to limit our concept of consciousness to what we introspectively think about it. To put this in more concrete neurological terms, to remember and report on a particular percept, the areas of the brain that perform such functions, i.e. possibly the hippocampus and Broca's and Wernicke's areas, must have access to the multiple processors [that] encode the various possible contents of consciousness, e.g. possibly MT+ for motion or the color-coding cells of V4 [25]. We have no good reason to assume that that access is what makes those contents conscious. We cannot appeal to the global workspace theory because it appeals to Dehaene's work for confirmation, thus creating a confirmational loop that fails to explain experience at all [8,58,62]. Indeed, despite Dehaene's results, there is good reason to believe those contents are conscious in their own right [32,33,49,50,55,73,79,80,81]. If we look to dreaming, in which one is undoubtedly conscious, one may have no memory of being conscious, one may not be able to report being conscious, or one may be able to report being conscious without being
able to report what he was conscious of, i.e. he knows he was dreaming but cannot recall what the dream was. Indeed, this is often the case when one wakes suddenly. Or, consider when you just lose a thought entirely. You know it was there, but you cannot report a single fact about it. Thus, phenomenal consciousness apparently can be ephemeral, unmemorable and unaccessed. I must acknowledge that Dehaene often uses conscious access, a more cautious term than consciousness. However, his continual interchange of them as synonyms over his work as a whole (see especially 25, 27, & 30) begs the question of whether he actually discerns a difference. Indeed, Dehaene never actually clarifies what conscious access means, especially in relation to consciousness as a whole. Does it mean that consciousness has informational access to contents located elsewhere, or that that which is conscious is being accessed by modules/processors for processing? Dehaene often seems to be making the former case, but Baars claims the opposite [2,3,4]. While Dehaene acknowledges the distinction between access and phenomenal consciousness, and even provides evidence for it, he only rebuts it with evidence from a different effect, change blindness, to which nearly all the same arguments as those detailed above can be applied16, therefore not providing a real rebuttal [30]. Dehaene allows for what he calls pre-consciousness to be what Block calls phenomenal consciousness17 [3,4,5] and Semir Zeki calls microconsciousness [79,80,81], but says that who is right does not seem to be, at this stage, a scientifically addressable question [30].
16 For example, change blindness shows nothing other than that not all of what one takes in makes it to working memory, that we cannot carry over the entire gist from one moment to the next. If we cannot carry it over, then we cannot compare the two moments and thus cannot discriminate between them. Further, one requires attention to compare two things, but does not require attention to be phenomenally conscious of them. Indeed, think of how, when you stare at one spot, you can see all that is around it. You cannot even consider what another part of the scene is without giving it attention, yet whether or not you are giving it attention you are conscious of it, i.e. have an experience of it.
17 Ironically, Dehaene uses the same abbreviation, P-conscious, as Block does for what is probably the same thing, only Dehaene is saying it is not actually conscious, or is non-conscious.

pointed out that the claim that consciousness is the global workspace is just as manifestly unexplanatory as the information integration theory's claim. Further, the claim that attention is a prerequisite of consciousness is also conspicuously questionable, if not obviously false: When Baars says Paying attention [is] becoming conscious of some material [3] and Dehaene actually says attention is a prerequisite of consciousness, do they mean that attention occurs before consciousness, as it seems it would have to if it is to be an actual prerequisite? If it does, then it is not the attention that one is subjectively familiar with, for one at least seems to direct one's attention to that which one isn't sufficiently conscious of, not to what is non-conscious. Indeed, how can one attend to something one is unconscious of? The attention that we are subjectively familiar with is necessarily subsequent to some minimal consciousness, but not full-blown attentive consciousness. Even the common name for this form of attention, i.e. top-down attention, implies consciousness is a precondition, i.e. the top level, whether it be consciousness itself or something above consciousness, guides attention to relevant unattended minimally conscious percepts. If they are not referring to the attention of consciousness, i.e. top-down attention, then they must be referring to the brain's ability to give some stimuli precedence over others, also known as bottom-up attention. However, bottom-up attention generally refers to super-salient stimuli, e.g. an explosion, drawing one's top-down attention. But, in the context of being a prerequisite for consciousness, this sounds not so much like attention but awareness, i.e. taking in sensory stimuli, or even vigilance, to use Dehaene's own terminology [23,29,30], not giving some stimuli precedence over others. For one is constantly conscious, or at least minimally conscious, of much of one's environment through the gist. Notably, Koch points out,
gist is immune from inattentional blindness: when a photograph was briefly flashed unexpectedly onto a screen, subjects could accurately report a summary of the photograph. In a mere 30 ms presentation time, the gist of a scene can be apprehended. This is insufficient time for top-down attention to play much of a role [53].

This is an excerpt from a paper by Koch and Naotsugu Tsuchiya that makes a more robust criticism of the claim that attention is prerequisite for consciousness, maligning it from similar vantages as presented here [53]. While the global workspace model doesnt account for any feature of phenomenal consciousness because it is only about access

consciousness, it is nonetheless important to keep in mind because any theory of consciousness will have to fit into its framework somehow. In fact, it provides nearly the perfect framework. It describes at what level of processing access to phenomenal consciousness occurs, how it is accessed, and what occurs after access, thus neatly delineating the upper bounds of phenomenal consciousness.
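As a way of fixing ideas before the comparison table, the stage-and-audience picture can be caricatured in a few lines of code. This is my own toy sketch, not Baars's or Dehaene's simulations, and every class and variable name in it is hypothetical: specialist processors propose percepts with salience values, a capacity-limited workspace admits only the most salient few, and whatever is admitted is broadcast to every processor.

```python
# A toy caricature of the global workspace: a capacity-limited "stage" plus
# broadcast to an unconscious "audience" of specialist processors.
from dataclasses import dataclass

WORKSPACE_CAPACITY = 2  # the attentional bottleneck

@dataclass
class Percept:
    content: str
    salience: float

class Processor:
    def __init__(self, name):
        self.name = name
        self.received = []
    def receive(self, percept):
        # every "audience member" gets whatever is broadcast from the stage
        self.received.append(percept.content)

def workspace_cycle(candidates, processors):
    # keep only the most salient items (the bottleneck), then broadcast them globally
    stage = sorted(candidates, key=lambda p: p.salience, reverse=True)[:WORKSPACE_CAPACITY]
    for percept in stage:
        for proc in processors:
            proc.receive(percept)
    return stage

audience = [Processor(n) for n in ("report", "memory", "planning", "evaluation")]
inputs = [Percept("letter shapes", 0.9), Percept("inner speech", 0.8),
          Percept("chair pressure", 0.2), Percept("background hum", 0.1)]
on_stage = workspace_cycle(inputs, audience)
print([p.content for p in on_stage])  # only the attended items make the stage
print(audience[0].received)           # but those items reach every processor
```

The caricature makes the earlier point plain: "global access" is a bottleneck-plus-broadcast scheme for handling information, and nothing in it speaks to phenomenal experience.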

Comparison Table

[Table: each physicality and theory reviewed above is marked against the features of consciousness. Rows (Theory/Physicality): Neurotransmitters; Coalitions of Neurons / Koch's Theory; Action Potential Firing Patterns; Coherent Electric Fields; Microtubules / Orch OR Theory; HOT Theory; Information Integration Theory; Global Workspace Theory. Columns (Features of Consciousness): Intentional; Temporal; Lack of relation to constant activity; Non-localized; Quantal; Non-universal; Graded; Explanatory Gap; Diverse; Unified. Legend: ! = fully accounts for; # = partially accounts for.]

18 Because Koch doesn't flesh out how the stipulated synchrony of his coalitions relates to unity, I believe it only potentially accounts for unity.
19 Insofar as qualia are fundamentally part of the fabric of the universe, intentionality is explained because qualia are necessarily about something.
20 The HOT theory can either account for quantality or gradation, but not both. If it accounts for gradation, it cannot explain non-universality nor a lack of relation to constant brain activity.

! A Formulation of The Form !


I have now completed my survey of the relevant material, extracting those elements from each that will be useful in constructing a complete explanatory theory of consciousness, by which I mean phenomenal consciousness. The preceding table summarizes these efforts. While it is apparent that no theory or apparent aspect of the brain adequately handles consciousness, the diversity of perspectives is certainly broad enough to encompass consciousness somewhere in the quagmire, as is evidenced by the fact that every column of the table has at least one full mark. The table and my work in general are meant to offer a toolset with all the necessary building blocks for assembling a final theory. Surely these blocks are not sufficient, but they nonetheless amount to a skeleton that can be fleshed out. I have a few intuitions as to how to synthesize them into a satisfactory theory. My staunch belief that consciousness is a physicality in the brain prompts me to start there. Given that both action potential patterns and electromagnetic fields are fundamentally of the same sort of phenomenon, i.e. are electromagnetic, and, looking to the table, together account for nearly every fundamental feature of consciousness, consciousness may be some combination of the two, such that consciousness would be the other side of the coin of a particular state of electromagnetic phenomena. I imagine this state would be that of the electromagnetic field being about the action potential patterns, such that there is something continuous, temporal and unifying being in an intentional relation to something that is diverse. This arrangement is highly isomorphic with consciousness and the intentionality in it is particularly powerful. Unlike HOT theories, the intentionality here doesnt involve one thing being about something of the exact same sort. Electromagnetic field and action potential patterns are very different things, yet have the same physical basis. Thus there isnt the problem of the infinite regression. The diaphanous awareness inherent in intentionality allows a melding of the smooth, continuous whole with the sundry into a richly textured substance of awareness. Through this metaphorical language one can begin to see across the explanatory gap. However, some features remain unaddressed.

While axonal firing patterns are all around the brain and are possibly within the purview of the electromagnetic field, only patterns with meaningful content would be involved in consciousness. With informationally devoid or incoherent patterns, there is nothing to be conscious of and therefore no consciousness at all. As James pointed out, There is no self-splitting of [pure experience] into consciousness and what the consciousness is of. Its subjectivity and objectivity are functional attributes solely [46]. Which is to say, consciousness and what the consciousness is of are inseparable; if either one is lacking, whether the electromagnetic field or the axonal firing pattern, then neither exists. How would an action potential pattern be informationally devoid or incoherent? A stimulus induces some initial pattern that is processed, and this processing can be understood as moving through a computational landscape towards a combination of attractors, computational energy minima for the given inputs [14,44]. These attractors would form the elements of a complex spoken of in information integration theory and can be considered solutions to the computation. Looking at the action potential pattern before it arrives at the set of attractors that best solve its inputs would be to look at a half-baked solution. Being before the attractor states, and thus not being elements in the information integration theory sense, the patterns have no meaning. Thus, only when attractor states are reached is there meaningful content for the electromagnetic field to be about, and only then is there consciousness. Whether the action potential patterns are in attractor states or not could thus account for consciousness's quantality. Obligatory feedforward mechanisms would drive all stimuli to this set of stimulus-appropriate attractors, which would be a minimally conscious state. This initial stage would correspond to vigilance, to use Dehaene's terminology, or what is more commonly called arousal, and would be controlled by global state functions. It would resemble a patchwork of microconsciousnesses, to use Zeki's terminology [79,80,81], and in this way would be, as Koch puts it, a summary of the sensory input. This minimally conscious state is huge and messy insofar as it contains a lot of information that is totally unprioritized. This creates a problem: if a particular piece of information needs emphasis, i.e. needs to gain prominence on the computing scene, how is it to stand out? Clearly no one neuron can fire more strongly, for that would degrade the actual
code of which its firing is a part. Instead, the collective firing pattern must be given emphasis, i.e. fire more strongly, in a coordinated fashion. How is this to be done? It has been shown [74,75,76] that the spike count produced in response to the same test stimulus was higher when the gamma (20-70 Hz) power of subthreshold membrane oscillations was higher [55]. That is, synchrony in the electromagnetic field can strengthen the firing of action potential patterns. This creates a mechanism for a group of neurons to increase their firing power without affecting the combined code, which has a combined connectional and temporal basis. Thus, certain percepts are energized, in a way screaming louder than the rest to get the attention of higher processing. In doing so, the overall strength of the relation between the action potential pattern and the electromagnetic field increases, making the consciousness more vivid. This increase in energy has isomorphic relations to the gradation of consciousness. This energizing of percepts would be controlled by attentional mechanisms, which can only emphasize a few percepts at any given time. Those energized percepts would enter the global workspace because of their increased firing power, and would thus be available for higher processing. Because of this, consciousness would exist at many levels, being broad, diffuse and nebulous within the summary but vivid, concise and prominent in the workspace. In many ways, this account is just an expansion of Koch's cautious coherent ensemble theory in the cognitive framework of the global workspace model, using the Orch OR as a prototype. The metaphorical language employed allows one to begin to understand how a physical phenomenon (electromagnetics) could be the same as consciousness through their isomorphisms. One can see that computation is essential for the electrical phenomena of the brain to achieve the proper state, but computation itself is not consciousness. Undoubtedly, my proposal is deeply flawed, but that does not keep it from being a preliminary example of the form of a complete, explanatory theory of consciousness.
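To illustrate the gamma mechanism just described, here is a minimal sketch (my own toy model, not the experimental protocol of [74,75,76] or the analysis in [55]): a leaky integrate-and-fire neuron receives the same noisy test drive twice, once with and once without a subthreshold 40 Hz oscillation added to its membrane potential. The oscillation does not alter the stimulus code itself, yet it raises the spike count, which is the kind of coordinated emphasis without recoding proposed above. All parameter values are illustrative.

```python
# Toy leaky integrate-and-fire neuron: same noisy "test stimulus" with and
# without a subthreshold gamma-band (40 Hz) membrane oscillation added.
import numpy as np

def lif_spike_count(gamma_amplitude_mv, seed=1):
    rng = np.random.default_rng(seed)          # same seed -> identical stimulus both times
    dt, t_max = 1e-4, 1.0                      # 0.1 ms steps, 1 s of simulated time
    tau, v_rest, v_thresh, v_reset = 0.02, -70.0, -54.0, -70.0   # s, mV
    steps = int(t_max / dt)
    t = np.arange(steps) * dt
    gamma = gamma_amplitude_mv * np.sin(2 * np.pi * 40.0 * t)    # subthreshold oscillation
    drive = 15.0 + 40.0 * rng.standard_normal(steps)             # noisy test stimulus (mV)
    v, spikes = v_rest, 0
    for i in range(steps):
        v += (-(v - v_rest) + drive[i] + gamma[i]) * dt / tau    # leaky integration
        if v >= v_thresh:                                        # threshold crossing = spike
            spikes += 1
            v = v_reset
    return spikes

print("without gamma:", lif_spike_count(0.0))
print("with subthreshold gamma:", lif_spike_count(3.0))
```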

! References and Sources!


1. Alkire, M. T. et al (2009) Thalamic Microinfusion of Antibody to a Voltage-gated Potassium Channel Restores Consciousness during Anesthesia. Anesthesiology 110:766-73.
2. Baars, B. J. (1988) A Cognitive Theory of Consciousness. Cambridge University Press, Cambridge, UK.
3. Baars, B. J. (1997) In the theatre of consciousness: Global workspace theory: A rigorous scientific theory of consciousness. Journal of Consciousness Studies, 4, 292-309.
4. Baars, B. J. (2005) Global workspace theory of consciousness: toward a cognitive neuroscience of human experience. Prog. Brain Res. 150, 45-53.
5. Block, N. (1995) On a confusion about a function of consciousness. Behavioral and Brain Sciences 18 (2); 227-287.
6. Block, N. (2004) Consciousness. Oxford Companion to the Mind, 2nd Ed, edited by Gregory, R.
7. Block, N. (2007) Consciousness, accessibility, and the mesh between psychology and neuroscience. Behavioral and Brain Sciences, 30, 481-548.
8. Block, N. (2009) Comparing The Major Theories of Consciousness. The Cognitive Neurosciences IV, M. Gazzaniga (ed.), MIT Press.
9. Chalmers, D. (1996) The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
10. Churchland, P. M. (1981) Eliminative materialism and propositional attitudes. The Journal of Philosophy, 78:67-90.
11. Churchland, P. S. (1980) A Perspective on Mind-Brain Research. The Journal of Philosophy, Vol. 77, No. 4, pp. 185-207.
12. Churchland, P. S. (1987) Epistemology in the Age of Neuroscience. The Journal of Philosophy, Vol. 84, No. 10, pp. 544-553.
13. Churchland, P. & Sejnowski, T. (1990) Neural Representation and Neural Computation. Philosophical Perspectives, Vol. 4, pp. 343-382.
14. Colgin, L.L., Moser, E.I., Moser, M.B. (2008) Understanding memory through hippocampal remapping. Trends Neurosci. 31, 469-477.
15. Crick, F. (1989) The recent excitement about neural networks. Nature, vol. 337, 12.
16. Cummins, R. E. (2000) "How Does It Work" Versus "What Are the Laws?": Two Conceptions of Psychological Explanation. In F. Keil & Robert A. Wilson (eds.), Explanation and Cognition, 117-145. MIT Press. 17. Damasio, A. (1994) Descartes' Error. Grossett/Putnam, New York. 18. Damasio, A. (1999) The Feeling of What Happens. Heineman, London. 19. Damasio, A. (2000) A Neurobiology for Consciousness. Neural Correlates of Consciousness. Metzinger, T., ed., pp 111-120. MIT Press, Cambridge, Massachusetts. 20. Dennett, D. C. (1971) Intentional Systems. The Journal of Philosophy, Vol. 68, No. 4, pp. 87-106. 21. Dennett, D. (1991) Consciousness Explained. Little, Brown and Co, Boston. 22. Dennett, D. (2005) Sweet Dreams. MIT Press, Massachusetts. 23. Dehaene, S., Kerszberg, M., & Changeux, J. P. (1998) A neuronal model of a global workspace in effortful cognitive tasks. Proceedings of the National Academy of Sciences USA, 95, 1452914534. 24. Dehaene, S. et al. (2001) Cerebral mechanisms of word masking and unconscious repetition priming. Nat Neurosci 4: 752758. 25. Dehaene, S. & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition 79, 137. 26. Dehaene, S. et al. (2003) A neuronal network model linking subjective reports and objective physiological data during conscious perception. Proc. Natl. 27. Dehaene, S. & Sergent, C (2004) Is consciousness a gradual phenomenon Evidence for an all-or-none bifurcation during the attentional blink. Psychol Sci 15: 720728. 28. Dehaene S. et al. (2005) Timing of the brain events underlying access to consciousness during the attentional blink. Nat Neurosci 8: 13911400. 29. Dehaene, S. & Changeux, J.P. (2005) Ongoing spontaneous activity controls access to consciousness: A neuronal model for inattentional blindness. PLoS Biol 3: e141. 30. Dehaene, S. et al. (2006) Conscious, preconscious, and subliminal processing: a testable taxonomy. Trends Cogn. Sci. 10, 204211.
31. Dehaene, S. et al. (2007) Brain Dynamics Underlying the Nonlinear Threshold for Access to Consciousness. PLoS Biology Vol. 5. 10: 2408-23. 32. Eccles, J. C. (1992). Evolution of consciousness. Proceedings of the National Academy of Sciences of the USA, 89, 73207324. 33. Eccles, J. C. (1990). A Unitary Hypothesis of Mind-Brain Interaction in the Cerebral Cortex. Proc. R. Soc. London, Ser. B 240: 433-451. 34. Gazzaniga, M. S., LeDoux, J. E., & Wilson, D. H. (1977) Language, praxis, and the right hemisphere: clues to some mechanisms of consciousness. Neurology, 27 (12), 1144-1147. 35. Gennaro, R. (2005) The HOT Theory of Consciousness: Between a Rock and a Hard Place? Journal of Consciousness Studies, Volume 12, Number 2, pp. 3-21. 36. Gold, I. (1999) Does 40-Hz Oscillation Play a Role in Visual Consciousness? Consciousness and Cognition 8, 186195. 37. Goldman, A. (1993) Consciousness, Folk Psychology, and Cognitive Science. Consciousness and Cognition. 2:364-382. 38. Hameroff, S. (1998) Anesthesia, Consciousness and Hydrophobic Pockets - A Unitary Quantum Hypothesis of Anesthetic Action. Toxicology Letters Volumes 100-101, Pages 31-39. 39. Hameroff, S. (1998) Anesthesia, Consciousness and Hydrophobic Pockets - A Unitary Quantum Hypothesis of Anesthetic Action. 40. Hameroff, S. (2001) A quantum approach to visual consciousness. Trends in Cognitive Neuroscience, Vol. 5 No. 11. 41. Hameroff, S. (2006) The Entwined Mysteries of Anesthesia and Consciousness. Anesthesiology, V 105, No 2. 42. Hameroff, S. (2007) Consciousness, neurobiology and quantum mechanics: The case for a connection. The Emerging Physics of Consciousness, edited by Jack Tuszynski, Springer-Verlage. 43. Hobson, J. A. (2009) REM sleep and dreaming: towards a theory of protoconsciousness. Nature Reviews Neuroscience 10, 803-813. 44. Hopfield, J.J. (1982) Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. U. S. A. 79, 25542558. 45. Hopfield, J.J. et al. (1986) Computing with neural circuits- a model. Science 233, 625.

46. James, William (1904) Does Consciousness Exist. Journal of Philosophy, Psychology, and Scientific Methods, 1, 477-491. 47. John, E.R. (2001) A Field Theory of Consciousness. Consciousness and Cognition 10, 184213. 48. John, E. R., Easton, P., & Isenhart, R. (1997). Consciousness and cognition may be mediated by multiple independent coherent ensembles. Consciousness and Cognition, 6(1), 339. 49. Koch, C. (2004) The Quest for Consciousness. Roberts and Co., Englewood, Colorado. 50. Koch, C. & Crick, F. (1990) Towards a neurobiological theory of consciousness. The Neurosciences, Vol 2, 1990: pp 263-275. 51. Koch, C. & Crick, F. (2003) A framework for consciousness. Nature Neuroscience 6, 119 126. 52. Koch, C. & Greenfield, S. (2007) How Does Consciousness Happen. Scientific American 297.4 (October, 2007): 76-83. 53. Koch, C., & Tsuchiya, N. (2007). Attention and consciousness: two distinct brain processes. Trends in Cognitive Sciences, 11, 16-22. 54. Koch, C. & VanRullen, R. (2003) Is perception discrete or continuous? TRENDS in Cognitive Sciences. Vol.7 No.5, pg 207. 55. LaBarge, D. (2006) Apical dendrite activity in cognition and consciousness. Consciousness and Cognition Vol. 15, Issue 2: 235257. 56. Lamme VA (2006) Towards a true neural stance on consciousness. Trends Cogn Sci 10: 494501. 57. Lau, H. C. (2008) A Higher Order Bayesian Decision Theory of Consciousness. Progress in Brain Research, Vol. 168. 58. Levine, J. (1983) Materialism and qualia: the explanatory gap. Pacific Philosophical Quarterly 64:354-361. 59. Lloyd, Dan (2004) Radiant Cool. A Bradford Book, The MIT Press, Cambridge, Mass. 60. Marr, D. (1985) Vision: the philosophy and the approach. Issues in Cognitive Modeling, M. Aitken- head & M.M. Slack, eds, Lawrence Erlbaum, London, pp. 103126. 61. McFadden, J. (2002) The Conscious Electromagnetic Information (Cemi) Field Theory: The Hard Problem Made Easy? Journal of Consciousness Studies, 9 (8), pp. 4560. 62. Nagel, T. (1974) What Is It Like to Be a Bat? The Philosophical Review, Vol. 83, No. 4 , pp. 435-450.
63. Nagel, T. (2002) The Psychological Nexus. In Concealment and Exposure and Other Essays, New York, Oxford University Press. 64. Pockett, S. (2002) Difficulties with the electromagnetic field theory of consciousness. Journal of Consciousness Studies, 9 (4), pp. 51 6. 65. Rosenthal, David. (2002) How Many Kinds of Consciousness? Conscious Cognition. 11(4): 653-665. 66. Sewards, T. V., & Sewards, M. A. (2001). On the correlation between synchronized oscillatory activities and consciousness. Consciousness and Cognition, 10, 485495. 67. Shadlen, M.N., & Movshon, J.A. (1999) Synchrony unbound: A critical evaluation of the temporal binding hypothesis. Neuron 24:67-77. 68. Sperling, G. (1960). The information available in brief visual presentation. Psychological Monographs,74, 1-29. 69. Sperry, Roger (1982) Some Effects of Disconnecting the Cerebral Hemispheres. Science, New Series, Vol. 217, No. 4566, pp. 12231226. 70. Srinivasan, R., Russell, D. P., Edelman, G. M., & Tononi, G. (1999) Increased synchronization of neuromagnetic responses during conscious perception. Journal of Neuroscience, 19 (13), 5435-5448. 71. Tonini et al. (2005) Breakdown of Cortical Effective Connectivity During Sleep. Science Vol. 309, pg 2228. 72. Tonini, G. (2004) An information integration theory of consciousness. BMC Neuroscience 5:42. 73. Tse, P.U. et al. (2005) Visibility, visual awareness, and visual masking of simple unattended targets are confined to areas in the occipital cortex beyond human V1/V2. Proc. Natl. Acad. Sci. U. S. A. 102, 1717817183. 74. Volgushev, M., Chistiankova, M., & Singer, W. (1998). Modification of discharge patterns of neocortical neurons by induced oscillations of the membrane potential. Neuroscience, 83, 1525. 75. Volgushev, M., Pernberg, J., & Eysel, U. T. (2002). A novel mechanism of response selectivity of neurons in cat visual cortex. Journal of Physiology (London), 540, 307320. 76. Volgushev, M., Pernberg, J., & Eysel, U. T. (2003). Gammafrequency fluctuations of the membrane potential and response
selectivity in visual cortical neurons. European Journal of Neuroscience, 17, 1768-1776.
77. Webster's New International Dictionary of the English Language, Second Edition, Unabridged (1945). Merriam Company, Springfield, MA.
78. Wegner, D. M. (2004) Précis of the illusion of conscious will. Behav. Brain Sci. 27, 649-659.
79. Zeki, S. and Bartels, A. (1998) The asynchrony of consciousness. Proceedings Royal Society of London B. 265:1583-85.
80. Zeki, S. and Bartels, A. (1999) Towards a theory of visual consciousness. Conscious. Cogn., 8, 225-259.
81. Zeki, S. (2003) The disunity of consciousness. Trends Cogn. Sci. 7, 214-218.

! Supplementary Essays !

Science, Worlds, and Reality21


It seems that we humans, specifically scientists and philosophers, are very concerned that science is conducted by us humans. In science, we look to something outside ourselves, some higher authority to tell us what our world is. We want to avoid being the subject of Alison Wylie's accusation that "Only the most powerful, the most successful in achieving control over their world, could imagine that the world can be constructed as they choose." While this rings of the terrifying constructivism of Orwell's 1984, I think we need not worry. We will never get beyond the fact that there is something out there, and no science, political regime or raving postmodernist will disprove that. Nonetheless, we cannot appeal to it. To make sense of this apparent contradiction, we must turn to thinkers like Elgin and Goodman. If we do, I think we can take the further step of seeing why we can and should still appeal to science despite our, and consequently its, inability to appeal to the indestructible, irrefutable je ne sais quoi. We need not worry about Wylie's accusation because we simply cannot construct the world as we choose. But we can view it as we choose, and in that we can construct it as we choose. However circular this reformulation sounds, it introduces the important limiting term of viewing the world. There is something we are all privy to; it exists, and that is the most we can say about it factually. The only way of being privy to it is to have a perspective on it, and thus to construct it. We, to use Elgin's words, make it into our reality. Elgin makes an important distinction between making up and making into, that being that the former is an ex nihilo process while the latter is a constructive one. To construct something one must have materials, and in this case those materials are the je ne sais quoi. The depth to which this construction extends is profound; indeed, it is total. It takes place not just with social labels and theoretical concepts, but with natural kinds and even color. In fact, under this conception, natural kinds are just more sophisticated colors. Color
21 Written for Philosophy of Science S3551

designation occurs unconsciously, thus seeming in some way real. Thinking, let us say seeing someone as a politician, a scoundrel, or a friend is very much more a conscious decision, thus seeming less real. So-called natural kinds straddle the boundaries of consciousness and in some ways seem real and, in others, seem false. My question is, what does consciousness matter or impart on the constructive process? It is all brain constructing that reality. It is all really happening in ones head. How then is my designation of someone as a friend fundamentally different from my designation of him as a human being different from my designation of his eyes as brown? It is only a matter of degree, of susceptibility to change. The circuitry of color designation is fairly hardwired, but could be bungled, as many neurologically damaged patients can attest. The circuitry of friend designation is much more malleable, but theoretically no different. From common senses to horses and dogs to theoretical entities and social categories, our whole world, our reality is all a construction of ones brain. Despite his incoherence, and his failure to actually make the case, Goodman advances the above with his pluralistic world-making notion of reality. He articulates, however poorly, the consequences and the nature of such world making. However, Goodmans world making is often subject to the realist objection to nominalism surrounding the existence of dinosaurs without humans. Elgin formulates a good rebuttal, showing the critique misconceived. She says, constructive nominalism is committed to the counterfactual claim that if there were no concept of a dinosaur, there would be no dinosaurs. It is not committed to the historical claim that when there was no concept of a dinosaur, there were no dinosaurs. For once it is introduced, the concept of a dinosaur applies to all things, be they past, present, or future, that satisfy the criteria (168-69). She makes clear the nominalist claim is that dinosaurs arent just creatures, just things that were out there, but that dinosaurs is a category a natural kind that necessarily comes with a definite perspective and much conceptual baggage. Thus, dinosaurs wouldnt exist without the concept because it is exactly that, a concept. The referents of this term certainly existed, but it would be impossible to say what those were without the concept, for how can one say, let alone conceive of something one has no concept of. This is a specific instantiation of the more profound fact that linguistically, conceptually,

fundamentally we cannot consider, conceive, or approach that which is not us, the je ne sais quoi. If this is to be accepted, one might ask, why then should we appeal to science? While not a stupid or unreasonable question, it is nonetheless wrong almost in the exact way the realist objection about dinosaurs is. While science, like anyone or anything, cannot describe reality without constructing, it still can construct a world more useful than any individual or other practice can. Science is an attempt at, and does create, an agreed-upon human perspective free of bias (at least, in theory). Bias arises out of the differences in perspectives, and thus differences between people. By trying to create some lowest common denominator of human worlds, science is an attempt to make a world that we can all agree upon and use, a universal language if you will. Importantly, science need not be coherent or stable because it is not relative to some Truth. It cannot, as mechanical realism proved, remove the perspective from the scientific perspective. It is, after all, only a perspective, and perspectives themselves have no requirements of coherence, immutability, durability or homogeneity. We can still appeal to science because it is an appeal to ourselves and to that which we share. Doing otherwise leads to talking past one another, to existing in separate worlds. Reality may be mutable, varied and different from one person to the next, but that doesn't mean that we cannot align ours and thus share one.

Scientific Realism22
I am a scientific realist, not because I believe what science says is what is real or true, but because the alternative is meaningless. If one is to deny science and specifically its objects any reality, then one must also deny it to everything; for there is no discernable or meaningful distinction between science and the world, perceived or not, not just on an empirical or methodological level, but on the most fundamental level, reality itself. Anti-realism is fundamentally an unfruitful position, leaving one in an impoverished, purely solipsistic existence, one that is frankly less fun to believe in. Following in the suit of Grover Maxwell, I hope to take his argument where I believe it should have gone, i.e. to
22 Written for Philosophy of Science S3551

complete demolition of any real distinction between observation, theory, and experience itself. Maxwell doesn't go far enough in his discussion of the lack of distinction between observables and unobservables. In reading his argument for the indistinguishability between looking through a microscope and through a pane of glass, one feels that he stops just short of saying something profound. He comes to the doorstep, but fails to knock. If he had, he would have asked what separates one's lens from that windowpane, and thus from the microscope; then further, one's retina from an SEM, and so on. Where does raw experience come in and provide real, observable objects? While Maxwell acknowledges the problem of hallucination, he doesn't address the fundamental issue of ordinary perception. Modern neuroscience reveals that experience itself is a theoretical construction of the brain. One actively and continuously constructs one's sensory experience, however unconsciously. One's brain has some notion, a theory if you will, of how the world is, employs specific machinery to gather information, and extrapolates from there. Yes, extrapolates, for much of the experiential world is your brain's best guess at an adequate representation of what is going on within and outside of one's body. The Gestaltists showed us that even the macroperceptions are constructed, indeed, even easily manipulated. From this understanding of the human mind, of the human apparatus, of experience itself, we see that our tools, the perceptions they give us, and the theories we form about them are merely extensions of ourselves, both literally and metaphorically. To take our machines as fundamentally different from our brains is to misconceive and underestimate the scope of the human. Just because we are consciously trying to understand and manipulate our world does not mean that we are not doing the same thing that we were doing unconsciously through evolution before the emergence of consciousness. But this broader conception of the human and its experience begs the question of what is real if all is necessarily just constructed. I think the only viable answer is that it is all real. To deny reality at any given point is to draw a line in the sand, and importantly one that will inevitably change with scientific progress. But this begs the further question: if we are to believe something is real, must we believe it to be True? I answer no because

All there is is subjectivity, interpretation, perspective. Thus, any claim to truth is only a claim to perspective. While this leaves no room for Truth proper, I believe this radical, almost antithetical, redefining of the term truth is the only way to salvage it in the face of modern scientific and philosophical advances. I don't believe that a realist need believe in objectivity, as van Fraassen believes they do, nor do I believe Hilary Putnam's formulation of realism is even possible, i.e. that "what makes [the sentences of a theory] true or false is something external, that is to say, it is not (in general) our sense data, actual or potential, or the structure of our minds, or our language, etc." Given this, as the objects of science change, perspective changes and consequently so do truth and reality.

This realist position can be more fully articulated by attacking the issue of the Copenhagen interpretation, which van Fraassen raises towards the end of his piece. The Copenhagen interpretation is necessarily not a realist position because the language its authors use to describe the world is not a language that allows any realist interpretation. For example, the claim that a particle cannot have a definite momentum while also having a definite position is logically impossible within any realist conception. This is because if we are going to posit anything as a particle, i.e. an object, the most abstract and least constrained element of a realist world, then we are necessarily positing something that has both properties at once. Maybe there is something that cannot have those two properties, but it is not a particle and is not within human conception. The same goes for probability. Probability is not an ontological state. It is merely a description of some underlying one. It is not a description of reality itself, but an epistemic statement about the limits of knowledge and of human description. A realist can be a realist about quantum descriptions in that he believes they are not one-to-one descriptions, but the best our human minds can develop about some fundamentally incomprehensible subreality. These descriptions are analogous to folk psychology or an intentional stance: in all three, we acknowledge the reality of some underlying phenomenon and simultaneously its inaccessibility. While van Fraassen would say, as in his example of dissolving gold, that believing in such a subreality is a scientifically useless proposition, the analogy above and history prove him wrong. Believing in some way in this underlying reality is in part what has driven the development of string theory and its rivals.

And, as a final note against anti-realism: given that one ultimately cannot know whether science is True or not, why not believe in it? Isn't it more fun to think that we are actually talking about what's really out there, to believe we are perceiving at whole new orders of magnitude?

Basic Problems in Addressing Consciousness Scientifically: A Critique of Dehaene's Approach23


In the most full and sweeping sense of the term, consciousness is the only thing one can and does know, for everything as one has come to know it is merely another moment of consciousness. Kant made a variant of this point when he claimed that one can only know phenomena. However, this definition of consciousness, while motivated by an interesting insight into the nature of one's reality, is generally useless. The problem of how to define consciousness has been plaguing man since time immemorial. Yet, it seems we can all agree on at least one aspect of consciousness: that for anything that is conscious, there is a way to be like it, to use the famous words of Thomas Nagel [49]. That is, there is nothing it is like to be a rock, but there is a way it is like to be you. Clearly, this formulation can quickly border on gibberish, but it nonetheless gets at the most profound fact of consciousness, that it is the substance of experience, and that things that don't have consciousness don't have experience. The problem with experience is that it is subjective, that my experience is mine and yours yours, and to each his own. That is, that any individual experience, e.g. yours or mine, is not objectively available to be another's experience. Thus, consciousness is a particularly bizarre phenomenon24 in the world insofar as one knows it exists because one experiences it from one side, i.e. subjectively, but cannot point to it out there in the world in the way one can with light and gravity.
23 Written for Neural Systems W4011

24 While it is often a faux pas to cite a dictionary, I do believe Webster's Unabridged Second is of particular aid in clarifying my usage of phenomenon. Webster's reads, Phenomenon: any observable fact or event; as: a in the broadest sense, any fact or event whatever; any item of experience or reality c an object of sense perception as distinguished from an ultimate reality. This meaning is due to Kant's absolute separation of the thing-in-itself from the object of experience, or phenomenon. It is more thoroughgoing than the ancient distinction, since Kant asserts the utter unknowability of the thing-in-itself, while the ancients conceived essences to be knowable. d in positivistic and scientific usage, any fact or event of scientific interest susceptible of scientific description and explanation [60].


Unlike those other phenomena, which are available for everyone to witness, i.e. are objective, consciousness, at least in everyday life, is totally inaccessible in others, and in that way appears metaphysical, i.e. beyond the purview of empiricism. Thus, unlike the latter entities, the exact nature or relationship of consciousness to its surroundings is unspecified because there is no easy empirical theory. But more fundamentally, this means consciousness can always be construed as an end, that is, as that which has no causal properties, the existence or nonexistence of which has no discernible consequences due to the inherent and fundamental separation of subjectivities.

In everyday practice, we use a number of quick and dirty tests for consciousness, such as the ability to speak coherently or to respond to complex problems reasonably, to get around these epistemic issues. How exactly we do this is still controversial, as exhibited by the robust theory-of-mind discourse. However, whether and how we attribute consciousness to others is not my concern here. I raise the specter of theory of mind only to say I wish to avoid its tangles, and instead to be concerned with consciousness as a scientific object. That is, I am not concerned with how one attributes consciousness to another, but with what consciousness is and how we come to such knowledge.

However, considering consciousness from this perspective, one is confronted with two serious problems: how to scientifically approach an object that is practically metaphysical, and how to know when we have reached it. Because of these problems and those detailed above, it is particularly hard to settle on a definition or criterion that isn't readily open to the attack that said definition or criterion is only specifying a process that can be conscious, or is necessary but not sufficient for consciousness. It is foolhardy to take the criteria of the quick and dirty methods, like verbal report, at face value, as exemplified by Michael Gazzaniga, who interprets the lack of verbal report by the right hemisphere in split-brain patients to mean it is unconscious, an interpretation that clearly begs the question given that the language faculties are found in the left hemisphere [31,53].
The combined meanings of a and d specify my use of the word, which is thus distinct from phenomenology (the study of experience) and from phenomenal consciousness, a term coined by Ned Block, who writes, "Phenomenal consciousness is experience; the phenomenally conscious aspect of a state is what it is like to be in that state" [3]. These similarities within the terminology are an unfortunate idiosyncrasy that is rather unavoidable if one wishes to stay within the parlance. See the Dehaene quote [22] below for a further example of the distinction.


Yet, we cannot totally rule out verbal report, for then we relegate ourselves to the purely empirical, thus severing the connection to consciousness understood as subjectivity, and hence the only bridge across the pragmatic metaphysical divide. In rejecting verbal report in toto, one precludes one's work from having any claim on consciousness. Thus, some medley of subjective and objective criteria seems necessary.

Stanislas Dehaene and his colleagues have seemingly struck upon such a balance. Using the popular paradigms of stimulus onset asynchrony and masking while recording event-related potentials (ERPs) (though other methods like fMRI are also occasionally used [21]), he explores such effects as attentional blinks and inattentional blindness, and has thus begun to untangle the consciousness spectrum [22,28]. His work extends from parsing the spectrum into categories [24,27] to constructing neural models to explain results [20,23,26] to correlating effects with specific brain activity [21,25], much of which is theoretically underpinned by the global workspace theory of consciousness, championed by Bernard Baars [1,2]. Most importantly, Dehaene fully grasps the nature of the problem:
What is specific to consciousness, however, is that the object of our study is an introspective phenomenon, not an objectively measurable response [...] In order to cross-correlate subjective reports of consciousness with neuronal or information-processing states, the first crucial step is to take seriously introspective phenomenological reports [22].

The combination of the objective measurements of the ERP and subjective measures such as visibility scales allows Dehaene to make those cross-correlations and thus bridge the metaphysical divide. However, there are a number of problems with Dehaene's approach, ranging from details of the paradigm to fundamental assumptions. All the problems arise directly or indirectly from a lack of appreciation of the epistemic limits of his approach. I will begin by addressing superficial problems, progressing to the more profound ones, hopefully revealing along the way that Dehaene is studying not so much consciousness as conscious cognition.

When reviewing Dehaene's work, one problem immediately stood out: he seems to disregard the fact that the subject is conscious throughout the experiment. That is to say, insofar as he claims that any part of the ERP is the neural correlate of consciousness (NCC), he will necessarily be leaving out the rest of what consciousness is, thus possibly losing some part of the actual NCC.


Imagine that one just wants to record the ERP of seeing a flashed red dot. Consciousness of that dot will include not only its color and shape, but also its location, i.e. its position and distance, not to mention the suite of perceptions related to the gist (see the discussion and definition of gist below). While the color and shape are necessarily not available when the dot is not visible, the location of where it was or will be is available and, especially if the subject is looking at a particular place, e.g. the center of the screen, it will be part of the subject's consciousness. Thus, if one is to subtract the background from the signal, one necessarily subtracts an important aspect of the NCC of the red dot, namely its location.

There is also the far more damaging possibility that there is some necessary but insufficient phenomenon, separate from the other necessary conditions, that is present in the background but cannot give rise to consciousness of the object alone. I am imagining consciousness may be like a protein with quaternary structure such as hemoglobin, which consists of two kinds of polypeptide chains, both of which are necessary for its function, neither of which is sufficient, yet both are entirely distinct phenomena from the pH of the solution they exist in, which must be within a tight range for hemoglobin to function at all. If one were testing for the ability to transport oxygen, and the initial conditions contained one kind of subunit but not the other, and then the other were added, then using Dehaene's method one would conclude that only the added component is hemoglobin, and not the two together. Let me make this possibility more tangible in relation to the topic: imagine the hypothetical possibility that consciousness fundamentally is the interaction of a dendritically driven regional electric field with specific firing patterns of axons, such that the field is about the pattern, thus yielding consciousness of the informational content of that pattern. Both are necessary, neither sufficient; it is their unison that is consciousness. If this field is present beforehand, it will be discarded as background, and consequently one would be overlooking not just an important aspect of the NCC, but an essential one. Thus, Dehaene's method might be doomed to fail insofar as it necessarily creates false negatives, therefore missing part of the NCC, which is the object of interest. In a more forgiving light, one might say that whatever is found using Dehaene's method can be said to be part of consciousness, but one cannot conclude that the rest is not.
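To see the false-negative worry in miniature, here is a toy numerical sketch of my own (the signal shapes, amplitudes, and noise level are entirely hypothetical, and this is not Dehaene's data or analysis pipeline): a standing, field-like component present in both the seen and unseen conditions cancels out of the subtraction entirely, even though, by hypothesis, it is necessary for the experience.

```python
# Toy illustration with made-up numbers: a component that is (by hypothesis)
# necessary for consciousness but present in both conditions cancels out of
# the standard seen-minus-unseen subtraction.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)                      # one second of simulated signal

field = np.ones_like(t)                              # standing, field-like component (present in both conditions)
evoked = np.where((t > 0.3) & (t < 0.5), 0.8, 0.0)   # stimulus-evoked pattern (seen condition only)

seen = field + evoked + 0.05 * rng.standard_normal(t.size)    # "dot reported as seen"
unseen = field + 0.05 * rng.standard_normal(t.size)           # "dot not reported"

difference = seen - unseen                           # the subtraction step
print(round(difference[(t > 0.3) & (t < 0.5)].mean(), 2))     # ~0.8: the evoked pattern survives the subtraction
print(round(difference[t < 0.3].mean(), 2))                   # ~0.0: the shared field component has cancelled out
```

The difference wave faithfully recovers the evoked pattern, which is exactly why the method is powerful; but anything the two conditions share, necessary for the experience or not, is invisible to it.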

The above complication is specific to Dehaene's experimental methods. Closer to the core of his approach are his assumptions. There are two that I take issue with: first, the belief that attention is a prerequisite of consciousness; second, that there is perfect fidelity between consciousness and reportability [22].

In regards to the attention assumption, there are two issues: what exactly is meant by attention, and that attention may be confounding and/or limiting. When Dehaene says attention is a prerequisite of consciousness, if he means that attention occurs before consciousness, as it seems it would have to if it is to be an actual prerequisite, then it is not the attention that one is subjectively familiar with, for one at least seems to direct one's attention to that which one isn't sufficiently conscious of, not to what is non-conscious. Indeed, how can one attend to something one is unconscious of? The attention that we are subjectively familiar with is necessarily subsequent to some minimal consciousness, though not to full-blown attentive consciousness. Even the common name for this form of attention, i.e. top-down attention, implies consciousness is a precondition, i.e. the top level, whether it be consciousness itself or something above consciousness, guides attention to relevant unattended minimally conscious percepts. If Dehaene is not referring to the attention of consciousness, i.e. top-down attention, then he must be referring to the brain's ability to give some stimuli precedence over others, also known as bottom-up attention. However, bottom-up attention generally refers to supersalient stimuli, e.g. an explosion, drawing one's top-down attention. But, in the context of being a prerequisite for consciousness, this sounds not so much like attention but like awareness, i.e. taking in sensory stimuli, or even vigilance, to use Dehaene's own terminology [22,26,27], not like giving some stimuli precedence over others. For one is constantly conscious, or at least minimally conscious, of much of one's environment. Christof Koch calls this part of one's consciousness that isn't overtly attended the gist [38,41]. Notably, Koch points out,
gist is immune from inattentional blindness: when a photograph was briefly flashed unexpectedly onto a screen, subjects could accurately report a summary of the photograph. In a mere 30 ms presentation time, the gist of a scene can be apprehended. This is insufficient time for top-down attention to play much of a role [38].

This is an excerpt from a paper by Koch and Naotsugu Tsuchiya that makes a more robust criticism of the claim that attention is a prerequisite for consciousness, attacking it from vantage points similar to those presented here.


Regardless, Dehaene seems to mean the former, top-down attention, when he makes his claim.

This leads to the second issue with the assumption: that attention is confounding and/or limiting. I am not the first to point this out, and Dehaene acknowledges the criticism, admitting, "Some have argued that many of the above neuroimaging paradigms are inappropriately controlled because conscious perception is confounded with increased attention and more extended stimulus processing" [27]. Oddly, instead of defending his position or rebutting the criticism, Dehaene responds with an ad hominem tu quoque attack upon the methods of a rival paradigm and simply restates that without attention, conscious perception cannot occur (ibid.). While I don't wish to defame Dehaene, this smells of intellectual dishonesty insofar as Dehaene seems to know this is a major problem, but doesn't want to admit it because it would undermine his whole approach. Because I wish to avoid previously articulated arguments about how the confounding occurs, I'll simply point out some of the conclusions Dehaene arrives at because of this assumption, and then explain why they are patently confounded or driven by false assumptions.

Through his neuroimaging studies and computational simulations, Dehaene arrives at the conclusion that consciousness is characterized by two properties: (1) the [stimulus-evoked] activation can reverberate, thus holding information on-line for a long duration essentially unrelated to the initial stimulus duration; (2) stimulus information can be rapidly propagated to many brain systems [26]. If one takes the first property and translates it into subjective terms, it sounds not like plain consciousness but like thinking. In thinking, one can hold a thought for a duration essentially unrelated to the initial stimulus duration, and surely thinking is part of consciousness. But is this true of consciousness on the whole? I think not. Consider your foot; you are visually conscious of it when looking at it, but that visual consciousness of your foot evaporates as soon as you look away. The idea or the thought of it can persist, albeit without the full gestalt of actually looking at it, but not the experience of it. Clearly this conclusion confounds thinking or cognition, a very complex high-level process that requires consciousness, with plain, old, experiencing-the-world consciousness.
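To be concrete about what the first property describes before examining how Dehaene arrives at it, here is a minimal toy sketch of my own (a generic self-exciting rate unit with arbitrary parameters, not Dehaene and Changeux's actual network model): a unit with self-excitation "ignites" into a self-sustaining state after a brief pulse and holds it for a duration unrelated to the stimulus duration, whereas a purely stimulus-driven unit decays as soon as the pulse ends.

```python
# Toy sketch (arbitrary parameters): self-sustained "reverberation" versus
# purely stimulus-driven activity after a brief input pulse.
import numpy as np

def sigmoid(x):
    # sharp threshold around 0.5
    return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.5)))

steps, dt = 500, 0.01
stimulus = np.zeros(steps)
stimulus[50:80] = 2.0                                  # brief, strong input pulse

reverberant = np.zeros(steps)                          # unit with self-excitation
feedforward = np.zeros(steps)                          # unit driven only by the stimulus
for i in range(1, steps):
    reverberant[i] = reverberant[i-1] + dt * (-reverberant[i-1] + sigmoid(reverberant[i-1]) + stimulus[i])
    feedforward[i] = feedforward[i-1] + dt * (-feedforward[i-1] + stimulus[i])

# Long after the pulse, the self-exciting unit remains in its "on" state,
# while the purely stimulus-driven unit has decayed back toward zero.
print(round(reverberant[-1], 2), round(feedforward[-1], 2))    # prints something like 0.98 0.01
```

Whatever such sustained activity amounts to, the point above stands: translated into subjective terms, it looks far more like holding a thought than like the moment-to-moment experience of the world.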

Inevitably, Dehaene came to this conclusion because to report properly one must follow directions during the presentation and tasks, and to follow directions, which by necessity are linguistic, one must think. Moreover, while thinking is technically conscious, some serious semantic contortions are needed to make thinking the primary referent of consciousness over the more commonly understood phenomenal experience. This problem can be cast in two different lights: that Dehaene is confounding attention and extended stimulus processing with consciousness, thus claiming consciousness is more than it is; or that, by only looking for attention and extended stimulus processing, he is limiting his purview, thus entirely missing what consciousness is, hence arriving at a similar place as that arrived at by the methodological bent for false negatives detailed above, but for a different reason.

Consider the second property, "Stimulus information can be rapidly propagated to many brain systems," which Dehaene later also describes as "information can be shared across a broad variety of processes including evaluation, verbal report, planning and long-term memory," or elsewhere as "the broadcasting of accessed information to many bilateral cortical regions through long-distance cortico-cortical connections including those of the corpus callosum" [ibid., 28]. This seems reasonable enough; that which is conscious seems to be available for evaluation, verbal report, planning, and long-term memory. However, this intuition comes from a false, or at least unjustified, assumption, namely perfect fidelity between consciousness and reportability. As discussed earlier, it is crucial to maintain the connection with verbal report, and indeed Dehaene rightly states that "conscious perception must [...] be evaluated by subjective report" [27]. However, let us analyze a similar but different statement:
Consciousness is systematically associated with the potential ability for the subject to report on his/her mental state. This property of reportability is so exclusive to conscious information that it is commonly used as an empirical criterion to assess the conscious or unconscious status of an information or a mental state (Gazzaniga et al., 1977; Weiskrantz, 1997) [22].

While Dehaene doesn't exactly make the crude verbal-report mistake of Gazzaniga and the like, though the reference in the quote above begs to differ, he nonetheless makes a deeper, less obvious mistake about reportability in general. Overtly, this statement implies the mistake of converting the conditional: that because reportability is exclusive to consciousness, consciousness is exclusive to reportability. Obviously that is not logically true, and because it is only implied, we must give Dehaene the benefit of the doubt.

Nonetheless, we and Dehaene often believe both sides of the statement, i.e. both that reportability is exclusive to consciousness and that consciousness is exclusive to reportability. This is driven by a deep but mistaken intuition about oneself. Dehaene, like most people, assumes that what one thinks one is conscious of, and therefore reports as conscious, is what is conscious. While he rightly assumes fidelity between reports and self-knowledge (for if he didn't assume that, he would quickly revert to brutish behaviorism), he wrongly assumes fidelity between self-knowledge, i.e. what one thinks one is conscious of, and actual consciousness.

To illustrate this latter mistake, let me turn quickly to Martin Heidegger. Heidegger notes in Being and Time that when one attempts to contemplate oneself, one is not one's normal self, but in a very peculiar and rare state of self, i.e. a highly reflective and highly self-conscious state, and thus one is not actually contemplating the self that one thinks one is contemplating, i.e. one's normal self, but this particularized state of self. Using similar reasoning, when one thinks about what consciousness is, one will not be thinking about consciousness directly but only about what one thinks consciousness is, i.e. those parts of consciousness that are accessible to the parts of the brain that think, remember, and report. One is therefore contemplating only a segment of consciousness, which one takes to be the whole of consciousness, for the rest is inaccessible to thought. Again, this is the illusion of the false negative; just because we cannot consciously think about something doesn't mean it isn't conscious.

Let me come clean; I am urging the position that your subjectivity, that is, your consciousness, may be a much stranger phenomenon than you know it to be. What you think of when you consider consciousness may be only the part of your consciousness that is accessible for higher manipulation and storage, and what actually happens moment to moment, the so-called snapshot of your subjectivity, may be a more diverse and unwieldy experience than what you remember. That is, there are phenomenal parts of any given conscious state that are not actively considered, i.e. accessed by higher processing, but are nonetheless part of consciousness. In this respect, I am in total agreement with Daniel Dennett [17,18,19] and am rephrasing his argument, though I believe we differ on what consciousness is fundamentally. Regardless, just because consciousness is stranger than you know it to be doesn't mean the science of consciousness should be fooled into thinking it is merely as one knows it.

The reason for my lengthy, introductory, philosophical ado now comes to fruition: as stated there, the science of consciousness is about experience, about subjectivity, about what it is like to be something at any given moment. It is not about cognition or thinking. It would be a disservice both to experience and to the science of it to limit our concept of consciousness to what we introspectively think about it.

To put this in more concrete neurological terms, to remember and report on a particular percept, the areas of the brain that perform such functions, i.e. the hippocampus and Broca's and Wernicke's areas, must have access to the "multiple processors [that] encode the various possible contents of consciousness," e.g. possibly MT+ for motion or the color-coding cells of V4 [22]. We have no good reason to assume that that access is what makes those contents conscious. We cannot appeal to the global workspace theory, because it appeals to Dehaene's work for confirmation, thus creating a confirmational loop, and it fails to explain experience at all [6,45,59]. Indeed, despite Dehaene's results, there is good reason to believe those contents are conscious in their own right [29,30,33,40,43,56,62,63]. If we look to dreaming, in which one is undoubtedly conscious, one may have no memory of being conscious, one may not be able to report being conscious, or one may be able to report being conscious without being able to report what one was conscious of, i.e. one knows one was dreaming but cannot recall what the dream was. Indeed, this is often the case when one wakes suddenly. Or, consider when you just lose a thought entirely. You know it was there, but you cannot report a single fact about it. Thus, consciousness apparently can be ephemeral, unmemorable, and unaccessed.

Bringing this back to the underlying theme of the false negative, when consciousness of X is reported, all one can say with Dehaene's method is that something in the brain was conscious at the time. One cannot subtract when X isn't reported from when it is and claim the remainder to be consciousness, because one is not in fact subtracting nonconscious from conscious but not-reported-as-conscious from reported-as-conscious, unaccessed from accessed. That is, subtracting a false negative from a positive inadvertently subtracts too much, in this case making consciousness appear to be confined to the higher processes associated with reportability.

I must acknowledge that Dehaene often uses conscious access, a more cautious term than consciousness. However, his continual interchange of the two as synonyms across his work (see especially 22, 24, and 27) raises the question of whether he actually discerns a difference. Indeed, Dehaene never actually clarifies what conscious access means, especially in relation to consciousness. Does it mean that consciousness has informational access to contents located elsewhere, or that that which is conscious is being accessed by modules/processors for processing? Dehaene often seems to be making the former case, but Baars, whose global workspace theory underpins Dehaene's arguments, claims the opposite [1,2]. Ned Block makes several devastating cases against those who fail to distinguish between access consciousness and phenomenal consciousness [3,4,5], and indeed it seems that Dehaene is failing to do so. While Dehaene acknowledges the distinction between access and phenomenal consciousness, and even provides evidence for it, he only rebuts it with evidence from a different effect, change blindness, to which nearly all the same arguments as those detailed above can be applied25, therefore not providing a real rebuttal [27]. Dehaene allows for what he calls pre-consciousness to be what Block calls phenomenal consciousness26 [3,4,5] and Semir Zeki calls micro-consciousness [62,63], but says that who is right "does not seem to be, at this stage, a scientifically addressable question" [27].

This last problem, captured in Dehaene's second assumption, seems to put the science of consciousness in a tough spot: reportability is crucial, but reportability's relation to consciousness is unclear. This difficulty is just part of the greater one of separating cognition and consciousness, two deeply enmeshed but at times clearly distinct processes. Resolving this is going to take some extremely well-crafted experiments. One possible route that I can imagine is somehow combining Dehaene's approach with J. Allan Hobson's work on dreaming. Dreaming in a way seems a possible control for cognition, because dreams surely involve consciousness but don't seem to involve anything we associate with thinking.
25 For example, change blindness shows nothing other than that not all of what one takes in makes it to working memory, that we cannot carry over the entire gist from one moment to the next. If we cannot carry it over, then we cannot compare the two moments and thus cannot discriminate between them. Further, one requires attention to compare two things, but doesn't require attention to be phenomenally conscious of them. Indeed, think of how, when you stare at one spot, you can see all that is around it. You cannot even consider what another part of the scene is without giving it attention, yet whether or not you're giving it attention you are conscious of it, i.e. have an experience of it.
26 Ironically, Dehaene uses the same abbreviation, P-conscious, as Block does, for what is probably the same thing, only Dehaene is saying it isn't actually conscious, or is non-conscious.


Indeed, Hobson hypothesizes that dreams are driven by lower central-pattern generators that don't activate higher, or what he calls secondary consciousness, circuits [34].

What I have attempted to show is not only that consciousness is a pragmatically metaphysical object from the perspective of science, but that it is also a slippery phenomenon taken subjectively. Thus any approach to consciousness must account for the intractabilities on both the objective and the subjective side. Dehaene has done well in addressing the first but has failed on the second. While he is certainly investigating consciousness, I seek to curtail his claims upon it. In this quest for consciousness, one must understand the concepts one uses and their limits. Consciousness has evaded elucidation for thousands of years and we should not expect it to fit easily into our conception of it. To spin the words of Albert Einstein, not only is consciousness stranger than we think, it is stranger than we can think.27

27 Einstein originally said, "Not only is the universe stranger than we imagine, it is stranger than we can imagine."
References and Sources
1. Baars, B. J. (1997) In the theatre of consciousness: Global workspace theory: A rigorous scientific theory of consciousness. Journal of Consciousness Studies, 4, 292-309.
2. Baars, B. J. (2005) Global workspace theory of consciousness: toward a cognitive neuroscience of human experience. Prog. Brain Res. 150, 45-53.
3. Block, N. (1995) On a confusion about a function of consciousness. Behavioral and Brain Sciences 18 (2), 227-287.
4. Block, N. (2004) Consciousness. Oxford Companion to the Mind, 2nd Ed., edited by Gregory, R.
5. Block, N. (2007) Consciousness, accessibility, and the mesh between psychology and neuroscience. Behavioral and Brain Sciences, 30, 481-548.
6. Block, N. (2009) Comparing The Major Theories of Consciousness. The Cognitive Neurosciences IV, M. Gazzaniga (ed.), MIT Press.
7. Churchland, P. M. (1981) Eliminative materialism and propositional attitudes. The Journal of Philosophy, 78: 67-90.
8. Churchland, P. S. (1980) A Perspective on Mind-Brain Research. The Journal of Philosophy, Vol. 77, No. 4, pp. 185-207.
9. Churchland, P. S. (1987) Epistemology in the Age of Neuroscience. The Journal of Philosophy, Vol. 84, No. 10, pp. 544-553.
10. Churchland, P. & Sejnowski, T. (1990) Neural Representation and Neural Computation. Philosophical Perspectives, Vol. 4, pp. 343-382.
11. Colgin, L. L., Moser, E. I., & Moser, M. B. (2008) Understanding memory through hippocampal remapping. Trends Neurosci. 31, 469-477.
12. Crick, F. (1989) The recent excitement about neural networks. Nature, Vol. 337, 12.
13. Cummins, R. E. (2000) "How Does It Work" Versus "What Are the Laws?": Two Conceptions of Psychological Explanation. In F. Keil & Robert A. Wilson (eds.), Explanation and Cognition, 117-145. MIT Press.
14. Damasio, A. (1994) Descartes' Error. Grossett/Putnam, New York.
15. Damasio, A. (1999) The Feeling of What Happens. Heineman, London.
16. Damasio, A. (2000) A Neurobiology for Consciousness. Neural Correlates of Consciousness, Metzinger, T., ed., pp. 111-120. MIT Press, Cambridge, Massachusetts.
17. Dennett, D. C. (1971) Intentional Systems. The Journal of Philosophy, Vol. 68, No. 4, pp. 87-106.
18. Dennett, D. (1991) Consciousness Explained. Little, Brown and Co., Boston.
19. Dennett, D. (2005) Sweet Dreams. MIT Press, Massachusetts.
20. Dehaene, S., Kerszberg, M., & Changeux, J. P. (1998) A neuronal model of a global workspace in effortful cognitive tasks. Proceedings of the National Academy of Sciences USA, 95, 14529-14534.
21. Dehaene, S. et al. (2001) Cerebral mechanisms of word masking and unconscious repetition priming. Nat. Neurosci. 4: 752-758.
22. Dehaene, S. & Naccache, L. (2001) Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition 79, 1-37.
23. Dehaene, S. et al. (2003) A neuronal network model linking subjective reports and objective physiological data during conscious perception. Proc. Natl. Acad. Sci. U. S. A.
24. Dehaene, S. & Sergent, C. (2004) Is consciousness a gradual phenomenon? Evidence for an all-or-none bifurcation during the attentional blink. Psychol. Sci. 15: 720-728.
25. Dehaene, S. et al. (2005) Timing of the brain events underlying access to consciousness during the attentional blink. Nat. Neurosci. 8: 1391-1400.
26. Dehaene, S. & Changeux, J. P. (2005) Ongoing spontaneous activity controls access to consciousness: A neuronal model for inattentional blindness. PLoS Biol. 3: e141.
27. Dehaene, S. et al. (2006) Conscious, preconscious, and subliminal processing: a testable taxonomy. Trends Cogn. Sci. 10, 204-211.
28. Dehaene, S. et al. (2007) Brain Dynamics Underlying the Nonlinear Threshold for Access to Consciousness. PLoS Biology Vol. 5, 10: 2408-23.
29. Eccles, J. C. (1992) Evolution of consciousness. Proceedings of the National Academy of Sciences of the USA, 89, 7320-7324.
30. Eccles, J. C. (1990) A Unitary Hypothesis of Mind-Brain Interaction in the Cerebral Cortex. Proc. R. Soc. London, Ser. B 240: 433-451.
31. Gazzaniga, M. S., LeDoux, J. E., & Wilson, D. H. (1977) Language, praxis, and the right hemisphere: clues to some mechanisms of consciousness. Neurology, 27 (12), 1144-47.
32. Hameroff, S. (2006) The Entwined Mysteries of Anesthesia and Consciousness. Anesthesiology, V 105, No 2.
33. Hameroff, S. (2007) Consciousness, neurobiology and quantum mechanics: The case for a connection. The Emerging Physics of Consciousness, edited by Jack Tuszynski, Springer-Verlag.
34. Hobson, J. A. (2009) REM sleep and dreaming: towards a theory of protoconsciousness. Nature Reviews Neuroscience 10, 803-813.
35. Hopfield, J. J. (1982) Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. U. S. A. 79, 2554-2558.
36. Hopfield, J. J. et al. (1986) Computing with neural circuits: a model. Science 233, 625.
37. James, William (1904) Does Consciousness Exist? Journal of Philosophy, Psychology, and Scientific Methods, 1, 477-491.
38. Koch, C. (2004) The Quest for Consciousness. Roberts and Co., Englewood, Colorado.
39. Koch, C. & Crick, F. (1990) Towards a neurobiological theory of consciousness. The Neurosciences, Vol. 2, pp. 263-275.
40. Koch, C. & Crick, F. (2003) A framework for consciousness. Nature Neuroscience 6, 119-126.
41. Koch, C. & Tsuchiya, N. (2007) Attention and consciousness: two distinct brain processes. Trends in Cognitive Sciences, 11, 16-22.
42. Koch, C. & VanRullen, R. (2003) Is perception discrete or continuous? Trends in Cognitive Sciences, Vol. 7, No. 5, pg. 207.
43. LaBerge, D. (2006) Apical dendrite activity in cognition and consciousness. Consciousness and Cognition, Vol. 15, Issue 2: 235-257.
44. Lamme, V. A. (2006) Towards a true neural stance on consciousness. Trends Cogn. Sci. 10: 494-501.
45. Levine, J. (1983) Materialism and qualia: the explanatory gap. Pacific Philosophical Quarterly 64: 354-361.
46. Lloyd, Dan (2004) Radiant Cool. A Bradford Book, The MIT Press, Cambridge, Mass.
47. Marr, D. (1985) Vision: the philosophy and the approach. Issues in Cognitive Modeling, M. Aitkenhead & M. M. Slack, eds., Lawrence Erlbaum, London, pp. 103-126.
48. McFadden, J. (2002) The Conscious Electromagnetic Information (Cemi) Field Theory: The Hard Problem Made Easy? Journal of Consciousness Studies, 9 (8), pp. 45-60.
49. Nagel, T. (1974) What Is It Like to Be a Bat? The Philosophical Review, Vol. 83, No. 4, pp. 435-450.
50. Rosenthal, D. (2002) How Many Kinds of Consciousness? Consciousness and Cognition 11(4): 653-665.
51. Shadlen, M. N., & Movshon, J. A. (1999) Synchrony unbound: A critical evaluation of the temporal binding hypothesis. Neuron 24: 67-77.
52. Sperling, G. (1960) The information available in brief visual presentation. Psychological Monographs, 74, 1-29.
53. Sperry, R. (1982) Some Effects of Disconnecting the Cerebral Hemispheres. Science, New Series, Vol. 217, No. 4566, pp. 1223-1226.
54. Srinivasan, R., Russell, D. P., Edelman, G. M., & Tononi, G. (1999) Increased synchronization of neuromagnetic responses during conscious perception. Journal of Neuroscience, 19 (13), 5435-5448.
55. Tononi, G. et al. (2005) Breakdown of Cortical Effective Connectivity During Sleep. Science, Vol. 309, pg. 2228.
56. Tse, P. U. et al. (2005) Visibility, visual awareness, and visual masking of simple unattended targets are confined to areas in the occipital cortex beyond human V1/V2. Proc. Natl. Acad. Sci. U. S. A. 102, 17178-17183.
57. Volgushev, M., Chistiakova, M., & Singer, W. (1998) Modification of discharge patterns of neocortical neurons by induced oscillations of the membrane potential. Neuroscience, 83, 15-25.
58. Volgushev, M., Pernberg, J., & Eysel, U. T. (2002) A novel mechanism of response selectivity of neurons in cat visual cortex. Journal of Physiology (London), 540, 307-320.
59. Volgushev, M., Pernberg, J., & Eysel, U. T. (2003) Gamma-frequency fluctuations of the membrane potential and response selectivity in visual cortical neurons. European Journal of Neuroscience, 17, 1768-1776.
60. Webster's New International Dictionary of the English Language, Second Edition, Unabridged (1945). Merriam Company, Springfield, MA.
61. Wegner, D. M. (2004) Précis of the illusion of conscious will. Behav. Brain Sci. 27, 649-659.
62. Zeki, S. & Bartels, A. (1998) The asynchrony of consciousness. Proceedings of the Royal Society of London B 265: 1583-85.
63. Zeki, S. (2003) The disunity of consciousness. Trends Cogn. Sci. 7, 214-218.

