DEFINITION: "It means the awareness people have of the outside world and of their perceptions, images and feelings." OR: To define consciousness, we can only use another word, awareness; for example, consciousness means you are conscious of something. It is opposed to inertness or non-consciousness.
EXPLANATION: Consciousness is extremely complex because of its interdisciplinary nature, since it draws on disciplines such as:
• Psychology
• Biology
• Philosophy
• Physics
• Extrasensory perception
Behaviorists considered consciousness inappropriate for scientific study; in fact, they questioned the validity of introspection, the method used to measure consciousness.
With the emergence of cognitive psychology in the 1980s, consciousness became a popular topic for numerous books. In recent years, cognitive psychologists have been especially interested in four interrelated topics concerning consciousness:
• The ability to bring thoughts into consciousness.
• Our ability to let thoughts escape from consciousness.
• Blindsight, which reveals that people can perform quite accurately on cognitive tasks even when they are not aware of their accurate performance.
• Perspectives on the unconscious.
THEORIES OF CONSCIOUSNESS
1. PERCEPTUAL FIELD BASED THEORIES OF CONSCIOUSNESS
In almost all theories of consciousness there is, in some form or other, what is called the perceptual field, later called the Blackboard, the Global Workspace, and the Cartesian Theatre. In some cases, the theory is independent of the exact nature of the perceptual field; in others, the influence of presentational concepts and terminology is reflected in the theory's view of consciousness.
a. BAARS' COGNITIVE THEORY
Baars develops a cognitive theory of consciousness in which a broad definition of "consciousness" is used: it is treated as a system distinct from the rest of brain functioning, i.e. the 'unconscious', with a strong emphasis on cognition. The brain is composed of interacting subsystems, one of which is conscious and self-controlled. Central to the theory is the "Global Workspace", a common area where messages are relayed and broadcast; it has a strong presentational character. Here is where consciousness is said to reside, although details are not given as to what makes something conscious. From this perspective it is suggested that "conscious processes have a great range of possible contents," and therefore offer an evolutionary selective advantage. This is in contrast to the functioning of unconscious components, which, without the intervention of conscious control, is suggested to lead to certain drawbacks and possible errors in the operation of some tasks. Exploring the constraints of consciousness, Baars considers its serial nature and suggests that we cannot have two different thoughts at the same time and be conscious of both. How much of this is due to our environment and our image of the Self as one? Language imposes a serial order, so words and thoughts have a sequentially defined meaning, i.e. defined on a serial grammar with hard-wired semantics. Finally, Baars makes the point that "consciousness processes are computationally inefficient" when understood on certain accounts. When something is worked out consciously, it forces a conscious representation that is anchored by environmental constraints. Reasoning can become susceptible to errors since it involves manually applying an appropriate procedure. This paraphrases Baars' view of the brain as subsystems, the conscious subsystem being a limited, sequentially oriented device, perhaps a symbolic manipulator, a recent evolutionary adjunct.

ASPECTS OF QUALITATIVE CONSCIOUSNESS: A COMPUTER SCIENCE PERSPECTIVE

b. EDELMAN'S BIOLOGICAL THEORY
Edelman develops a biological theory of consciousness. The theory of primary consciousness can be summed up in four main points:
• Self and non-Self components.
• Value-category memory.
• Real-time operation, in parallel and for each sensory modality.
• Re-entrant connections between conceptual and ongoing perceptual systems.
Edelman suggests the emergence of primary consciousness "results from the interaction in real time between memories of past value-category correlations and present world input as it is categorized by global mappings (but before the components of these mappings are altered by internal states)." Qualia are described as "forms of higher-order categorization, as relations reportable to the self and reportable to others..." However, this leaves much unexplained; for instance, why is the red quale, red? Edelman considers the need for a mental language a prerequisite for secondary consciousness, suggesting there is a need for "a symbolic representation of the self acting on the environment and vice versa." By symbolic, Edelman means something identified with, or labeling, a thing, but without any physical connection with its definition in the implementation. While language may indeed be a requirement for higher-order consciousness of certain varieties, it may not be a requirement for consciousness itself. A system could be devised which handles planning and the like in a symbolic sense, but does not break through to consciousness. To investigate whether mild AI can succeed in some form, the thesis extends these ideas of Edelman by formalizing and explaining in detail the nature of the mechanism behind qualia.
c. DENNETT'S SOCIAL THEORY
Dennett proposes a theory of consciousness from a philosophy of mind perspective, and remarks that some have supposed there must be a point between the afferent and efferent neurons at which consciousness occurs. The correlation of pain and C-fibre firing is often given as such an example (see Levine). This implies what Dennett calls the Cartesian Theatre, a private screening of reality for the mind's eye. Dennett dismisses this view and the existence of qualia. Instead, human consciousness is explained simply as the result of a huge complex of memes, which are ideas, concepts or information floating around in society. However, this does not explain what makes them conscious; for example, why is the red quale, red? "Conscious human minds are more-or-less serial virtual machines implemented - inefficiently - on the parallel hardware that evolution has provided for us," suggests Dennett in support of strong AI. However, not all of consciousness is to do with serial thought; for example, the ever-present qualia, and vision in particular, are parallel processes. Finally, attention is drawn to how the brain does not actually have to fill in some aspects of reality, e.g. the blind spot. "The discontinuity of consciousness is striking because of the apparent continuity of consciousness."
2. COMPUTATIONAL THEORIES OF CONSCIOUSNESS
This category of theories is particularly relevant to determining the fate of mild AI, since they are based on underlying computational mechanisms. If one of these theories succeeds in explaining qualia, then it can be concluded that mild AI is able to accommodate one of the harder aspects of mind.
a. REY'S COMPUTATIONAL / REPRESENTATIONAL THEORY
Rey presents a computational / representational theory of thought and qualitative states (CRTQ), in which qualitative experiences are accommodated within the language of thought hypothesis. Thus (p308), "qualitative experience is just a particular species of propositional attitudes; and propositional attitudes are relations to representations in fairly integrated computational systems." Quoting Davies and Humphreys' summary of Rey: "... we can include sensations within [a language of thought] picture in two steps. First, we suppose that the language of thought contains certain predications (meaning roughly: its looking red, for example) to which a subject stands in the computational relation corresponding to judging only when (or normally only when) the predication is tokenized as a direct result of output from sensory systems. Second, we suppose that tokens of these predications cause characteristic subsequent processing, and we identify sensory experiences (of something looking red, for example) with instances of this processing." Rey gives definitions relating (mental) propositional attitudes to computational operations in which computational and semantical issues are separated; this is denoted by prefixing the computational aspects with the word 'comp' in the discussion. While these definitions tell us something about what is involved, they are not operational definitions, i.e. they contain terms which need unpacking and further operational denotation. For example (p245), having defined 'sensing', an applied example is given: "A red sensory experience would involve comp-judging a restricted predication, 's(R)', as a result of the stimulation of predominantly L-wave sensitive cones, a comp-judgment that, by virtue of the predication's processing, produces further comp-judgments of 'warm' and 'advancing' predications." Clearly, a more detailed account would be needed to implement this. Rey suggests that many people make the mistake of thinking functionalism is largely about dispositional states, whereas they are 'occurrent' states. Thus "A sensory state ... is fully activated. ...they are best viewed not as single states but as processes involving interactions among a variety of cognitive states. A qualitative experience is presumably a process involving the comp-judging of a certain restricted predicate, a comparison of it with certain memories, involving restricted and unrestricted predicates and other associations..." Rey notes that just how much of such a process is required for having the sensation, in a wide and narrow sense, is an interesting issue.
For example, it affects whether people can be said to have similar red qualia, or whether the experience of the quale depends on too much of their personal experience to be generalized.
b. JOHNSON-LAIRD'S THEORY OF MODELS AND PARALLELISM
Johnson-Laird's theory suggests an important representational mechanism for cognition is the formulation of mental models, such as models of ourselves. A precise model is not necessary, nor is conscious knowledge of it. The only restriction is that, structurally, "the actual algorithm for consciousness ... must be a parallel one." Parallelism is used in two senses: the model must contain models of itself that are accessible in parallel, and, secondly, a Turing Machine, being a serial device, could only simulate the parallel model. No sustainable reason is given for this when one considers that a dependence on parallelism implies the model contains events that are causally simultaneously bound; in effect, a form of quantum mechanical action at a distance, an effect that this thesis denies is necessary.
c. Chalmers' Functionalist-Dualist Theory
Chalmers claims that "consciousness arises from functional organization but is not a functional state," a position called "non-reductive functionalism", a combination of functionalism and property dualism. As a representational basis for a fundamental theory, information spaces are suggested, whose structure is determined by the combinatorial and relational structure of their subspaces. Information can be discrete or continuous. An abstract information space is mapped to a physical system according to Bateson's slogan: "information is a difference that makes a difference." Chalmers' theory is essentially a representational cause and effect model, in other words a variant of the state machine model. It focuses on the instantaneous structure of the phenomenal space, the logical structure. The argument against state machine models is that they fall prey to the spatial-temporal limit problem (see chapter three).
Briefly, in the instantaneous limit of structure with respect to time, they imply that the right kind of stationary structure, such as a certain picture, would be conscious. The requirements for a semiotic system developed in chapter four will show that qualia are more closely related to processes than to stationary representations.
d. Aleksander's Iconic Theory
Aleksander has proposed a computational theory that emphasizes neural state machines. Central to the theory is how inner models of the world are built up through iconic learning: a form of learning that produces internal representations preserving the functional and logical structure discriminated during sensory perception. This is the functionalist perspective. Consequently, further elaboration is required in order to fend off qualia being treated as epiphenomenal, and to determine whether mild AI can succeed. Another recent computational theory is that of Ray Jackendoff, which emphasizes modularity.
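The state-machine idea running through this section can be made concrete with a minimal, purely illustrative sketch: a machine whose internal states are "iconic" in the sense that they preserve the structure of the sensory input that produced them. The class name and the update rule here are assumptions for illustration, not Aleksander's own formulation.

```python
# Illustrative sketch (not Aleksander's actual model): a state machine
# whose inner states are structure-preserving ("iconic") copies of percepts.

class IconicStateMachine:
    def __init__(self):
        self.state = None          # current inner state
        self.transitions = {}      # (state, percept) -> next inner state

    def learn_iconic(self, percept):
        """Iconic learning: the internal representation preserves the
        structure of the sensory input (here, literally its tuple form)."""
        icon = tuple(percept)      # structure-preserving copy
        self.transitions[(self.state, icon)] = icon
        return icon

    def step(self, percept):
        icon = tuple(percept)
        # Re-enter a learned state if this (state, percept) pair is known;
        # otherwise learn the percept iconically.
        nxt = self.transitions.get((self.state, icon))
        self.state = nxt if nxt is not None else self.learn_iconic(percept)
        return self.state

m = IconicStateMachine()
s1 = m.step([1, 0, 1])   # first exposure: inner state mirrors the percept
s2 = m.step([1, 0, 1])   # re-entry into the same inner state
assert s1 == s2 == (1, 0, 1)
```

The point of the sketch is only that the inner state is not an arbitrary code but mirrors the discriminated structure of the input, which is what distinguishes iconic learning from generic state assignment.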
3. QUANTUM MECHANICAL THEORIES OF CONSCIOUSNESS
The apparent intractability of consciousness to understanding has spurred some to suggest that this may be because the mechanism relies on effects from quantum mechanics. These theories start by showing how the brain could operate in a quantum mechanical way and how mental states might map to quantum mechanical ones. The brain's possible use of quantum mechanics has been proposed in an anti-mild-AI argument. Penrose suggests that Gödel's theorem implies consciousness relies on a mechanism that is non-computable and not even amenable to simulation on computers; otherwise, it is suggested, an algorithm could produce qualia and a computation could experience mentality. Penrose suggests the only alternative is an as yet unknown non-computable property of quantum mechanics. However, this ignores the significance of levels of description. For example, dynamical systems are not concerned with computing and instead evolve according to equations of motion. It could be that consciousness is the result of a process, which can be simulated, rather than a computation, and so the argument against mild AI would be less forceful.
4. SELF-AWARENESS THEORIES OF CONSCIOUSNESS
These theories place self-awareness as essential to explaining consciousness. They commonly involve, in some fashion, the availability of models of the Self to consciousness. However, it is doubtful that self-awareness is needed at all to explain qualia. Israel Rosenfield supports the idea that self-awareness is the key to consciousness. Self-awareness is distinguished from perception, which is taken as something different but perhaps dependent on self-awareness and memory. Qualia are deemed to be fixed "by the dynamic qualities of [the] body image." However, not all qualia are grounded by a body image. A number of other interesting comments are made. For example, Rosenfield discusses coherent stimulus responses: "Hence it is not the individual coherent response [as suggested by Edelman] that is important but the relation of different coherent responses to each other. ... It is the very process of change that rises to consciousness that is consciousness; awareness is change, not the direct perception of stimuli." This is a subtle point: consciousness is a temporally dependent process; it is integrally a process of change. Rosenfield suggests "The brain creates qualities - the colors, sounds and other sensations we are conscious of - by establishing relations among stimuli." Categorization is then linked strongly with language.
"In the same way, notions of 'big' and 'little' that a child appears to acquire at about the age of three and a half are not inherent characteristics of the stimuli but abstractions that are only possible with, and that necessitate, words." One interesting point is that a child uses words about size at around three and a half years old, and words about color six months later, suggesting color is more abstract.
a. Theories with Consciousness as Higher-Order Thought
These theories suggest a mental state is conscious if there is a second mental state that has it as content. There are still many questions to be answered by these theories, and many terms that go unpacked. The following presents a brief review of the work by Rosenthal, a key exponent in this area. Rosenthal considers what makes a mental state conscious: "To confer consciousness of a particular mental state, the higher-order thought must be about that very mental state... So, the higher-order thought must be a thought that one is, oneself, in that mental state." However, what about when one reads a book? One is conscious of the words, but not conscious that it is oneself who is conscious of the words. Rosenthal responds, "We normally focus on the sensory state and not on our consciousness of it only because that consciousness consists in our having a higher-order thought, and that thought is usually not itself a conscious thought. ... For a mental state to be conscious, the corresponding higher-order thought must be a thought about oneself, that is, a thought about the mental being that is in that conscious state." This statement highlights a problem with the higher-order view: if it is the higher-order state that makes the sensory state conscious, and yet we are focused on, i.e. conscious of, the sensory state, why have a higher-order state at all? That is, what is it about the higher-order state that makes the sensory one conscious? Where Rosenthal goes wrong is in drawing a comparison between levels of reference in language, which he relates (unknowingly) to the (logical) manipulation rules used in our serial stream of conscious thought, and the mechanism of thought. This analogy does not extend to qualia.
Rosenthal suggests that for the higher-order account to work, one must be conscious of being in a particular mental-state token type: "Perhaps a dispositional account will require not that the disposition refers to a mental-state token, but that it is a disposition to have a higher-order thought that refers to it." As it stands this will still not do, since no qualia enter the picture merely by having the 'potential' disposition to have a higher-order thought. Paraphrasing, this becomes a disposition to have a higher-order thought that refers to 'red' (say). This does not lead to 'red' being experienced as red, since the 'red' is only a token at that point; the disposition needs to be continually actioned. The problem with the higher-order view is that the content of the higher-order states is taken as being a conceptual kind of presentation.
A counter-argument to Rosenthal's theory is this: a single higher-order thought could accompany a group of lower-order thoughts; there is nothing in the theory to prevent this from happening. Finally, Rosenthal goes on to say, "For an organism to be conscious means only that it is awake, and mentally responsive to sensory stimuli." This seems to confuse 'conscious' with 'conscious of', as in 'aware of'. For example, if one stares through a window at a landscape and empties one's mind of thoughts, one just has the 'conscious' sensory experiences; that is primary consciousness.
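The higher-order criterion, and the counter-argument above, can be made concrete with a toy model. All names here are illustrative assumptions; Rosenthal offers no such formalization. A state counts as "conscious" exactly when some other state has it as content, and nothing in the criterion prevents one higher-order thought from covering a whole group of lower-order states, while the higher-order thought itself remains unconscious unless a yet higher thought targets it.

```python
# Toy model of the higher-order thought (HOT) criterion: a mental state is
# "conscious" iff some other state has it as its content. Illustrative only.

class MentalState:
    def __init__(self, label, content=None):
        self.label = label
        self.content = content if content is not None else []  # states this state is "about"

def is_conscious(state, all_states):
    """HOT criterion: conscious iff some higher-order state is about it."""
    return any(state in other.content for other in all_states)

red = MentalState("sensing red")
loud = MentalState("hearing a loud noise")
# The counter-argument: ONE higher-order thought may take a whole group
# of lower-order states as its content at once.
hot = MentalState("I am sensing red and hearing a noise", content=[red, loud])

states = [red, loud, hot]
assert is_conscious(red, states) and is_conscious(loud, states)
assert not is_conscious(hot, states)   # the HOT itself remains unconscious
```

The last assertion mirrors Rosenthal's own remark quoted earlier: the higher-order thought "is usually not itself a conscious thought."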