
Adaptation

In Piaget's Theory of Development, there are two cognitive processes that are crucial for progressing from stage to
stage: assimilation and accommodation. These two concepts are described below.

Assimilation
This refers to the way in which a child transforms new information so that it makes sense within their existing
knowledge base. That is, a child tries to understand new knowledge in terms of their existing knowledge. For
example, a baby who is given a new object may grasp or suck on that object in the same way that he or she
grasped or sucked other objects.

Accommodation
This happens when a child changes his or her cognitive structure in an attempt to understand new information. For
example, the child learns to grasp a new object in a different way, or learns that the new object should not be sucked.
In that way, the child has adapted his or her way of thinking to a new experience.
Taken together, assimilation and accommodation make up adaptation, which refers to the child's ability to adapt to his
or her environment.
References:
1. Siegler, R. (1991). Children's thinking. Englewood Cliffs, NJ: Prentice-Hall.
2. Vasta, R., Haith, M. M., & Miller, S. A. (1995). Child psychology: The modern science. New York, NY:
Wiley.

Alzheimer's Disease

Alzheimer's Disease (AD), first described by Alois Alzheimer in 1907, is a relentlessly progressive disease
characterized by cognitive decline, behavioural disturbances, and changes in personality. Current estimates of the
prevalence of AD in Canada suggest that 5.1% of all Canadians 65 and over meet the criteria for the clinical
diagnosis of AD, which translates into approximately 161,000 cases. AD prevalence is slightly higher in women
than in men. This difference may be due to the longer life expectancy of women, although other factors have
not been ruled out. The prevalence of dementia is strongly associated with age, affecting 1% of the Canadian
population aged 65 to 74, 6.9% of individuals 75-84 and 26% of individuals 85 years and older (Canadian Study of
Health and Aging, 1994).
The diagnostic criteria for dementia of the Alzheimer's Type (DAT) are as follows:
• (A) The development of multiple cognitive deficits manifested by both:
1. Memory impairment (impaired ability to learn new information or to recall previously learned
information)
2. One or more of the following cognitive disturbances:
• aphasia (language disturbance)
• apraxia (impaired ability to carry out motor activities despite intact motor function)
• agnosia (failure to recognize or identify objects despite intact sensory function)
• disturbances in executive functioning (i.e., planning, organizing, sequencing, abstracting)
• (B) The cognitive deficits in Criteria A1 and A2 each cause significant impairment in social and
occupational functioning and represent a significant decline from a previous level of functioning.
• (C) The course is characterized by gradual onset and continuing cognitive decline.
• (D) The cognitive deficits in Criteria A1 and A2 are not due to any of the following:
1. other central nervous system conditions that cause progressive deficits in memory and cognition
(e.g., cerebrovascular disease, Parkinson's Disease, Huntington's Disease, subdural hematoma,
normal pressure hydrocephalus, brain tumor).
2. systemic conditions that are known to cause a dementia (e.g., hypothyroidism, vitamin B12 or
folic acid deficiency, hypercalcemia, neurosyphilis, HIV infection)
3. substance-induced conditions
• (E) The deficits do not occur exclusively during the course of a delirium.
• (F) The disturbance is not better accounted for by another Axis I disorder (e.g., Major Depressive Disorder,
Schizophrenia)

The diagnosis of AD is based on exclusionary criteria (i.e., the absence of an identifiable cause) with diagnosis
confirmed at autopsy. Treatment strategies to date have been largely ineffective, with experimental treatments
mainly directed toward overcoming the cholinergic deficit.
References:
1. American Psychiatric Association (1994). Diagnostic and statistical manual of mental disorders (4th ed.).
Washington, DC: Author.
2. Canadian study of health and aging: Study methods and prevalence of dementia. (1994). Canadian Medical
Association Journal, 150(6).
3. Whitehouse, P.J. (1993). Dementia. Philadelphia: F.A. Davis.

Analogy

In cognitive psychology, analogy is considered an important method of problem solving. The problem solver
attempts to use his or her knowledge of one problem to solve another problem about which she or he has very little
or no information. Barsalou (1992) provides the following example of problem solving by analogy:
"...someone who has worked at the complex for a while could simply explain to
you that the layout is analogous to a starfish. On hearing this analogy you
might transfer knowledge about starfish to the office complex. Thus the
knowledge that a starfish has a circular body, with five legs extending from
it radially and symmetrically would lead to the belief that the office complex
contains a center circular body, with five tapered buildings extending from
it in a radially symmetric pattern." (p.110)
Obviously, people do not use all of their knowledge about one problem to solve another problem. In the context of
his starfish example, Barsalou points out that we would not begin to think that the office complex is alive, or that it
lives underwater.
One problem facing cognitive psychologists is to determine how people decide upon the extent to which an analogy
applies. Determining how this may be done is more difficult than it may seem. Consider that, given enough time,
people can find analogies between any two phenomena. We might want to say that, like the starfish, the office
complex is alive--its heating ducts are like blood vessels, its doors are like mouths eating the people who enter the
office complex every day. As a cognitive process, analogy seems limitless. In a science that strives for regularity and
lawfulness, the limitlessness of analogical thinking poses a serious problem.
References:
1. Barsalou, L. (1992). Cognitive psychology: An overview for cognitive scientists. Hillsdale, NJ: Lawrence Erlbaum Associates.

Apparent Motion

This is a perceptual phenomenon that occurs when we perceive motion in two or more static images that are
presented in succession with appropriate spatial and temporal displacements. The ability to perceive this
phenomenon is mediated by the visuospatial pathway of the visual association regions of the brain.
We see examples of this phenomenon almost every day when we view television or movies.
This is an example of a cognitively impenetrable perception. That is, even though we know that the images are not
moving, we still perceive motion.
References:
1. Marr, D. (1982). Vision. San Francisco: Freeman, pp. 159-182.
2. Zeki, S. (1992). The visual image in mind and brain. Scientific American, 241(3), 150-162.

Articulatory Loop

The articulatory loop (AL) is one of two passive slave systems within Baddeley's (1986) tripartite model of working
memory. The AL, responsible for storing speech-based information, comprises two components. The first
component is a phonological memory store which can hold traces of acoustic or speech-based material. Material in
this short-term store lasts about two seconds unless it is maintained through the use of the second component,
articulatory subvocal rehearsal. Prevention of articulatory rehearsal results in very rapid forgetting. Try this
experiment with a friend. Present your friend with three consonants (e.g., C-X-Q) and ask them to recall the
consonants after a 10 second delay. During the 10 second interval, prevent your friend from rehearsing the
consonants by having them count 'backwards by threes' starting at 100. You will find that your friend's recall is
significantly impaired! See Murdock (1961) and Baddeley (1986) for a complete review.
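To make the decay concrete, here is a minimal sketch in Python of how the loop's behaviour might be modelled. The two-second store duration comes from the description above; the exponential decay law, the recall threshold, and the rehearsal interval are illustrative assumptions, not parameters of Baddeley's model.

    # Minimal sketch of phonological trace decay in the articulatory loop.
    # The ~2 s store duration is from the entry above; the decay law,
    # threshold, and rehearsal interval are illustrative assumptions.
    import math

    STORE_DURATION_S = 2.0   # approximate lifetime of an unrehearsed trace
    RECALL_THRESHOLD = 0.37  # assumed strength below which recall fails

    def trace_strength(seconds_since_refresh):
        """Strength of a phonological trace, decaying exponentially."""
        return math.exp(-seconds_since_refresh / STORE_DURATION_S)

    def can_recall(delay_s, rehearsal_interval_s=None):
        """Is an item still recallable after delay_s seconds?

        With subvocal rehearsal, the trace is refreshed every
        rehearsal_interval_s seconds; with rehearsal blocked (None),
        the trace decays for the entire delay.
        """
        if rehearsal_interval_s is None:
            age = delay_s                         # counting backwards blocks rehearsal
        else:
            age = delay_s % rehearsal_interval_s  # time since the last refresh
        return trace_strength(age) >= RECALL_THRESHOLD

    print(can_recall(10.0, rehearsal_interval_s=1.5))  # True: rehearsed items survive
    print(can_recall(10.0))                            # False: the unrehearsed trace is gone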
References:
1. Baddeley, A. (1986). Working memory. Oxford: Clarendon Press.
2. Murdock, B.B., Jr. (1961). The retention of individual items. Journal of Experimental Psychology, 62, 618-625.

See Also:
Working Memory | Visuospatial Sketchpad | Central Executive

Artificial Intelligence
Artificial intelligence is concerned with the attempt to develop complex computer programs that will be capable of
performing difficult cognitive tasks. Some of those who work in artificial intelligence are relatively unconcerned as
to whether the programs they devise mimic human cognitive functioning, while others have the explicit goal of
simulating human cognition on the computer.
The artificial intelligence approach has been applied to several different areas within cognitive psychology,
including perception, memory, imagery, thinking, and problem solving.
There are a number of advantages to the artificial intelligence approach to cognition. Computer programming
requires that every process be specified in detail, unlike cognitive psychology, which often relies on vague
descriptions. AI also tends to be highly theoretical, which leads to general theoretical orientations having wide
applicability. The main disadvantage of AI is that there is considerable controversy about the ultimate similarity
between human cognitive functioning and computer functioning.
Some of the major differences between brains and computers were spelled out in the following terms by Churchland
(1989, p.100):
"The brain seems to be a computer with a radically different style.
For example, the brain changes as it learns, it appears to store and process
information in the same places...Most obviously, the brain is a parallel
machine, in which many interactions occur at the same time in many different
channels."
This contrasts with most computer functioning, which involves serial processing and relatively few interactions.
References:
1. Churchland, P.S. (1989). From Descartes to neural networks. Scientific American, July, 100.
2. Eysenck, M.W. (Ed.). (1990). The Blackwell dictionary of cognitive psychology. Cambridge, MA: Basil Blackwell.

See Also:
Cognitive Science | Cognitive Psychology

Associative Memory
At its simplest, an associative memory is a system which stores mappings of specific input representations to
specific output representations. That is to say, a system that "associates" two patterns such that when one is
encountered subsequently, the other can be reliably recalled. Kohonen draws an analogy between associative
memory and an adaptive filter function [2]. The filter can be viewed as taking an ordered set of input signals, and
transforming them into another set of signals---the output of the filter. It is the notion of adaptation, allowing its
internal structure to be altered by the transmitted signals, which introduces the concept of memory to the system.
A further refinement in terminology is possible with regard to the associative memory concept, and is ubiquitous in
connectionist (neural network) literature in particular. A memory that reproduces its input pattern as output is
referred to as autoassociative (i.e., associating patterns with themselves). One that produces output patterns
dissimilar to its inputs is termed heteroassociative (i.e., associating patterns with other patterns).
Most associative memory implementations are realized as connectionist networks. Hopfield's collective computation
network [1] serves as an excellent example of an autoassociative memory, whereas Rosenblatt's perceptron [3] is
often utilized as a heteroassociator. There are many practical problems in implementing effective associative
memories, however; most notably their inefficiency: the tendency is for them to fill up and become unreliable rather
quickly. This is a long-running open problem for both connectionism and adaptive filter theory---one that Kohonen
refers to as the "problem of infinite state memory" [2].
References:
1. J.J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences. 79:2554-2558, 1982.
2. T. Kohonen. Self-Organization and Associative Memory. Springer Series in Information Sciences, Vol. 8. Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1984.
3. F. Rosenblatt. Principles of Neurodynamics. Spartan, New York, 1962.

See Also
Connectionism | Content Addressable Memory

Attention
"Attention" is a term commonly used in education, psychiatry and psychology. The definition is often vague.
Attention can be defined as an internal cognitive process by which one actively selects environmental information
(i.e., sensation) or actively processes information from internal sources (i.e., visceral cues or other thought processes).
In more general terms, attention can be defined as an ability to focus and maintain interest in a given task or idea,
including managing distractions.
William James, a 19th century psychologist, explains attention as follows:
"Everyone knows what attention is. It is the taking possession by the
mind in clear and vivid form, of one out of what seem several simultaneously
possible objects or trains of thought...It implies withdrawal from some things
in order to deal effectively with others, and is a condition which has a real
opposite in the confused, dazed, scatterbrained state." (1890, p. 403)
Attention is important to psychologists because it is often considered a core cognitive process, a basis on which to
study other cognitive processes, most importantly learning. DeGangi and Porges (1990) state that only "when a
person is actively engaged in voluntary attention, functional purposeful activity and learning can occur" (p. 6). Poor
attention is often a key symptom of behaviour disorders such as hyperactivity and learning disorders.
References:
1. DeGangi, G., & Porges, S. (1990). Neuroscience foundations of human performance. Rockville, MD: American Occupational Therapy Association.
2. James, W. (1890). Principles of psychology. New York: Holt.

See Also:
Attention Getting | Attention Holding | Sustained Attention

Attention Getting
Attention getting is more than just the orienting reflex; it is the "initial orientation or alerting to a stimulus." Though
this may be considered an automatic act, it in fact requires complex active thought processing. Attention getting is
reliant on the qualitative nature of the stimulus. The stimulus must be strong enough to elicit a response.
DeGangi and Porges (1990) explain that the types of stimuli that are attention getting vary according to the past
experiences of the individual, what they already know, their individual reactivity to sensory stimuli, and what they
have determined to be important to them. A hungry person may be more apt to pay attention to the smell of food
than to the sounds surrounding them in a traffic jam!
Attention getting is important to psychologists, particularly developmental psychologists, because of its role in
learning. A child's chosen attention getting stimuli can guide his or her learning abilities. "A child who learns better
through the auditory channel will orient more readily to a song about body parts than a picture of a body."
References:
1. DeGangi, G., & Porges, S. (1990). Neuroscience foundations of human performance. Rockville, MD: American Occupational Therapy Association.

See Also:
Attention Holding | Attention Releasing | Sustained Attention
Attention Holding
Attention holding is the "maintenance of attention when a stimulus is intricate or novel." Stimuli that hold our
attention must be both novel and complex in order to encourage information processing. Attention holding is
measured by how long one engages in a cognitive activity involving that stimulus.
Attention holding is important because of its role in learning. If an activity or stimulus is moderately complex, the
person will expend energy in information processing. In other words, the person will expend energy in learning.
Unfortunately, this can be complicated by poor motivation. Low motivation presents a challenge because the
psychologist (or other professional) must determine whether the decreased motivation is due to sensory processing
problems, cognitive impairment, or other learning-related problems (of which poor attention holding may be one).
References:
1. DeGangi, G., & Porges, S. (1990). Neuroscience foundations of human performance. Rockville, MD: American Occupational Therapy Association.

See Also:
Attention Getting | Attention Releasing | Sustained Attention

Attention Releasing
Attention releasing is the final stage in DeGangi and Porges' (1990) process of sustained attention. Attention
releasing can simply be defined as the "releasing or turning off of attention from a stimulus." Attention releasing can
occur for a variety of reasons. A person may fatigue physically or mentally, requiring the release of attention.
Arousal level may decrease, so that a different type or strength of stimulus is required to maintain an alert and active
state.
Attention releasing provides a person with a method to reach closure on a given activity, task, or event thereby
allowing that person to switch attention to something new. As with attention getting and holding, attention releasing
(the ability to shift focus) plays an important role in the learning process.
References:
1. DeGangi, G., & Porges, S. (1990). Neuroscience foundations of human performance. Rockville, MD: American Occupational Therapy Association.

See Also:
Attention Holding | Attention Getting | Sustained Attention

Behavioural Indeterminacy
The claim that, in principle, psychology is restricted to establishing weak equivalence. Weak equivalence is
equivalence with respect to input/output behaviour. Behavioural data alone therefore cannot establish
equivalence at the level of functional architecture: behavioural studies are indeterminate with respect to strong
equivalence.
This issue is of importance to cognitive psychology because, if true, it implies that cognitive psychology cannot
generate insight into cognition without importing knowledge based on non-behavioural observations from other
disciplines.
References:
1. Pylyshyn, Z.W. (1989). Computing in cognitive science. In M.I. Posner (Ed.), Foundations of cognitive science. Cambridge, MA: MIT Press.

See Also:
Functional Architecture | Strong Equivalence | Weak Equivalence

Biological Naturalism
Promoted by John Searle, Biological Naturalism states that consciousness is a higher-level function of the brain's
physical capabilities. The neurophysiological processes in the brain cause mental phenomena, which are also a
feature of the brain. However, features such as consciousness are not reducible to neurophysiological systems. Not
all brains produce this higher-level functioning, and many questions in Biological Naturalism remain open, as Searle
himself points out: How does neurophysiology account for the range of mental phenomena? How does consciousness
come about? How advanced does a neurophysiological system have to be to produce consciousness?
References:
1. Searle, J. (1994). The rediscovery of the mind. Cambridge, MA: MIT Press.

Bottom-Up Processing
The cognitive system is organized hierarchically. The most basic perceptual systems are located at the bottom of the
hierarchy, and the most complex cognitive systems (e.g., memory, problem solving) are located at the top of the
hierarchy.
Information can flow both from the bottom of the system to the top and from the top of the system to the bottom.
When information flows from the bottom of the system to the top, this is called "bottom-up" processing. Lower-level
systems categorize and describe incoming perceptual information and pass this descriptive information on to higher
levels for more complex processing.

See Also:
Top-Down Processing

Broca's Area
Named for Paul Broca, who first described it in 1861, Broca's area is the section of the brain involved in
speech production, specifically assessing the syntax of words while listening and comprehending structural complexity.
People with neurophysiological damage to this area (a condition called Broca's aphasia, or nonfluent aphasia) are
unable to understand and produce grammatically complex sentences. Speech will consist almost entirely of content
words.
Auditory and speech information is transported from the auditory area to Wernicke's area for evaluation of
significance of content words, then to Broca's area for analysis of syntax. In speech production, content words are
selected by neural systems in Wernicke's area, grammatical refinements are added by neural systems in Broca's area,
and then the information is sent to the motor cortex, which sets up the muscle movements for speaking.
References:
1. Gray, P. (1994). Psychology. New York, NY: Worth Publishing.

See Also:
Wernicke's Area

Cascade Processing
Under the assumption that a complex task can be broken down into distinct stages of information processing, and
that these stages can be sequentially ordered, the complex task can be performed by completing each distinct stage.
Unlike discrete processing, in cascade models the later stages of information processing can begin operating
before the completion of earlier information processing stages. Connectionist models of information processing
operate in a cascade manner, which is important for the way in which these models can learn relationships between
stimuli and responses.
Depending on the complexity of the information being processed, it may be transmitted between some processing
stages in a cascade manner, but in other stages it may be processed in a discrete manner.
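The timing contrast can be sketched as follows. The stage durations, and the assumption that a cascade stage may begin once its predecessor is 25% complete, are invented purely for illustration.

    # Discrete vs. cascade stage timing. The durations and the 25% handoff
    # rule are illustrative assumptions, not values from any model.
    stage_durations = [100.0, 150.0, 120.0]   # ms per processing stage

    def discrete_total(durations):
        """Each stage waits for the previous one to finish completely."""
        return sum(durations)

    def cascade_total(durations, handoff_fraction=0.25):
        """Each stage starts once the previous stage is partially done."""
        start = 0.0
        finish = 0.0
        for d in durations:
            finish = start + d
            start += d * handoff_fraction   # the next stage starts early
        return finish

    print(discrete_total(stage_durations))  # 370.0 ms: the sum of the stages
    print(cascade_total(stage_durations))   # 182.5 ms: stages overlap in time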
References:
1. Eysenck, M.W. (Ed.). (1990). The Blackwell dictionary of cognitive psychology. Cambridge, MA: Basil Blackwell.

See Also:
Discrete Processing

Central Executive
The central executive, the most important yet least well understood component of Baddeley's (1986) working
memory model, is postulated to be responsible for the selection, initiation, and termination of processing routines
(e.g., encoding, storing, retrieving). Baddeley (1986, 1990) equates the central executive with the supervisory
attentional system (SAS) described by Norman and Shallice (1980) and by Shallice (1982).
According to Shallice (1982), the supervisory attentional system is a limited capacity system and is used for a
variety of purposes, including:
• tasks involving planning or decision making
• trouble shooting in situations in which the automatic processes appear to be running into difficulty
• novel situations
• dangerous or technically difficult situations
• situations where strong habitual responses or temptations are involved
Extensive damage to the frontal lobes may result in impairments in central executive functioning. Baddeley (1986)
coined the term dysexecutive syndrome (DES) to describe dysfunctions of the central executive. The classic frontal
syndrome is characterized by
disturbed attention, increased distractibility, a difficulty in grasping the
whole of a complicated state of affairs ... well able to work along old
routines
... (but) ... cannot learn to master new types of task, in new situations ...
[the patient is] at a loss. (Rylander, 1939, p.20)
In other words, patients suffering from frontal lobe syndrome lack flexibility and the ability to control their
processing resources, functions attributed to the central executive.
References:
1. Baddeley, A.D. (1990). Human memory: Theory and practice. Oxford: Oxford University Press.
2. Baddeley, A.D. (1986). Working memory. Oxford: Clarendon Press.
3. Norman, D.A., & Shallice, T. (1980). Attention to action: Willed and automatic control of behavior. University of California San Diego CHIP Report 99.
4. Shallice, T. (1982). Specific impairments of planning. Philosophical Transactions of the Royal Society London B, 298, 199-209.
5. Rylander, G. (1939). Personality changes after operations on the frontal lobes. Acta Psychiatrica Neurologica, Supplement No. 30.

See Also:
Articulatory Loop | Visuospatial Sketchpad | Working Memory

Cognitive Development (In Children)


Generally, the term refers to the changes that occur in a person's cognitive structures, abilities, and processes. The
most widely known theory of childhood cognitive development was proposed by Jean Piaget in 1969. He proposed
the idea that cognitive development consisted of the development of logical competence, and that the development
of this competence consists of four major stages:
1. sensori-motor
2. preoperational
3. concrete operational
4. formal operational
He also argued that a child's cognitive performance depended more on the stage of development the child was in
than on the specific task being performed.
More recent studies have cast some doubt on Piaget's theory of homogeneous performance within a given stage.
Instead, it is now believed that performance varies greatly within each stage and depends more on the acquisition
and development of language, perception, decision rules, and real-world knowledge for any individual child.

Cognitive Mapping

Cognitive mapping is a general term that applies to a series of methods for measuring mental representations. These
techniques attempt to describe mental images that subjects use to encode knowledge and information. Most
researchers treat cognitive maps as a tool that can usefully summarise and communicate information rather than as a
literal description of mental images.
References:
1. Huff, A.S. (1990). Mapping strategic thought. Chichester: John Wiley & Sons.
Cognitive Penetrability

An approach to testing strong equivalence. The cognitive penetrability approach seeks to establish whether
phenomena are equivalent at the level of functional architecture by investigating whether the phenomena are
independent of beliefs and goals, that is, whether they are primitive. If manipulating beliefs and goals systematically
alters the empirical phenomenon, then the phenomenon does not describe the functional architecture and is
cognitively penetrable.
The cognitive penetrability approach was used in the imagery debate in cognitive science in the 1980s.
References:
1. Pylyshyn, Z.W. (1989). Computing in cognitive science. In M.I. Posner (Ed.), Foundations of cognitive science. Cambridge, MA: MIT Press.

See Also:
Strong Equivalence | Weak Equivalence

Cognitive Psychology
Cognitive psychology is concerned with information processing, and includes a variety of processes such as
attention, perception, learning, and memory. It is also concerned with the structures and representations involved in
cognition. The greatest difference between the approach adopted by cognitive psychologists and by the Behaviorists
is that cognitive psychologists are interested in identifying in detail what happens between stimulus and response.
Some of the ingredients of the information processing approach to cognition were spelled out by Lachman,
Lachman, and Butterfield (1979). In essence, it is assumed that the mind can be regarded as a general purpose,
symbol processing system, and that these symbols are transformed into other symbols as a result of being acted on
by different processes. The mind has structural and resource limitations, and so should be thought of as a limited
capacity processor.
A key issue in the field is the extent to which human and computer information processing systems resemble one
another. The consensual view is probably that there are indeed striking similarities between human minds and
computers, but there are probably also substantial differences. In recent years, explicitly cognitive approaches have been adopted in
social and developmental psychology, as well as in occupational and clinical psychology.
References:
1. Eysenck, M.W. (Ed.). (1990). The Blackwell dictionary of cognitive psychology. Cambridge, MA: Basil Blackwell.
2. Lachman, R., Lachman, J.L., & Butterfield, E.C. (1979). Cognitive psychology and information processing. Hillsdale, NJ: Lawrence Erlbaum Associates.

Cognitive Science

Several students have supplied definitions for this term:



Definition 1
"the study of intelligence and intelligent systems, with particular reference to intelligent behaviour as computation"
(Simon & Kaplan, 1989)
Simon, H. A. & C. A. Kaplan, "Foundations of cognitive science", in Posner, M.I. (ed.) 1989, Foundations of
Cognitive Science, MIT Press, Cambridge MA.

Contributed by J. Andrews, November 23, 1995

Definition 2
Cognitive science refers to the interdisciplinary study of the acquisition and use of knowledge. It includes as
contributing disciplines: artificial intelligence, psychology, linguistics, philosophy, anthropology, neuroscience, and
education. The cognitive science movement is far reaching and diverse, containing within it several viewpoints.
Cognitive science grew out of three developments: the invention of computers and the attempts to design programs
that could do the kinds of tasks that humans do; the development of information processing psychology where the
goal was to specify the internal processing involved in perception, language, memory, and thought; and the
development of the theory of generative grammar and related offshoots in linguistics. Cognitive science was a
synthesis concerned with the kinds of knowledge that underlie human cognition, the details of human cognitive
processing, and the computational modeling of those processes.
There are five major topic areas in cognitive science: knowledge representation, language, learning, thinking, and
perception.
Eysenck, M.W. (Ed.). (1990). The Blackwell dictionary of cognitive psychology. Cambridge, MA: Basil Blackwell.

See Also:
Cognitive Psychology | Artificial Intelligence

Contributed by: L.A. Keple, November 5, 1995

Definition 3
Generally stated, this is the study of intelligence and intelligent systems.
It is a relatively new science that combines knowledge gained from a number of disciplines, including
computer science, neuroscience, cognitive psychology, philosophy, and linguistics.
As a result of the collaborative effort between these disciplines, there have been, and will continue to be, huge
advancements in our understanding of human cognition.

See Also:
Neuroscience
Contributed by M. Kincade

Connectionism
Connectionism is an alternative computational paradigm to that provided by the von Neumann architecture. Originally
taking its inspiration from the biological neuron and neurological organization, it emphasizes collections of simple
processing elements in place of the monolithic processors seen more commonly within computing. These simple
processing elements are typically capable of only rudimentary calculations (such as summation); however, they possess a
high degree of weighted inter-connectivity with one another and generally operate in parallel [2].
A particular organization of inter-connected processing elements (a network), is paired with a mathematical basis by
which the connection weights are adjusted (or simply calculated directly). This allows a network to either learn a
task by iterating on training examples (induction learning), or to provide a system in which solutions to particular
problems can be computed. Arguably the most widely used example of the former is the multi-layer perceptron
trained via error back-propagation (see [5], for example); whereas the latter is typified by networks such as the
Hopfield and Tank model for combinatorial optimization [3].
To the casual reader, "connectionism", "parallel distributed processing" (PDP) and "neural networks" may be
entirely synonymous. The term "neural network" is somewhat misleading to begin with as, aside from the original
inspiration coming from biology, there is nothing particularly "neural" about such networks, and any perceived biological
relevance is often debatable. There is also merit in making a philosophical distinction between PDP and
connectionism. For example, over time, PDP researchers have been disposed to seek biological relevance for their models,
have tended to emphasize learning-oriented tasks, and have followed a largely empirical approach. The field of neural
networks has become richer than is encompassed by the traditional view of PDP.
Connectionism distinguishes itself by also viewing the network model as a computational architecture. This
encompasses a wider range of network structures for which biological relevance is not an issue or for which a
learning process per se is not utilized. Into such areas falls a wealth of recent work that has
sought to establish the formal relationship between the computational power of connectionist networks and abstract
machines (for example [1],[4]); it even harkens back to the aforementioned Hopfield and Tank model, which
computes solutions to problems by minimizing energy within a pre-wired system of weights [3].
In this respect, connectionism subsumes PDP. That is to say that PDP researchers are connectionists; however, not all
connectionists consider themselves to be PDP researchers. Although debatable, this is a point that this author,
among others, feels is an important one.
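As a concrete illustration of the simple processing elements described above, here is a minimal Python sketch of a single unit computing a weighted sum followed by a threshold, trained with a Rosenblatt-style perceptron rule. The task (logical AND), learning rate, and epoch count are arbitrary illustrative choices.

    # One connectionist processing element: weighted summation plus a hard
    # threshold, trained with the classic perceptron learning rule. The AND
    # task, learning rate, and epoch count are illustrative choices.
    inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
    targets = [0, 0, 0, 1]                      # logical AND

    weights = [0.0, 0.0]
    bias = 0.0
    rate = 0.1

    def activate(x):
        """Weighted sum of the inputs plus bias, passed through a threshold."""
        s = sum(w * xi for w, xi in zip(weights, x)) + bias
        return 1 if s > 0 else 0

    for _ in range(20):                         # iterate over training examples
        for x, t in zip(inputs, targets):
            error = t - activate(x)             # 0 when the unit is correct
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
            bias += rate * error

    print([activate(x) for x in inputs])        # [0, 0, 0, 1]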
References:
1. C.L. Giles, B.G. Horne, T. Lin. Learning a class of large finite state machines with a recurrent neural network. Neural Networks. 8(9):1359-1365, 1995.
2. J. Hertz, A. Krogh and R.G. Palmer. Introduction to the theory of neural computation. Addison-Wesley, Redwood City, 1991.
3. J.J. Hopfield and D.W. Tank. `Neural' computation of decisions in optimization problems. Biological Cybernetics. 52:141-152, 1985.
4. S.C. Kremer. On the computational power of Elman-style recurrent networks. IEEE Transactions on Neural Networks. 6(4):1000-1004, 1995.
5. D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Learning internal representations by error propagation. In D.E. Rumelhart and J.L. McClelland, editors, Parallel Distributed Processing, volume 1. MIT Press, Cambridge, 1986.

See Also
Associative Memory | Content Addressable Memory | Induction Learning | Learning Rule | Machine Learning | Parallel Distributed Processing Models

Consciousness

Consciousness refers to awareness of our own mental processes (or of the products of such processes). This
awareness can be made manifest by introspective reports, in which an individual provides information about his or
her mental experience.
There has been a considerable amount of controversy over the centuries concerning the value to psychology of
assessing the contents of consciousness by means of introspective evidence. Aristotle claimed that the only way to
study thinking was by introspection. Others, such as Galton (1883), argued that consciousness
"appears to be a helpless spectator of but a minute fraction of automatic brain work." Behaviorists tend to agree with
Galton that psychologists should not concern themselves with consciousness and introspection.
There are certain cognitivists who would disagree with these definitions. Marvin Minsky (1985) maintains that
human consciousness can never represent what is occurring at the present moment, but only a little of the recent
past. This is partly because agencies have a limited capacity to represent what happened recently and partly
because it takes time for agencies to communicate with one another. Consciousness is difficult to describe because
each time we attempt to examine temporary memories, we distort the very record we are trying to interpret.
References:
1. Eysenck, M.W. (Ed.). (1990). The Blackwell dictionary of cognitive psychology. Cambridge, MA: Basil Blackwell.
2. Galton, F. (1883). Inquiries into human faculty and its development. London: Macmillan.
3. Minsky, M. (1985). The society of mind. New York, NY: Simon & Schuster.

See Also:
Mandelbrot Set

Content Addressable Memory


In a symbolic system, information is stored in an external mechanism; in a computer, for example, it is stored in
files on disk. Because the information has been encoded in some form of file system, one must know the file
system's index in order to retrieve it. In other words, data can be accessed only by certain
attributes. In a connectionist system, the data is stored in the activation pattern of the units. Hence, if a processing
unit receives excitatory input from one of its connections, each of its other connections will either be excited or
inhibited. If these connections represent the attributes of the data, then the data may be recalled by any one of its
attributes, not just those that are part of an indexing system. Because these connections represent the content of the data,
this type of memory is called content addressable memory. This type of memory has the advantage of allowing
greater flexibility of recall and is more robust: this distributed memory is able to work its way around errors by
reconstructing information that may have been lesioned from the system.
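The contrast with indexed retrieval can be sketched as follows: any fragment of a stored item's content may serve as the retrieval cue. The records and the matching rule here are invented for illustration; a connectionist system would realize this through activation dynamics rather than explicit search.

    # Content-addressable retrieval sketch: retrieval is keyed on content,
    # so any subset of attributes can act as the cue. The records and the
    # best-match rule are invented for illustration.
    records = [
        {"name": "canary", "colour": "yellow", "sound": "sings"},
        {"name": "penguin", "colour": "black", "sound": "squawks"},
        {"name": "cat", "colour": "black", "sound": "meows"},
    ]

    def recall_by_content(cue):
        """Return the stored record sharing the most attributes with the cue."""
        return max(records, key=lambda r: sum(r.get(k) == v for k, v in cue.items()))

    # Any fragment of the content retrieves the whole record:
    print(recall_by_content({"sound": "meows"})["name"])    # cat
    print(recall_by_content({"colour": "yellow"})["name"])  # canary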
References:
1. Bechtel, W., & Abrahamsen, A. (1991). Connectionism and the mind: An introduction to parallel processing in networks. Cambridge, MA: Blackwell.

See Also:
Functional Architecture | Graceful Degradation | Parallel Distributed Processing Models | Spontaneous
Generalisation | Symbolic Architecture

Crystallized Intelligence
Crystallized intelligence can be defined as "the extent to which a person has absorbed the content of
culture" (Belsky, 1990, p. 125). It is the store of knowledge or information that a given society has accumulated over
time.
Crystallized intelligence is measured by most of the verbal subtests of the Wechsler Adult Intelligence Scale
(WAIS).
Crystallized intelligence is important to psychologists as it relates to the study of aging. There is ongoing intense
debate among psychologists as to whether or not intelligence declines with aging. Horn (1970) hypothesized that
because crystallized intelligence is based on learning and experience, it remains relatively stable over time. He
claims it may even increase "as the rate at which we acquire or learn new information in the course of living
balances out or exceeds the rate at which we forget." (as cited in Belsky, 1990, p. 125) On the other side of the
debate, Belsky (1990) claims crystallized intelligence in fact declines with age. Why? Because, "at a certain time of
life the cumulative effect of losses - of job, of health, of relationships - cause disengagement from the culture, and so
forgetting finally exceeds the rate at which knowledge is acquired." (p. 125)
References:
1. Belsky, J.K. (1990). The psychology of aging: Theory, research, and interventions. Pacific Grove, CA: Brooks/Cole.
2. Horn, J. (1970). Organization of data on life-span development of human abilities. In R. Goulet and P.B. Baltes (Eds.), Life-span developmental psychology: Research and theory. New York: Academic Press.

See Also:
Fluid Intelligence | WAIS

Cued Recall
This is a component of a memory task in which the subject is asked to recall items that were presented to them on an
initial training, or initial presentation, list.
However, it is slightly different from the free recall task because the subject is given a hint, or a cue, about the items
on the original list. For example, an experimenter may say: "Tell me all the words from the list that were animals."

See Also:
Free Recall | Intrusions | Perseverations

Deductive (Logical) Inference


Inferences are made when a person (or machine) goes beyond available evidence to form a conclusion. With a
deductive inference, this conclusion always follows from the stated premises. In other words, if the premises are true, then
the conclusion must also be true. Studies of human efficiency in deductive inference involve conditional reasoning problems
which follow the "if A, then B" format.
The task of making deductions consists of three stages. First, a person must understand the meaning of the premises.
Next, they must be able to formulate a valid conclusion. Third, a person should evaluate their conclusion to test its
validity. Although deductive inference is easy to test or model, the results of this type of inference never increase the
semantic information beyond what is already stated in the premises.
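Because validity depends only on form, a deductive inference can be checked mechanically. The sketch below tests an argument by enumerating every truth assignment; the encodings of modus ponens and of the fallacy of affirming the consequent are illustrative choices.

    # Truth-table check of deductive validity: an argument form is valid
    # iff no assignment makes all premises true and the conclusion false.
    from itertools import product

    def implies(a, b):
        """Material conditional: 'if a, then b'."""
        return (not a) or b

    def is_valid(premises, conclusion):
        """Exhaustively test every truth assignment to A and B."""
        for a, b in product([True, False], repeat=2):
            if all(p(a, b) for p in premises) and not conclusion(a, b):
                return False            # found a counterexample
        return True

    # Modus ponens: "if A then B; A; therefore B" is valid.
    print(is_valid([lambda a, b: implies(a, b), lambda a, b: a],
                   lambda a, b: b))     # True

    # Affirming the consequent: "if A then B; B; therefore A" is invalid.
    print(is_valid([lambda a, b: implies(a, b), lambda a, b: b],
                   lambda a, b: a))     # False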
References:
1. Eysenck, M.W. (Ed.). (1990). The Blackwell dictionary of cognitive psychology. Cambridge, MA: Basil Blackwell.
2. Johnson-Laird, P.N. (1993). Human and machine thinking. Hillsdale, NJ: Lawrence Erlbaum Associates.

See Also:
Inductive Inference

Dementia
Dementia is a clinical state characterized by loss of function in multiple cognitive domains. The most commonly
used criteria for the diagnosis of dementia are those of the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders,
American Psychiatric Association). Diagnostic features include:
• memory impairment and at least one of the following: aphasia, apraxia, agnosia, disturbances in executive
functioning.
• In addition, the cognitive impairments must be severe enough to cause impairment in social and
occupational functioning.
• Importantly, the decline must represent a decline from a previously higher level of functioning.
• Finally, the diagnosis of dementia should NOT be made if the cognitive deficits occur exclusively during
the course of a delirium.
There are many different types of dementia (approximately 70 to 80). Some of the major disorders causing dementia
are:
1. Degenerative diseases (e.g., Alzheimer's Disease, Pick's Disease)
2. Vascular Dementia (e.g., Multi-infarct Dementia)
3. Anoxic Dementia (e.g., Cardiac Arrest)
4. Traumatic Dementia (e.g., Dementia pugilistica [boxer's dementia])
5. Infectious Dementia (e.g., Creutzfeldt-Jakob Disease)
6. Toxic Dementia (e.g., Alcoholic Dementia)
7.9% of all Canadians 65 years and older meet the criteria for the clinical diagnosis of dementia (Canadian Study
of Health and Aging, 1994). Alzheimer's Disease is the major cause of dementia, accounting for 64% of all
dementias in Canada for persons 65 and older and 75% of all dementias for persons 85 and over.
References:
1. American Psychiatric Association (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author.
2. Canadian study of health and aging: Study methods and prevalence of dementia. (1994). Canadian Medical Association Journal, 150(6).

See Also:
Alzheimer's Disease

Discrete Processing
A model using discrete processing requires that information be passed from one stage to another only after the
processing in the first stage is complete. Therefore, the processing time required in a discrete model is additive and
equal to the sum of the time taken at each level of processing.
The advantage of this type of model is that it provides a convenient method of understanding the effects of different
variables on the performance of a given task.
References:
1. Eysenck, M.W. (Ed.). (1990). The Blackwell dictionary of cognitive psychology. Cambridge, MA: Basil Blackwell.

See Also:
Cascade Processing

The Disjunction Problem


Any theory of the content of a representation must be able to explain how a representation can misrepresent --how it
can represent an object as being something it is not, or as having properties it does not have-- basically how its
content can be false of the object represented.
The difficulty is that we need to explain --in a principled, non-circular way-- how the representation can correctly
represent some things which cause its activation, yet misrepresent other things which cause its activation. For
instance, we'd like to be able to say that my kangaroo representation represents kangaroos. If so, then if a wallaby
causes the activation of that representation, then the wallaby is misrepresented; the representation's content that's a
kangaroo is false of the wallaby.
Unfortunately, according to Fodor (1987, 1990) this doesn't work. The problem is that if the wallaby can also cause the
activation of my kangaroo representation, then we seem to have no principled reason for saying that the content of
the representation is simply that's a kangaroo rather than the disjunctive content that's either a kangaroo or a
wallaby. If this is so, then when a wallaby activates my kangaroo representation, this representation doesn't
represent the wallaby as something it is not. This representation has the (disjunctive) content that's either a
kangaroo or it's a wallaby, which, of course, is true of the wallaby.
This content might better be described as "unspecific" rather than "disjunctive". That is, perhaps the content is
something like an unspecific description which applies correctly to all the things which can activate it, such as
that's a large animal with a long tail that gets about by hopping on its hind legs. So to say that some things which
activate the representation are correctly represented and others are misrepresented doesn't work. Even if I've only
ever seen kangaroos, and have never met a wallaby, the wallaby can be correctly represented by this representation,
because the wallaby is also a large animal with a long tail that gets about by hopping on its hind legs.
This is especially a problem for theories which explain content in terms of covariance: some sort of reliable, lawlike
connection between tokenings of the representation and the occurrence of certain types of thing in the world. Such
theories have to be able to justify describing the representation's content "conservatively", as Cummins (1989) calls
it, rather than "liberally"; as that's a kangaroo rather than that's a large animal with a long tail that gets about by
hopping on its hind legs. Cummins summarises various attempts to do this, arguing that covariance theories don't
explain content in a way that allows representations to misrepresent.
Fodor (1990) claims that any theory which purports to account for the content of a representation must solve the
disjunction problem. Such an account must be able to explain misrepresentation, by showing what a
representation's content is --exactly-- and also how a representation can be caused to be activated by something to
which that content does not apply.
References:
1. Cummins, R. (1989). Meaning and mental representation. Cambridge, MA: MIT Press. A Bradford Book.
2. Fodor, J. (1987). Meaning and the world order. In Psychosemantics (pp. 97-133). Cambridge, MA: MIT Press. A Bradford Book.
3. Fodor, J. (1990). A theory of content I: The problem. In A theory of content and other essays (pp. 51-88). Cambridge, MA: MIT Press. A Bradford Book.

See Also:
Semantics | Misrepresentation | Representation

Elaborative Rehearsal
Elaborative rehearsal is a type of rehearsal proposed by Craik and Lockhart (1972) in their Levels of Processing
model of memory. In contrast to maintenance rehearsal, which involves simple rote repetition, elaborative rehearsal
involves deep semantic processing of a to-be-remembered item, resulting in the production of durable memories.
For example, if you were presented with a list of digits for later recall (4920975), grouping the digits together to
form a phone number transforms the stimuli from a meaningless string of digits into something that has meaning.
References:
1. Craik, F.I.M., & Lockhart, R.S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11, 671-684.

See Also:
Levels of Processing | Maintenance Rehearsal

Enactment
Weick (1988) describes the term enactment as representing the notion that when people act they bring structures and
events into existence and set them in action. The process of enactment involves two steps. First, preconceptions are
used to set aside portions of the field of experience for further attention, that is, perception is focused on
predetermined stimuli. Second, people act within the context of these portions of experience guided by
preconceptions in such a way as to reinforce these preconceptions. Hence, attention to certain stimuli will guide
subsequent action so that those stimuli are confirmed as important. The result of the process of enactment is the
enacted environment (Weick, 1988). This enacted environment comprises "real" objects but the significance,
meaning and content of these objects will vary. These objects are not significant unless they are acted upon and
incorporated into events, situations and explanations. In this way the enacted environment is a direct result of the
preconceptions held by the social actor. An enacted environment is internalised by social actors as the way in which
actions have led to certain consequences; it is therefore analogous to the concept of schema and is the source of
expectations for future action (Weick, 1988) . An enacted environment is "a map of if-then assertions in which
actions are related outcomes" that in turn serve as expectations for future action and focus perception in such way
that these preconceived relationships will be supported.
The importance of the notion of enactment is that it provides a direct link between individual cognitive processes
and environments. By showing how preconceptions can shape the nature of the environment this concept allows one
to argue the importance of schema in the sensemaking process. Schema guide both perception and inference (Fiske
& Taylor, 1991) and so will 'enact' environment by assigning significance, meaning and content to objects perceived
in the environment.
References:
1. Fiske, S.T., & Taylor, S.E. (1991). Social cognition (2nd ed.). New York: McGraw-Hill.
2. Weick, K.E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 24(4).

Contributed by Julian Andrews

Encoding

Encoding refers to the processes by which items are placed into memory.

See Also:
Working Memory

Encoding Specificity
The encoding specificity principle of memory (Tulving & Thomson, 1973) provides a general theoretical
framework for understanding how contextual information affects memory. Specifically, the principle states that
memory is improved when information available at encoding is also available at retrieval. For example, the
encoding specificity principle predicts that recall for information will be better if subjects are tested in the
same room they studied in than if they study in one room and are tested in a different room (see S.M. Smith,
Glenberg, & Bjork, 1978).
References:
1. Smith, S.M., Glenberg, A.M., & Bjork, R.A. (1978). Environmental context and human memory. Memory and Cognition, 6, 342-353.
2. Tulving, E., & Thomson, D.M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80, 352-373.

See Also:
Encoding | Retrieval

Equilibration
According to Piaget, development is driven by the process of equilibration. Equilibration encompasses assimilation
(i.e., people transform incoming information so that it fits within their existing thinking) and accommodation (i.e.,
people adapt their thinking to incoming information). Piaget suggested that equilibration takes place in three phases.
First, children are satisfied with their mode of thought and therefore are in a state of equilibrium.
Then, they become aware of the shortcomings in their existing thinking and are dissatisfied (i.e., are in a state of
disequilibration and experience cognitive conflict).
Last, they adopt a more sophisticated mode of thought that eliminates the shortcomings of the old one (i.e., reach a
more stable equilibrium).

See Also:
Adaptation | Piaget's Stage Theory of Development

Error Analysis
One of the key goals of cognitive science is to develop theories that are strongly equivalent with respect to to-be-
explained systems. This requires that evidence be collected to defend the claim that the model and the to-be-
explained system are carrying out the same procedures to compute a function.
One kind of information that could be used to examine this claim is called error analysis. In an error analysis, one
could (for two different systems) rank order problems in terms of their difficulty, as revealed by their likelihood
of producing mistakes. This is an example of relative complexity evidence. A more detailed approach would be to
classify the nature of the errors that each system made. In either case, if the two systems were strongly equivalent,
then we would expect them to produce the same rank orderings of difficulty, and to also produce the same
qualitative patterns of errors.
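A minimal sketch of the rank-ordering comparison follows. The per-problem error rates are invented; the point is that strongly equivalent systems should produce highly correlated difficulty rankings (a Spearman rank correlation near 1).

    # Error analysis via relative complexity: rank problems by error rate
    # for each system, then compare the rankings with Spearman's rho.
    # The error rates below are invented for illustration.
    model_errors = [0.05, 0.40, 0.20, 0.75, 0.10]  # per-problem error rates
    human_errors = [0.02, 0.35, 0.45, 0.80, 0.08]

    def ranks(xs):
        """Rank values from easiest (lowest error rate) to hardest."""
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r

    def spearman_rho(xs, ys):
        """Spearman rank correlation (formula for untied ranks)."""
        rx, ry = ranks(xs), ranks(ys)
        n = len(xs)
        d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d2 / (n * (n ** 2 - 1))

    # A value near 1.0 supports (but does not prove) strong equivalence.
    print(spearman_rho(model_errors, human_errors))  # 0.9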
References:
1. Pylyshyn, Z.W. (1984). Computation and cognition. Cambridge, MA: MIT Press.

See Also:
Intermediate State Evidence | Protocol Analysis | Relative Complexity Evidence | Strong Equivalence

Extension
The extension of the term 'cat' is the class of cats.
What a term means has two components: i) the referent of the term--this is 'class' talk, and is the component of
meaning to which 'extension' applies; and ii) the sense of the term, i.e., all of the psychological associations that one
has with that term--this is 'concept' talk. This second sense is referred to as the 'intension' of the term.
Examples of the two components follow. The referent of the term 'cat' is all the cats; the sense of the term is related
to your experience of cats, their history, their attributes, etc. A classic example is 'the morning star' and 'the evening
star', both of which refer to the same thing, the planet Venus, but the sense of 'morning star' and 'evening star' is not
the same. You cannot always interchange the two terms in a statement and retain the same truth value.
Other words sometimes used to pick out the distinctions between 'extension' and 'intension' are 'denotation' and
'connotation', respectively. Note the following definition by Cohen and Nagel:
A term [an element of a proposition] may be viewed in two ways, either as a class of
objects (which may have only one member), or as a set of attributes or characteristics
which determine the objects. The first phase or aspect is called the denotation or
extension of the term, while the second is called the connotation or intension. The
extension of the term 'philosopher' is 'Socrates', 'Plato', 'Thales', and the like; its intension
is 'lover of wisdom', 'intelligent', and so on. (31)
The distinctions in the meaning of a term are important to clarify. Without such distinctions, no discussion of
meaning in general can begin. If we wish to construct models and theories of human language and thought--and here
talk of meaning necessarily enters--we need to make precise those issues and problems we specifically want to
address.
Cohen, M.R., & Nagel, E. (1993). An introduction to logic. Indianapolis, IN: Hackett Publishing Company.
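The distinction can be given a rough computational analogy (an illustration only, not drawn from Cohen and Nagel): model an intension as a predicate and an extension as the set of objects in a domain satisfying it. Two distinct intensions can then share a single extension.

    # Rough analogy: an intension is modelled as a predicate; its extension
    # is the set of domain objects satisfying it. The 'facts' table and the
    # two predicates are invented for illustration.
    facts = {
        "Venus": {"seen_at_dawn": True, "seen_at_dusk": True},
        "Mars":  {"seen_at_dawn": False, "seen_at_dusk": False},
    }

    def is_morning_star(x):          # one sense: the star seen at dawn
        return facts[x]["seen_at_dawn"]

    def is_evening_star(x):          # a different sense: the star seen at dusk
        return facts[x]["seen_at_dusk"]

    def extension(intension, domain):
        """The class of objects the predicate is true of."""
        return {x for x in domain if intension(x)}

    print(extension(is_morning_star, facts))  # {'Venus'}
    print(extension(is_evening_star, facts))  # {'Venus'}: same extension,
                                              # different intensions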

See Also:
Intension

Fluid Intelligence
Fluid intelligence is tied to biology. It is defined as our "on-the-spot reasoning ability, a skill not basically dependent
on our experience" (Belsky, 1990, p. 125). Belsky (1990) indicates that this type of intelligence is active when the central
nervous system (CNS) is at its physiological peak.
Fluid intelligence is measured by the performance subtests of the Wechsler Adult Intelligence Scale (WAIS).
Fluid intelligence is important to psychologists as it relates to the study of aging. There is ongoing intense debate
among psychologists as to whether or not intelligence declines with aging. Belsky (1990) claims fluid intelligence
"reaches a peak in early adulthood and then regularly declines." (p. 125) This is because of the physiological
changes that accompany aging. "The development of CNS structures is exceeded by the rate of CNS breakdown."
(Horn, 1970 as quoted in Belsky, 1990, p. 125)
References:
1. Belsky, J.K. (1990). The psychology of aging: Theory, research, and interventions. Pacific Grove, CA: Brooks/Cole.
2. Horn, J. (1970). Organization of data on life-span development of human abilities. In R. Goulet and P.B. Baltes (Eds.), Life-span developmental psychology: Research and theory. New York: Academic Press.

See Also:
Crystallized Intelligence | WAIS

The Formality Condition


The semantic properties of a representation are the properties it has due to its relationship with the world; properties
such as being true, of being a representation of something, of saying something about some object. On the other
hand, the properties that the representation has in itself are its formal properties. Fodor (1980) defines a
representation's formal properties negatively, by specifying what they are not: "Formal properties are the ones that
can be specified without reference to such semantic properties as, for example, truth, reference, and meaning."
(p. 227) Fodor stresses that formal properties are not syntactic properties. A representation can have formal
properties, and a process can operate on those formal properties, without that representation having a syntax
(p. 227); rotating an image on a screen, for instance: this operation is performed on the image's formal properties,
but the image doesn't even have a syntax.
The point for a computational theory of mind, which takes mental processes to be formal operations on
representations (and thus, for Fodor, takes the mind to be a "kind of computer"), is that such processes only have
access to a representation's formal properties. Computational processes do not have any access to semantic
properties; that is, to a representation's relationships with the world.
Thus the processes that operate on representations cannot operate on the basis of what a given representation is a
representation of, or whether it represents that thing correctly or not, but only on the character of the representation
itself, its "shape" as it were. Thus the Formality Condition incurs what Putnam (1975) calls Methodological Solipsism.
"If mental processes are formal, then they have access only to the formal properties of
such representations of the environment as the senses provide. Hence, they have no
access to the semantic properties of such representations, including the property of being
true, of having referents, or, indeed, the property of being representations of the
environment." (Fodor, 1980, p. 231, Fodor's emphasis)
The solution to this methodological solipsism is to pair a computational psychology with what Fodor calls a
naturalistic psychology: a theory of the relations between representations and the world, which fixes the semantic
interpretations of representations' formal properties (p. 233). That is, a representation's formal properties must
somehow mirror the representation's semantic properties, so that operations on formal properties can at least be
interpreted as saying something about some part of the world (whether or not that interpretation is correct, true,
appropriate, etc.).
References:
68. Fodor, J. (1980). Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology. In
Representations (pp. 225-253). Cambridge, Massachusetts: MIT Press. A Bradford Book.
69. Putnam, H. (1975). 3The Meaning of Meaning2. In K. Gunderson (Ed.), Minnesota Studies in the
Philosophy of Science (pp. 131-193). Minneapolis: University of Minnesota Press.

See Also:
Semantics | Representation

Free Recall
Free recall is a basic paradigm used to study human memory. In a free recall task, a subject is presented a list of to-
be-remembered items, one at a time. For example, an experimenter might read a list of 20 words aloud, presenting a
new word to the subject every 4 seconds. At the end of the presentation of the list, the subject is asked to recall the
items (e.g., by writing down as many items from the list as possible). It is called a free recall task because the
subject is free to recall the items in any order that he or she desires.
The free recall task is of interest to cognitive science because it provided some of the basic information used to
decompose the mental state term "memory" into simpler subfunctions ("primary memory", "secondary memory").
This is because the results of a free recall task were typically plotted as a serial position curve. This curve exhibited
a recency effect and a primacy effect. The behavior of these two effects provided support to the hypothesis that the
free recall task called upon both a short-term and a long-term memory.
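
As an illustration of how free recall data yield a serial position curve, the following Python sketch (not drawn from any of the sources cited here; the word lists and function names are invented) scores recall protocols by study position:

def serial_position_counts(study_list, recalls):
    """Count, for each study position, how many subjects recalled the
    item presented at that position (order of recall is ignored)."""
    counts = [0] * len(study_list)
    for recall in recalls:
        recalled = set(recall)
        for position, item in enumerate(study_list):
            if item in recalled:
                counts[position] += 1
    return counts

study = ["cat", "pen", "tree", "lamp", "boat"]
subjects = [["cat", "boat", "lamp"], ["boat", "cat", "pen"]]
print(serial_position_counts(study, subjects))  # [2, 1, 0, 1, 2]

Plotting these counts against position gives the serial position curve discussed under Primacy Effect and Recency Effect.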

See Also:
Primacy Effect | Recency Effect | Serial Position Curve | Short Term Memory

Functional Analysis
Functional analysis is a methodology that is used to explain the workings of a complex system. The basic idea is that
the system is viewed as computing a function (or, more generally, as solving an information processing problem).
Functional analysis assumes that such processing can be explained by decomposing this complex function into a set
of simpler functions that are computed by an organized system of subprocessors. The hope is that when this type of
decomposition is performed, the subfunctions that are defined will be simpler than the original function, and as a
result will be easier to explain.
A very detailed treatment of functional analysis is provided by Cummins (1983). He proposes a three-stage
methodology that defines functional analysis. In the first stage, the to-be-explained function is defined. In the second
stage, analysis is performed. The to-be-explained function is decomposed into an organized set of simpler functions.
This analysis can proceed recursively by decomposing some (or all) of the subfunctions into sub-subfunctions. In
the third stage, analysis is stopped by subsuming the bottom level of functions. This means that the operation of
each of these functions is explained by appealing to natural laws (e.g., mechanical or biological principles). If
functional analysis is applied to an information processing system, then the level of subsumed functions defines the
functional architecture for that information processor.
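
The three stages can be pictured with a small data structure. In the sketch below (a loose illustration, not Cummins's own notation; the example decomposition is invented), a function is either analyzed into subfunctions or marked as subsumed:

def describe(function, depth=0):
    """Print a functional analysis as an indented tree."""
    name, parts = function
    if parts is None:  # a subsumed function: the analysis stops here
        print("  " * depth + name + "  [subsumed: explained by natural law]")
    else:              # an analyzed function: decomposed into subfunctions
        print("  " * depth + name)
        for part in parts:
            describe(part, depth + 1)

# Hypothetical analysis of "remember a list" into simpler subfunctions.
analysis = ("remember a list",
            [("encode item", [("detect features", None),
                              ("form trace", None)]),
             ("store trace", None),
             ("retrieve trace", None)])
describe(analysis)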
Functional analysis is important to cognitive science because it offers a natural methodology for explaining how
information processing is being carried out. For instance, any "black box diagram" offered as a model or theory by a
cognitive psychologist represents the result of carrying out the analytic stage of functional analysis. Any proposal
about what constitutes the cognitive architecture can be viewed as a hypothesis about the nature of cognitive
functions at the level at which these functions are subsumed.
References:
70. Cummins, R. (1983). The nature of psychological explanation. Cambridge, MA: MIT Press.

See Also:
Functional Architecture | Primitive | Ryle's Regress

Functional Architecture
The functional architecture can be viewed as the set of basic information processing capabilities available to an
information processing system.
"Specifying the functional architecture of a system is like providing a
manual that defines some programming language. Indeed, defining a
programming language is equivalent to specifying the functional architecture
of a virtual machine" (Pylyshyn, 1984, p. 92).
In other words, if it is assumed that cognition is the result of the brain's "running of a program", then the functional
architecture is the language in which that program has been written.
The functional architecture is of interest to cognitive science because it offers an escape from Ryle's Regress (a.k.a.
the homunculus problem). The functional architecture is comprised of a set of primitive operations or functions. This
means that these basic functions cannot be explained by being further decomposed into less complex ("smaller")
subfunctions. Instead, they must be explained by appealing to implementational properties (e.g., for human
cognition, properties of the human brain). As a result, the functional architecture represents the point at which the
decomposition of mental state terms into other mental state terms via functional analysis can stop. By specifying the
functional architecture, one converts the black box descriptions that cognitivists create into explanations.
References:
71. Pylyshyn, Z.W. (1984). Computation and cognition. Cambridge, MA: MIT Press.

See Also:
Functional Analysis | Primitive | Ryle's Regress

Generalization
Klahr & Wallace (1982) felt that Piaget's theory of adaptation was not enough to explain cognitive development.
They therefore developed a new theory, and posited that the mechanism behind development was generalization.
Klahr and Wallace divided generalization into three more specific categories: the time line, regularity detection, and
redundancy elimination (Siegler, 1991). These three categories are described below.

The Time Line


The time line contains the data on which generalizations are based. In Klahr and Wallace's theory, whenever a
system encounters a situation, it records the responses to that situation, the outcomes from those actions, and what
new situations arose as a result. This recording of events ensures that the system keeps all the information about an
event stored so that it can be referred back to in the future.

Regularity Detection
This process uses the contents of the time line to draw generalizations about experience. The system notes situations
that are similar and notes where variations do not change the outcomes of situations.

Redundancy Elimination
This process improves efficiency by identifying processing steps that are unnecessary. In this way, it reaches a
generalization that a less-complex sequence can achieve the same goal (Siegler, 1991).
Klahr and Wallace have developed a self-modifying computer simulation that models findings about children's
thinking, and can demonstrate these processes in generalization.
References:
72. Klahr, D. (1982). Nonmonotone assessment of monotone development: An information processing
analysis. In S. Strauss (Ed.), U-shaped behavioral growth. New York: Academic Press.
73. Siegler, R. (1991). Children's thinking. Englewood Cliffs, NJ: Prentice-Hall.
74. Vasta, R., Haith, M. M., & Miller, S. A. (1995). Child psychology: The modern science. New York, NY:
Wiley.

See Also:
Adaptation | Equilibration

Graceful Degradation
In a symbolic system, removing part of the system will result in a clear degradation of performance. Removing a
symbol token will result in the loss of the information stored in that token. The loss of an operating procedure
destroys the system's ability to perform the missing process. The fall in performance is sudden and clearly defined.
In a connectionist system, performance does not fall sharply with either damage to the system or erroneous inputs.
Instead, the performance will decline gradually, depending on the nature of the loss and the architecture of the
system. This property means that connectionist models still function relatively error-free when the system has
damage to its connections or units or when the input stimuli are incomplete.
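
The contrast can be made concrete with a toy example (illustrative only; the numbers are arbitrary). Deleting an entry from a symbolic lookup table removes that fact outright, while zeroing one connection in a distributed weighted sum merely nudges the output:

symbolic = {"cat": "animal", "pen": "tool"}
del symbolic["cat"]                     # the stored fact is simply gone
print(symbolic.get("cat"))              # None: sudden, total loss

# Distributed: an output carried jointly by many weighted connections.
weights = [0.1, 0.2, 0.15, 0.25, 0.3]
inputs = [1, 1, 1, 1, 1]
intact = sum(w * x for w, x in zip(weights, inputs))
weights[2] = 0.0                        # "lesion" one connection
damaged = sum(w * x for w, x in zip(weights, inputs))
print(intact, damaged)                  # 1.0 vs 0.85: graded degradation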
References:
75. Bechtel, W., & Abrahamsen, A. (1991). Connectionism and the mind: An introduction to parallel
processing in networks. Cambridge, MA: Blackwell.

See Also:
Content Addressable Memory | Functional Architecture | Parallel Distributed Processing Models | Spontaneous
Generalisation | Symbolic Architecture

Hebbian Learning Rule


The Hebbian Learning Rule is a learning rule that specifies how much the weight of the connection between two
units should be increased or decreased in proportion to the product of their activations. The rule builds on Hebb's
1949 learning rule, which states that the connections between two neurons might be strengthened if the neurons fire
simultaneously. The Hebbian Rule works well as long as all the input patterns are orthogonal or uncorrelated. The
requirement of orthogonality places serious limitations on the Hebbian Learning Rule. A more powerful learning
rule is the delta rule, which utilizes the discrepancy between the desired and actual output of each output unit to
change the weights feeding into it.
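
A minimal sketch of the update just described, assuming a simple two-layer network (the learning rate eta and the example activations are invented for illustration):

def hebbian_update(weights, pre, post, eta=0.1):
    """Increase each weight in proportion to the product of the
    activations of the two units it connects:
    dw[i][j] = eta * pre[i] * post[j]."""
    return [[w + eta * a_pre * a_post
             for w, a_post in zip(row, post)]
            for row, a_pre in zip(weights, pre)]

weights = [[0.0, 0.0], [0.0, 0.0]]   # 2 input units -> 2 output units
pre, post = [1.0, 0.0], [0.0, 1.0]   # one simultaneously active pair
weights = hebbian_update(weights, pre, post)
print(weights)  # [[0.0, 0.1], [0.0, 0.0]]: only the co-active pair grows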
References:
76. Bechtel, W., & Abrahamsen, A. (1993). Connectionism and the mind: An introduction to parallel
processing in networks. Oxford, UK: Blackwell.
77. Hebb, D.O. (1949). The organization of behavior. New York: Wiley.
78. Rumelhart, D.E., & McClelland, J. L.(1986). Parallel distributed processing: Explorations in the
microstructure of cognition, vol. 1: Foundations. Cambridge, MA: MIT Press.

See Also:
Learning Rule

Humor
There are many reasons why people find something humorous, which are reflected in the large number of theories
on the subject. Humor has been related to aggression, incongruity, and surprise. The cognitive psychologist's interest
in the subject is usually related to the notion that humor stems from a resolution of incongruity.
For example, consider this joke by W.C. Fields: "Do you believe in clubs for children?" "Only when kindness fails."
Schultz (1974) offered a three-step theory of processing. In the first stage, the listener settles on the initial
interpretation of the ambiguous element (clubs = social groups). In the second step, the incongruous element is
processed ("only when kindness fails"), which conflicts with that interpretation. In the final stage the hidden
meaning of the ambiguous element is perceived (clubs = sticks), resolving the incongruity. The incongruity
resolution theory explains the fact that a joke previously
encountered will seem less funny on subsequent exposure.
Similarly, Freud (1905, in Minsky 1985) suggested that humorous stories are a way of fooling our internal censors.
A joke's power comes from a description that fits two different frames at once. The first meaning must be
transparent and innocent, while the second meaning is disguised and reprehensible.
Although most cognitive psychologists have not extended their theorizing to humor, it does have an important
cognitive aspect. In particular, cognitive theory helps provide an explanation of why verbal jokes are found amusing
by looking at the comprehension processes involved.
References:
79. Kristal, L. (Ed.). (1981). ABC of psychology. London: Multimedia Publications.
80. Minsky, M. (1985). The society of mind. New York, NY: Simon & Schuster.
81. Schultz, T.R. (1974). Order and processing in humor appreciation. Canadian Journal of Psychology, 28,
409-420.

Imagery Debate
The imagery debate centres around the problem of what can be viewed as the primitives of cognition. Primitives
serve as the foundation of the algorithmic level of the computational hierarchy. Presumably, it is these primitives
which are implemented in the physical substrate of the brain.
The central question related to the imagery debate then is: Do images form the basis of all our higher cognition? If
not, what does? Could propositions serve that function? Or both images and propositions? Or something altogether
different?
References:
Kosslyn, S. M., Pinker, S., Smith, G., & Shwartz, S. P. (1979). On the demystification of mental imagery. The
Behavioral and Brain Sciences, 2, 535-581.
Pylyshyn, Z. W. (1981). The imagery debate: Analogue media versus tacit knowledge. Psychological Review, 88,
16-45.
Anderson, J. R. (1978). Arguments concerning representations for mental imagery. Psychological Review, 85, 249-
277.

Incidental Learning Paradigm


The incidental learning paradigm is an experimental paradigm used to investigate learning without intent. Using this
paradigm, several groups of subjects are presented with the same list of items (e.g., 20 words) and are instructed to
process them in different ways (different orienting conditions), with each group asked to perform a different activity
or orienting task with the list. For example,
• count the number of letters in each word (shallow processing)
• name a rhyming word for each item (again, shallow processing, but deeper than the first task)
• form an image of each word and rate the vividness of each image (deep processing).
Importantly, subjects are not told that there will be a subsequent test of memory. At the end of the list presentation,
subjects are unexpectedly asked to recall as many of the words as possible. Processing information at a deeper level
results in superior recall of that information (Eysenck, 1974).
References:
82. Eysenck, M.W. (1974). Age differences in incidental learning. Developmental Psychology, 10, 936-941.

See Also:
Levels of Processing

Induction Learning
Inductive learning is essentially learning by example. The process itself ideally implies some method for drawing
conclusions about previously unseen examples once learning is complete. More formally, one might state: Given a
set of training examples, develop a hypothesis that is as consistent as possible with the provided data [1]. It is
worthy of note that this is an imperfect technique. As Chalmers points out, "an inductive inference with true
premises [can] lead to false conclusions" [2]. The example set may be an incomplete representation of the true
population, or correct but inappropriate rules may be derived which apply only to the example set.
A simple demonstration of this type of learning is to consider the following set of bit-strings (each digit can only
take on the value 0 or 1), each noted as either a positive or negative example of some concept. The task is to infer
from this data (or "induce") a rule to account for the given classification:

- 1000101    - 1110100    + 0101
+ 1111       + 10010      + 1100110
- 100        + 111111     - 00010
- 1          - 1101       + 101101
+ 1010011    - 11111      - 001011

A rule one could induce from this data is that strings with an even number of 1's are "+", those with an odd number
of 1's are "-". Note that this rule would indeed allow us to classify previously unseen strings (e.g., 1001 is "+").
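
The induced rule is trivial to state as a program. The following sketch (our illustration, not from the cited lectures) classifies previously unseen strings with it:

def induced_rule(bitstring):
    """Classify "+" when the number of 1's is even, "-" when odd."""
    return "+" if bitstring.count("1") % 2 == 0 else "-"

for s in ["1001", "0101", "100", "11100"]:
    print(s, induced_rule(s))   # +  +  -  -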
Techniques for modeling the inductive learning process include: Quinlan's decision trees (results from information
theory are used to partition data based on maximizing "information content" of a given sub-classification) [3],
connectionism (most neural network models rely on training techniques that seek to infer a relationship from
examples) and decision list techniques [4], among others.
References
83. Adapted from lectures in a graduate course in representation & reasoning given by Dr. Peter van Beek,
Department of Computing Science, University of Alberta.
84. A.F. Chalmers. What is this thing called science?. University of Queensland Press, Australia, 1976.
85. J.R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, 1993.
86. R.L. Rivest. Learning decision lists. Machine Learning. 2(3):229-246, 1987.

See Also
Connectionism| Inductive Inference| Learning Rule| Machine Learning

Inductive (Pragmatic) Inference


Inferences are made when a person (or machine) goes beyond available evidence to form a conclusion. An inductive
inference is one which is likely to be true because of the state of the world. Unlike deductive inferences, inductive
inferences do yield conclusions that increase the semantic information over and above that found in the initial
premises.
However, in the case of inductive inferences, we cannot be sure that our conclusion is a logical result of the
premises, but we may be able to assign a likelihood to each conclusion.
Similar to deductive inference, induction can be broken down into three stages. The first stage is to understand the
observation or stated information. The second is to form a hypothesis that attempts to describe the above
information in relation to the person's general knowledge. The resulting conclusion goes beyond the initial
information by incorporating one's general knowledge in the result. The third step is to evaluate the validity of the
conclusion that was reached.
References:
87. Eysenck, M.W. (Ed.). (1990). The Blackwell dictionary of cognitive psychology. Cambridge, MA: Basil
Blackwell.
88. Johnson-Laird, P. N. (1993). Human and machine thinking. Hillsdale, NJ : Lawrence Erlbaum Associates.

See Also:
Deductive Inference

Intension
What a term means has two components: i) the referent of the term--this is 'class' talk, and is the component of
meaning to which 'extension' applies; and ii) the sense of the term, i.e., all of the psychological associations that one
has with that term--this is 'concept' talk. This second sense is referred to as the 'intension' of the term.
Examples of the two components follow. The referent of the term 'cat' is all the cats; the sense of the term is related
to your experience of cats, their history, their attributes, etc. A classic example is 'the morning star' and 'the evening
star'; both refer to the same thing, the planet Venus, but the sense of 'morning star' and the sense of 'evening star' are not
the same. You cannot always exchange one such term for the other in a statement and preserve its truth value.
Other words sometimes used to pick out the distinctions between 'extension' and 'intension' are 'denotation' and
'connotation', respectively. Note the following definition by Cohen and Nagel:
A term [an element of a proposition] may be viewed in two ways, either as a class of
objects (which may have only one member), or as a set of attributes or characteristics
which determine the objects. The first phase or aspect is called the denotation or
extension of the term, while the second is called the connotation or intension. The
extension of the term 'philosopher' is 'Socrates', 'Plato', 'Thales', and the like; its intension
is 'lover of wisdom', 'intelligent', and so on. (31)
The distinctions in the meaning of a term are important to clarify. Without such distinctions, no discussion of
meaning in general can begin. If we wish to construct models and theories of human language and thought--and here
talk of meaning necessarily enters--we need to make precise those issues and problems we specifically want to
address.
Cohen, M. R. and Nagel, E. (1993). An Introduction to Logic. Indianapolis, Indiana: Hackett Publishing Company.

See Also:
Extension
Intention
Intentionality refers to "aboutness." Beings having intentionality have propositional attitudes: they have beliefs,
knowledge, hopes, dreams, desires, etc., about things. Whenever we come across "that" in an utterance or piece of
writing, we know that we are dealing with something intentional. (Notice the intentionality of the preceding
statement.) If we hear someone say "ouch," "oops," "hey," etc., these expressions do not reveal what sets humans
apart from the rest of the animals. Intentionality does; it is considered by most to be a singularly human feature.
This issue is important to the extent that any theory of consciousness, or of mind, must explain how intentionality
is possible.
'Intentional' is not to be confused with 'intensional' spelled with an 's', the latter of which refers to the meaning of a
term, (along with 'extensional'). Intentional, intensional, and extensional can be paired loosely in the following way:
intentional/propositional, intensional/conceptual, and extensional/perceptual.

See Also:
Intension | Extension

The Intentional Stance


The intentional stance refers to treating a system as if it has intentions, irrespective of whether it actually does. By
treating a system as if it is a rational agent one is able to predict the system's behaviour. First, one ascribes beliefs to
the system as those the system ought to have given its abilities, history and context. Then one attributes desires to
the system as those the system ought to have given its survival needs and means of fulfilling them. One can then
predict the system's behaviour as that which a rational system would undertake to further its goals given its beliefs.
Dennett argues for three main reasons for taking an intentional stance. First, it fits well with our understanding of the
processes of natural selection and evolution in complex environments. Second, it has been shown to be an accurate
method of predicting behaviour. Third, it is consistent with our folk psychology of behaviour.
References:
89. Dennett, D.C. (1987). The intentional stance. Cambridge, MA: MIT Press.


Intermediate State Evidence


One of the key goals of cognitive science is to develop theories that are strongly equivalent with respect to to-be-
explained systems. This requires that evidence be collected to defend the claim that the model and the to-be-
explained system are carrying out the same procedures to compute a function.
One type of evidence that can be used to support this claim is intermediate state evidence. This involves
observations of the intermediate steps, and/or the intermediate states of knowledge, that the two systems pass
through as they move from being given a problem to providing an answer.
For example, if one was using a Turing machine as a model, then an immediate source of intermediate state
evidence would be what the machine does to its tape with each processing step.
In studying human subjects, intermediate state evidence is not directly available. However, one method that might
provide some evidence about these intermediate states is protocol analysis.
References:
90. Pylyshyn, Z.W. (1984). Computation and cognition. Cambridge, MA: MIT Press.

See Also:
Protocol Analysis | Strong Equivalence

Intrusion Errors
In a recall portion of a memory task, these are errors that occur when the subject includes items that were not on the
original list.
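
Scoring intrusions is mechanical; a minimal sketch (the word lists here are invented):

def intrusions(study_list, recalled):
    """Return recalled items that never appeared on the study list."""
    studied = set(study_list)
    return [item for item in recalled if item not in studied]

print(intrusions(["cat", "pen", "tree"], ["cat", "dog", "tree", "sun"]))
# ['dog', 'sun'] are intrusion errors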

See Also:
Cued Recall | Free Recall

Learning Rule
Learning rules, for a connectionist system, are algorithms or equations which govern changes in the weights of the
connections in a network. One of the simplest learning procedures for two-layer networks is the Hebbian Learning
Rule, which is based on a rule initially proposed by Hebb in 1949. Hebb's rule states that the simultaneous excitation
of two neurons results in a strengthening of the connections between them. More powerful are learning
rules which incorporate an error reduction or error correction procedure (e.g., delta rule, generalized delta
rule, back propagation). Learning rules incorporating an error reduction procedure utilize the discrepancy between
the desired output pattern and the actual output pattern to change (improve) the network's weights during training.
The learning rule is typically applied repeatedly to the same set of training inputs across a large number of epochs
or training loops, with error gradually reduced across epochs as the weights are fine-tuned.
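
As a rough illustration of error-reduction learning (a generic single-unit delta rule, not a reconstruction of any model from the references; the learning rate, training patterns, and epoch count are invented), consider:

def train(patterns, eta=0.5, epochs=20):
    """Delta rule for one linear output unit: the weight change is
    proportional to (desired - actual) times the input activation."""
    weights = [0.0, 0.0]
    for _ in range(epochs):                      # repeated training loops
        for inputs, desired in patterns:
            actual = sum(w * x for w, x in zip(weights, inputs))
            error = desired - actual             # discrepancy to be reduced
            weights = [w + eta * error * x for w, x in zip(weights, inputs)]
    return weights

# Learn "output = first input" from three (input, target) pairs;
# the weights converge toward [1.0, 0.0] as error shrinks across epochs.
print(train([([1.0, 1.0], 1.0), ([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]))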
References:
91. Bechtel, W., & Abrahamsen, A. (1993). Connectionism and the mind: An introduction to parallel
processing in networks. Oxford, UK: Blackwell.
92. Hebb, D.O. (1949). The organization of behavior. New York: Wiley.
93. Rumelhart, D.E., & McClelland, J. L.(1986). Parallel distributed processing: Explorations in the
microstructure of cognition, vol. 1: Foundations. Cambridge, MA: MIT Press.

See Also:
Hebbian Learning Rule | Parallel Distributed Processing Models

Levels of Processing
Levels of Processing - an influential theory of memory proposed by Craik and Lockhart (1972) which rejected the
idea of the dual store model of memory. That popular model postulated that the characteristics of a memory are
determined by its "location" (i.e., a fragile memory trace in the short-term store [STS] and a more durable memory
trace in the long-term store [LTS]). Instead, Craik and Lockhart proposed that information could be processed in a
number of different ways and that the durability or strength of the memory trace was a direct function of the depth
of processing involved. Moreover, depth of processing was postulated to fall on a shallow-to-deep continuum.
Shallow processing (e.g., processing words based on their phonemic and orthographic components) leads to a fragile
memory trace that is susceptible to rapid forgetting. On the other hand, deep processing (e.g., semantic or meaning-
based processing) results in a more durable memory trace.
A typical paradigm employed to investigate the Levels of Processing theory is the incidental learning paradigm.
Results reveal superior recall for items processed deeply compared to those items processed at the more shallow
level (Eysenck, 1974; Hyde & Jenkins, 1969).
Craik and Lockhart also distinguished between two kinds of rehearsal: maintenance and elaborative rehearsal. Of the
two, elaborative rehearsal is the more effective in producing a durable memory trace.
References:
94. Craik, F.I.M., & Lockhart, R.S. (1972). Levels of processing. A framework for memory research. Journal
of Verbal Learning and Verbal Behaviour, 11, 671-684.
95. Eysenck, M.W. (1974). Age differences in incidental learning. Developmental Psychology, 10, 936-941.
96. Hyde, T.S., & Jenkins, J.J. (1969). Differential effects of incidental tasks on the organization of recall of a
list of highly associated words. Journal of Experimental Psychology, 82, 472-481.

See Also:
Elaborative Rehearsal | Incidental Learning Paradigm | Maintenance Rehearsal

Linguistic Determination
Linguistic determination is the argument that language directly affects the way that people think about and see the
world. Linguistic determination is also known as the Whorfian hypothesis or the Sapir-Whorf hypothesis (Sapir,
1968; Whorf, 1956). Whorf provides the example of the Eskimo words for snow. The Eskimo people are inhabitants
of the Arctic. Whereas in the English language there is only one word for snow, the Eskimo language has many
words for snow. Whorf argues that this language for snow allows the Eskimo people to "see" snow differently than
speakers of other languages who do not have as many words for snow. That is, Eskimo people see subtle differences
in snow that other people do not.
Researchers have studied color perception across different linguistic groups to find support for the Whorfian
hypothesis (Berlin & Kay, 1969; Heider, 1972; Heider & Oliver, 1973; Miller & Johnson-Laird, 1976; Rosch, 1974).
The evidence indicates that people of all cultures perceive colour in the same way. The tentative conclusion is that
language does not determine the way that people think. It is possible that language, while not determining the way
that people think, may still influence the way that people think. Exactly how language might influence thought is as
yet unclear.

Long-Term Potentiation
The enduring facilitation of synaptic transmission that
occurs following the activation of a synapse by high-frequency
stimulation of the presynaptic neuron. (Pinel, 1993, p.515)
Long-Term Potentiation (LTP) was originally discovered in Aplysia. Recently, however, LTP has also been found to
occur in the mammalian nervous system, specifically the hippocampus. This is an extremely important finding as it
suggests that LTP could be the cellular basis of the neural implementation of learning and memory, especially when
combined with the fact that the hippocampus is believed to be one of the major brain regions responsible for
processing memories.
LTP is one of the first examples of a mechanism for the neural implementation of a cognitive function.
References:
97. Pinel, J. (1993). Biopsychology (2nd ed.). Toronto: Allyn & Bacon.

See Also:
Cognitive Science | Neuron

Machine Learning
The acquisition and application of knowledge plays a central role in describing learning. For the most part, human
beings perform this task quite well (for better or worse). It is under the banner of machine learning that researchers,
particularly within artificial intelligence, attempt to develop methods for accomplishing this task algorithmically (i.e.
on computers).
Dietterich differentiates between three types of learning a system can exhibit [1]:
• Speed-up learning occurs when a system becomes more efficient at a task over time without external
input.
• Learning by being told occurs when a system acquires new knowledge explicitly from an external source.
• Inductive learning occurs when a system acquires new knowledge that was neither explicitly nor
implicitly available previously.
In order to evaluate the success (or failure) of machine learning techniques, it will be important to define what is
meant by "learning". Dietterich suggests that by defining "knowledge", we can simplify the specification of
"learning" by defining it to be an increase in this "knowledge" [1]. It is debatable whether this makes the task any
easier. A formalism often employed to judge the effectiveness of a learning system is Valiant's definition of what it
means for a system to be probably approximately correct [2]: the system should, with high probability, exhibit
knowledge that is largely in agreement with the "true" information (i.e. approximately correct).
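
One way to make the definition concrete is a simulation. The sketch below is entirely illustrative: the toy threshold concept, the naive learner, and the sample sizes and epsilon are all invented. It estimates how often the learner ends up approximately correct:

import random

def learn_threshold(sample):
    """Toy learner: hypothesize 'x is positive iff x >= smallest
    positive example seen in the training sample'."""
    pos = [x for x, label in sample if label]
    t = min(pos) if pos else float("inf")
    return lambda x: x >= t

def pac_check(epsilon=0.05, trials=200, n_train=60):
    domain = list(range(100))
    target = lambda x: x >= 50          # the "true" concept
    successes = 0
    for _ in range(trials):
        sample = [(x, target(x)) for x in random.choices(domain, k=n_train)]
        h = learn_threshold(sample)
        error = sum(h(x) != target(x) for x in domain) / len(domain)
        if error <= epsilon:            # approximately correct this run?
            successes += 1
    return successes / trials           # fraction of runs: "probably"

print(pac_check())                      # typically well above 0.9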
A problem endemic to most machine learning techniques is a lack of generality. For example, a particular algorithm
may perform well on discrete data, whereas application to continuous data is difficult. These issues are invariably
task specific---most learning formalisms handle some subset of tasks extremely well while performance on others is
substandard. Major performance issues often revolve around the ability of a given system to generalize what it has
learned to novel circumstances.
References
98. T.G. Dietterich. Machine learning. Annual Review of Computer Science. Vol. 4, Spring 1990.
99. L.G. Valiant. A theory of the learnable. Communications of the ACM. 27:1134-1142, 1984.

See Also
Artificial Intelligence| Induction Learning| Learning Rule

Maintenance Rehearsal
Maintenance rehearsal is a type of rehearsal proposed by Craik and Lockhart (1972) in their Levels of Processing
Model of memory. Maintenance rehearsal involves rote repetition of an item's auditory representation. In contrast to
elaborative rehearsal, this type of rehearsal does not lead to stronger or more durable memories.
References:
100.Craik, F.I.M., & Lockhart, R.S. (1972). Levels of processing. A framework for memory research. Journal
of Verbal Learning and Verbal Behaviour, 11, 671-684.

See Also:
Elaborative Rehearsal | Levels of Processing

Mandelbrot Set
The Mandelbrot set is an intricate geometric shape: if any region of the set's boundary is magnified, new and intricate
details appear. Every time you focus further on one section, more detail shows up. This continues ad infinitum,
as you investigate further. It is one of the best-known examples of a fractal.
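
Membership in the set is usually approximated by iteration. A standard sketch (the iteration cap is an arbitrary choice):

def in_mandelbrot(c, max_iter=100):
    """c belongs to the Mandelbrot set if z -> z*z + c, started at
    z = 0, never escapes; |z| > 2 guarantees eventual escape."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False        # escaped: certainly outside the set
    return True                 # no escape within max_iter: treat as inside

print(in_mandelbrot(0j), in_mandelbrot(1 + 0j))  # True False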
Another way of looking at this is as follows. When "simple" laws govern systems with large numbers of variables,
the underlying order may become obscured by our inability to track every component. Simple rules can produce
incredibly complex effects. The Mandelbrot set relates philosophically to the study of cognitive science, in that some
theories in the field may need to be more complex in order to be fully validated, while other topics may be simpler
than they first appear. This seems to be the case in the study of groups of agencies and agents in Minsky's (1985)
The Society of Mind.
References:
101.Cohen J., & Stewart, I. (1994). The collapse of chaos. New York: Viking Press.
102.Minsky, M. (1985). The society of mind. New York, NY: Simon & Schuster.

See Also:
Consciousness

Memory Span
Memory span refers to the number of items (usually words or digits) that a person can hold in working memory.
Tests of memory span are often used to measure working memory capacity. A typical test of memory span involves
having an examiner read a list of random digits (digit span) or words (word span) aloud at the rate of one per
second. At the end of a sequence, subjects are asked to recall the items in order. The average span for normal adults
is 7 (Miller, 1956).
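
The administration procedure is easy to sketch in code (a simplified illustration: real testing uses several trials per length, and the simulated subject below is hypothetical):

import random

def digit_span(recall_attempt, max_len=12):
    """Present ever-longer random digit sequences; span is the longest
    sequence the subject reports back in the correct order."""
    span = 0
    for length in range(1, max_len + 1):
        digits = [random.randint(0, 9) for _ in range(length)]
        if recall_attempt(digits) == digits:
            span = length
        else:
            break
    return span

# A hypothetical subject who can hold at most seven digits:
print(digit_span(lambda d: d if len(d) <= 7 else []))  # 7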
References:
103.Miller, G.A. (1956). The magical number seven plus or minus two. Some limits on our capacity for
processing information. Psychological Review, 63, 81-97.

See Also:
Working Memory

Metaphor
Metaphor is the use of a word or phrase to label an object or concept that it does not literally denote, suggesting a
comparison of that concept to the phrase's denoted object. There are many nuances in the meanings of metaphors.
Mark Johnson and George Lakoff discuss preconceptual elements (which include general human purposes, cultural
institutions and practices, theoretical paradigms, individual traits and values, and personality traits). They claim that
it is only because of these preconceptions that metaphor is able to affect our thinking, emotions and language. Earl
Mac Cormac writes that the way in which we explain things influences how we understand them. While this
relationship may initially appear backwards, the circularity dissolves when one realizes that after the original, rough
description is given, we start trying to make the thing we are describing fit the model, abandoning the model only if
it does not fit.

Misrepresentation

A representation represents, or is about, a certain object or state of affairs (the representation's object) and says
something about that object (the representation's content). Misrepresentation happens when what that content says
about the object isn't true of the object. For instance my cow representation has a certain content; suppose that this
content is something like that's a four-legged mammal that gives milk, goes "moo", and eats grass. Anything this
representation "is about" will be represented as something that description applies to. So if my cow representation is
activated by--and thus refers to--a short fat muddy horse seen from a distance, that horse is misrepresented, because
it's represented as a four-legged mammal that gives milk, goes "moo", and eats grass, which is false of the horse.
Theories of content, which attempt to explain how representations correctly represent their objects, have a
tremendous amount of trouble explaining how they can also sometimes misrepresent their objects. Jerry Fodor's
(1990) disjunction problem points out the difficulty here. A representation's content can't be such that the
representation represents whatever causes its activation. A representation with content construed in this way can't
misrepresent.

Modularity

Jerry Fodor (1983) is the strongest proponent of a modular theory of cognition. Fodor argues that certain
psychological processes are self-contained--or modular. This is in contrast to "New Look" or modern cognitivist
positions which hold that nearly all psychological processes are interconnected, and freely exchange information.
Fodor proposes a three tiered cognitive system. The first level of the system, the transducer level, transforms
environmental signals into a form that can be used by the cognizing organism. The second level, the input systems
level, performs basic recognition and description functions. In Fodor's model input systems are modular. The third
level of the system, higher level cognitive functions, performs complex operations on the output of the input
systems. An example of a higher level process is analogical thinking.
Fodor holds that input systems are modular and that higher level cognitive processes are nonmodular. This means
that all of the information necessary for performing their tasks of recognition and description is contained within the
input systems. For example, object perception might be modular, in which case the object perception module need
not reference language modules, or music modules, or mathematics modules in order to perform its operations. In
contrast, higher level processes have access to all information contained within the cognitive system when
performing a given operation. Fodor provides the example of scientific reasoning (a higher level cognitive process).
Potentially, when solving a scientific problem, the scientist can reference any knowledge that he or she has about the
world to help in solving this problem. As such, if necessary, knowledge about botany can be referenced in order to
understand problems in mathematics.
Modular systems have the following properties:
104.They are domain specific--they operate on, and have a computational architecture that is unique to certain
stimuli.
105.Their operation is mandatory, or cognitively impenetrable--beliefs cannot affect the operations of
modules; we cannot help seeing, or hearing, the world in a certain way.
106.Modules are fast--modular processes are among the fastest psychological processes; this is because modules
are self-contained and need not spend time referencing information outside of the module to complete their
tasks.
107.Modules are informationally encapsulated--they need not reference any other psychological systems in order
to perform their operations.
108.Modules have shallow outputs--the output of modules is very basic, more complex representations follow
after higher level computation.
References:
6. Fodor, J.A. (1983). The modularity of mind. Cambridge, MA: MIT Press.
7. Fodor, J.A. (1985). Precis on The Modularity of Mind. Behavioral and Brain Sciences, 8, 1-42.

See Also:
Analogy

Neurocognition
The study of the relationships between neuroscience and cognitive psychology.
The goal is to look for specific neurophysiological correlates of cognitive functions. This is based on the assumption
that specific brain regions are responsible for mediating certain aspects of cognitive function.
References:
109.Pinel, J. (1993). Biopsychology (2nd ed.). Toronto: Allyn & Bacon.

See Also:
Cognitive Psychology | Cognitive Science | Neuroscience

Neuron
These are the specialized, functional cells of the nervous system that conduct neural information.
There were originally 2 basic hypotheses about the structure and function of the nervous system (Kolb & Whishaw,
1985, p.317):
110.Neuron Hypothesis: the nervous system is composed of discrete, autonomous cells, or units, that can
interact but are not physically connected.
111. Nerve Net Hypothesis: the nervous system is composed of a continuous network of interconnected fibres.
The current understanding of cognition in the brain represents a combination of these hypotheses. Cognition is
viewed as occurring through the interaction between neurons via complex excitatory and inhibitory synapses.
As such, cognitive scientists should recognize the need to incorporate basic properties of neurons and neural
organization in the development of models of cognition.
The parallel distributed processing model is a good example of a model that has attempted to account for these basic
neural properties.
References:
8. Kolb, B., & Whishaw, I. (1985). Fundamentals of human neuropsychology (2nd ed.). New York: W.H.
Freeman & Co.
9. Pinel, J. (1993). Biopsychology (2nd ed.). Toronto: Allyn & Bacon.

See Also:
Cognitive Science | Neuroscience | Parallel Distributed Processing Models

Neuroscience
Neuroscience is the study of the nervous system and has many different branches, such as:
• Biopsychology,
• Developmental Neurobiology,
• Neuroanatomy,
• Neurochemistry,
• Neuroendocrinology,
• Neuroethology,
• Neuropharmacology,
• Neurophysiology, and
• Neuropsychology.
In cognitive science, it is important to recognize the contribution of neuroscience to our knowledge of human
cognition. Cognitive scientists must have, at the very least, a basic understanding of, and appreciation for,
neuroscientific principles. In order to develop accurate models, basic neurophysiological and neuroanatomical
properties must be taken into account.

See Also:
Cognitive Science | Neuron

Occam's Razor
The simplest definition of Occam's Razor is "Don't make unnecessarily complicated assumptions". It can be used as
a philosophical way of sorting the simple theories from the complicated ones. When scientists select theories, they
don't just use the criterion of agreement or disagreement with observations. They also have aesthetic principles, and
a desire for an elegant, universal theory. They use these aesthetic principles to remove the cloud of trivially
competing theories that necessarily surround every theory. Occam's razor is a working rule of thumb, not the
ultimate answer.
References:
112.Cohen J., & Stewart, I. (1994). The collapse of chaos. New York: Viking Press.

Paradigm

The Oxford English Dictionary defines a paradigm simply as an "example or pattern". Within the scientific
community however, the notion of paradigm is a far more significant issue. It typically defines what a given
individual is willing to accept of his or her field, and how they perform their own work within it---whether they are
conscious of it or not. It is here in fact that the more formal concept of a paradigm is realized.
Chalmers [2], in a discussion of Kuhn's writings about what constitutes a shift in paradigm [3], loosely characterizes
it as a framework of beliefs and standards which defines legitimate work within the science to which it applies. He
states further that defining "paradigm" rigorously is inherently problematic. He does however offer some
suggestions for what, at least in part, characterizes a paradigm; although worded with science in mind, some of these
can be seen to apply to the concept of a paradigm in general.

A paradigm (from Chalmers [2]):


• is composed of "explicitly stated laws and theoretical assumptions".
• includes "standard ways of applying the fundamental laws to a variety of types of situations".
• possesses "instrumentation and instrumental techniques necessary for bringing the laws of the paradigm to
bear on the real world".
• "consists of some very general, metaphysical principles that guide work within the paradigm".
• "contains some very general methodological prescriptions".
Much animated debate occurs regarding what constitutes a shift of paradigm, and what does not. Kuhn writes that in
the face of a scientific revolution, the "new" world-view is virtually incompatible with that which it replaced [3].
Bohm and Peat characterize this interpretation as overly restrictive [1]. They suggest that it introduces significant
fragmentation within the growth process of the scientific endeavour. I interpret this as a more reasoned attitude, as
there is more potential for benefit than harm in the co-existence of (even contradictory) paradigms. I would argue in
fact that this is more the norm than Kuhn seemed to feel was the case.
References
113.D. Bohm and F.D. Peat. Science, Order, and Creativity. Bantam Books, New York, 1987.
114.A.F. Chalmers. What is this thing called science?. University of Queensland Press, Australia, 1976.
115.T.S. Kuhn. The Structure of Scientific Revolutions. University of Chicago Press, Chicago, 1970.

Parallel Distributed Processing Models


Parallel Distributed Processing (PDP) models are a class of neurally inspired information processing models that
attempt to model information processing the way it actually takes place in the brain.
These models were developed because of findings that neural connections appear to be distributed in parallel arrays,
in addition to serial pathways. As such, different types of mental processing are considered to be distributed
throughout a highly complex neural network.
The PDP model has 3 basic principles:
116.the representation of information is distributed (not local)
117.memory and knowledge for specific things are not stored explicitly, but stored in the connections between
units.
118.learning can occur with gradual changes in connection strength by experience.
These models assume that information processing takes place
through interactions of large numbers of simple processing
elements called units, each sending excitatory and inhibitory
signals to other units. (Rumelhart, Hinton, & McClelland, 1986, p. 10)
Rumelhart, Hinton, and McClelland (1986) state that there are 8 major components of the PDP model framework:
10. a set of processing units
11. a state of activation
12. an output function for each unit
13. a pattern of connectivity among units
14. a propagation rule for propagating patterns of activities through the network of connectivities
15. an activation rule for combining the inputs impinging on a unit with the current state of that unit to produce
a new level of activation for the unit
16. a learning rule whereby patterns of connectivity are modified by experience
17. an environment within which the system must operate
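
Several of these components can be rendered in a few lines of Python. The sketch below is a bare-bones illustration, not the Rumelhart et al. formalism (the weights and threshold are arbitrary); it shows a set of units, a state of activation, a pattern of connectivity, a propagation rule, and an activation rule:

def propagate(weights, activations):
    """Propagation rule: each unit's net input is the weighted sum
    of the signals arriving from the units feeding into it."""
    return [sum(w * a for w, a in zip(row, activations)) for row in weights]

def activation_rule(net_inputs, threshold=0.5):
    """Activation rule: a unit switches on when its net input
    exceeds the threshold."""
    return [1.0 if net > threshold else 0.0 for net in net_inputs]

weights = [[0.6, -0.2],    # connections into unit A
           [0.3, 0.9]]     # connections into unit B
state = [1.0, 1.0]         # current state of activation
state = activation_rule(propagate(weights, state))
print(state)               # [0.0, 1.0]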
References:
4. Rumelhart, D.E., Hinton, G.E., & McClelland, J.L. (1986). A general framework for parallel distributed
processing. In D. E. Rumelhart, J. L. McClelland, and the PDP Research Group (Eds.). Parallel distributed
processing: Explorations in the microstructure of cognition. Vol. 1: Foundations. Cambridge, MA: MIT
Press.

See Also:
Learning Rule | Neuron

Perseveration Errors
On a recall portion of a memory task, these are errors that occur when a subject repeats items that they have already
said on that same recall trial.

See Also:
Cued Recall | Free Recall | Intrusion Errors

Philosophy of Mind
The philosophy of mind has emerged as a field of philosophy in its own right, due to the convergence of issues
raised in more traditional areas of philosophy, such as metaphysics, epistemology, and ethics.
Some questions asked by philosophers of mind reveal these origins. One might ask: Are mind and body one
substance? Does mind depend on the body? Is 'mind' identical with 'body'? These questions may lead to others: Do
humans actually make free choices, or are all human acts physically determined? As well as physical states, we have
mental states, and many of the latter relate to each other. For example, individuals have beliefs, desires, and feelings
about other mental states, i.e., about concepts. When talk turns to such intentional states or propositional attitudes,
further questions arise. Do only humans have intentionality? Must any account which attempts to explain our actions
consider intentionality? Or can physical events (brain and body processes in interaction with the physical
environment) wholly explain our actions?
Because of the nature of these questions, it becomes apparent why the philosophy of mind might cross over into
cognitive science. Cognitive science, after all, tries to answer many of these same questions.

See Also:
Intention

Piaget's Stage Theory of Development
Piaget was, among other things, a psychologist who was interested in cognitive development. After observing
many children, he posited that children progress through 4 stages and that they all do so in the same order. These
four stages are described below.
The Sensorimotor Period (birth to 2 years)
During this time, Piaget said that a child's cognitive system is limited to motor reflexes at
birth, but the child builds on these reflexes to develop more sophisticated procedures. Children
learn to generalize their activities to a wider range of situations and coordinate them into
increasingly lengthy chains of behaviour.
PreOperational Thought (2 to 6/7 years)
At this age, according to Piaget, children acquire representational skills in the areas of mental
imagery and especially language. They are very self-oriented, and have an egocentric view;
that is, preoperational children can use these representational skills only to view the world
from their own perspective.
Concrete Operations (6/7 to 11/12 years)
As opposed to Preoperational children, children in the concrete operations stage are able to
take another's point of view and take into account more than one perspective simultaneously.
They can also represent transformations as well as static situations. Although they can
understand concrete problems, Piaget would argue that they cannot yet perform on abstract
problems, and that they do not consider all of the logically possible outcomes.
Formal Operations (11/12 to adult)
Children who attain the formal operation stage are capable of thinking logically and
abstractly. They can also reason theoretically. Piaget considered this the ultimate stage of
development, and stated that although the children would still have to revise their knowledge
base, their way of thinking was as powerful as it would get.
It is now thought that not every child reaches the formal operation stage. Developmental psychologists also debate
whether children do go through the stages in the way that Piaget postulated. Whether Piaget was correct or not,
however, it is safe to say that this theory of cognitive development has had a tremendous influence on all modern
developmental psychologists.
References:
119.Santrock, J.W. (1995). Children. Dubuque, IA: Brown & Benchmark.
120.Siegler, R. (1991). Children's thinking. Englewood Cliffs, NJ: Prentice-Hall.
121.Vasta, R., Haith, M.M., & Miller, S.A. (1995). Child psychology: The modern science. New York, NY:
Wiley.

See Also:
Adaptation | Cognitive Development | Equilibration | Generalization

Primacy Effect
The primacy effect is found when the results of a free recall task are plotted in the form of a serial position curve.
Generally, this curve is U-shaped, and the primacy effect corresponds to the tail of the U on the left. This tail
indicates that words presented at the start of a list of to-be-remembered items are better remembered than words
presented in the middle of this list. It is called the primacy effect because these items were the ones presented first to
the subject in the memory experiment.
The primacy effect appears to be the result of subjects recalling items directly from long-term (secondary) memory.
This is because the primacy effect can be sharply attenuated by performing manipulations that adversely affect this
system -- such as using fast presentation of items (which does not permit much elaborative rehearsal to transfer
memories from short-term to long-term stores), or by using list items that have similar meanings (and thereby
producing semantic confusions).
The primacy effect was important to cognitive science because it provided empirical evidence for the decomposition
of memory into an organized set of subsystems, which is required by functional analysis.

See Also:
Free Recall | Recency Effect | Serial Position Curve

Priming
Priming is discussed in the context of the activation theory. It is assumed that concepts that have some relation to
each other are connected in some mental network, so that if one concept is activated, then concepts related to it are
also activated.
Priming is a phenomenon related to this concept. It can be shown in the following example:
A subject is shown the word nurse. Presumably the subject will then think of other words related to the word nurse.
If the subject is then shown either the word doctor or the word butter, the subject should be able to read the former
word more quickly than the latter, because doctor is related to nurse and has therefore been recently accessed,
making it more familiar to the subject.
The word nurse then serves to "prime" the second word, doctor.
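
A toy spreading-activation sketch (the association strengths are invented) shows why doctor starts out ahead of butter after nurse is presented:

associations = {
    "nurse": {"doctor": 0.8, "hospital": 0.6},
    "doctor": {"nurse": 0.8},
    "butter": {"bread": 0.7},
}

def prime(word, activation=None):
    """Activate a word and spread a fraction of that activation
    to its directly associated words."""
    activation = activation or {}
    activation[word] = 1.0
    for neighbour, strength in associations.get(word, {}).items():
        activation[neighbour] = max(activation.get(neighbour, 0.0), strength)
    return activation

state = prime("nurse")
print(state.get("doctor", 0.0), state.get("butter", 0.0))  # 0.8 0.0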

Primitive
A primitive is a basic building block of a system. Complex systems can be decomposed into simpler things, but
primitives -- by definition -- cannot.
To provide an example that gives a nice intuition about what a primitive is, consider teaching a child the meanings
of different words. If a child asks us "What does `bachelor' mean?", we might break "bachelor" down into other
meanings ("`Bachelor' means that someone is a `man' who is `not married'"). However, if a child asks us "What does
`red' mean?", we are not likely to do this, because it is difficult to decompose such a basic term. Instead, we are
more likely to point to different things that are `red'. In this sense, `red' represents something that we might call a
semantic primitive (a basic meaning), while `bachelor' does not.
Primitives are important in cognitive science because of its tendency to view information processors functionally
instead of physically. Because of this view, researchers use a methodology called functional analysis to decompose a
complex information processor into simpler, functional components. However, if this decomposition is not stopped,
the functional analysis goes on indefinitely and falls prey to Ryle's Regress. This means that the functional analysis
is not explanatory. Researchers try to escape Ryle's regress by identifying a set of primitive functions which cannot
be further decomposed. This set of functions is the functional architecture for cognition.

See Also:
Functional Analysis | Functional Architecture | Ryle's Regress

Production

A production system is a program that comprises a series of conditional statements specifying what action is to be
taken under certain circumstances. These 'If ... then ...' statements are known as productions. Each production has a
condition and an action. If the condition is found to be true by the system then the action will be performed. For
example, a production system for a thermostat may contain a production such as the following.
122.temperature > 70 and temperature < 72 ----> stop
Information from the environment is compared to the conditions of the productions. If the condition to the left of the
arrow is true then the process to the right of the arrow will be performed. In the above example the thermostat
will stop as long as the temperature remains within the range of 70 to 72 degrees. If the temperature is outside that
range then a different production will be activated and the system will change behaviour.
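
Rendered as code (a sketch only; the temperature readings and the two companion productions are invented), the production above might look like:

productions = [
    (lambda t: 70 < t < 72, "stop"),   # the production shown above
    (lambda t: t <= 70,     "heat"),   # hypothetical companion rule
    (lambda t: t >= 72,     "cool"),   # hypothetical companion rule
]

def step(temperature):
    """Fire the first production whose condition matches the environment."""
    for condition, action in productions:
        if condition(temperature):
            return action

print(step(71), step(65), step(80))  # stop heat cool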
References:
18. Newell, A., & Simon, H.A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

See Also:
Production System

Production System
A production system is a program that comprises a series of conditional statements specifying what action is to be
taken under certain circumstances. These 'If ... then ...' statements are known as productions. For example, a
production system for a cricket batsman may comprise a series of productions such as the following.
123.ball outside offstump ------> no action
124.ball pitched on wicket and good length ------> forward defensive stroke
125.ball pitched short on leg side and fast------> duck
126.ball pitched short on leg side and slow------> hook
Information from the environment is matched against all productions, and if the condition on the left of the arrow is
true then the action on the right will be performed. However, as systems become more complex, many productions
may be triggered and the system will face a scheduling problem. The system must contain a production that will
determine which production of the many possible will be fired. Common conflict scheduling criteria are: order
in the production system, specificity, refractoriness and recency.
Production systems were one of the first attempts to model cognitive behaviour and form the basis of many existing
models of cognition.
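As a toy illustration of the scheduling problem, the following Python sketch matches every production against the
current ball and resolves conflicts by specificity, firing the production with the most condition terms. The feature
names are simplified stand-ins of our own, not a standard notation.

# Each production pairs a set of required features with an action.
productions = [
    ({"short", "leg_side", "fast"}, "duck"),
    ({"short", "leg_side", "slow"}, "hook"),
    ({"on_wicket", "good_length"}, "forward defensive stroke"),
    ({"outside_off"}, "no action"),
]

def fire(ball_features):
    # Match the ball's features against all productions...
    matched = [(cond, act) for cond, act in productions
               if cond <= ball_features]
    if not matched:
        return None  # no production applies
    # ...then resolve the conflict by specificity: the matched
    # production with the most condition terms wins.
    cond, act = max(matched, key=lambda p: len(p[0]))
    return act

print(fire({"short", "leg_side", "fast"}))  # -> duck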
References:
1. Newell, A., & Simon, H.A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

See Also:
Production

Proposition
The proposition is a concept borrowed by cognitive psychologists from linguists and logicians. The proposition is the
most basic unit of meaning in a representation: the smallest statement that can be judged either true or false.
Anderson (1990) gives the following example of a sentence divided up into its constituent propositions:
"Nixon gave a beautiful Cadillac to Brezhnev, who was the leader of the USSR."
This sentence can be divided into three propositions:
1. Nixon gave a Cadillac to Brezhnev.
2. The Cadillac was beautiful.
3. Brezhnev was the leader of the USSR.
A popular view in cognitive psychology is that the mind is structured much like a language. In such a structure,
propositions function as the basic units of representation -- the building blocks -- of the mind. It is the content of the
propositions, the connections between propositions, and the strength of those connections that determine the
structure of mind.
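One common way to sketch this is to write each proposition as a relation applied to arguments, as in the following
Python illustration (the notation is ours, not Anderson's):

# Each proposition is a (relation, arguments...) tuple -- the smallest
# unit that can be judged true or false.
propositions = [
    ("give", "Nixon", "Cadillac", "Brezhnev"),
    ("beautiful", "Cadillac"),
    ("leader-of", "Brezhnev", "USSR"),
]

# Shared arguments are the connections between propositions; the
# sentence's meaning lives in this network rather than in word order.
shared = set(propositions[0]) & set(propositions[2])
print(shared)  # -> {'Brezhnev'}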
References:
1. Anderson, J.R. (1990). Cognitive psychology and its implications (3rd ed.). New York: W. H. Freeman.

Protocol Analysis
Protocol analysis is one experimental method that can be used to gather intermediate state evidence concerning the
procedures used by a system to compute a function. In protocol analysis, subjects are trained to think aloud as they
solve a problem, and their verbal behaviour forms the basic data to be analyzed. The first step of a protocol analysis
is to obtain, and then transcribe, a verbal protocol. The next step is to take the protocol and use it to infer the
subject's problem space (i.e., infer the rules being used, as well as various knowledge states concerning the
problem). The third step is to create a problem behaviour graph, which reflects state transitions as subjects search
through the problem space in their attempt to solve the problem. Finally, the problem behavior graph is used to
create a computer simulation (typically created as a production system) that will solve the problem. By comparing,
in detail, the behaviour of the simulation to the verbal protocol, one can validate the assumptions that led to the
program's creation. In turn, the program provides a rich description of an individual's processing steps, and
transitions in knowledge, during the problem-solving process.
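The comparison step can be pictured with a toy sketch like the following; the protocol fragments, the problem, and
all names are invented for illustration and are not drawn from Ericsson and Simon.

# Knowledge states inferred from a (fabricated) think-aloud protocol
# for adding 23 + 48, in the order the subject reached them.
protocol_states = [
    "read 23 + 48",
    "columns aligned",
    "ones column: 3 + 8 = 11, carry 1",
    "tens column: 2 + 4 + 1 = 7",
    "answer: 71",
]

# A production-system simulation of the same task would emit its own
# state trace; here we pretend it produced an identical one.
simulation_states = list(protocol_states)

# The model is supported to the extent that the two traces pass through
# the same intermediate states in the same order.
matches = sum(p == s for p, s in zip(protocol_states, simulation_states))
print(matches / len(protocol_states))  # -> 1.0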
References:

1. Ericsson, K.A., & Simon, H.A. (1984). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.
2. Newell, A., & Simon, H.A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

See Also:
Intermediate State Evidence | Strong Equivalence

Recency Effect
The recency effect is found when the results of a free recall task are plotted in the form of a serial position curve.
Generally, this curve is U-shaped, and the recency effect corresponds to the tail of the U on the right. This tail
indicates that words presented at the end of a list of to-be-remembered items are better remembered than words
presented in the middle of this list. It is called the recency effect because these items were the ones presented most
recently to the subject in the memory experiment.
The recency effect appears to result from subjects recalling items directly from the maintenance rehearsal loop
used to keep items in primary memory. In other words, it reflects short-term memory for the items. Consistent with
this, the recency effect can be sharply attenuated by manipulations that adversely affect such rehearsal, such as
delaying recall of list items with a distractor task, or using list items that sound similar.
The recency effect was important to cognitive science because it provided empirical evidence for the decomposition
of memory into an organized set of subsystems, which is required by functional analysis.

See Also:
Free Recall | Primacy Effect | Serial Position Curve

Recognition Recall
This is a variation of the recall portion of a memory task. The subject is not required to explicitly state the items, but
instead, they must simply identify which items (from a larger group of items) were on the original list.
For instance, the subject may be read a large list of items and be asked to say "YES" if the item was on the list, and
say "NO" if it was not on the list.
This task is slightly easier than the cued or free recall task. The answers provided by the subject fall into four
categories (a small scoring sketch follows the list):
1. HITS: These are the responses that correctly identify items as being from the original list when they
actually are.
2. CORRECT NEGATIVES: These are the responses that correctly state an item as not being on the original
list when it actually was not.
3. MISSES: These are the responses that fail to identify a word as being from the original list when it was.
4. FALSE POSITIVES: These are responses that incorrectly identify items as being from the original list
when they were not on that list.
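Scoring such a task amounts to cross-classifying each response against the study list, as in this small Python sketch
(the item lists and responses are invented):

# Which probe items drew a "YES", scored against the original list.
study_list = {"apple", "chair", "river", "candle"}
responses = {"apple": "YES", "chair": "NO", "river": "YES",
             "cloud": "YES", "stone": "NO"}

hits = misses = false_positives = correct_negatives = 0
for item, answer in responses.items():
    if item in study_list and answer == "YES":
        hits += 1                  # studied item correctly accepted
    elif item in study_list:
        misses += 1                # studied item incorrectly rejected
    elif answer == "YES":
        false_positives += 1       # new item incorrectly accepted
    else:
        correct_negatives += 1     # new item correctly rejected

print(hits, misses, false_positives, correct_negatives)  # -> 2 1 1 1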

See Also:
Cued Recall | Free Recall

Recursive Decomposition
Recursive decomposition (Palmer & Kimchi, 1986) refers to the process whereby any complex informational event
at one level of description can be specified more fully at a lower level of description by decomposing the event into:
• a number of components and
• processes that specify the relations among these components
The information processing model of memory provides a good example of recursive decomposition.
The research strategy of functional analysis relies on the principle of recursive decomposition.
Recursive decomposition should not be equated with reductionism, which is based on the assumption that the best or
correct level of description is the most specific one (e.g., at the level of physics).
References:
1. Medin, D.L., & Ross, B.H. (1992). Cognitive psychology. Fort Worth, TX: Harcourt Brace Jovanovich.
2. Palmer, S., & Kimchi, R. (1986). The information processing approach to cognition. In T. Knapp & L. Robertson
(Eds.), Approaches to cognition. Hillsdale, NJ: Erlbaum.
See Also:
Functional Analysis

Relative Complexity Evidence

One of the key goals of cognitive science is to develop theories that are strongly equivalent with respect to to-be-
explained systems. This requires that evidence be collected to defend the claim that the model and the to-be-
explained system are carrying out the same procedures to compute a function.
One type of evidence that can be used to defend this claim is called relative complexity evidence. Imagine that
someone is proposing that a Turing machine is a strongly equivalent model of how children do mental arithmetic. To
collect relative complexity evidence concerning this claim, we could present a number of different addition
problems to the Turing machine, and then rank order them in terms of the number of processing steps that each
problem required. We could then present the same problems to a group of children, and rank order the difficulty they
caused the children on the basis of reaction time taken to solve the problems. If the two systems are strongly
equivalent, then we would expect the same rank-orderings to be obtained for both the Turing machine and the
children. If they are not strongly equivalent (as we would expect in this example), then different rank-orderings
would emerge, because different procedures are being used to solve the problems.
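In practice, this comparison amounts to correlating two rank orders. The following Python sketch uses Spearman's
rank correlation; the step counts and reaction times are invented for illustration.

# Invented data: steps the model needs per problem, and mean child
# reaction times (in seconds) for the same problems.
model_steps = {"3+4": 12, "8+7": 31, "2+2": 6, "9+6": 27}
child_rt = {"3+4": 1.9, "8+7": 3.4, "2+2": 1.1, "9+6": 3.0}

def ranks(scores):
    ordered = sorted(scores, key=scores.get)
    return {problem: rank for rank, problem in enumerate(ordered)}

# Strong equivalence predicts the two rank orders agree: a Spearman
# coefficient near 1 supports the claim, a low one counts against it.
r_model, r_child = ranks(model_steps), ranks(child_rt)
n = len(model_steps)
d2 = sum((r_model[p] - r_child[p]) ** 2 for p in model_steps)
spearman = 1 - 6 * d2 / (n * (n * n - 1))
print(spearman)  # -> 1.0 for these invented numbers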
References:
1. Pylyshyn, Z.W. (1984). Computation and cognition. Cambridge, MA: MIT Press.

See Also:
Intermediate State Evidence | Protocol Analysis | Strong Equivalence

Retrieval

Retrieval refers to the processes through which we recover items from memory.

See Also:
Working Memory

Ryle's Regress
Ryle's Regress is a classic argument against cognitivist theories, and concludes that such theories cannot be
scientific. The philosopher Gilbert Ryle (1949) was concerned with critiquing what he called the intellectualist
legend, which required intelligent acts to be the product of the conscious application of mental rules. Ryle (p. 31)
argued that the intellectualist legend results in an infinite regress of thought:
According to the legend, whenever an agent does anything intelligently, his act is preceded and steered by another
internal act of considering a regulative proposition appropriate to his practical problem. [...] Must we then say that
for the hero's reflections how to act to be intelligent he must first reflect how best to reflect how to act? The
endlessness of this implied regress shows that the application of the criterion of appropriateness does not entail the
occurrence of a process of considering this criterion.
Variants of Ryle's Regress are commonly aimed at cognitivist theories. For instance, in order to explain the behavior
of rats, Edward Tolman (e.g., 1932, 1948) found that he had to use terms that modern cognitive scientists would be
very comfortable with. For instance, Tolman suggested that his rats were constructing a "cognitive map" that helped
them locate reinforcers, and he used intentional terms (e.g., expectancies, purposes, meanings) to describe their
behavior. This led to a famous attack on Tolman's work by Guthrie (1935, p. 172):
Signs, in Tolman's theory, occasion in the rat realization, or cognition, or judgement, or hypotheses, or abstraction,
but they do not occasion action. In his concern with what goes on in the rat's mind, Tolman has neglected to predict
what the rat will do. So far as the theory is concerned the rat is left buried in thought; if he gets to the food-box at
the end that is his concern, not the concern of the theory.
Cognitive scientists must be constantly aware of Ryle's Regress as a potential problem with their theories, and must
ensure that their theories include a principled account of how the (potentially) infinite regress that emerges from
functional analysis can be stopped. This is why the identification of the functional architecture is one of the
fundamental goals of cognitive science.
References:
1. Guthrie, E.R. (1935). The psychology of learning. New York: Harper.
2. Ryle, G. (1949). The concept of mind. London: Hutchinson & Company.
3. Tolman, E.C. (1932). Purposive behavior in animals and men. New York: Century Books.
4. Tolman, E.C. (1948). Cognitive maps in rats and men. Psychological Review, 55, 189-208.

See Also:
Functional Analysis | Functional Architecture | Primitive

Schema
A schema representation is a way of capturing the insight that concepts are defined by a configuration of features,
each of which specifies the value the object has on some attribute. A schema represents a concept by pairing each
attribute with a particular value and stringing the attribute-value pairs together. Schemas are a way of encoding
regularities in categories, whether these regularities are propositional or perceptual. They are also general, rather
than specific, so that they can be used in many situations.
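A schema can be sketched as a set of attribute-value pairs, as below; the attributes, values, and the crude matching
rule are our own illustration.

# An illustrative schema for the concept "bird".
bird_schema = {
    "covering": "feathers",
    "locomotion": "flies",
    "size": "small",
    "habitat": "trees",
}

def matches(schema, instance):
    # Count how many attribute-value pairs the instance satisfies --
    # one crude way of grading how well it fits the category.
    return sum(instance.get(a) == v for a, v in schema.items())

robin = {"covering": "feathers", "locomotion": "flies",
         "size": "small", "habitat": "trees"}
penguin = {"covering": "feathers", "locomotion": "swims",
           "size": "medium", "habitat": "ice"}
print(matches(bird_schema, robin), matches(bird_schema, penguin))  # -> 4 1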
References:
1. Anderson, J.R. (1990). Cognitive psychology and its implications. New York, NY: Freeman.

Semantics
Semantics deals with the relationship between representations and the world. Anything which can be said to be a
representation -- which could be said to stand for, represent, point to, indicate, mean, refer to, or in some way be
about something else -- has semantic relations to that something else. Semantics is what makes the word "coffee"
mean that smelly muddy brown hot liquid that people drink.
A representation's semantic properties are those properties the representation has in virtue of the sort of relationship
the representation has with a part of the world. So when we talk about what object (the thing in the world) a
representation represents, or whether the representation is a true representation of its object, or whether it is a highly
inaccurate representation of that object, or whether it misrepresents that object, we are talking about the
representation's semantic properties.
The problem is that if cognitive scientists define the essence of cognition as processes operating on representations,
then any process which operates on a representation has no access to that representation's semantic properties.
Fodor's (1980) Formality Condition maintains that any process which operates on a representation can only operate
on the representation's nonsemantic, or formal, properties.
The idea, then, is that if a process which operates on a representation is to be sensitive to the semantic properties of
the representation, such as what object it represents, then that representation's semantic properties must somehow
be mirrored in the representation's syntactic properties. So my cow representation must be fairly complex, and must
somehow "contain" formal descriptions of all the properties I ascribe to cows, so that processes which operate on
this representation (such as those which allow me to utter "Cows give milk") can operate on those properties.
But whether the properties I ascribe to cows in such formal descriptions are true of cows is inaccessible to those
processes. Whether what I believe is true or not is a semantic property of that representation's relationship with the
world. And semantic properties like truth are transparent to the processes that operate on my representations.
Perhaps the best we can hope is that the formal properties of all my representations are consistent, and form a
coherent network of beliefs that facilitates my acting successfully in my environment. Whether these beliefs are true
is inaccessible to the brain processes which operate on those representations. (Hence what Fodor (1980) calls
"Methodological Solipsism".)
References:
1. Fodor, J. (1980). "Methodological solipsism considered as a research strategy in cognitive psychology".
Behavioral and Brain Sciences, 3(1), 63-109.
2. Fodor, J. (1978). "Tom Swift and his procedural grandmother". Cognition, 6.
(Both of these are reprinted in Fodor, J. (1981). Representations. Brighton, UK: The Harvester Press, pp. 204-224
and pp. 225-256.)

See Also:
The Formality Condition | Misrepresentation | Representation

Serial Position Curve


The serial position curve is used to plot the results of a free recall experiment. The x-axis of this curve indicates the
serial position of to-be-remembered items in the list (e.g., the first item, the second item, the third item, and so on).
The y-axis of this curve indicates the probability of recall for the item, which is typically obtained by averaging
across a number of subjects.
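Constructing the curve is simple averaging, as in this Python sketch with invented recall data:

# Each inner list holds the serial positions (1-15) one subject
# recalled from a 15-item list.
recalls = [
    [1, 2, 3, 13, 14, 15],
    [1, 2, 12, 14, 15],
    [1, 3, 7, 13, 14, 15],
]

list_length = 15
curve = [sum(pos in subject for subject in recalls) / len(recalls)
         for pos in range(1, list_length + 1)]
for pos, p in enumerate(curve, start=1):
    print(pos, round(p, 2))  # high at both ends, low in the middle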
The serial position curve is important to cognitive science because it revealed two effects, the recency effect and the
primacy effect, which were fundamentally important pieces of evidence for the functional decomposition of
"memory" into an organized set of subsystems.

See Also:
Free Recall | Primacy Effect | Recency Effect | Short Term Memory

Serial Search
A type of memory search in which information is retrieved one piece after another. Serial searches are represented
by a linear function. That is, when retrieval time is plotted against the number of items to be retrieved, the slope of
the graph is constant and equivalent to the amount of time that it takes to retrieve a single piece of information.
Serial memory search is often contrasted with parallel memory search, in which a number of pieces of information
are retrieved at the same time. Graphically, the slope of the line representing parallel search is zero. That is, as the
number of items to be retrieved increases, the amount of time that it takes to retrieve these items remains constant.
Sternberg (1966, 1969a, 1969b, 1975) argued that retrieval from short-term memory relies upon serial searches,
whereas retrieval from long-term memory relies upon parallel searches.
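The two predictions can be written down directly, as in this sketch; the intercept and per-item times are illustrative
values of our own choosing.

intercept_ms = 400  # assumed base time for encoding and responding
per_item_ms = 38    # assumed comparison time per item (serial only)

def serial_rt(n_items):
    return intercept_ms + per_item_ms * n_items  # constant, nonzero slope

def parallel_rt(n_items):
    return intercept_ms                          # slope of zero

for n in (1, 3, 5):
    print(n, serial_rt(n), parallel_rt(n))
# Serial retrieval time grows linearly with set size; parallel
# retrieval time stays flat.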

See Also:
Short Term Memory

Short Term Memory


Generally, cognitive psychologists divide memory into three stores: the sensory store, the short-term store, and the
long-term store. After entering the sensory store, some information proceeds into the short-term store. This
short-term store is commonly referred to as short-term memory.
Short-term memory has two important characteristics. First, short-term memory can contain at any one time seven,
plus or minus two, "chunks" of information. Second, items remain in short-term memory for around twenty seconds.
These unique characteristics, among others, suggested to researchers that short-term memory was autonomous from
the sensory and long-term memory stores.
Craik and Lockhart (1972) argued short-term memory was not autonomous from the other memory systems. They
suggested that short-term memory and long-term memory were different manifestations of a single, underlying
memory system.
As an alternative to short-term memory, Baddeley and Hitch have proposed the concept of a working memory. As in
traditional models of short-term memory, working memory is limited in the amount of information that it can store,
and the length of time that it can store information.

See Also:
Working Memory | Free Recall

Spontaneous Generalization

Connectionist networks may be designed so that they can retrieve information from cues that are too vague to match
a particular memory and provide a generalized picture of what is common to the memories that match the cues. Thus
the network has the ability to generalize about classes of memories as part of its architecture.
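A minimal sketch of this idea (far simpler than a real connectionist network) treats memories as feature vectors and
lets a vague cue retrieve the average of everything it matches:

# Memories as feature vectors; None in a cue marks an unknown feature.
memories = [
    (1, 1, 0, 1),
    (1, 1, 0, 0),
    (1, 1, 1, 1),
]

def retrieve(cue):
    matched = [m for m in memories
               if all(c is None or c == f for c, f in zip(cue, m))]
    # Average each feature over the matched memories, yielding a
    # generalized picture of what those memories share.
    return tuple(sum(f) / len(matched) for f in zip(*matched))

# A cue too vague to pick out one memory returns the shared prototype.
print(retrieve((1, None, None, None)))  # -> (1.0, 1.0, 0.33..., 0.67...)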
References:
1. Bechtel, W., & Abrahamsen, A. (1991). Connectionism and the mind: An introduction to parallel
processing in networks. Cambridge, MA: Blackwell.

See Also:
Content Addressable Memory | Functional Architecture | Graceful Degradation | Parallel Distributed Processing
Models | Symbolic Architecture

Strong Equivalence
Strong equivalence is a stronger condition for model validation than is weak equivalence. If two systems are
strongly equivalent then
1. they compute the same function (i.e., they are weakly equivalent),
2. they use the same program to compute this function, and
3. this program is written in the same programming language (i.e., the two systems have the same functional
architecture).
As far as "algorithmic" approaches to cognitive science are concerned (e.g., experimental psychology,
psycholinguistics), the aim of the discipline is to generate strongly equivalent theories of people. This requires
collecting evidence to support the claim that a simulation uses the same procedures to solve a problem as do human
subjects, as well as evidence to support the claim that a proposed architecture is primitive. It is not surprising, then,
that the search for strongly equivalent theories is a formidable (but necessary) challenge for cognitive scientists.
References:
1. Pylyshyn, Z.W. (1984). Computation and cognition. Cambridge, MA: MIT Press.

See Also:
Functional Architecture | Weak Equivalence

Sustained Attention

Sustained attention is "the ability to direct and focus cognitive activity on specific stimuli." In order to complete any
cognitively planned activity, any sequenced action, or any thought one must use sustained attention. An example is
the act of reading a newspaper article. One must be able to focus on the activity of reading long enough to complete
the task. Problems occur when a distraction arises: a distraction can interrupt, and consequently interfere with,
sustained attention.
DeGangi and Porges (1990) indicate there are three stages to sustained attention:
1. attention getting,
2. attention holding, and
3. attention releasing.
Sustained attention is important to psychologists because it is "a basic requirement for information processing."
Therefore, sustained attention is important for cognitive development. When a person has difficulty sustaining
attention, they often present with an accompanying inability to adapt to environmental demands or modify
behaviour (including inhibition of inappropriate behaviour).
References:
1. DeGangi, G., & Porges, S. (1990). Neuroscience foundations of human performance. Rockville, MD:
American Occupational Therapy Association.

See Also:
Attention Getting | Attention Holding | Attention Releasing

Symbolic Architecture
Symbolic architecture refers to the classical view of the architecture of the mind. In this approach the mind is
viewed as a process in which symbols are manipulated. Symbols are moved between memory stores such as long
term and short term memory and are acted upon by an explicit set of rules in a particular sequence. The symbolic
architecture is the manner in which memory stores are related and the set of rules applied to the system.
The symbolic architecture approach has been widely applied and formed the basis of influential work such as
Newell & Simon's Human Problem Solving. More recently, this approach to cognitive architecture has been
challenged by the connectionist architecture approach.
References:
1. Collins, A., & Smith, E.E. (Eds.). (1988). Readings in cognitive science: A perspective from psychology and
artificial intelligence. San Mateo, CA: Morgan Kaufmann.

See Also:
Functional Architecture

Top-down Processing
The cognitive system is organized hierarchically. The most basic perceptual systems are located at the bottom of the
hierarchy, and the most complex cognitive systems (e.g., memory, problem solving) are located at the top.
Information can flow both from the bottom of the system to the top and from the top of the system to the bottom.
When information flows from the top of the system to the bottom, this is called "top-down processing".
The implication of this top-to-bottom flow is that information coming into the system (perceptually) can be
influenced by what the individual already knows about it, since information about past experiences is stored in the
higher levels of the system.
Extreme versions of top-down processing hold that all information coming into the system is affected by what is
already known about the world. An alternative view is offered by Jerry Fodor (1983). In his theory of modularity,
Fodor argues that top-down processing occurs only in some parts of the cognitive system at certain times. Fodor
rejects the idea that all stored information can potentially affect all incoming information.
References:
1. Fodor, J.A. (1983). The modularity of mind. Cambridge, MA: MIT Press.

See Also:
Bottom-Up Processing | Modularity

Weak Equivalence

Weak equivalence is a relationship between the outputs of two systems that are being compared. If these systems are
only weakly equivalent, then we can say that they are computing the same function (or generating the same external
behavior), but that they are using different procedures to do so. For example, human chess players and computer
chess players are weakly equivalent, in the sense that they both play the game of chess, but use very different
procedures to decide which move to make next in a game. (Computer chess players usually use some form of
intensive search, which is beyond the memory capacity of human players. Indeed, an interesting question is how
humans can play chess so well given that they do not use brute force search methods!)
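A minimal sketch of weak equivalence: the two Python functions below compute the same function -- the sum
1 + 2 + ... + n -- by different procedures, so no test of their outputs alone can tell them apart.

def sum_by_counting(n):
    total = 0
    for i in range(1, n + 1):  # serial, step-by-step procedure
        total += i
    return total

def sum_by_formula(n):
    return n * (n + 1) // 2    # closed-form, single-step procedure

# Identical input-output behaviour across a range of inputs...
print(all(sum_by_counting(n) == sum_by_formula(n) for n in range(50)))
# ...but distinguishing the procedures would require other kinds of
# evidence, such as relative complexity or intermediate states.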
Weak equivalence is important in cognitive science in two respects. First, it is the kind of comparison that the Turing
test offers, which is why it is also sometimes called Turing equivalence. Second, although weak equivalence is
necessary for validating theories in cognitive science, it is not sufficient. This is because while it is required of
theories or simulations in cognitive science that they compute the same functions as the to-be-explained system, it is
also crucial that they compute these functions in the same way. This latter requirement is called strong equivalence.
References:
1. Pylyshyn, Z.W. (1984). Computation and cognition. Cambridge, MA: MIT Press.

See Also:
Strong Equivalence | Turing Test
Turing Test
The Turing test is a behavioural approach to determining whether or not a system is intelligent. It was originally
proposed by mathematician Alan Turing, one of the founding figures in computing. Turing argued in a 1950 paper
that conversation was the key to judging intelligence. In the Turing test, a judge has conversations (via teletype) with
two systems, one human, the other a machine. The conversations can be about anything, and proceed for a set period
of time (e.g., an hour). If, at the end of this time, the judge cannot distinguish the machine from the human on the
basis of the conversation, then Turing argued that we would have to say that the machine was intelligent.
There are a number of different views about the utility of the Turing test in cognitive science. Some researchers
argue that it is the benchmark test of what Searle calls strong AI, and as a result is crucial to defining intelligence.
Other researchers take the position that the Turing test is too weak to be useful in this way, because many different
systems can generate correct behaviours for incorrect (i.e., unintelligent) reasons. Famous examples of this are
Weizenbaum's ELIZA program and Colby's PARRY program. Indeed, the general acceptance of ELIZA as being
"intelligent" so appalled Weizenbaum that he withdrew from mainstream AI research, which he attacked in his
landmark 1976 book.
References:
1. Colby, K.M., et al. (1972). Artificial paranoia. Artificial Intelligence, 2, 1-26.
2. Colby, K.M., et al. (1973). Turing-like indistinguishability tests for the validation of a computer simulation
of paranoid processes. Artificial Intelligence, 3, 47-51.
3. Turing, A.M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.
4. Weizenbaum, J. (1976). Computer power and human reason. San Francisco, CA: W.H. Freeman.

See Also:
Turing Equivalence | Weak Equivalence

Veridicality
Veridicality is the extent to which a knowledge structure accurately reflects the information environment it
represents. This is a construct of interest as our understanding of the relationship between knowledge structures and
information environments is weak. In particular, the optimal level of veridicality is problematic. The value of a
knowledge structure lies in its ability to simplify an environment, yet simplification increases the probability of a
false characterisation, and hence error. The study of veridicality investigates the consequences of this trade-off
between accuracy and efficiency.
References:
1. Walsh, J.P., Henderson, C.M., & Deighton, J. (1988). Negotiated belief structures and decision performance:
An empirical investigation. Organizational Behavior and Human Decision Processes, 42, 194-216.


Visuospatial Perception
This is one component of cognitive functioning and it refers to our ability to process and interpret visual information
about where objects are in space.
This is an important aspect of cognitive functioning because it is responsible for a wide range of activities of daily
living.
For instance, it underlies our ability to move around in an environment and orient ourselves appropriately.
Visuospatial perception is also involved in our ability to accurately reach for objects in our visual field and our
ability to shift our gaze to different points in space.
The association areas of the visual cortex are separated into two major component pathways, and are believed to
mediate different aspects of visual cognition. In humans, the parieto-occipital region is believed to process
visuospatial and visual motion types of information. Conversely, the inferotemporal region of the brain is believed to
mediate our ability to process visual information about the form and color of objects.
References:
1. Kolb, B., & Whishaw, I. (1985). Fundamentals of human neuropsychology (2nd ed.). New York: W.H.
Freeman.
2. Pinel, J. (1993). Biopsychology (2nd ed.). Toronto: Allyn & Bacon.

See Also:
Apparent Motion

Visuospatial Sketchpad
The visuospatial sketchpad or scratchpad (VSSP) is one of two passive slave systems in Baddeley's (1986) model of
working memory. The VSSP is responsible for the manipulation and temporary storage of visual and spatial
information. To date, more is known about the second slave system, the articulatory loop, than about visual coding
in memory.
References:
1. Baddeley, A. (1986). Working memory. Oxford: Clarendon Press.

See Also:
Articulatory Loop | Central Executive | Working Memory

WAIS
The Wechsler Adult Intelligence Scale (WAIS) was developed by Wechsler in 1955. An updated version of the scale
(WAIS-R) was developed in 1981. WAIS measures global or general intelligence and is commonly used by
psychologists. It is divided into two parts: the verbal scale and the performance scale. Each of these two parts is
further divided into subtests, each of which taps a specific verbal or nonverbal skill. Each subtest has items ranging
from easy to increasingly more difficult.
Verbal subtests measure "our store of knowledge" (Belsky, 1990, p. 120). They focus on
learned or absorbed knowledge [testing] knowledge of historical, literary or biological facts; knowledge relating to
competent functioning in the world; knowledge of mathematics; knowledge of the meaning of specific words.
Performance subtests (except picture completion) contain relatively unfamiliar tasks. Speed is critical, as these
subtests are timed. They measure
on-the-spot analytical skills, how well a person can master a new, never before encountered problem (Belsky, 1990,
p. 120).
A person's IQ is derived by comparison to a particular reference group: people of the test subject's age group.
Therefore, the raw score has a different meaning depending upon the test subject's age.
The WAIS is not only important to psychologists as a commonly used assessment tool, but it is often at the centre of
the debate of whether or not intelligence declines with age. It is questionable whether the current intelligence tests
(specifically the WAIS) are appropriate for use with older persons. Belsky (1990) says critics must be
looking critically at the appropriateness of the measures themselves, questioning whether existing tests of
intelligence are really doing an adequate job of tapping cognitive ability in middle-aged and elderly adults (p. 119).
Belsky further asks if
the dramatic age decline is confined mainly to particular subtests. Would we see the same age loss if we looked at
data other than the cross-sectional studies used to determine the norms? (p. 121).
References:
1. Belsky, J.K. (1990). The psychology of aging: Theory, research, and interventions. Pacific Grove, CA:
Brooks/Cole.

See Also:
Crystallized Intelligence | Fluid Intelligence

Wernicke's Area
Named for Carl Wernicke, who first described it in 1874, Wernicke's area appears to be crucial for language
comprehension. People who suffer from neurophysiological damage to this area (a condition called Wernicke's
aphasia or fluent aphasia) are unable to understand content words while listening and unable to produce meaningful
sentences; their speech has grammatical structure but no meaning.
Auditory and speech information is transported from the auditory area to Wernicke's area for evaluation of
significance of content words, then to Broca's area for analysis of syntax. In speech production, content words are
selected by neural systems in Wernicke's area, grammatical refinements are added by neural systems in Broca's area,
and then the information is sent to the motor cortex, which sets up the muscle movements for speaking.
References:
1. Gray, P. (1994). Psychology. New York, NY: Worth Publishers.

See Also:
Broca's Area

Working Memory
Working memory, the more contemporary term for short-term memory, is conceptualized as an active system for
temporarily storing and manipulating information needed in the execution of complex cognitive tasks (e.g., learning,
reasoning, and comprehension). There are two types of components: storage and central executive functions (see
Baddeley, 1986, for a review). The two storage systems within the model (the articulatory loop [AL] and the
visuospatial sketchpad or scratchpad [VSSP]) are seen as relatively passive slave systems primarily responsible for
the temporary storage of verbal and visual information, respectively.
The most important, and least understood, aspect of Working Memory is the central executive, which is
conceptualized as very active and responsible for the selection, initiation, and termination of processing routines
(e.g., encoding, storing, and retrieving).
References:
1. Baddeley, A. (1986). Working memory. Oxford: Clarendon Press.

See Also:
Articulatory Loop | Central Executive | Encoding | Retrieval | Visuospatial Sketchpad

Z Lens
The Z Lens is a sophisticated piece of apparatus developed by Roger Sperry and his associates to enable them to
project visual stimuli onto the retina of the eye so that they are interpreted by either the left or the right hemisphere
of the brain, but not both at once. Sperry, a pioneer of the split-brain operation, used it to demonstrate that
split-brain patients had two separate visual inner worlds. If the picture of an object was presented to the left
hemisphere, the patient recognized it when it was presented again to the same hemisphere. However, if the same
object was presented to the other half of the visual field, the patient had no recollection of having seen it before.
References:
1. Kristal, L. (Ed.). (1981). ABC of psychology. London: Multimedia Publications.
