
COGNITIVE NEUROPSYCHOLOGY, 1988, 5(1), 3-25

Semantic Systems or System? Neuropsychological Evidence Re-examined

M.J. Riddoch, G.W. Humphreys, M. Coltheart, and E. Funnell

Birkbeck College, University of London, London, U.K.

In this paper we consider whether our stored conceptual knowledge about stimuli is represented within a single semantic system which is indifferent to
the modality of stimulus presentation, or whether conceptual knowledge is
represented in different semantic systems according to either the modality
of stimulus presentation or the nature of the concept (e.g. whether the concept
specifies visual or verbal knowledge about an object). Previous work suggests
three areas of neuropsychological research which are relevant to this issue:
modality-specific aphasias, modality-specific priming on semantic access dis-
orders, and modality-specific aspects of semantic memory disorders. Evidence
from each of these areas is reviewed and we argue that there is only equivocal
support for the multiple semantic systems position. We outline an alternative
account which distinguishes between a single amodal semantic system and
modality-specific perceptual recognition systems, and we discuss the evidence
in light of this single semantic system account.

INTRODUCTION
Access to a semantic system is necessary for comprehension to occur. Com-
prehension is more than mere recognition: it is possible to recognise that
something has been encountered previously, without necessarily knowing
how that item differs functionally from other similar items, how it is related
to other items, or how it might be used. We take it that a semantic system
specifies the kind of knowledge that allows decisions to be made concerning
the functional and associative characteristics of things.

Requests for reprints should be addressed to M.J. Riddoch, Department of Paramedical Sciences, North East London Polytechnic, Romford Road, London E15 4LZ.
This paper was presented to the Venice meeting on Cognitive Neuropsychology in March
1985. The work was supported by grants from the Chest, Heart and Stroke Association and
the MRC to the first two authors and from an MRC grant to the third and fourth authors.
We thank members of the Birkbeck College Cognitive Neuropsychology group for comments,
and in particular Philip Quinlan for his help in preparing the paper.

© 1988 Lawrence Erlbaum Associates Limited



[Figure 1: the input account, with a visual semantic system (semantics from visual input) and a verbal semantic system (semantics from verbal input).]

Our paper is concerned with the following issue: is there a single semantic
system which is used for all comprehension tasks, regardless of the modality
of the stimulus input or the nature of the semantic information required
by the task? Or are there, instead, separate semantic systems associated
with separate modalities of input or separate types of semantic information?
The second of these views is often expressed by distinguishing “visual
semantics” from “verbal semantics”. However, the distinction can be inter-
preted in a variety of rather different ways. Clarification is therefore impera-
tive here.
As a first point: the terms “visual” and “verbal” are slightly unfortunate,
since all those who subscribe to the distinction between visual and verbal
semantics would regard reading comprehension as involving access to the
verbal semantic system even though the stimuli involved are visual. So it
might be wise to adopt a slightly different terminology: one might refer to
pictorial semantics (covering objects and pictures) and verbal semantics
(covering both spoken and written words). However, to be consistent with
previous work, we will continue to use the current terminology.
The two rather different interpretations of the distinction between visual
semantics and verbal semantics extant in the literature are as follows.
1. Modality of input. Comprehension of pictures or objects depends upon access to a visual semantic system. Comprehension of words depends upon access to a verbal semantic system. Analogously there will be a tactile
semantic system (for comprehending touched or felt objects), an auditory
semantic system (for comprehending nonverbal sounds), an olfactory
semantic system, and so on. We will refer to this as the input account of
the distinction between semantic systems. We take this to be the view held
by Warrington (see, e.g., Warrington, 1975) and by Shallice (see, e.g.,
Shallice, 1987). One important implication of their views appears to be
that semantic information is duplicated in the two systems. The fact that
a cat drinks milk is represented in verbal semantics (which is how we know
that milk-drinking is associated with the printed word cat) and also in visual
semantics (which is how the property of milk-drinking can be evoked when
we see a cat). It is because of this duplication that the same question (e.g.
“Is it larger than a telephone directory?”) can be treated as a test of access
to visual semantics (when the stimulus is a picture) and as a test of access
to verbal semantics (when the stimulus is a word: see Warrington, 1975)¹.
A processing framework consistent with this account is given in Fig. 1.

¹There are also further refinements to the input modality semantic systems argument. For
instance, there may be further, differential representation within each semantic system for
stimuli according to the nature of their defining attributes. We may distinguish here between
perceptual (colour, shape, size, etc.) and functional attributes of objects. Such attributes are
not equally well represented in all object and word classes. Concrete, but not abstract, words
can be defined in terms of both perceptual and functional attributes of the underlying concept;
also, in the object domain, tools and items of furniture can be defined on the basis of both
their perceptual and functional attributes whilst exemplars from other categories (such as
animals, fruit, etc.) are more likely to be defined in terms of their perceptual attributes. These
differing attributes could be separately represented within the hypothesised visual and verbal
semantic systems, making certain objects or words especially vulnerable to impairment if
there is degeneration of the semantic system. Alternatively, different objects and words could
be separately represented according to their defining attributes.
In the pioneering work of Warrington (1975), no distinction was made between perceptual
and other attributes of objects and words, and, judging by the choice of questions used to
probe recognition performance, the assumption seems to be that perceptual and functional
attributes are equally represented in the visual and verbal semantic systems (e.g. the series
of probes included the questions “is it English or not?”, “is it larger than a telephone direc-
tory?”, “is it used indoors?”, “is it made of metal?” etc.). In later work, Warrington and
McCarthy (1983) and Warrington and Shallice (1984) have distinguished between a patient
with impairments in accessing knowledge about inanimate objects from audition and patients
with impairments in accessing knowledge about living things and foods from vision. They
interpret such deficits as indicating category-specific representation of objects and words
within both the visual and verbal semantic systems.
Interestingly, Warrington and McCarthy (1983) argue that the category-specific deficits in
one patient (VER) reflect a problem in the processes involved in gaining access to relevant
semantic knowledge, rather than impairments to the semantic representations themselves.
One implication of this would appear to be that there are independent access routes to the
differentiated parts of the visual and verbal semantic systems. In effect, this idea maintains
that there are separate semantic systems for different categories of objects and words, for
each input modality.
2. Modality of information. Access to information about visual/pictorial properties of a concept depends upon access to a pictorial semantic system
in which all information about such properties is stored. In order to retrieve
this information all stimuli would have to access the pictorial semantic
system, irrespective of the modality of input. Access to information about
verbal properties of a concept depends upon access to a verbal semantic
system, again irrespective of the modality of stimulus input. We will refer
to this as the representation account of the distinction between semantic
systems. We take this to be the view held by Beauvois (1982; see also
Beauvois & Saillant, 1985). Figure 2 gives a processing framework illustrating the representation account.
To demonstrate the difference between the input and the representation
accounts, consider how the two accounts explain how we answer a simple
auditory question, such as “What is the colour of a strawberry?”. According
to the input account, we must access the verbal semantic system to answer
this question because the information is presented auditorily. According to
the representation account, we must access the visual semantic system to
answer this question because information about the colour of a strawberry
is part of our visual semantic knowledge.
Whether approach (1) or approach (2) is adopted, it is necessary to
postulate bi-directional links between the two semantic systems. The input
account needs such links to explain our ability to match pictures to words.
The representation account needs such links to explain our ability to co-ordinate pictorial semantic information with verbal semantic information: our ability, for example, to answer such questions as "How large is the domestic animal which kills mice?".

[Figure 2: the representation account, with visual semantics (pictorial information about concepts) and verbal semantics (verbal information about concepts).]
There are thus two different accounts which assume that there exist
separate semantic systems. We also need to incorporate the process of
access to phonology for the purpose of naming. So an answer needs to be
provided to this question: is it only the verbal semantic system (whether
defined in sense [1] or in sense [2]) that can access the name of a concept
directly, or are there direct links from representations in visual semantics
(however defined) to names? This question does not arise, of course, for those who take the view that there is only a single semantic system, accessible from all modalities of input and containing all forms of semantic information.
The relationship between output phonology and the different semantic
systems is only explicit in the representation account of modality-specific
semantic systems. For instance, in order to account for optic aphasia (a
naming impairment specific to visual stimuli), Beauvois (1982; Beauvois
& Saillant, 1985) posits that the bidirectional links between the visual and
verbal semantic systems are disrupted, so that such patients are impaired
at any tasks requiring mediation between the two semantic systems. It
follows that, in order to account for a visual naming impairment in these
terms, we must assume that there are no direct links between the visual
semantic system and output phonology; stored visual attributes of objects
can only be named via the verbal semantic system, and since the links from
visual to verbal semantics are thought to be impaired, this will affect the
naming of visually presented stimuli selectively. On the other hand, since
such patients are often able to mime the gestures to the objects that they
cannot name, it is posited that access to the visual semantic system is intact.
A rather different view comes from the work of Warrington (1975). Warrington argued that two patients, AB and EM, were selectively impaired
in the representation of visual and verbal semantic knowledge, where the
semantic knowledge systems were tested selectively by varying the input
modality. EM, hypothesised to have an impaired verbal semantic system,
was better at picture identification than AB, hypothesised to have an
impaired visual semantic system (the patients scored 37/40 vs. 19/40 respec-
tively on a picture recognition task). Accordingly, we may suggest the need
for direct connections between the visual semantic system and the phonolog-
ical output lexicon in order to explain EM's superior naming and definitions of pictures².
²Unfortunately, Warrington (1975) did not report EM's ability to name pictures separately
from her ability to give correct definitions of objects. Nevertheless, if EM has impaired
representation of verbal semantic knowledge, it would appear that her relatively good spoken
definitions of pictures must be mediated by the visual semantic system, implicating direct
connections between the visual semantic system and the phonological output lexicon.

In a recent discussion of the topic, Shallice (1987) has argued that there
are at least three lines of neuropsychological evidence which support the
existence of multiple semantic systems:
1. modality-specific aphasias;
2. modality-specific priming effects in access deficits to semantics; and
3. modality-specific aspects of semantic memory disorders.
Two questions may be raised: how strong is this evidence, and which variant
of the multiple semantic system view is supported: the input or the representation account?

MODALITY-SPECIFIC APHASIAS
Patients have been described with a naming impairment specific to a particu-
lar input modality; for example, impaired visual object naming relative to
auditory and tactile naming (Lhermitte & Beauvois, 1973; Riddoch &
Humphreys, 1987), impaired auditory relative to visual object naming
(Denes & Semenza, 1975), and impaired tactile naming relative to visual
object naming (Beauvois, Saillant, Meininger, & Lhermitte, 1978). Shallice
(1985) suggests that: “the simplest explanation for these syndromes is
that multiple semantic systems do exist but there is an impairment in the
transmission of information from one of the modality specific semantic
systems to verbal systems (including the verbal semantic system)”. Within
the framework of the representation account (Fig. 2), Shallice’s argument
would accord with the existence of a single lesion impairing connections
between one of the nonverbal semantic systems and the verbal semantic
system. Within the framework of the input account given in Fig. 1, there
need to be two separate lesions, one separating the visual semantic system
from the verbal semantic system and one separating the visual semantic
system from output phonology.
Shallice’s argument rests on the assumption that modality-specific
aphasias have been properly demonstrated and that, in such cases, there
is normal access to semantic knowledge (of at least some form). One
problem, though, is that in some cases the tests of naming presented in
different modalities have not used the same stimuli. It is difficult to ensure
that task difficulty was equated in such instances. For instance, the contrast
between the naming of tactilely presented real objects and pictures of
objects is not a valid test of a modality-specific naming deficit since less
information will be available in pictures than when real objects are presented
visually (cf. the discussion of patient JF by Lhermitte & Beauvois, 1973).
Nevertheless, Riddoch and Humphreys (Note 3; in press) have shown
poorer naming of the same objects presented visually than when they were
presented tactilely in a patient (JB) who could also make accurate gestures
to visually presented objects that he could not name. For instance, in a test of naming and gesturing to visually presented objects, JB was presented with 44 objects, each with a discriminably different gesture. JB named 20/44 (45%) of the objects correctly from their visual presentation; he made 33/44 (75%) correct gestures. There is a reliable difference between his naming and gesturing ability (McNemar's test, χ² = 8.45, P < 0.005). We
also showed that JB was impaired at accessing semantic (associative and
functional) knowledge about objects from vision (see Humphreys, Riddoch,
& Quinlan, this volume). From this work we take it that we cannot assume
that a patient has intact access to semantic knowledge when correct gestures
are made; it seems possible that gestures could be made on the basis of
other nonsemantic forms of information (the perceptual attributes of
objects or following access to stored structural knowledge). Thus access to
semantic knowledge must always be tested directly.
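The McNemar comparison reported above can be sketched as follows. The statistic is computed from the discordant pairs only (items correct on one task but not the other); since the item-by-item contingencies were not reported, the counts used below are hypothetical values chosen merely to be consistent with the reported marginals (20/44 named, 33/44 gestured correctly).

```python
# McNemar's test compares paired binary outcomes (here: naming vs.
# gesturing to the same 44 objects). Only the discordant pairs matter:
# b = items named but not gestured correctly, c = gestured but not named.

def mcnemar_chi2(b: int, c: int, continuity: bool = False) -> float:
    """Chi-squared statistic (df = 1) on the discordant pair counts."""
    diff = abs(b - c) - (1 if continuity else 0)
    return diff ** 2 / (b + c)

# Hypothetical discordant counts: c - b must equal 13, as the reported
# marginals (20/44 named vs. 33/44 gestured) require.
b, c = 4, 17
chi2 = mcnemar_chi2(b, c)
print(round(chi2, 2))  # well above the df = 1 critical value of 3.84 at P < 0.05
```

The exact value of the statistic depends on how the 13-item difference is distributed over discordant pairs, which is why the published χ² of 8.45 is not reproduced here.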
A patient (PWD) with a selectively impaired ability to name meaningful
nonverbal sounds in the presence of relatively good comprehension of
nonverbal sounds has been reported by Denes and Semenza (1975). PWD
demonstrated a gross impairment in both understanding and repeating
spoken speech whereas his spontaneous speech was fluent, demonstrating
normal prosody and articulation with no paraphasias. His naming of items
presented visually, tactilely, or olfactorily was perfect but he was only able
to name 4/20 (20%) of meaningful nonverbal sounds. PWD also carried
out a sound-picture match task where four pictures were used for each
target sound. The four pictures depicted:
1. the natural source of the sound;
2. an acoustically similar sound;
3. a source of sound from the same semantic category as the natural source; and,
4. an unrelated sound.
PWD scored 85% (17/20) correct on this test; such relatively good performance is difficult to account for if PWD's semantic system for auditory
nonverbal stimuli were impaired (cf. Shallice, 1985). Rather, it can be
argued that PWD had at least relatively intact knowledge of the meaning
and referents for sounds, enabling an improvement to occur in the picture-
sound match task, when he could use pictures to derive associated sound
information. Such an account would be inconsistent with the position that
PWD was impaired at accessing semantic knowledge about nonverbal
sounds.
Patient RG (Beauvois et al., 1978) is described as a case of tactile aphasia.
This patient was impaired at naming tactilely presented objects, right hand
71% (71/100), left hand 64% (64/100), while no impairment was
demonstrated in naming objects presented visually (96%, 96/100) or auditorily (98.8%, 79/80). Beauvois et al. suggest that RG is able to identify (and
presumably access the correct semantics of) objects presented tactilely because he can handle perfectly objects that he cannot name. As with cases
of optic aphasia, though, the inference of correct access to semantic know-
ledge because of correct gestures may not be valid. Gestures to tactile
stimuli, similarly to gestures from vision, may be based either on perceptual
information or on stored nonsemantic knowledge. Interestingly RG showed
good performance on a tactile-visual matching task (100%, 100/100; where
the patient had to decide whether an unseen object in his hand matched
a picture that was shown to him), whereas he performed slightly worse on a tactile-verbal matching task (84%, 84/100; where the patient had to decide whether an unseen object in his hand matched a name spoken by the examiner). Correct interpretation of the relations between data from
matching and naming tasks requires that we should correct the matching
data to take out those responses which may be correct by chance. Since
the chance level in the tactile-verbal matching task was 50%, correcting
the tactile-verbal matching performance for chance indicates that the prob-
ability of correct tactile-verbal matching was 0.68. This is equal to the
probability of correct tactile naming (0.68). It is therefore reasonable to
claim that all of RG’s failures of tactile naming were due to failures of
access to semantics, implying: (a) that no post-semantic naming difficulties
were involved here; and (b) that RG did not have available a route from
touch to naming that by-passes semantics. Shallice (1985) accounts for the
difference between the tactile-visual and the tactile-verbal matching tasks
by assuming that there is a deficit in the processes mediating communication
between the tactile and verbal semantic systems but not between the tactile
and visual semantic systems. However, on the basis of RG’s data alone we
cannot distinguish between input and representation views of the semantic
systems involved.
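The chance correction applied to RG's matching data can be made explicit. What follows is a minimal sketch of the standard correction for guessing, using the figures reported in the text (84% observed tactile-verbal matching, 50% chance level for a yes/no decision).

```python
def correct_for_guessing(p_observed: float, p_chance: float) -> float:
    """Estimate the probability of a genuinely correct response by
    removing the proportion of responses that were right by chance."""
    return (p_observed - p_chance) / (1 - p_chance)

# RG's tactile-verbal matching: 84% observed against a 50% chance baseline.
print(round(correct_for_guessing(0.84, 0.50), 2))  # 0.68, equal to tactile naming
```

The corrected probability of 0.68 is what licenses the comparison with RG's tactile naming in the argument above.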

MODALITY-SPECIFIC PRIMING
Patient AR (Warrington & Shallice, 1979) is described as a semantic access
dyslexic. AR was poor at naming words and objects, although his ability
to name objects from an auditory description was better than his naming
of the same objects from vision (11/15 vs. 1/15). AR was also reported as
being able to describe objects by their function. However, his ability to
name words (e.g. pyramid) was increased more by an auditory verbal
prompt (such as Egypt) than by presenting a picture corresponding to the
word.
Warrington and Shallice describe AR as having an access deficit because
his naming of words was inconsistent in different test sessions. Shallice
(1987) argues that his disorder is consistent with a theory in which modality-
specific semantic systems are posited because of the differences in priming
by auditory word and picture prompts. Accordingly he suggests that AR
has a deficit in the processes transmitting information between the visual
and verbal semantic systems, leading to the lack of picture priming.
Nevertheless, access to visual semantics is thought to be intact on the basis
of AR’s ability to give functional descriptions of objects. This account, then,
must posit that AR has at least two impairments, one in accessing the
verbal semantic system via print and one in the processes mediating visual
and verbal semantics. It also assumes that the same verbal system is addres-
sed both by auditory and by printed words (to account for the auditory-
printed word priming effect). Unlike other input modality accounts, though,
it must hold that there are no direct links from the visual semantic system
to phonology since AR was poor at picture naming (cf. Fig. 1).
One interesting aspect of AR’s performance was that, when he was unable
to name a printed word, he nevertheless showed access to some semantic
information about it. For example, he was considerably above chance at
judging the superordinate category to which the object corresponding to
the word belonged, and he was also relatively good at categorising words
as being surnames or forenames, the names of boys or girls, the names of
authors or politicians, etc. Despite such good categorisation, though, his
access to more precise information from print remained poor. This aspect
of AR’s performance leads to two questions: what form of access deficit
leads to the derivation of such partial knowledge from print, and was his
access to semantic information from print actually any worse than his access
to semantic knowledge from pictures? Given that AR’s access to semantic
knowledge from pictures was not fully tested, the latter question cannot
be evaluated: it remains feasible that AR had only partial access to semantic knowledge from both pictures and words. Note here, though, that if AR
did not achieve full semantic access for pictures, then the claim that he
failed to show priming effects from pictures on word naming is not particu-
larly interesting, and would be expected whether semantic information is
modality-specific or modality-independent. The nature of the access
deficit for words is also puzzling. For instance, AR’s performance did not
differ on questions probing superordinate and co-ordinate information from
words. It seems difficult to account for this lack of effect if the access
process is “noisy”, so that only superordinate information can be addressed
reliably. Without an adequate account of the access deficit, we may question
whether AR's relatively good ability on forced-choice compared with naming tasks is a function of the different decision processes required in the
two cases; for instance, sampling based on partial information will be better
when there is a limited set of possible responses.

MODALITY-SPECIFIC ASPECTS OF SEMANTIC MEMORY DISORDERS
Warrington (1975) used a cued definitions task in order to distinguish bet-
ween pictorial and verbal semantic systems. Patients were asked a series
of auditory questions probing increasingly specific information about a
given item. Items were either presented pictorially (with the assumption
being that the test assesses the visual semantic system) or their names were
presented auditorily (with the assumption being that the test then assesses
the verbal semantic system). Warrington demonstrated dissociations between different patients on this task. Warrington suggests that one patient
(AB) showed greater residual knowledge of objects if they were named
aloud than if they were shown as pictures, whereas the other patient (EM)
showed an opposite pattern of results. The patients made consistent errors
(see Warrington & Shallice, 1979), and Warrington attributes the deficits
to a degeneration of the verbal (in the case of EM) and the visual semantic
system (in the case of AB).
However, there are some problems with this argument. For instance,
both of these patients performed better when presented with pictures than
when the object names were read aloud, when we sum their scores over
all the probe questions (AB scored 112/160 on the pictorial version and
109/160 on the auditory version; EM scored 132/160 on the pictorial version
and 104/160 on the auditory version). It is difficult to reconcile AB’s overall
score with the argument that he has a selective loss of visual semantic
information.
Also both patients were consistently worse than control subjects on both
versions of the task, suggesting that both of the putative semantic systems
are impaired in both patients. Further, the differences between the perfor-
mance of each patient on the pictorial and auditory versions of the task
were not particularly consistent. Most interestingly, on the question “is it
English?”, both patients performed better on the auditory than the pictorial
version of the task (AB scored 14/20 and 9/20; EM scored 12/20 and 11/20
on the auditory and pictorial versions). The point here is that on the pictorial
version of the task it may be possible to answer many of the probe questions
correctly on the basis of information represented in the picture even without
access to semantic information; however, this is unlikely to be the case for
the “is it English?” question.
To make the pictorial and auditory versions of probe recognition tasks
equally sensitive to semantic knowledge, probes should not be answerable
from structural cues: on the only probe question used by Warrington which
seems to demand semantic access (“is it English?”), there is no evidence
for a selective impairment in verbal semantic knowledge. Accordingly, the
evidence from patients AB and EM does not clearly indicate the existence
of separate visual and verbal semantic systems.

Warrington and Shallice (1984) use data from two further patients, JBR and SBY, to support their notion of the existence of separate visual and
verbal semantic systems. Both patients were impaired in naming pictures
of living things relative to pictures of inanimate objects, and, similarly,
both patients were impaired at defining auditorily presented names of living
things relative to names of inanimate objects.
The scores of these two patients were comparable when they were asked
to give definitions based on visual and auditory presentations of a particular
item. Nevertheless, Warrington and Shallice argue that the data are supportive of modality-specific semantic systems because of the patterns of
responses made by the patients. Both JBR and SBY were consistent on
which items they defined correctly from visual presentation, and SBY was
consistent on the items he defined correctly from auditory presentation:
however, neither patient showed consistency on the inanimate items they
defined in both the auditory and the visual tests.
Warrington and Shallice propose that, because both patients showed
consistent patterns of impairment when retested using the same input mod-
ality (i.e., the patients consistently find some items more difficult than
others), the deficits can be attributed to a loss of stored (semantic) know-
ledge about animate objects. Also, since different items were found most
difficult in the visual and the auditory presentation conditions, there is a
loss of different types of stored knowledge: one type appropriate to visual
stimuli, one appropriate to auditory stimuli. The suggestion is that both
JBR and SBY have lost semantic knowledge appropriate to animate objects,
and that this knowledge has been lost separately from both the visual and
the verbal semantic systems.
Unfortunately, there remain several problems. For instance, Warrington
and Shallice were unable to test whether JBR showed consistency in his
performance to auditory items; the fact that there was no consistency bet-
ween his performance on auditory and visual presentations may not be
illuminating if there was also no consistency between his performance with
auditory stimuli on different occasions. Also, SBY did show consistency
across modalities in one test session (out of the four tests for consistency
across modalities conducted with this patient). In addition to this, it can
be argued that consistent patterns of responses are not necessarily indicative
of an impaired representation system (Humphreys et al., this volume), so
we should be wary in arguing that data concerning consistency of error
patterns alone are informative about the underlying representation system.
Finally, it is quite possible that giving information about a picture of a
stimulus will call on different knowledge to that required to give information
about the stimulus when its name is read aloud. The dependence of the
two tasks on different types of knowledge could produce inconsistent performance, even though the semantic information relating to the objects is
the same whether accessed pictorially or from their names. For example,
verbal definitions may typically be made on the basis of functional characteristics of objects, while picture recognition and naming may be critically
dependent on structural similarity between objects (e.g. see Humphreys
et al., this volume). Now, consider the case of a patient whose verbal
definitions tend to consist of superordinate information³ (e.g. let them be
descriptions of the action performed when using the object), and whose
visual identification is strongly affected by structural similarity. When given
the names “axe”, “hammer”, “saw”, and “knife”, such a patient may give
contrasting definitions to the first two names (e.g. chopping and hitting)
and similar definitions to the last two names (e.g. cutting). The verbal
definitions may distinguish “axe” from “hammer” but not “saw” from
“knife”. When given pictures of the same objects, however, such a patient
may be more likely to confuse axe and hammer (which would tend to be
more structurally similar) than saw and knife (being more structurally dis-
tinct). An inconsistent pattern of performance would arise even though
the same semantic information may be implicated in the two tasks.

AN ALTERNATIVE ACCOUNT
This review suggests that the evidence in favour of modality-specific seman-
tic systems is not unequivocal. Can it be interpreted in any other way? We
would like to put forward a theory of visual and auditory information
processing which assumes the existence of a single amodal semantic system,
and to re-examine the data in this light. A framework illustrating an
approach assuming a single amodal semantic system is given in Fig. 3.
As given in Fig. 3, the approach covers only the processing of auditory
and visual inputs. It assumes that auditory and visual information is mapped
onto pre-semantic perceptual recognition systems which for vision may
specify the orthographic descriptions of words (Morton & Patterson, 1980)
and stored structural descriptions of objects (cf. Humphreys et al., this
volume; Riddoch & Humphreys, 1987); for auditory stimuli, the percep-
tual storage systems may include an auditory input lexicon (Morton &
Patterson, 1980) and a system for the categorisation of nonverbal sounds.
These storage systems are thought to be pre-semantic in the sense that
knowledge about the associative and functional characteristics of objects
is not specified at this level; the latter information is specified in the
semantic system. The semantic system may also hold information about
certain categorised visual properties of objects (e.g. that John is tall and
thin). Clearly, it may be possible to derive such information about objects

³We should note that an inability to articulate other than superordinate information when
giving a verbal definition does not in itself constitute evidence for a loss of detailed semantic
knowledge. Patients, like control subjects, often have more knowledge about objects than
they can articulate on any given occasion.

[Figure 3 depicts visual input mapping onto an orthographic input lexicon
and a structural description system, and auditory input mapping onto an
auditory nonverbal sound system and an auditory input lexicon; each of
these four recognition systems is linked to a single semantic system.]

FIG. 3. A processing framework which distinguishes modality-specific perceptual recognition
systems from a single, amodal semantic system.

from their structural descriptions (where it will be coded implicitly); however,
the explicit representation of categorised visual properties in the
semantic system would facilitate the direct assignment of responses to this
information. There may also be direct links between the perceptual recog-
nition systems for words and the phonological output lexicon (cf. Morton
& Patterson, 1980; Schwartz, Marin, & Saffran, 1979) and between the
stored structural description system for objects and learned action routines
(cf. Riddoch & Humphreys, 1987).
Bi-directional links are posited between the semantic system and each
perceptual recognition system. For visually presented objects, such links
allow naming to occur and they enable verbal descriptions of their structural
attributes to take place (e.g. for the use of visual imagery). A similar
argument applies to the naming and description of nonverbal sounds.
According to this model, differences may occur when abstract and concrete
words are defined (cf. Warrington, 1975; Warrington & Shallice, 1984)
because in the latter case items are frequently defined in terms of structural
attributes and therefore possibly require a means of “looking up” these
attributes. Difficulties may occur if the links between the structural descrip-
tion and semantic systems are disrupted (Riddoch & Humphreys, 1987).
The bi-directional link between the structural description system and the
semantic system is also necessary in order to account for the ability to draw
from memory, when presumably a stored structural description is invoked.
The nature of the stored structural descriptions is also presumed to have
an effect on the operation of the semantic system. For example, certain
items are unambiguous in their stored structural description (e.g. scissors)
and these items may be contrasted with items that are visually similar to
other items (e.g. sheep, horse, cow). As a result of viewing an item with
a nonspecific structural description a whole class of items may be activated
both in the structural description system and in the semantic system. For
a patient with a semantic access impairment, this multiple activation may
make it difficult to identify stimuli with nonspecific structural descriptions
(cf. Humphreys et al., this volume).
We will consider whether the amodal semantic system account can accom-
modate the data from the brain-damaged subjects most pertinent to the
other models described in this paper.

Patient JF (Lhermitte & Beauvois, 1973)


JF, described by Lhermitte and Beauvois (1973), had a selective deficit in
naming pictures relative to naming tactilely presented objects. He could
also give gestures to the stimuli he could not name and he showed good
drawing from memory.
According to the amodal semantic system account, JF can be thought of
as having an impairment in the route leading from stored structural descrip-
tions of objects to the semantic system, producing a selective difficulty in
naming pictures. The route from the semantic system to the structural
description system is thought to be intact, however, since JF could draw
extremely well from memory. In other optic aphasic patients, though (e.g.
JB; Riddoch & Humphreys, 1987), there may be a bi-directional impairment
between the structural description system and semantic memory.
In order to account for the relatively preserved ability of optic aphasic
patients to gesture correctly to objects that they cannot name, the model
assumes that gestures may be made on the basis of stored structural infor-
mation and that access to such information is intact (Riddoch & Humphreys,
1987). It may be possible, therefore, for correct gestures to be unrelated
to the naming response on a particular trial (Lhermitte & Beauvois, 1973;
Riddoch & Humphreys, 1987). It may also be that in many tests of
gesturing, nonspecific responses have been accepted as correct (Ratcliff &
Newcombe, 1982).

One interesting aspect of JF’s performance noted by Lhermitte and
Beauvois (1973) was that: “when uninterrupted after an incorrect naming,
however, he often continued talking, gradually approaching the right
name”. To account for this, we would suggest that, having given an incorrect
name, JF could use the name to interrogate stored structural descriptions
and attempt to match them with the visually presented object. The intact
route between the semantic and the structural description system would
also allow JF to point to the appropriate picture of an object in response
to a spoken name. Neither of these aspects of performance should occur
in a patient with a bi-directional structural description/semantics impairment
(Riddoch & Humphreys, 1987).

Patient PWD (Denes & Semenza, 1975)


We may interpret PWD in a rather similar way to our interpretation of JF.
For instance, we may assume that there is an impairment to the processes
mapping perceptual representations of nonverbal sounds onto the semantic
system. Such an impairment would produce difficulties in naming nonverbal
sounds, though the relative preservation of the route from the semantic
system to the nonverbal sound system would enable sound-visual word
matching to operate. To account for PWD’s deficit in auditory word com-
prehension, we must assume either bi-directional impairments between the
auditory input lexicon and the semantic system or deficits to the processes
accessing the auditory input lexicon (e.g. either early perceptual processing
deficits or deficits to the representations themselves), since PWD was not
able to perform auditory word-visual word matches. Indeed, if we assume
that there is an impairment in relatively early perceptual processes (prior
to accessing either the auditory input lexicon or the nonverbal sound sys-
tem), we may account for both auditory deficits. Consistent with this is
PWD’s poor ability to repeat words. The effectiveness of sound-visual word
compared with auditory word-visual word matching may be because of
greater redundancy in nonverbal sound signals.

Patient RG (Beauvois et al., 1978)


In order to account for RG we must assume that a pre-semantic perceptual
recognition system exists for tactile input. RG appears to have no low-level
tactile deficit (e.g. light touch, pin prick, hair sensation, temperature, two-
point discrimination, position sense, vibration, localisation of touch, and
tests of perceptual rivalry appeared normal) and he was able to make
correct gestures to tactilely presented stimuli. It therefore appears that he
was able to access the tactile perceptual recognition system. His poor naming
but good gesturing of tactilely presented stimuli may be explained if
there is a bi-directional impairment to the processes transmitting informa-
tion from the tactile perceptual recognition system to the semantic system,
and if we assume that there are direct links from the tactile perceptual
recognition system to action routines. The bi-directional impairment would
give rise to difficulties in matching tasks requiring mediation between the
semantic and tactile perceptual recognition systems (e.g. verbal-tactile
matching); in contrast, visual-tactile matching may be relatively preserved
because it can operate directly from stimulus information (i.e. it need not
be semantically mediated). For example, visual-tactile matching may be
accomplished by normal subjects even with objects which are completely
unfamiliar.

Patient AR (Warrington & Shallice, 1979)


As noted earlier, AR was very impaired at naming visually presented items,
whether they were objects or words. Certain deficits were also apparent
in his aural comprehension because, although he was able to name an item
given an auditory definition, his performance was impaired on a task prob-
ing recognition of subordinate category information of single auditory words
(the difference between AR’s performance on subordinate questions given
visual and auditory presentations of words was not statistically significant
[χ² < 1.0], averaging over two sets of subordinate choice stimuli). Nonethe-
less, auditorily presented words facilitated the naming of printed words
better than a picture. AR was also better at matching a spoken word than
a written word to a picture (χ² = 21.86, P < 0.001).
These findings suggested a dissociation between the systems dealing with
the processing of auditory and visual words, with relatively better access
to semantic information occurring for auditory words. It is also possible
that AR has difficulty in accessing detailed semantic information from
pictures (even though the information he has may be sufficient to enable
gestures and general functional descriptions to occur). Thus differences in
priming the naming of printed words from auditory words and from pictures
may be because auditory words access more accurate semantic information
than do pictures. Other accounts of the differences between auditory word
and picture priming with AR are also possible: for instance, the relatively
successful priming of a written word (e.g. “pyramid”) by an auditorily
presented semantic cue (such as “Egypt”), may have been caused by AR’s
ability to retrieve the phonological code of some letters of the word. AR
could read aloud some nonwords so, presumably, he was able to sound out
at least some written letters. If AR sounded out some of the letters of the
written target words, then the range of possible word responses would be
greatly reduced; for example down to only those words beginning with
/p/. The auditory cue could then be used to reduce even further the
possible range of responses, to only those words beginning with /p/ that
have specific relevance to Egypt. Pictures may be less helpful because of
their greater name uncertainty and AR’s difficulties in accessing phonology
from pictures (see Funnell, in press). Whatever the case, AR’s better perfor-
mance on forced-choice tests of semantic knowledge than on naming printed
words may reflect sampling based on partial information. At this stage, it
is difficult to be precise concerning the processing locus (or loci) of AR’s
picture and word naming impairments, though it is possible that they are
due to some relatively early perceptual deficit, given that AR’s ability to
perform a visual lexical decision task was poor. It would be interesting to
investigate this patient’s performance on a pictorial equivalent of the lexical
decision task (an object decision task), to assess the relations between his
visual word and his pictorial processing deficits more closely (Humphreys
et al., this volume; Riddoch & Humphreys, 1987).
Finally, we may note that in cases such as AR, where we believe the
patient to be operating on the basis of partial visual descriptions, we may
also expect differences in categorisation difficulty according to the nature
of the target-distractor choices. Shallice and Saffran (Note 4) have reported
the case of a letter-by-letter reader (MIL) who was able to perform correct
lexical decisions given relatively short exposures of stimuli (sufficient to
prevent full identification in this patient), depending on the choice of
nonwords, with nonwords differing only by one letter from real words
producing a high level of false-positive responses. This is precisely the
deficit we might expect if only partial information is available to higher
order representations.

Patients EM and AB (Warrington, 1975)


AB
AB was impaired on both auditory and pictorial versions of the cued
definitions test of access to semantic knowledge (see earlier), suggesting
some general deterioration in his semantic knowledge. Nevertheless, there
are some indications that his performance given pictorial stimuli was worse
than his performance when given an auditory name (e.g. when asked to
give definitions from either input, he scored 19/40 when given pictures and
27/40 when given auditory words).
There are at least four explanations for this:
1. Because pictures have more alternative interpretations than words, they
present more difficulties for a patient with impaired semantic knowledge.
2. AB has some pre-semantic processing deficit for visual stimuli. In fact,
AB was at the bottom of the normal range on some fairly simple tests of
visual perceptual function and, perhaps more interestingly, he apparently
showed an inability to process printed words as whole-letter strings (since
high-frequency exception words produced naming difficulties; this was true
of both AB and EM; Warrington, 1975). It is possible to attribute such a
deficit to the loss of a particular form of visual input description which
could, in turn, selectively impair access to all semantic information for
visual (relative to auditory) stimuli.
3. AB has a loss of semantic knowledge which is more severe for some
concepts than others; it is particularly severe for concrete concepts and
less severe for abstract concepts. Definitions to auditory words may often
draw on abstract concepts (e.g. concerning object function); however, pic-
ture recognition may depend on mapping structural information onto con-
crete concepts in the semantic system. Degeneration of knowledge about
concrete concepts could selectively impair the ability to give information
about pictures relative to the ability to give information about auditory
words. One finding consistent with the latter account is that AB was rela-
tively good at defining abstract, but not concrete, words from auditory
input.
4. Definitions of auditorily presented concrete words may depend on accessing
the structural description system from the semantic system, rather
than drawing solely on semantic knowledge. AB may have impairments to
his stored structural descriptions about objects, which could selectively
impair both picture recognition and the ability to define concrete words.
To separate deficits to semantic knowledge about concrete concepts from
deficits to the structural description system, it would (again) be interesting
to examine AB’s object decision performance (Humphreys et al., this vol-
ume; Riddoch & Humphreys, 1987). If AB’s structural description system
is intact, he may perform relatively well on object decision tasks.

EM
Similarly to AB, EM showed some impairments to both pictorial and
auditory versions of the cued definitions task, indicating that access to
semantic information from both modalities was far from perfect. We attri-
bute her better overall performance on the pictorial than on the auditory
version of the cued definitions task to her ability to make use of pictorial
cues to answer the majority of probe questions. Also there was no effect
of concreteness on EM’s ability to define auditorily presented words. Such
a result is consistent with two accounts. One is that EM’s loss of semantic
knowledge is not specific to certain concepts (e.g. concrete vs. abstract
concepts). The other is that EM is able to access the structural description
system for objects (so supporting definitions of concrete concepts at least
to the level for abstract concepts). EM also appears to have a selective
loss of whole-word descriptions in her reading (see earlier).

Patients JBR and SBY (Warrington & Shallice, 1984)


JBR and SBY were both impaired at naming pictures of living things relative
to inanimate objects; they were also impaired at defining the names of
living things relative to inanimate objects. Again, assuming the existence
of a single, amodal semantic system, alternative accounts of this pattern
of performance are possible.


One possibility is that JBR and SBY have impaired information about
living things within the semantic system. This suggestion accounts for why
the patients are impaired with both pictorially and auditorily presented
stimuli, since the same semantic system is thought to serve both picture
and auditory word recognition. In this respect, the account is more par-
simonious than dual semantic system accounts, which must posit the coin-
cidental existence of category-specific impairments in both the pictorial
and the verbal semantic systems of both patients. In suggesting that a single
deficit within the semantic system can accommodate the data from JBR
and SBY, we acknowledge that we are placing little weight on the failure
to find consistency between SBY’s definitions to auditorily and pictorially
presented inanimate objects on 3/4 tests. Our reasons for this were outlined
earlier.
A second possibility is that both JBR and SBY have difficulty accessing
the semantic system from visual stored structural descriptions, and vice
versa. JBR’s and SBY’s problems were most apparent with living things,
and the patients tended to be less impaired with inanimate objects. Now,
living things often tend to have similar structural descriptions (particularly
animals, foods, plants, and insects; see Humphreys et al., this volume),
whilst inanimate objects often tend to have more distinct structural descrip-
tions (i.e. they will tend to have less structural overlap and fewer parts in
common than animate objects). JBR’s and SBY’s identification impairments
may be re-described as due to a problem in accessing specific structural
information about objects from categories which tend to have visually
similar exemplars. Such a pattern of results would be expected on the basis
of a structural description-semantic system access deficit, if the access pro-
cess operated in cascade (where activation in structurally similar structural
descriptions precipitates activation in the semantic descriptions of those
objects; see Humphreys et al., this volume; Riddoch & Humphreys, 1987).
The deficit in the processes transmitting information between the structural
description and semantic systems also appears to be bi-directional, since
the patients were poor at defining living objects. Such objects may typically
be defined in terms of their perceptual structural characteristics, since, at
least at a gross level, the objects may be functionally equivalent (e.g. foods
are eaten, etc.). An inability to invoke such a structural description from
semantic information may produce a selective problem.

EVALUATING THE AMODAL SEMANTIC SYSTEM ACCOUNT


One difficulty in assessing the notion of an amodal semantic system, at
least where neuropsychological evidence is concerned, occurs because the
account predicts patterns of associations rather than dissociations. The
problem here, of course, is that an associated deficit, say in accessing
semantic information from pictures and from auditory words, may not
necessarily reflect the functional organisation of the processing system but
rather the anatomical proximity of the functionally separate components.
Nevertheless, certain predictions can be made. One is that a patient with
an impairment in the representation of semantic information should show
this impairment across a range of tasks mediated by the semantic system,
irrespective of the modality of stimulus input. An interesting approach to
understanding the representation of semantic information in brain-damaged
patients has been developed by Caramazza and his colleagues (e.g.
Caramazza, Berndt, & Brownell, 1982). These investigators have used multi-
dimensional scaling analyses to investigate the ways in which patients re-
present relationships between objects, using data based on similarity judge-
ments between pairs of objects. The work suggests that brain damage can
produce qualitative changes in the representation of relationships between
objects. A prediction which follows is that patients may show similar qual-
itative changes in the representation of relationships between objects when
the information about the objects is presented in different modalities. A
further prediction is that errors precipitated by the change in the semantic
system will similarly reflect the characteristics of the system for different
modalities of stimulus presentation (e.g. object naming, word retrieval in
spontaneous speech, naming to definition). The modality-specific semantic
systems approaches would predict dissociations based either on the input
modality or the type of semantic information addressed in the task.
Using a very different approach, we have recently obtained evidence in
favour of an amodal semantic system subserving the processing of pictures
and spoken and written words in a study of a severely anomic patient. This
patient, AL, displays several interesting features. In such tasks as reading
aloud, or writing to dictation, AL could transcode only highly imageable/
concrete words between their orthographic and phonological word forms.
A series of experiments showed that successful transcoding depended upon
access to a particular set of semantic descriptions which were considered
to be nonlinguistic in nature (Allport & Funnell, 1981; Funnell & Allport,
in press). Further, AL was only able to learn to relate arbitrary graphic
symbols to auditory words if these words named imageable/concrete con-
cepts (Funnell, Note 1). Thus, in AL, communication between visually
presented alphabetic and nonalphabetic material and spoken words
appeared to depend upon intervening semantic activation (for evidence
against the idea that the word forms themselves are categorically organised,
see Funnell & Allport, in press).
To test the theory that pictures also have direct access to this set of
semantic descriptions, two experiments were carried out. In the first exper-
iment AL was taught to write a set of five arbitrary graphic symbols to
each of five spoken object names: this was formally tested three times, and
then a surprise transfer test was given in which pictures of the objects
named were presented instead. AL's ability to write down the correct symbol
to each picture was then tested three times. The second experiment rep-
resented the reverse of the first. Here AL was taught first to write a new
set of five arbitrary symbols to a set of five pictures representing a new set
of objects. After his ability to carry out the task had been tested formally,
a surprise transfer test was given in which the names of the pictures were
spoken to AL. His ability to write down the correct symbol to each of these
names was then tested three times (for further details, see Funnell, in
press). In each of the four tests, he scored a maximum of 15/15. Clearly,
nonlinguistic material associated in the first place with either spoken object
names or with pictures of objects can be transmitted, without training, to
the alternative outward representation of the concept. It appears from this
test that the concept must therefore be common to both pictures and
phonological word forms and, on the basis of evidence reported here,
common to written word forms as well.

THE QUESTION OF COLOUR


In our outline of the perceptual representation system for objects we have
confined discussion to the structural characteristics of objects. It is quite
possible, though, that more general perceptual attributes (such as colour)
are also specified in this system. Recent work suggesting a dissociation
between the representation of perceptual information about colour and
verbal information associated with colour names has been reported by
Beauvois (1982; Beauvois & Saillant, 1985). They discuss the case of a
patient MP who was impaired when asked to name colour patches and
when asked to point to a named colour. MP also performed poorly when
asked to imagine and then name the colour of an object, when given its
name auditorily. However, no deficits were apparent when MP named the
colour verbally associated with a named stimulus and when she was asked
to point to the correctly coloured picture of an object amongst incorrectly
coloured pictures of the same object. These results indicate that MP’s impair-
ment lies in the processes mediating between stored visual and verbal
information about colours, irrespective of the input modality. The data are
therefore contrary to the argument for visual and verbal semantic systems
determined by input modality (Fig. 1); they are consistent either with the
view that there are visual and verbal semantic systems determined by the
nature of represented information (Fig. 2), or that perceptual attributes of
colour information are represented separately from verbal semantic infor-
mation (perhaps in the structural description system; cf. Fig. 3).


We have argued that the optic aphasic patient that we have examined
(JB; Riddoch & Humphreys, 1987), has a bi-directional impairment in the
processes mediating between the structural description and the semantic systems.
One might imagine, then, that such a patient provides an excellent oppor-
tunity to test whether colour information is represented along with structural
information in the structural description system, since this patient had
intact access to structural but not semantic information about objects (see
also Humphreys et al., this volume). We have tested JB’s ability to access
stored information about colour quite intensively, using tests which demand
verbal mediation (e.g. colour naming) or tests where we stress to the patient
that he should respond without trying to name the object’s colour (picking
out correctly coloured from incorrectly coloured exemplars). In no tests
where JB must access stored information about the perceptual attributes
of colour have we been able to show intact performance, though he shows
good access to stored verbal knowledge about colour (e.g. giving verbal
colour associations to a named word; Riddoch, Note 2). It remains possible,
of course, that our tests of JB’s access to stored perceptual attributes of
colour have not been sufficiently sensitive to indicate that the stored know-
ledge is in fact intact; nevertheless, we have no evidence to argue that
colour information is stored along with structural information in the struc-
tural description system. It may be that colour information is represented
independently of both stored structural and stored semantic knowledge
about objects.

REFERENCES
Allport, D. A. & Funnell, E. (1981). Components of the mental lexicon. Philosophical Trans-
actions of the Royal Society of London, B295, 397-410.
Beauvois, M. F. (1982). Optic aphasia: A process of interaction between vision and language.
Philosophical Transactions of the Royal Society of London, B298, 35-47.
Beauvois, M. F. & Saillant, B. (1985). Optic aphasia for colours and colour agnosia: A
distinction between visual and visuo-verbal impairments in the processing of colours.
Cognitive Neuropsychology, 2, 1-48.
Beauvois, M. F., Saillant, B., Meininger, V., & Lhermitte, F. (1978). Bilateral tactile aphasia:
A tacto-verbal dysfunction. Brain, 101, 381-401.
Caramazza, A., Berndt, R. S., & Brownell, H. H. (1982). The semantic deficit hypothesis:
Perceptual parsing and object classification by aphasic patients. Brain and Language, 15,
161-189.
Denes, G. & Semenza, C. (1975). Auditory modality-specific anomia: Evidence from a case
of pure word deafness. Cortex, 11, 401-411.
Funnell, E. (in press). Object concepts: Some evidence from acquired dyslexia. In G. W.
Humphreys & M. J. Riddoch (Eds.), Visual object processing: A cognitive neuropsycholog-
ical approach. London: Lawrence Erlbaum Associates Ltd.
Funnell, E. & Allport, D. A. (in press). Nonlinguistic cognition and word meanings. In D.
A. Allport, D. G. Mackay, W. Prinz, & E. Scheerer (Eds.), Language perception and
production: Shared mechanisms in listening, reading, and writing. London: Academic
Press.
Lhermitte, F. & Beauvois, M. F. (1973). A visual-speech disconnection syndrome: Report of
a case with optic aphasia, agnosic alexia, and colour agnosia. Brain, 96, 695-714.
Morton, J. & Patterson, K. E. (1980). A new attempt at an interpretation, or, an attempt
at a new interpretation. In M. Coltheart, K. E. Patterson, & J. C. Marshall (Eds.), Deep
dyslexia. London: Routledge & Kegan Paul.
Ratcliff, G. & Newcombe, F. (1982). Object recognition: Some deductions from clinical
evidence. In A. W. Ellis (Ed.), Normality and pathology in cognitive function. London:
Academic Press.
Riddoch, M. J. & Humphreys, G. W. (1987). Visual object processing in a case of optic
aphasia. Cognitive Neuropsychology, 4, 131-185.
Schwartz, M. F., Marin, O. S. M., & Saffran, E. M. (1979). Dissociations of language function
in dementia: A case study. Brain and Language, 7, 277-306.
Shallice, T. (1979). Case study approach in neuropsychological research. Journal of Clinical
Neuropsychology, 1, 183-211.
Shallice, T. (1987). Impairments of semantic processing: Multiple dissociations. In M. Col-
theart, R. Job, & G. Sartori (Eds.), The cognitive neuropsychology of language. London:
Lawrence Erlbaum Associates Ltd.
Warrington, E. K. (1975). The selective impairment of semantic memory. Quarterly Journal
of Experimental Psychology, 27, 635-657.
Warrington, E. K. & McCarthy, R. (1983). Category specific access dysphasia. Brain, 106,
859-878.
Warrington, E. K. & Shallice, T. (1979). Semantic access dyslexia. Brain, 102, 43-63.
Warrington, E. K. & Shallice, T. (1984). Category specific semantic impairments. Brain, 107,
829-854.

REFERENCE NOTES
1. Funnell, E. (1983). Ideographic communication and word class differences in aphasia.
Unpublished Ph.D. thesis, Reading University.
2. Riddoch, M. J. (1984). Neurological impairments of visual perception. Unpublished
Ph.D. thesis, University of London.
3. Riddoch, M. J. & Humphreys, G. W. (1983). Access to semantic information in the case
of optic aphasia. Paper presented to the Experimental Psychology Society, Oxford.
4. Shallice, T. & Saffran, E. M. (1983). Affix stripping and lexical decision in the absence of
explicit word identification: Evidence from a letter-by-letter reader. Paper presented to the
Experimental Psychology Society, Manchester.
To cite this article: Riddoch, M. J., Humphreys, G. W., Coltheart, M., & Funnell, E. (1988). Semantic systems or system? Neuropsychological evidence re-examined. Cognitive Neuropsychology, 5(1), 3-25. DOI: 10.1080/02643298808252925

PLEASE SCROLL DOWN FOR ARTICLE

Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained
in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no
representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the
Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and
are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and
should be independently verified with primary sources of information. Taylor and Francis shall not be liable for
any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever
or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of
the Content.

This article may be used for research, teaching, and private study purposes. Any substantial or systematic
reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any
form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://
www.tandfonline.com/page/terms-and-conditions

You might also like