Riddoch, M., Humphreys, G., Coltheart, M. & Funnell, E. Semantic Systems or System? Neuropsychological Evidence Re-Examined
INTRODUCTION
Access to a semantic system is necessary for comprehension to occur. Com-
prehension is more than mere recognition: it is possible to recognise that
something has been encountered previously, without necessarily knowing
how that item differs functionally from other similar items, how it is related
to other items, or how it might be used. We take it that a semantic system
specifies the kind of knowledge that allows decisions to be made concerning
the functional and associative characteristics of things.
[Figure: diagram contrasting a visual semantics system and a verbal semantics system.]
Our paper is concerned with the following issue: is there a single semantic
system which is used for all comprehension tasks, regardless of the modality
of the stimulus input or the nature of the semantic information required
by the task? Or are there, instead, separate semantic systems associated
with separate modalities of input or separate types of semantic information?
The second of these views is often expressed by distinguishing “visual
semantics” from “verbal semantics”. However, the distinction can be inter-
preted in a variety of rather different ways. Clarification is therefore impera-
tive here.
As a first point: the terms “visual” and “verbal” are slightly unfortunate,
since all those who subscribe to the distinction between visual and verbal
semantics would regard reading comprehension as involving access to the
verbal semantic system even though the stimuli involved are visual. So it
might be wise to adopt a slightly different terminology: one might refer to
pictorial semantics (covering objects and pictures) and verbal semantics
(covering both spoken and written words). However, to be consistent with
previous work, we will continue to use the current terminology.
The two rather different interpretations of the distinction between visual
semantics and verbal semantics extant in the literature are as follows.
1. Modality of input. Comprehension of pictures or objects depends upon
access t o a visual semantic system. Comprehension of words depends upon
access to a verbal semantic system. Analogously, there will be a tactile
that milk-drinking is associated with the printed word cat) and also in visual
semantics (which is how the property of milk-drinking can be evoked when
we see a cat). It is because of this duplication that the same question (e.g.
“Is it larger than a telephone directory?”) can be treated as a test of access
to visual semantics (when the stimulus is a picture) and as a test of access
to verbal semantics (when the stimulus is a word: see Warrington, 1975).¹
A processing framework consistent with this account is given in Fig. 1.
¹There are also further refinements to the input modality semantic systems argument. For
instance, there may be further, differential representation within each semantic system for
stimuli according to the nature of their defining attributes. We may distinguish here between
perceptual (colour, shape, size, etc.) and functional attributes of objects. Such attributes are
not equally well represented in all object and word classes. Concrete, but not abstract, words
can be defined in terms of both perceptual and functional attributes of the underlying concept;
also, in the object domain, tools and items of furniture can be defined on the basis of both
their perceptual and functional attributes whilst exemplars from other categories (such as
animals, fruit, etc.) are more likely to be defined in terms of their perceptual attributes. These
differing attributes could be separately represented within the hypothesised visual and verbal
semantic systems, making certain objects or words especially vulnerable to impairment if
there is degeneration of the semantic system. Alternatively, different objects and words could
be separately represented according to their defining attributes.
In the pioneering work of Warrington (1975), no distinction was made between perceptual
and other attributes of objects and words, and, judging by the choice of questions used to
probe recognition performance, the assumption seems to be that perceptual and functional
attributes are equally represented in the visual and verbal semantic systems (e.g. the series
of probes included the questions “is it English or not?”, “is it larger than a telephone direc-
tory?”, “is it used indoors?”, “is it made of metal?” etc.). In later work, Warrington and
McCarthy (1983) and Warrington and Shallice (1984) have distinguished between a patient
with impairments in accessing knowledge about inanimate objects from audition and patients
with impairments in accessing knowledge about living things and foods from vision. They
interpret such deficits as indicating category-specific representation of objects and words
within both the visual and verbal semantic systems.
Interestingly, Warrington and McCarthy (1983) argue that the category-specific deficits in
one patient (VER) reflect a problem in the processes involved in gaining access to relevant
semantic knowledge, rather than impairments to the semantic representations themselves.
One implication of this would appear to be that there are independent access routes to the
differentiated parts of the visual and verbal semantic systems. In effect, this idea maintains
that there are separate semantic systems for different categories for objects and words, for
each input modality.
[Figure: diagram of visual and verbal semantic systems, receiving pictorial and verbal information respectively.]
In a recent discussion of the topic, Shallice (1987) has argued that there
are at least three lines of neuropsychological evidence which support the
existence of multiple semantic systems:
1. modality-specific aphasias;
2. modality-specific priming effects in access deficits to semantics; and
3. modality-specific aspects of semantic memory disorders.
Two questions may be raised: how strong is this evidence, and which variant of the multiple-systems account does it support?
MODALITY-SPECIFIC APHASIAS
Patients have been described with a naming impairment specific to a particu-
lar input modality; for example, impaired visual object naming relative to
auditory and tactile naming (Lhermitte & Beauvois, 1973; Riddoch &
Humphreys, 1987), impaired auditory relative to visual object naming
(Denes & Semenza, 1975), and impaired tactile naming relative to visual
object naming (Beauvois, Saillant, Meininger, & Lhermitte, 1978). Shallice
(1985) suggests that: “the simplest explanation for these syndromes is
that multiple semantic systems do exist but there is an impairment in the
transmission of information from one of the modality specific semantic
systems to verbal systems (including the verbal semantic system)”. Within
the framework of the representation account (Fig. 2), Shallice’s argument
would accord with the existence of a single lesion impairing connections
between one of the nonverbal semantic systems and the verbal semantic
system. Within the framework of the input account given in Fig. 1, there
need to be two separate lesions, one separating the visual semantic system
from the verbal semantic system and one separating the visual semantic
system from output phonology.
Shallice’s argument rests on the assumption that modality-specific
aphasias have been properly demonstrated and that, in such cases, there
is normal access to semantic knowledge (of at least some form). One
problem, though, is that in some cases the tests of naming presented in
different modalities have not used the same stimuli. It is difficult to ensure
that task difficulty was equated in such instances. For instance, the contrast
between the naming of tactilely presented real objects and pictures of
objects is not a valid test of a modality-specific naming deficit since less
information will be available in pictures than when real objects are presented
visually (cf. the discussion of patient JF by Lhermitte & Beauvois, 1973).
Nevertheless, Riddoch and Humphreys (Note 3; in press) have shown
poorer naming of the same objects presented visually than when they were
presented tactilely in a patient (JB) who could also make accurate gestures
& Quinlan, this volume). From this work we take it that we cannot assume
that a patient has intact access to semantic knowledge when correct gestures
are made; it seems possible that gestures could be made on the basis of
other nonsemantic forms of information (the perceptual attributes of
objects or following access to stored structural knowledge). Thus access to
semantic knowledge must always be tested directly.
A patient (PWD) with a selectively impaired ability to name meaningful
nonverbal sounds in the presence of relatively good comprehension of
nonverbal sounds has been reported by Denes and Semenza (1975). PWD
demonstrated a gross impairment in both understanding and repeating
spoken speech whereas his spontaneous speech was fluent, demonstrating
normal prosody and articulation with no paraphasias. His naming of items
presented visually, tactilely, or olfactorily was perfect but he was only able
to name 4/20 (20%) of meaningful nonverbal sounds. PWD also carried
out a sound-picture match task where four pictures were used for each
target sound. The four pictures depicted:
1. the natural source of the sound;
2. an acoustically similar sound;
3. a source of sound from the same semantic category as the natural source; and
4. an unrelated sound.
PWD scored 85% (17/20) correct on this test; such relatively good perfor-
mance is difficult to account for if PWD’s semantic system for auditory
nonverbal stimuli were impaired (cf. Shallice, 1985). Rather, it can be
argued that PWD had at least relatively intact knowledge of the meaning
and referents for sounds, enabling an improvement to occur in the picture-
sound match task, when he could use pictures to derive associated sound
information. Such an account would be inconsistent with the position that
PWD was impaired at accessing semantic knowledge about nonverbal
sounds.
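As a rough check on how far above chance this is (our own illustrative calculation, not part of the original report), the four-alternative match task has a 25% guessing baseline, and the binomial tail probability of scoring 17/20 or better by guessing can be computed directly:

```python
from math import comb

# Four pictures per target sound, so guessing succeeds with p = 0.25.
# PWD scored 17/20; sum the binomial upper tail from 17 to 20 successes.
n, k, p = 20, 17, 0.25
p_tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"P(at least {k}/{n} by guessing) = {p_tail:.1e}")
```

The probability is vanishingly small, consistent with the conclusion in the text that PWD retained substantial knowledge of the meaning of sounds.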
Patient RG (Beauvois et al., 1978) is described as a case of tactile aphasia.
This patient was impaired at naming tactilely presented objects, right hand
71% (71/100), left hand 64% (64/100), while no impairment was
demonstrated in naming objects presented visually (96%, 96/100) or auditorily
(98.8%, 79/80). Beauvois et al. suggest that RG is able to identify (and
MODALITY-SPECIFIC PRIMING
Patient AR (Warrington & Shallice, 1979) is described as a semantic access
dyslexic. AR was poor at naming words and objects, although his ability
to name objects from an auditory description was better than his naming
of the same objects from vision (11/15 vs. 1/15). AR was also reported as
being able to describe objects by their function. However, his ability to
name words (e.g. pyramid) was increased more by an auditory verbal
prompt (such as Egypt) than by presenting a picture corresponding to the
word.
Warrington and Shallice describe AR as having an access deficit because
his naming of words was inconsistent in different test sessions. Shallice
(1987) argues that his disorder is consistent with a theory in which modality-
specific semantic systems are posited because of the differences in priming
by auditory word and picture prompts. Accordingly, he suggests that AR
has a deficit in the processes transmitting information between the visual
and verbal semantic systems, leading to the lack of picture priming.
Nevertheless, access to visual semantics is thought to be intact on the basis
of AR’s ability to give functional descriptions of objects. This account, then,
must posit that AR has at least two impairments, one in accessing the
verbal semantic system via print and one in the processes mediating visual
and verbal semantics. It also assumes that the same verbal system is
addressed both by auditory and by printed words (to account for the auditory-
printed word priming effect). Unlike other input modality accounts, though,
it must hold that there are no direct links from the visual semantic system
to phonology since AR was poor at picture naming (cf. Fig. 1).
One interesting aspect of AR’s performance was that, when he was unable
to name a printed word, he nevertheless showed access to some semantic
information about it. For example, he was considerably above chance at
judging the superordinate category to which the object corresponding to
the word belonged, and he was also relatively good at categorising words
as being surnames or forenames, the names of boys or girls, the names of
authors or politicians, etc. Despite such good categorisation, though, his
access to more precise information from print remained poor. This aspect
of AR’s performance leads to two questions: what form of access deficit
leads to the derivation of such partial knowledge from print, and was his
access to semantic information from print actually any worse than his access
to semantic knowledge from pictures? Given that AR’s access to semantic
knowledge from pictures was not fully tested, the latter question cannot
be evaluated: it remains feasible that AR had only partial access to semantic
knowledge from both pictures and words. Note here, though, that if AR
did not achieve full semantic access for pictures, then the claim that he
failed to show priming effects from pictures on word naming is not particu-
larly interesting, and would be expected whether semantic information is
modality-specific or modality-independent. The nature of the access
deficit for words is also puzzling. For instance, AR’s performance did not
differ on questions probing superordinate and co-ordinate information from
words. It seems difficult to account for this lack of effect if the access
process is “noisy”, so that only superordinate information can be addressed
reliably. Without an adequate account of the access deficit, we may question
whether AR’s relatively good ability on forced-choice compared with nam-
ing tasks is a function of the different decision processes required in the
two cases; for instance, sampling based on partial information will be better
when there is a limited set of possible responses.
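This last point can be made concrete with a toy simulation (entirely our own sketch, with a made-up vocabulary; it assumes, purely for illustration, that only superordinate information survives semantic access):

```python
import random

random.seed(0)

# Hypothetical vocabulary: 5 superordinate categories, 10 items each.
items = [(cat, f"item{cat}_{i}") for cat in range(5) for i in range(10)]

def partial_knowledge(target):
    """Assume only the superordinate category survives semantic access."""
    return target[0]

def open_naming(target):
    # The patient knows only the category, so must guess among its 10 members.
    cat = partial_knowledge(target)
    members = [it for it in items if it[0] == cat]
    return random.choice(members) == target

def forced_choice(target):
    # Four alternatives: the target plus three foils from other categories.
    cat = partial_knowledge(target)
    foils = random.sample([it for it in items if it[0] != cat], 3)
    options = foils + [target]
    # Only the target is compatible with the known category, so it is chosen.
    compatible = [it for it in options if it[0] == cat]
    return random.choice(compatible) == target

trials = 2000
naming = sum(open_naming(random.choice(items)) for _ in range(trials)) / trials
choice = sum(forced_choice(random.choice(items)) for _ in range(trials)) / trials
print(f"open naming: {naming:.2f}, forced choice: {choice:.2f}")
```

On this sketch, superordinate knowledge alone yields near-chance open naming but excellent forced choice; the direction of the dissociation, not its size, is what matters for interpreting AR's pattern.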
presented auditorily (with the assumption being that the test then assesses
the verbal semantic system). Warrington demonstrated dissociations bet-
ween different patients on this task. Warrington suggests that one patient
(AB) showed greater residual knowledge of objects if they were named
aloud than if they were shown as pictures, whereas the other patient (EM)
showed an opposite pattern of results. The patients made consistent errors
(see Warrington & Shallice, 1979), and Warrington attributes the deficits
to a degeneration of the verbal (in the case of EM) and the visual semantic
system (in the case of AB).
However, there are some problems with this argument. For instance,
both of these patients performed better when presented with pictures than
when the object names were read aloud, when we sum their scores over
all the probe questions (AB scored 112/160 on the pictorial version and
109/160 on the auditory version; EM scored 132/160 on the pictorial version
and 104/160 on the auditory version). It is difficult to reconcile AB’s overall
score with the argument that he has a selective loss of visual semantic
information.
Also both patients were consistently worse than control subjects on both
versions of the task, suggesting that both of the putative semantic systems
are impaired in both patients. Further, the differences between the perfor-
mance of each patient on the pictorial and auditory versions of the task
were not particularly consistent. Most interestingly, on the question “is it
English?”, both patients performed better on the auditory than the pictorial
version of the task (AB scored 14/20 and 9/20; EM scored 12/20 and 11/20
on the auditory and pictorial versions). The point here is that on the pictorial
version of the task it may be possible to answer many of the probe questions
correctly on the basis of information represented in the picture even without
access to semantic information; however, this is unlikely to be the case for
the “is it English?” question.
To make the pictorial and auditory versions of probe recognition tasks
equally sensitive to semantic knowledge, probes should not be answerable
from structural cues: on the only probe question used by Warrington which
seems to demand semantic access (“is it English?”), there is no evidence
for a selective impairment in verbal semantic knowledge. Accordingly, the
evidence from patients AB and EM does not clearly indicate the existence
of separate visual and verbal semantic systems.
Warrington and Shallice (1984) use data from two further patients, JBR
and SBY, to support their notion of the existence of separate visual and
verbal semantic systems. Both patients were impaired in naming pictures
of living things relative to pictures of inanimate objects, and, similarly,
both patients were impaired at defining auditorily presented names of living
things relative to names of inanimate objects.
The scores of these two patients were comparable when they were asked
to give definitions based on visual and auditory presentations of a particular
item. Nevertheless, Warrington and Shallice argue that the data are suppor-
tive of modality-specific semantic systems because of the patterns of
responses made by the patients. Both JBR and SBY were consistent on
which items they defined correctly from visual presentation, and SBY was
consistent on the items he defined correctly from auditory presentation:
however, neither patient showed consistency on the inanimate items they
defined in both the auditory and the visual tests.
Warrington and Shallice propose that, because both patients showed
consistent patterns of impairment when retested using the same input mod-
ality (i.e., the patients consistently find some items more difficult than
others), the deficits can be attributed to a loss of stored (semantic) know-
ledge about animate objects. Also, since different items were found most
difficult in the visual and the auditory presentation conditions, there is a
loss of different types of stored knowledge: one type appropriate to visual
stimuli, one appropriate to auditory stimuli. The suggestion is that both
JBR and SBY have lost semantic knowledge appropriate to animate objects,
and that this knowledge has been lost separately from both the visual and
the verbal semantic systems.
Unfortunately, there remain several problems. For instance, Warrington
and Shallice were unable to test whether JBR showed consistency in his
performance to auditory items; the fact that there was no consistency bet-
ween his performance on auditory and visual presentations may not be
illuminating if there was also no consistency between his performance with
auditory stimuli on different occasions. Also, SBY did show consistency
across modalities in one test session (out of the four tests for consistency
across modalities conducted with this patient). In addition to this, it can
be argued that consistent patterns of responses are not necessarily indicative
of an impaired representation system (Humphreys et al., this volume), so
we should be wary in arguing that data concerning consistency of error
patterns alone are informative about the underlying representation system.
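To illustrate this caveat (a toy simulation of our own, with arbitrary parameters), suppose nothing is lost from storage but items differ stably in how reliably they can be accessed; test-retest error patterns then come out highly consistent anyway:

```python
import random

random.seed(1)

n_items = 40
# Access account: no representation is lost, but each item has its own
# stable probability of being accessed successfully on a given attempt.
access_prob = [random.choice([0.05, 0.95]) for _ in range(n_items)]

def session():
    """One testing session: per-item success or failure of semantic access."""
    return [random.random() < p for p in access_prob]

test1, retest = session(), session()
agreement = sum(a == b for a, b in zip(test1, retest)) / n_items
print(f"test-retest agreement: {agreement:.2f}")
```

High item-by-item agreement here arises from stable differences in access difficulty, not from degraded representations, so consistency alone cannot decide between the storage and access accounts.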
Finally, it is quite possible that giving information about a picture of a
stimulus will call on different knowledge to that required to give information
about the stimulus when its name is read aloud. The dependence of the
two tasks o n different types of knowledge could produce inconsistent per-
formance, even though the semantic information relating to the objects is
the same whether accessed pictorially or from their names. For example,
a patient may give contrasting definitions to the names “axe” and “hammer”
(e.g. chopping and hitting) but similar definitions to “saw” and “knife” (e.g. cutting). The verbal
definitions may distinguish “axe” from “hammer” but not “saw” from
“knife”. When given pictures of the same objects, however, such a patient
may be more likely to confuse axe and hammer (which would tend to be
more structurally similar) than saw and knife (being more structurally dis-
tinct). An inconsistent pattern of performance would arise even though
the same semantic information may be implicated in the two tasks.
A N ALTERNATIVE ACCOUNT
This review suggests that the evidence in favour of modality-specific seman-
tic systems is not unequivocal. Can it be interpreted in any other way? We
would like to put forward a theory of visual and auditory information
processing which assumes the existence of a single amodal semantic system,
and to re-examine the data in this light. A framework illustrating an
approach assuming a single amodal semantic system is given in Fig. 3.
As given in Fig. 3, the approach covers only the processing of auditory
and visual inputs. It assumes that auditory and visual information is mapped
onto pre-semantic perceptual recognition systems which for vision may
specify the orthographic descriptions of words (Morton & Patterson, 1980)
and stored structural descriptions of objects (cf. Humphreys et al. this
volume; Riddoch & Humphreys, 1987); for auditory stimuli, the percep-
tual storage systems may include an auditory input lexicon (Morton &
Patterson, 1980) and a system for the categorisation of nonverbal sounds.
These storage systems are thought to be pre-semantic in the sense that
knowledge about the associative and functional characteristics of objects
are not specified at this level; the latter information is specified in the
semantic system. The semantic system may also hold information about
certain categorised visual properties of objects (e.g. that John is tall and
thin). Clearly, it may be possible to derive such information about objects
²We should note that an inability to articulate other than superordinate information when
giving a verbal definition does not in itself constitute evidence for a loss of detailed semantic
knowledge. Patients, like control subjects, often have more knowledge about objects than
they can articulate on any given occasion.
[Figure 3: a single amodal semantic system receiving visual input via an orthographic input lexicon and a structural description system, and auditory input via an auditory input lexicon and a nonverbal sound categorisation system.]
attributes. Difficulties may occur if the links between the structural descrip-
tion and semantic systems are disrupted (Riddoch & Humphreys, 1987).
The bi-directional link between the structural description system and the
semantic system is also necessary in order to account for the ability to draw
from memory, when presumably a stored structural description is invoked.
The nature of the stored structural descriptions is also presumed to have
an effect on the operation of the semantic system. For example, certain
items are unambiguous in their stored structural description (e.g. scissors)
and these items may be contrasted with items that are visually similar to
other items (e.g. sheep, horse, cow). As a result of viewing an item with
a nonspecific structural description a whole class of items may be activated
both in the structural description system and in the semantic system. For
a patient with a semantic access impairment, this multiple activation may
make it difficult to identify stimuli with nonspecific structural descriptions
(cf. Humphreys et al., this volume).
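The single amodal system of Fig. 3 can be caricatured as a small route graph (our sketch; the module names paraphrase the figure, and the lesion function is purely illustrative):

```python
# Each stimulus type reaches the single amodal semantic system through
# its own pre-semantic recognition module (names paraphrase Fig. 3).
routes = {
    "visual word":     ["orthographic input lexicon", "semantic system"],
    "object":          ["structural description system", "semantic system"],
    "spoken word":     ["auditory input lexicon", "semantic system"],
    "nonverbal sound": ["nonverbal sound categorisation", "semantic system"],
}

def comprehends(stimulus, lesioned_links=()):
    """A stimulus is understood only if every link on its route to the
    single amodal semantic system is intact."""
    path = [stimulus] + routes[stimulus]
    links = list(zip(path, path[1:]))
    return all(link not in lesioned_links for link in links)

# Example: cutting the structural-description -> semantics link impairs
# object comprehension while word comprehension survives.
lesion = {("structural description system", "semantic system")}
print(comprehends("object", lesion))        # False
print(comprehends("visual word", lesion))   # True
```

Within this single-system sketch, a modality-specific comprehension deficit falls out of a single damaged input link, without positing separate semantic stores.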
We will consider whether the amodal semantic system account can accom-
modate the data from the brain-damaged subjects most pertinent to the
other models described in this paper.
because it can operate directly from stimulus information (i.e. it need not
be semantically mediated). For example, visual-tactile matching may be
accomplished by normal subjects even with objects which are completely
unfamiliar.
/p/. The auditory cue could then be used to reduce even further the
possible range of responses: to only those words beginning with /p/ that
have specific relevance to Egypt. Pictures may be less helpful because of
their greater name uncertainty and AR’s difficulties in accessing phonology
from pictures (see Funnell, in press). Whatever the case, AR’s better perfor-
mance on forced-choice tests of semantic knowledge than on naming printed
words may reflect sampling based on partial information. At this stage, it
is difficult to be precise concerning the processing locus (or loci) of AR’s
picture and word naming impairments, though it is possible that they are
Downloaded by [University North Carolina - Chapel Hill] at 07:58 22 October 2014
due to some relatively early perceptual deficit, given that AR’s ability to
perform a visual lexical decision task was poor. It would be interesting to
investigate this patient’s performance on a pictorial equivalent of the lexical
decision task (an object decision task), to assess the relations between his
visual word and his pictorial processing deficits more closely (Humphreys
et al., this volume; Riddoch & Humphreys, 1987).
Finally, we may note that in cases such as AR where we believe the
patient to be operating on the basis of partial visual descriptions, we may
also expect differences in categorisation difficulty according to the nature
of the target-distractor choices. Shallice and Saffran (Note 4) have reported
the case of a letter-by-letter reader (MIL) who was able to perform correct
lexical decisions given relatively short exposures of stimuli (sufficient to
prevent full identification in this patient), depending on the choice of
nonwords, with nonwords differing only by one letter from real words
producing a high level of false-positive responses. This is precisely the
deficit we might expect if only partial information is available to higher
order representations.
EM
Similarly to AB, EM showed some impairments to both pictorial and
auditory versions of the cued definitions task, indicating that access to
semantic information from both modalities was far from perfect. We attri-
bute her better overall performance on the pictorial than on the auditory
version of the cued definitions task to her ability to make use of pictorial
cues to answer the majority of probe questions. Also there was no effect
of concreteness on EM’s ability to define auditorily presented words. Such
a result is consistent with two accounts. One is that EM’s loss of semantic
knowledge is not specific to certain concepts (e.g. concrete vs. abstract
concepts). The other is that EM is able to access the structural description
system for objects (so supporting definitions of concrete concepts at least
semantic descriptions, two experiments were carried out. In the first exper-
iment AL was taught to write one of a set of five arbitrary graphic symbols
to each of five spoken object names: this was formally tested three times, and
then a surprise transfer test was given in which pictures of the objects
named were presented instead. AL's ability to write down the correct symbol
to each picture was then tested three times. The second experiment rep-
resented the reverse of the first. Here AL was taught first to write a new
set of five arbitrary symbols to a set of five pictures representing a new set
of objects. After his ability to carry out the task had been tested formally,
a surprise transfer test was given in which the names of the pictures were
spoken to AL. His ability to write down the correct symbol to each of these
names was then tested three times (for further details, see Funnell, in
press). In each of the four tests, he scored a maximum of 15/15. Clearly,
nonlinguistic material associated in the first place with either spoken object
names or with pictures of objects can be transmitted, without training, to
the alternative outward representation of the concept. It appears from this
test that the concept must therefore be common to both pictures and
phonological word forms and, on the basis of evidence reported here,
common to written word forms as well.
coloured pictures of the same object. These results indicate that MP’s impair-
ment lies in the processes mediating between stored visual and verbal
information about colours, irrespective of the input modality. The data are
therefore contrary to the argument for visual and verbal semantic systems
determined by input modality (Fig. 1); they are consistent either with the
view that there are visual and verbal semantic systems determined by the
nature of represented information (Fig. 2), or that perceptual attributes of
colour information are represented separately from verbal semantic information.
REFERENCES
Allport, D. A. & Funnell, E. (1981). Components of the mental lexicon. Philosophical Transactions
of the Royal Society of London, B295, 397-410.
Beauvois, M. F. (1982). Optic aphasia: A process of interaction between vision and language.
Philosophical Transactions of the Royal Society of London, B298, 35-47.
Beauvois, M. F. & Saillant, B. (1985). Optic aphasia for colours and colour agnosia: A
distinction between visual and visuo-verbal impairments in the processing of colours.
Cognitive Neuropsychology, 2, 1-48.
Beauvois, M. F., Saillant, B., Meininger, V., & Lhermitte, F. (1978). Bilateral tactile aphasia:
A tacto-verbal dysfunction. Brain, 101, 381-401.
Caramazza, A., Berndt, R. R., & Brownell, H. H. (1982). The semantic deficit hypothesis:
Perceptual parsing and object classification by aphasic patients. Brain and Language, 15,
161-189.
Denes, G. & Semenza, C. (1975). Auditory modality-specific anomia: Evidence from a case
of pure word deafness. Cortex, 11, 401-411.
Funnell, E. (in press). Object concepts: Some evidence from acquired dyslexia. In G. W.
Humphreys & M. J. Riddoch (Eds.), Visual object processing: A cognitive neuropsycholog-
ical approach. London: Lawrence Erlbaum Associates Ltd.
Funnell, E. & Allport, D. A. (in press). Nonlinguistic cognition and word meanings. In D.
A. Allport, D. G. Mackay, W. Prinz, & E. Scheerer (Eds.), Language perception and production.
REFERENCE NOTES
1. Funnell, E. (1983). Ideographic communication and word class differences in aphasia.
Unpublished Ph.D. thesis, Reading University.
2. Riddoch, M. J. (1984). Neurological impairments of visual perception. Unpublished
Ph.D. thesis, University of London.
3. Riddoch, M. J. & Humphreys, G. W. (1983). Access to semantic information in the case
of optic aphasia. Paper presented to the Experimental Psychology Society, Oxford.
4. Shallice, T. & Saffran, E. M. (1983). Affix stripping and lexical decision in the absence of
explicit word identification: Evidence from a letter-by-letter reader. Paper presented to the
Experimental Psychology Society, Manchester.
Cognitive Neuropsychology
To cite this article: M. J. Riddoch, G. W. Humphreys, M. Coltheart & E. Funnell (1988). Semantic systems or system?
Neuropsychological evidence re-examined. Cognitive Neuropsychology, 5:1, 3-25. DOI: 10.1080/02643298808252925