Semantic Representation
Gabriella Vigliocco
University College London, Department of Psychology
05 October 2005
This chapter deals with how word meaning is represented by speakers of a language. We first discuss
four key issues in the investigation of word meaning, then we introduce current theories of semantic representation.
Meaning representation has long interested philosophers (since Aristotle) and linguists (e.g.,
Chierchia & McConnell-Ginet, 2000; Dowty, 1979; Pustejovsky, 1993; see Jackendoff, 2002),
in addition to psychologists, and a very extensive literature exists in these allied fields.
However, given our goal to discuss how meaning is represented in speakers’ minds/brains, we
will not be concerned with theories and debates arising primarily from these fields except
where the theories have psychological or neural implications (as for example the work by
linguists such as Jackendoff, 2002; Kittay, 1987; Lakoff, 1987, 1992). Moreover, the discussion
we present is limited to the meaning of single words; it will not concern the representation and
processing of the meaning of larger linguistic units such as sentences and text.
When considering how meaning is represented, four fundamental questions to ask are: (1)
How are word meanings related to conceptual structures? (2) How is the meaning of each word
represented? (3) How are the meanings of different words related to one another? (4) Can the
same principles of organisation hold in different content domains (e.g., words referring to
objects, words referring to actions, words referring to properties)? With few exceptions, existing
theories of semantic organisation have made explicit claims concerning the representation of
each meaning and the relations among different word meanings, while the relation between
conceptual and semantic structures is often left implicit, and the issue of whether different
principles are needed for the representation of different content domains is often neglected. Let us
consider each of these questions in turn.
Language allows us to share experiences, needs, thoughts, desires, etc.; thus, word
meanings need to map into our mental representation of objects, actions, properties, etc. in the
world. Moreover, children come to the language learning task already equipped with substantial
knowledge about the world (based on innate biases and concrete experience) (e.g., Bloom, 1994;
Gleitman, 1990), thus, word meanings must be grounded in conceptual knowledge. This claim is
certainly uncontroversial in the cognitive sciences and neuroscience: it is not so unusual, in fact,
for researchers to assume (explicitly, or more often implicitly) that concepts and word meanings
are the same thing, or at least are linked on a one-to-one mapping (e.g. Humphreys, Price &
Riddoch, 1999). It has also been discussed in bilingualism research (Grosjean, 1998) where this
issue has important ramifications for theories of the representation of meaning in multiple
languages.
In the concepts and categorisation literature, scholars often treat semantics (i.e., word
meanings) and conceptual knowledge interchangeably: many studies of
conceptual knowledge use words as stimuli, but the findings are discussed in terms of concepts,
under the tacit assumption that the use of words in a given task should produce comparable
results to nonlinguistic stimuli (for example, pictures, or artificial categories). In other words, it
is often assumed that the conceptual system is entirely responsible for categorising entities in the
world (physical, mental), whereas the assignment of a name to a conceptual referent is left to the linguistic system.
Obviously, word meanings and concepts must be tightly related, and when we activate
one we also activate the other. Evidence for the fact that comprehending language entails activation of information beyond linguistic
meaning comes from imaging studies showing that primary motor areas are activated when
speakers see or hear sentences or even single words referring to motion, in comparison to
sentences referring to abstract concepts (Tettamanti et al., 2004), non-words (Hauk, Johnsrude &
Pulvermüller, 2004) or spectrally rotated speech (Vigliocco, Warren, Arciuli, Siri, Scott & Wise,
submitted). These activations indicate that the system engaged in the control of action is also
automatically engaged in understanding language related to action. Because of this tight link
between conceptual structure and word meanings, research into the latter cannot dispense with
the former. The issue that we must address is whether concepts and word meanings can be distinguished at all.
As discussed in Murphy (2002), effects of category membership and typicality are well
established in both the concepts and categorisation literature (for concepts) (e.g., Rosch, 1978)
and in the psycholinguistic literature (for words) (e.g., Federmeier & Kutas, 1999; Kelly, Bock
& Keil, 1986). Moreover, properties that have been argued to have explanatory power in the
structure of conceptual knowledge and its breakdown in pathological conditions (e.g., semantic
dementia) such as correlation among conceptual features, shared and distinctive features (e.g.,
Gonnerman, Andersen, Devlin, Kempler & Seidenberg, 1997; McRae & Cree, 2002; McRae, de
Sa & Seidenberg, 1997; Tyler, Moss, Durrant-Peatfield & Levy, 2000; see Tyler & Moss
chapter), have also been shown to predict semantic effects such as semantic priming among
words (e.g., McRae & Boisvert, 1998; Vigliocco, Vinson, Lewis & Garrett, 2004). For example,
McRae and Boisvert (1998) showed that previously conflicting patterns of results from semantic
priming studies could be accounted for in terms of featural overlap and featural correlation.
Thus, if the factors that affect conceptual structures also affect semantic representations, one might conclude that the two are one and the same.
However, the relation between concepts and word meanings may not be so
straightforward. First, word meanings and concepts cannot be the same thing because speakers
of any language have far more concepts than words. Murphy (2002) provides the
following example of a concept familiar to many of us but which is not lexicalised at least
amongst most English speakers: “the actions of two people maneuvering for one armrest in a
movie theatre or airplane seat”. Support for the distinction between concepts and word meanings
also comes from findings in the neuropsychological literature documenting semantic deficits, for
example, impairments limited to a given semantic field of knowledge, such as fruits and
vegetables (Hart & Gordon, 1992) or artifacts (Cappa, Frugoni, Pasquali, Perani & Zorat, 1998)
only in linguistic tasks (such as naming) and not in non-verbal tasks. These findings suggest that
a distinction must be drawn between concepts and word meanings. They do not, however,
preclude a system in which concepts and word meanings are mapped to each other on a one-to-
one basis, and in which only a subset of concepts are lexicalized (as discussed below with
regards to holistic theories of semantic representation). This possibility, however, runs into other
difficulties unless additional claims are made. Within a language (e.g., English) words can be
ambiguous (e.g., bank) or polysemous (e.g., “move” used in the context of moving objects, or
moving houses). Even under the reasonable assumption that there are two distinct word
meanings for the two senses of an ambiguous word like “bank” (Cruse, 1986; Lyons, 1977), it is
extremely difficult to apply a similar approach to the multiple senses of polysemous words (see
Lupker, this volume). Things become more complicated and interesting if we extend the
discussion to consider cross-linguistic variability in which conceptual properties are mapped into
lexical entries.
Vigliocco and Filipovic (2004), following Gentner and Goldin-Meadow (2003) and
Levinson (2003), describe the dominant position within cognitive psychology for the last few
decades as one in which: (i) the conceptual structure of humans is relatively constant in its core
features across cultures, and (ii) conceptual structure and semantic structure are closely coupled.
Levinson (2003) terms one version of such a position ‘Simple Nativism’, according to which
‘linguistic categories are a direct projection of universal concepts that are native to the species’
(p. 28).
Languages, however, differ widely in which concepts they lexicalise and in the manner in which they do so (see e.g. Kittay, 1987). To take very simple examples, English and Italian speakers both
have different words for the body parts “foot” [It. “piede”] and “leg” [It. “gamba”], while
Japanese speakers have a single word [“ashi”] which refers to both foot and leg. By the same
token, English and Hebrew speakers have a large repertoire of verbs corresponding to different
manners of jumping (leap, hop, spring, bounce, caper, vault, hurdle, galumph, and so on),
whereas Italian and Spanish speakers do not (Slobin, 1996b). Surely, Japanese speakers can
conceptualise “foot” as distinct from “leg”, and there do not appear to be obvious cultural
reasons (independent of language) to explain why Spanish or Italian speakers have fewer verbs
to describe the manner of motion. To consider a further example, given the universality of
human spatial abilities and the concrete relation between spatial terms and real-world referents, it
might be expected that spatial categories should be similar cross-linguistically. However, spatial
categories differ even among closely related languages such as Dutch and English; for example,
where English has two terms (“on” and “in”), Dutch has three (“aan”, “in” and “op”).
Under the assumption that linguistic categories are a projection of conceptual categories,
cases of cross-linguistic variability like the above have important implications. If people have
different semantic structures in their languages, they may also have different conceptual
structures, contrary to the universality of conceptual structures. This claim is represented in the
literature as the linguistic relativity hypothesis (Boroditsky, 2001; Davidoff, Davies, &
Roberson, 1999; Levinson, 1996; Lucy, 1992; Roberson, Davies & Davidoff, 2000; Sapir, 1921;
Sera, Elieff, Forbes, Burch, Rodriguez & Dubois, 2002; Slobin, 1992, 1996a, b; Whorf, 1956).
Under the strongest versions of this hypothesis, proponents maintain the assumption of a one-to-
one mapping between concepts and lexico-semantic representations, and thus that differences in
the semantic structures of different languages imply differences in the conceptual structures of their speakers (Davidoff et al., 1999; Levinson, 1996; Lucy, 1992; Sera et al., 2002; Sapir,
1921; Whorf, 1956). Weaker versions also exist in which a one-to-one mapping is not assumed
but instead, linguistic categories affect conceptualization (processing) during verbal tasks (e.g.,
the thinking for speaking hypothesis put forward by Slobin, 1992; 1996a).
Assuming that cross-linguistic differences can arise in semantic rather than conceptual
representation allows for (at least some) universality of conceptual structures while at the same
time allowing for cross-linguistic variability of semantic representations (Vigliocco, et al., 2004).
It does not, however, preclude the possibility that language specific properties can play a role in
shaping conceptual representation (at least during development). For example, in the domain of
colour perception, Roberson, et al. (2000) have shown that the perceptual categories we impose
on colours may in part be affected by the language spoken. Boroditsky (2001) has shown that the
type of spatial metaphor used to refer to the time-line (i.e., behind and in front as in English, or
above and below as in Mandarin) differentially affects English and Mandarin speakers'
conceptions of time.
Language-specific effects have also been demonstrated for properties of events. Kita and
Özyürek (2003) investigated the spontaneously produced gestures of speakers co-occurring with
speech referring to motion events, in order to assess whether these co-speech gestures would
differ according to language differences. The co-speech gestures of interest in the study were so-
called iconic gestures, namely gestures that iconically resemble properties of the event. These
gestures encode the same message being produced in speech and are time-locked with speech,
but crucially they also encode imagistic aspects of the event being described that are not encoded
in the speech (as discussed below, see also McNeill, 1992) indicating that their origin is non-
linguistic. Languages of the world differ with respect to whether they preferentially encode the
path (direction) or the manner of motion in the verb stem (Talmy, 1986); whereas English
speakers encode manner in the verb stem, Japanese and Turkish speakers encode path. Gestures
produced by speakers of English (a manner language) differed from those produced by
speakers of Japanese and Turkish (path languages). While speakers of English tended to produce
gestures that conflated manner and path (for example, producing a left-to-right arc gesture while
talking about a left-to-right “swinging” event), speakers of Japanese and Turkish, instead, tended
to produce successive gestures separating path from manner (e.g., producing first a straight left-
to-right gesture and then an arc gesture) for the same swinging event. Kita and Özyürek argued
that the finding that gestures, in all languages, encoded properties of the events that were not
encoded in speech (e.g., the left-to-right direction of the movement was universally present in
the gestures and absent in the speech) indicates that gestures are generated at a pre-linguistic
(conceptual) level. The finding that these gestures were affected by the lexicalisation patterns of
specific languages thus indicates that linguistic features can affect conceptual structures.
However, there are also studies investigating other language specific properties that
report effects that appear to be strictly limited to semantic representations, being present only in
tasks that require verbalization (Brysbaert, Fias & Noel, 1998; Malt, Sloman, Gennari, Shi &
Wang, 1999; Vigliocco, Vinson, Paganelli & Dworzynski, in press). Brysbaert, et al. (1998)
showed that the time to perform mental calculations was affected by whether speakers' language
required them to produce number words in forms like "four-and-twenty" (Dutch) or "twenty-
four" (French), but these differences disappeared when participants were asked to type their
responses rather than saying the numbers aloud. Vigliocco et al. (in press) investigated
grammatical gender of Italian words (e.g., “tigre” [tiger] is feminine in Italian regardless of
whether it refers to a male or female tiger), and found that Italian words referring to animals
sharing the same grammatical gender were judged to be semantically more similar; and were
more likely to replace one another in semantically-related slips of the tongue than words that did
not share the same gender, compared to an English baseline. Although grammatical gender of
nouns is a syntactic, relatively arbitrary property of words, the findings from this study suggest
that it can nonetheless affect the semantic representations of words. Crucially, the effects of
gender disappeared when the task did not require verbalization (similarity judgments upon
pictures rather than words), indicating that the effect was semantic rather than conceptual.
Together these results seem to demand a theoretical distinction between conceptual and
semantic levels of representation. As we will see below, one way in which this distinction can be
realised in models is by assuming that concepts comprise distributed featural representations and
that lexical-semantics binds these distributed representations for the purpose of language use
(Damasio, Tranel, Grabowski, Adolphs & Damasio, 2004; Vigliocco et al., 2004). Before we
discuss this architectural possibility, let us consider different ways in which conceptual and
semantic representations have been characterised.
Theories of semantic organisation starting from the 1960s and 1970s can be divided into
those theories that consider each word’s meaning as holistic and are concerned with the types of
relations between meanings (e.g., Anderson & Bower, 1973; Collins & Loftus, 1975); and
theories that instead consider meanings as decomposable into features and discuss semantic
similarity in terms of featural properties such as feature overlap, among others (e.g., Minsky,
1975; Norman & Rumelhart, 1975; Rosch & Mervis, 1975; Smith, Shoben & Rips, 1974). It is
important here to note that despite our aim of discussing semantic representations, in both types
of theories the issue of decomposition vs. non-decomposition applies more directly to conceptual
structures. It is necessary, however, to consider it here as this issue has direct consequences for
any attempt to describe semantic representations (concerning those conceptual structures which
are, or can be, lexicalised). In the case of non-decompositional views, word meanings are
characterised as lexical concepts (i.e., a part of conceptual structure that represents meaning
holistically). In decompositional views, word meanings are conceived as combining a set of conceptual features
(Jackendoff, 1992; Vigliocco et al., 2004), which are bound into lexical representations in order
to interface with other linguistic information such as phonology (e.g., Damasio, Grabowski,
Tranel, Hichwa & Damasio, 1996; Damasio, et al., 2004; Rogers et al., 2004; Vigliocco et al.,
2004).
Holistic views assume a one-to-one mapping between lexical concepts and words (Fodor, 1976; Fodor, Garrett, Walker & Parkes, 1980; Levelt, 1989; Levelt, Roelofs &
Meyer, 1999; Roelofs, 1997). In these views, the mental representation for each thing, event,
property etc. in the world that can be lexicalised in a language is represented in a unitary and
abstract way in a speaker’s conceptual system. These representations can be innate, at least in
some instances (for concepts that do not require combinations of different concepts) or learnt via
association among different features. Even in the latter case it is argued that the representation
for adult language users does not involve retrieving constituent properties but that the assembled
properties become a single lexical concept during language acquisition (Roelofs, 1997). Thus,
for example, the concept “spinster” for the adult language user does not imply the retrieval of
features such as “female” (Levelt, 1989; Roelofs, 1997). The manner in which lexical concepts
are related to other conceptual knowledge as well as to the sensory-motor systems is not specified (but see Levelt, 1989 for a
sketch).
Moreover, in holistic views, the conceptual system must include among its entries all
the lexical concepts that are lexicalised and can be lexicalised in all possible languages, in order
to preserve the universality of conceptual structure (Levinson, 2003). This is because the assumption of holistic lexical concepts implies a strict one-to-one mapping between concepts and lexical representations. If this is the case, cross-linguistic
differences in lexicalisation would imply cross-linguistic differences in conceptual structures.
Universality in conceptual structures can, thus, be defended only by assuming that conceptual
structures comprise more entries than lexical representations, and crucially, they comprise all
lexical concepts that are (or can be) lexicalised in all existing languages.
Initial “feature list” theories, such as the Feature Comparison Model (Smith et al., 1974),
faced a number of criticisms in the 1980s and 1990s that led to their disappearance. For example, Fodor, Fodor and
Garrett (1975) and Fodor et al. (1980) argue for the impossibility of identifying defining features for most word meanings.
Another argument that has been raised against decompositionality is the so-called
hyponym/hyperonym problem (Roelofs, 1997; Levelt et al., 1999): if word meanings were to be
decomposed, nothing could stop a speaker from erroneously producing the word “animal” every
time s/he wants to say “dog”. This is because all the features of “animal” are also features of
“dog” (which has additional features not part of the representation of animal). This issue,
however, can be computationally solved, as has been shown by Bowers (1999, see also
Caramazza, 1997) who demonstrates that laterally inhibitory connections between lexical units
allow the correct production of both subordinates (hyponyms) and superordinates (hyperonyms).
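Bowers' computational point can be illustrated with a toy sketch. The feature sets, update rule, and parameter values below are hypothetical illustrations, not Bowers' (1999) actual model: each lexical unit accumulates evidence from matching input features (with its own unmatched features counting against it) while laterally inhibiting its competitors, so "dog" and "animal" can each win when intended.

```python
# Toy illustration of lateral inhibition resolving the hyponym/hyperonym
# problem. Feature sets and parameters are hypothetical, for illustration only.
FEATURES = {
    "animal": {"animate", "breathes"},
    "dog":    {"animate", "breathes", "barks", "has_tail"},
}

def select_word(active_features, steps=20, rate=0.1, inhibition=0.5):
    """Iteratively accumulate activation for each lexical unit; units are
    excited by matching features, penalised for their own unmatched features,
    and laterally inhibited by the activation of competitors."""
    act = {w: 0.0 for w in FEATURES}
    for _ in range(steps):
        new_act = {}
        for w, feats in FEATURES.items():
            # matched features support the word; unmatched ones count against it
            evidence = len(feats & active_features) - len(feats - active_features)
            # lateral inhibition: competitors' activation suppresses this unit
            competition = sum(act[v] for v in FEATURES if v != w)
            new_act[w] = max(0.0, act[w] + rate * (evidence - inhibition * competition))
        act = new_act
    return max(act, key=act.get)

# Intending "dog": all of "animal"'s features are active too, but "dog"
# accumulates more evidence and inhibits "animal".
print(select_word({"animate", "breathes", "barks", "has_tail"}))  # dog
# Intending "animal": "dog"'s extra, unmatched features hold it back.
print(select_word({"animate", "breathes"}))  # animal
```

Even though every feature of "animal" is also a feature of "dog", the competition settles on the intended word in both directions, which is the behaviour the hyponym/hyperonym argument claimed was impossible.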
In the recent literature, two alternative types of featural proposals have been developed.
On one hand, in the linguistic literature we find proposals such as by Jackendoff (1983; 1990;
1992; 2002) who argues for abstract conceptual primitive features as underlying word meanings,
some of which can be mapped into syntactic properties across languages (THING, EVENT,
STATE, PLACE, PATH, PROPERTY). These abstract conceptual features include high-level
abstract perceptual renderings of concrete things (along the lines of 3D models, Marr, 1982). On the other hand, embodied featural proposals have been developed,
starting primarily from work in neuropsychology (Allport, 1985; Warrington & Shallice, 1984)
and computational neuroscience (from Farah & McClelland, 1991). In these approaches
conceptual features are not abstract, but grounded in perception and action at least to some
extent. In other words, conceptual features, the building blocks of semantic representation, are
embodied in our concrete interactions with the environment. In a number of proposals, one
important dimension along which concepts in different semantic fields differ is the type of
conceptual features with some concepts relying primarily on sensory-related properties; and
others relying primarily on motor-related properties (e.g., Barsalou, Simmons, Barbey & Wilson,
2003; Cree & McRae, 2003; Damasio et al., 2004; Gallese & Lakoff, 2005; Rogers et al., 2004).
In order to gain insight into these conceptual features some authors have used feature
norms obtained by asking speakers of a language to provide a list of the features they believe to
be important in describing and defining the meaning of a given word (Cree & McRae, 2003;
McRae et al., 1997; Vigliocco et al., 2004; Vinson & Vigliocco, 2002). Examples of features
produced for four words (two referring to objects, two referring to actions) are given in Table 1.
Table 1
These feature norms have been shown to be useful not only in giving some leverage into
identifying the relative proportions of different featural types underlying each word’s meaning,
as illustrated in Figure 1, but also in providing information concerning featural properties shared among words.
Figure 1
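As an illustration of how such norms can be summarised, the following sketch computes the proportion of feature types underlying a word's meaning. The words, features, and type labels are hypothetical examples, not data from any published norming study.

```python
# Toy sketch: summarising speaker-generated feature norms by feature type.
# All words, features, and type assignments below are hypothetical examples.
from collections import Counter

FEATURE_TYPES = {
    "green": "perceptual", "crisp": "perceptual", "long": "perceptual",
    "eaten": "functional", "salad": "functional",
    "arm": "motor", "swing": "motor",
}

NORMS = {
    "cucumber": ["green", "crisp", "long", "eaten", "salad"],
    "throw":    ["arm", "swing"],
}

def type_proportions(word):
    """Return the relative proportion of each feature type for a word."""
    counts = Counter(FEATURE_TYPES[f] for f in NORMS[word])
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}

print(type_proportions("cucumber"))  # {'perceptual': 0.6, 'functional': 0.4}
print(type_proportions("throw"))     # {'motor': 1.0}
```

Summaries of this kind make visible the claim that object words rely more on sensory-related features while action words rely more on motor-related features.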
Among the evidence in support of embodied views are imaging studies (like the ones we
have mentioned earlier) that show differential activations for example in naming entities from
different semantic fields (e.g., Damasio et al., 1996; Martin & Chao, 2001) and entities vs.
actions (Damasio, et al., 2001; Tranel, Adolphs, Damasio & Damasio, 2001). In addition, a
number of recent behavioural studies have shown the importance of embodied knowledge in
guiding comprehension. For example, Glenberg and Kaschak (2002) presented participants with
sentences depicting transfer events between “you” and another person (both concrete, e.g.,
“Courtney handed you the notebook” and abstract, e.g., “You told Liz the story”) and asked
participants to indicate whether the sentences were sensible by pressing buttons. Crucially, the
buttons were arranged so that a response (yes or no) was either consistent or inconsistent with
the (implied) direction of the transfer (towards or away from the body). Participants’ response
times were sensitive to the relation between the direction of the transfer and the position of the
“yes” response button, demonstrating that comprehension of sentences depicting transfer events
had consequences for manual responses. Richardson, Spivey, Barsalou and McRae (2003) also
demonstrated effects of this kind, acoustically presenting participants with sentences whose verbs reflected either horizontal or
vertical motion schemas (e.g. push vs. lift) and asking them to perform nonlinguistic tasks for
which the horizontal or vertical axes were relevant (visual discrimination, pictorial memory) at
the same time. Performance on these tasks differed depending on the direction of the motion
implied by the sentences, again consistent with the influence of embodied knowledge upon
comprehension.
Taken together the imaging and behavioural studies suggest that concrete aspects of our
interaction with the environment (sensory-motor features) are automatically retrieved as part of
sentence comprehension. Note that under assumptions of holistic lexical concepts or abstract
featural representations, these findings can only be explained by invoking strategic (i.e., non-automatic) processes.
Interestingly, within the general embodiment framework, until recently researchers have
assumed some degree of abstraction (i.e. supramodality). For example, on the basis of imaging
studies in which participants were asked to name different types of entities, Martin and
colleagues have proposed that conceptual features are represented in the brain in areas adjacent to, but not overlapping with, perceptual associative
areas (Martin & Chao, 2001). In a very recent and highly provocative paper, however, Gallese
and Lakoff (2005) present the possibility that featural representations are literally sensory-motor,
engaging the same cortical networks used in perception and motion (including primary sensory
and motor areas) without the need to invoke additional, more abstract representations.
Among the reasons why it has been argued that conceptual features must be abstract,
rather than embodied in perception and action, Jackendoff (1983) suggests that a featural view in
which the features are directly linked to perception and action cannot underlie the representation
of conceptual types (i.e., representations of a class of objects rather than of specific exemplars or
tokens). There are, however, proposals describing ways in which type representations are able to arise in modality-related perceptual symbol systems.
Figure 2 provides a schematic representation of the basic differences between the holistic
and featural views. To summarise, although both holistic and featural views are represented in
the current literature, it appears that the weight of the evidence is in favour of featural views, in
particular embodied views in which language is seen as more similar to other cognitive systems
than it has been considered in traditional psycholinguistic views, in which language was treated as a largely autonomous system.
Figure 2
Semantic similarity effects are powerful and well documented in the word recognition
literature (semantic priming, Neely, 1991); word production literature (semantic substitution
errors, Garrett, 1984; 1992; 1993; interference effects in the picture-word interference paradigm,
Glaser & Düngelhoff, 1984; Schriefers, Meyer & Levelt, 1990) and neuropsychological
literature (patients who make only semantic errors in their speech, e.g. Caramazza & Hillis,
1990; Ruml, Caramazza, Shelton & Chialant, 2000; patients who cannot name entities in specific
semantic fields, e.g. Hart, Berndt & Caramazza, 1985; see Vinson, Vigliocco, Cappa & Siri,
2003 for a review). These similarity effects must reflect principles of semantic organisation and
processing. Semantic priming refers to the robust finding that speakers typically respond faster to a target word when preceded
by a semantically related word than when preceded by an unrelated word (Meyer &
Schvaneveldt, 1971). The finding of semantic priming in lexical decision tasks (word/non-word)
or in naming tasks has been extensively investigated because it has been considered to directly reflect
the organization of semantic memory (e.g., Anderson, 1983; Collins & Loftus, 1975; Cree,
McRae & McNorgan, 1999; McRae & Boisvert, 1998). Especially relevant are the results of
Cree et al. (1999) and McRae and Boisvert (1998), who found that priming for prime/target
words from the same semantic category can be observed in the absence of associative relations
among items (the latter reflecting co-occurrence in speech and text, rather than being sensitive to
the structure of semantic memory), if the related items are selected based on empirically
obtained measures of semantic similarity. Furthermore, they also reported these priming effects
under conditions that discourage strategic processing. Whereas a semantically related prime word speeds up
the recognition of a target word, the presentation of a semantically related word immediately
before a picture which must be named slows down naming latencies, compared to an unrelated
word. Studies using this picture-word interference paradigm have established that the time
course of these semantic effects for categorically related distracters/targets ranges from slightly
before picture onset (SOA of -200 milliseconds) up to very slightly after picture onset.
The contrast between the interference effect in the picture-word interference experiments
and the priming effect in word recognition experiments may be explained in terms of differences
between the processes involved in the two tasks. In particular, in the picture naming task, a
specific word must be selected for naming with no additional support from orthography (because
the input is a picture). In this case, other coactivated lexical representations could slow down the
process by competing for selection. The lexical decision task, instead, requires recognition rather
than selection; furthermore, additional support from the orthographic form is available (because
the input is a word). In this case, co-activation of other words would not create costs because the
task only requires participants to decide whether the presented string is a word or not, rather than
to select that specific word from among competitors. Hence, in both cases, the prime/distracter
word would have similar effects upon lexico-semantic processing; however, because of task
differences the effect is facilitatory in one case (priming in lexical decision) and inhibitory in the other (interference in picture naming).
Under a holistic view, these semantic effects are accounted for in terms of spreading of
activation in the lexical network (Collins & Loftus, 1975; Roelofs, 1997). Holistic lexical
concepts are linked to each other on the basis of different links, representing different types of
relationships among concepts. With respect to words referring to concrete objects (with few
exceptions the only type of words investigated in the literature), the links represent hyperonymy
(is a), meronymy (part of), and various other relations depending on the specific theoretical
framework.
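The spreading-activation account can be made concrete with a minimal sketch. The network, decay value, and update rule below are hypothetical illustrations rather than parameters of Collins and Loftus's model: activation propagates outward from a prime word along labelled links, attenuating at each step.

```python
# Toy spreading-activation sketch in the spirit of holistic network models.
# Nodes, links, and the decay parameter are hypothetical illustrations.
NETWORK = {
    "cucumber":  ["vegetable"],                        # hyperonym link
    "pumpkin":   ["vegetable"],                        # hyperonym link
    "celery":    ["vegetable"],                        # hyperonym link
    "vegetable": ["cucumber", "pumpkin", "celery"],    # hyponym links
}

def spread(source, decay=0.5, steps=2):
    """Propagate activation from a source node, attenuating by `decay`
    at each link; each node keeps the strongest activation it receives."""
    act = {source: 1.0}
    frontier = {source: 1.0}
    for _ in range(steps):
        new_frontier = {}
        for node, a in frontier.items():
            for neighbour in NETWORK[node]:
                passed = a * decay
                if passed > act.get(neighbour, 0.0):
                    act[neighbour] = passed
                    new_frontier[neighbour] = passed
        frontier = new_frontier
    return act

# Both co-hyponym primes reach "celery" only via the shared superordinate,
# so they activate it to exactly the same degree.
print(spread("cucumber")["celery"])  # 0.25
print(spread("pumpkin")["celery"])   # 0.25
```

Because the only route to the target runs through the shared superordinate, a network with only hyperonym/hyponym links predicts equal priming from any co-hyponym prime.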
Under a featural view, the semantic effects are accounted for in terms of featural
properties: the semantic relation between words can be determined on the basis of the featural
overlap (between individual features, but also between sets of correlated features) (Cree &
McRae, 1999; Masson, 1995; Plaut, 1995; Vigliocco et al., 2004). Using featural norms, McRae
and Boisvert (1998) evaluated previous studies which failed to find semantic priming in the
absence of associative relations (Shelton & Martin 1992), discovering that these studies
employed prime/target pairs that were not sufficiently semantically similar despite being from the same semantic category.
Semantic effects can be graded even for complex semantic categories such as objects, not
just for categories that naturally extend along a single dimension (numbers) or only a few
dimensions (colours) in both word recognition and picture naming (Vigliocco et al, 2004). These
effects will be discussed in some more detail in Section 2. Here we would like to point out that
finding graded effects is somewhat problematic for holistic theories. Take for example
“cucumber” and “pumpkin”. Both words are from the same semantic category as “celery”,
but “cucumber” is far more similar to “celery” than “pumpkin” is. Priming of
“celery” in holistic theories would come about because both prime words would activate the
superordinate term “vegetable” via hyperonym links. Because “cucumber” and “pumpkin” are
co-hyponyms, they should activate the superordinate to the same extent, and would therefore
spread an equal amount of activation to other co-hyponyms of the primes such as the target
“celery”. The amount of priming should not differ between these conditions, unless additional
assumptions are implemented, such as, for example, the assumption that priming reflects
category typicality, or that other types of links are also relevant to determine the amount of
priming. Featural views, instead, can easily account for such effects: the degree of featural
overlap (and hence the degree of co-activation) is higher between “cucumber” and “celery”
(which share many specific features like <crisp>, <green>, <healthy>, <salad>, <watery>, etc.)
than between “cucumber” and “pumpkin” (which mainly share only the basic features common
to the category).
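The graded-overlap account sketched above can be made concrete. The following is a minimal illustration, not the authors' implementation: the words, features and weights are invented, standing in for speaker-generated feature norms, and similarity is computed as a weighted cosine over shared features.

```python
# Sketch of graded featural overlap, as in featural accounts of priming.
# Feature weights here are invented for illustration; real models use
# speaker-generated feature norms (e.g. McRae et al., 1997).
import math

norms = {
    "cucumber": {"vegetable": 1.0, "green": 0.8, "crisp": 0.7, "salad": 0.6, "watery": 0.5},
    "celery":   {"vegetable": 1.0, "green": 0.7, "crisp": 0.9, "salad": 0.8, "watery": 0.4},
    "pumpkin":  {"vegetable": 1.0, "orange": 0.9, "large": 0.7, "autumn": 0.5},
}

def cosine_overlap(w1, w2):
    """Cosine similarity between two weighted feature vectors."""
    f1, f2 = norms[w1], norms[w2]
    dot = sum(f1[f] * f2[f] for f in f1.keys() & f2.keys())
    n1 = math.sqrt(sum(v * v for v in f1.values()))
    n2 = math.sqrt(sum(v * v for v in f2.values()))
    return dot / (n1 * n2)
```

With these toy vectors the cucumber/celery overlap comes out much higher than the cucumber/pumpkin overlap, mirroring the graded priming pattern described above.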
Featural views also seem to offer a more promising way to account for the semantic-
relatedness effects reported in the neuropsychological literature than holistic views. Because this
literature is treated at length elsewhere (Moss & Tyler, this volume), we limit our discussion to
the implications of category-related deficits (impairments largely restricted to a single semantic
category) for holistic and featural views of semantic representation. Most current
accounts of category-related deficits assume featural representations and explain the observed
deficits either in terms of different types of features being crucial for different categories (e.g.,
Warrington & McCarthy, 1983, 1987; Borgo & Shallice, 2001) or in terms of different
featural properties characterising different categories (e.g. Devlin, Gonnerman, Andersen &
Seidenberg, 1998; Gonnerman, et al., 1997; McRae & Cree, 2002; Tyler, et al. 2000; Vinson, et
al., 2003). These accounts are in line with the general finding that the category-related deficits
are usually graded rather than all-or-none for a specific category, compatible with distributed
neural representation of knowledge. It is unclear how these findings could be explained within
holistic views.
Thus, although semantic priming in word recognition and semantic interference in the
picture-word interference task can be accounted for by both holistic and featural views, some
results, in particular the indication that semantic effects may be graded, are more naturally
accounted for within featural accounts. This is also the case for category-related deficits reported
in the neuropsychological literature. Before we discuss in some detail some of the current
holistic and featural theories of semantic organisation it is important to consider the final key
issue: whether words from different content domains should be represented following the same
principles or not.
1.4 ARE WORDS FROM DIFFERENT CONCEPTUAL DOMAINS REPRESENTED IN THE SAME WAY?
This issue has received relatively little attention, perhaps because much of the
behavioural research aimed at assessing models of semantic representation has focused upon
nouns referring to concrete objects. As Miller and Fellbaum (1991) put it, “When psychologists
think about the organisation of lexical memory it is nearly always the organisation of nouns they
have in mind.” (p. 204). We can further add that most research has been limited not only to
nouns, but to nouns referring to concrete objects.
Words referring to objects differ from words referring to other content domains along a
number of dimensions. Because the content domain of words referring to actions/events is the
only other domain beyond words referring to concrete objects that has received appreciable
attention, our discussion below is primarily limited to contrasting these two domains. For work on
the semantic representation of adjectives see Gross, Fischer and Miller (1989); for a short
discussion of the representation of abstract words see Section 3 below. A first intuitive
difference between objects and actions/events is that objects can be understood in isolation,
while events are relational in nature. One implication of this difference is that words referring to
events are more abstract than words referring to objects (Bird, Lambon Ralph, Patterson and
Hodges, 2000; Breedin, Saffran and Coslett, 1994). Some authors have also argued that words
referring to objects and events differ in featural properties (Graesser, Hopkinson & Schmid,
1987; Huttenlocher & Lui, 1979). For words referring to objects there would be more features
referring only to narrow semantic fields (e.g., <domesticated> vs. <wild> for animals) than for
words referring to events. For these latter, instead, more features would apply to members of
diverse semantic fields (e.g., <intentionality>, <involves motion>). Furthermore, features would
tend to be more strongly correlated within semantic fields for objects (e.g., <having a tail>, and
<having four legs> for mammals) than for events. Because of these differences, Huttenlocher
and Lui (1979) proposed that these two content domains are differentially organized: words
referring to objects would be organized hierarchically, whereas words referring to events would
be organized in a flatter, less hierarchical manner. A summary of the differences in
terms of featural properties between objects and events (from Vinson and Vigliocco, 2002) is
reported in Table 2.
Table 2
The notion of hierarchical levels of categorisation (superordinate,
basic, subordinate) (Rosch & Mervis, 1975) is relatively simple and fruitful for object concepts
(and especially natural kinds); the situation is different for events. Nonetheless, there have been
attempts to define basic level actions (Lakoff, 1987; Morris & Murphy, 1990) and hierarchical
representations for events have been offered in the literature (Jackendoff, 1990; Keil, 1989).
Differences between the domains, however, persist. For example, in Keil (1989), the hierarchical
organization for objects and events is described as being different, with event categories being
represented by fewer levels (generally two) and with fewer distinctions at the superordinate
level. Other attempts to capture a level of organization for events have included distinctions
between "light" (e.g., "do") and "heavy" (e.g., "construct") verbs (Jespersen, 1965; Pinker, 1989)
and distinctions between "general" (e.g., "move") and "specific" (e.g., "run") verbs (Breedin,
Saffran & Schwartz, 1998). However, the light/heavy dichotomy only allows us to draw a
distinction between verbs used as auxiliaries and other verbs, while where to draw the line
between “general” and “specific” verbs is less clear.
Finally, words referring to objects and words referring to events also differ in dimensions
crossing the boundary between semantics and syntax. The syntactic information associated with
event words is richer than for object words. It has been argued that the lexico-semantic
representations for actions contain the "core" meaning (the action or the process denoted) and the
thematic roles specified (e.g., the verb "to kick" implies striking out with the foot, with thematic
roles specifying the parts played by its arguments in terms of "who did what to whom"). Strictly
related to the thematic roles are the number and kind of arguments the verb can take (argument
structure and subcategorization) (Grimshaw, 1991; Jackendoff, 1990; Levin, 1993). These
different kinds of information seem to be strongly related to each other within a language.
In the very few existing models that have addressed the semantic representation of both objects
and events, researchers have embedded either different or the same organisational
principles. Wordnet, a holistic model of semantic representation (Miller & Fellbaum, 1991),
has been developed using the strategy of deciding a priori the diagnostic properties of the different
domains, with different types of relational links for objects and events. For nouns referring to
objects, relations such as synonymy, hyponymy (e.g., “dog” is a hyponym of “animal”), and
meronymy (e.g., “mouth” is part of “face”) are argued to play a prime role in describing the
semantic organization. For verbs, instead, the authors propose that relational links among verb
concepts include troponymy (i.e., a hierarchical relation in which the term at a level below
expresses a particular manner of the term above, e.g., "to whisper" is a troponym of "to speak"),
entailment (e.g., "snoring" entails "sleeping") and antonymy (e.g., "coming" is the opposite of "going"),
while relations such as meronymy would not apply. In the Featural and Unitary Semantic Space
Hypothesis (FUSS, Vigliocco et al., 2004) the strategy, instead, has been not to decide a priori
upon criteria to distinguish the object and the event domains, but to model both types of words
within the same lexico-semantic space using the same principles. This strategy has also been
used by global co-occurrence memory models such as Latent Semantic Analysis (LSA, Landauer
& Dumais, 1997) and Hyperspace Analogue to Language (HAL, Burgess & Lund, 1997).
Theories of semantic representation can be described as falling into two general
classes: those in which a word's meaning is represented in terms of its relation to other words,
and those in which a word's meaning is, instead, represented in terms of separable aspects of
meaning, i.e., features.
Early theories of the relational type were framed as semantic networks in which different words are
represented as nodes, and semantic relationships are expressed by labelled connections between
them. Within such views a word’s meaning is expressed by the links it has to other words: which
other words is it connected to; what types of connections are involved; etc. Of paramount
importance for network-based theories is the type, configuration and relative contribution of the
links that exist between words; numerous alternative frameworks have been developed (see
Johnson-Laird, Herrmann & Chaffin, 1984 for a review) which differ along these crucial
dimensions. Importantly, these models have in common a focus upon (explicit) intensional
relations, and a necessity to explicitly designate those relations that are implemented within the
network.
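As a concrete illustration of such a network, here is a minimal sketch. The words, link types and the simple one-step spreading-activation rule are invented for illustration; actual network models differ in their link inventories and activation dynamics.

```python
# Minimal sketch of a labelled semantic network: words as nodes, typed
# links between them, and a toy one-step spreading-activation rule.
network = {
    "dog":    [("hyperonym", "animal"), ("meronym", "tail")],
    "cat":    [("hyperonym", "animal")],
    "animal": [],
    "tail":   [],
}

def spread_activation(start, decay=0.5):
    """Activate `start` fully, then pass decayed activation one step along its links."""
    activation = {start: 1.0}
    for link_type, neighbour in network[start]:
        activation[neighbour] = decay
    return activation

act = spread_activation("dog")
```

In such a scheme a word's meaning is exhausted by its links: "dog" activates "animal" and "tail", but nothing connects it to "cat" except indirectly through the shared hyperonym.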
Perhaps the most extensive model which implements distinct representational themes is
Wordnet (Miller & Fellbaum, 1991), a network model of the representations of a large number
of nouns, verbs and adjectives in English. In Wordnet, “nouns, adjectives and verbs each have
their own semantic relations and their own organisation determined by the role they must play in
the construction of linguistic messages” (p.197). These relations and organisation are constructed
by hand based on the relations that are believed to be relevant within a given class of words. For
nouns the most important roles are typically played by relations including synonymy,
hierarchical relations and part-whole relations. For verbs, instead, dominant are troponymy,
entailment and antonymy.
Some evidence compatible with a different role of relations such as cohyponymy and antonymy
for nouns and verbs comes from spontaneously occurring semantic substitution errors. In an
analysis of semantic substitution errors from the MIT/AZ corpus, Garrett (1992) reports that for
nouns (n = 181) the large majority of substitutions involve category coordinates (n = 137); with
opposites being present but far less common (n = 26). In contrast, for verbs (n = 48), the
preferred semantic relationship between target and intruding words is different than for nouns,
with opposite pairs (e.g., go/come; remember/forget) being more represented (n = 30) than non-
opposite relations.
Like network models, semantic field theory (originating with Trier, 1931; see Lehrer,
1974; Kittay, 1987) considers semantic representations as arising from relationships among
words’ meanings. Semantic fields are considered to be a set of words that are closely related in
meaning, and the meaning of a word within a field is determined entirely in terms of contrast to
other words within a given semantic field. Crucially, unlike network models this approach does
not require overtly defining the particular relations that are involved in general, but in order to
allow evaluation of such views it is necessary to identify the principles of contrast that apply
within a field (cf. Wordnet’s distinction by grammatical class). For example, the semantic field
of colour words is largely distinguished by the continuous properties of hue and brightness
(Berlin & Kay, 1969); kinship terms by dimensions like age, sex, degree of relation (Bierwisch,
1969); cooking terms by factors like heat source, utensils involved and materials cooked (Lehrer,
1974); body parts by function and bodily proximity (Garrett, 1992). Such factors have been
demonstrated to have behavioural consequences, for example, Garrett (1992) showed that the
spontaneously occurring semantic substitution errors reflect minimal contrasts along the relevant
dimensions within a semantic field (errors involving body parts are largely related by physical
proximity; while errors involving artefacts are much more likely to share function). One crucial
difference with network models is that semantic field theory allows for flexibility in
representation, and therefore a word can be a member of multiple semantic fields at once.
All of the above theories depend upon deciding which relationships or characteristics are
most relevant in representing meaning (either at a broad level, like network models, or within
finer-grained domains of meaning, like semantic field theory), and then deciding upon a manner
of implementing them. An alternative approach, instead, is to
discover representations of words in terms of their relationship to other words, without making
any a priori assumptions about which principles are most important. This approach can be found
in global co-occurrence models such as Latent Semantic Analysis (LSA, Landauer & Dumais,
1997) and Hyperspace Analogue to Language (HAL, Burgess & Lund, 1997). These models take
advantage of computational techniques, using large corpora of texts in order to compute aspects
of a word’s meaning based on those other words found in the same linguistic contexts. As such,
representations are purely abstract, denoting a word’s similarity to other words without revealing
which aspects of meaning or relation are responsible in one case or another. Measures of
similarity based on these models have been demonstrated to predict behavioural performance
(Burgess & Lund, 1997; Landauer & Dumais, 1997), suggesting that the abstract representations
derived from words’ contexts reflect patterns of similarity that have some psychological
plausibility.
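The co-occurrence idea behind these models can be illustrated with a toy example. The corpus, context window and similarity measure below are invented for illustration, and the dimensionality-reduction step that LSA applies (singular value decomposition over a large corpus) is omitted.

```python
# Toy illustration of the co-occurrence principle behind models like LSA
# and HAL: words appearing in similar contexts receive similar vectors.
from collections import Counter
import math

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the cat ate the cheese",
    "freedom is an abstract idea",
    "justice is an abstract idea",
]

def cooccurrence_vectors(sentences):
    """Count, for each word, the other words occurring in the same sentence."""
    vectors = {}
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            ctx = vectors.setdefault(w, Counter())
            ctx.update(words[:i] + words[i + 1:])
    return vectors

def cosine(v1, v2):
    dot = sum(v1[w] * v2[w] for w in v1.keys() & v2.keys())
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm(v1) * norm(v2))

vecs = cooccurrence_vectors(corpus)
```

Because "freedom" and "justice" occur in near-identical contexts, their vectors come out far more similar than those of unrelated words, without any feature or relation ever being named.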
These abstractionist theories, however, have in common a serious flaw in that they focus
only upon relationships among words and are not grounded in reality. As Johnson-Laird et al
(1984) wrote, “The meanings of words can only be properly connected to each other if they are
properly connected to the world” (p. 313). Although specifically criticising semantic network
models, this statement is equally relevant to any theory of representation that is not grounded in
experience of the world.
A variety of theories (e.g. Rosch & Mervis, 1975; Smith, et al., 1974; Collins & Quillian,
1969; Jackendoff, 1990; Minsky, 1975; Norman & Rumelhart, 1975; Shallice, 1993; Smith &
Medin, 1981) considered the representation of meaning not in terms of the relationships among
words, but in terms of feature lists: those properties of meaning which, taken together, express
the meaning of a word. As discussed above, models differ with respect to whether these features
are taken to be primitive components of meaning and in how they make contact
with the world. Whereas some authors’ research agenda concerns the identification of those
primitive features that represent the actual decompositional components of meaning (e.g.
Jackendoff, 1983), other authors have attempted to gain insight into those dimensions of
meaning that are most salient to speakers, as reflected in feature norms generated by speakers
(e.g. McRae, et al., 1997; Rogers & McClelland, 2004; Vigliocco, et al., 2004). These feature
norms serve as a proxy for conceptual content: properties of such features have been shown to
account for effects of meaning similarity (Cree & McRae, 1999; Vigliocco et al., 2004), have
been used as the basis to develop models of impaired performance (Vinson, et al., 2003) and to
generate predictions for imaging experiments. Different featural models have been
implemented, differing mainly in the manner in which semantic representations are derived from
a set of feature norms. One such class of models employs connectionist frameworks which
develop representations from semantic input informed by feature norms, often in order to
simulate impaired performance. One approach has been to train a
network with input that, although not directly obtained from speakers, is informed by particular
characteristics of feature norms that are hypothesized to play a role. For example, Farah and
McClelland (1991) constructed a model in which words referring to living or nonliving entities
were associated with different proportions of visual-perceptual vs. functional features (the
former predominant for living things, the latter predominant for nonliving entities), consistent
with evidence from feature generation tasks. Differential category-related effects were found
when the model was lesioned, depending upon whether the lesion targeted the visual-perceptual
or functional features. A similar approach was taken by Devlin, et al. (1998) who addressed the
differential impairment over time for living or nonliving things as a consequence of Alzheimer's
dementia. Instead of distinguishing between the two on the basis of feature types of individual
items, Devlin et al. implemented semantic representations that were based upon the relationship
between entities in a particular domain: intercorrelated features (those features which frequently
co-occur with each other) and distinguishing features (those which best enable similar entities to
be distinguished from each other). While living things typically have many intercorrelated
features but few distinguishing ones, the situation is reversed for nonliving entities (McRae et al,
1997). These differences in the composition of living and nonliving entities were sufficient to
explain the progression of relative impairment for living and nonliving things as a consequence
of Alzheimer's dementia, even within a single representational system (see also Rogers et al,
2004).
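The logic of the Farah and McClelland (1991) simulation can be sketched very simply. The feature counts below are invented for illustration; the point is only that destroying one feature type damages the two domains differentially, without any explicit category boundary.

```python
# Toy illustration of the Farah & McClelland (1991) logic: living things
# depend mostly on visual-perceptual features, nonliving things mostly on
# functional ones, so "lesioning" one feature type produces a
# category-related deficit. Feature counts are invented for illustration.
items = {
    "zebra":  {"visual": 8, "functional": 1},   # living: mostly visual features
    "hammer": {"visual": 3, "functional": 6},   # nonliving: mostly functional
}

def lesion(item, feature_type, severity=1.0):
    """Proportion of an item's features surviving a lesion that destroys
    `severity` of the given feature type."""
    total = item["visual"] + item["functional"]
    lost = item[feature_type] * severity
    return (total - lost) / total

# Destroying visual-perceptual features hurts the living item far more:
zebra_after = lesion(items["zebra"], "visual")
hammer_after = lesion(items["hammer"], "visual")
```

The deficit falls out of the feature composition alone, which is why such models produce graded rather than all-or-none category effects.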
These models thus build in by hand the featural properties assumed to matter (more visual-perceptual
features for living things, as in Farah & McClelland, 1991; more intercorrelated but fewer
distinguishing features, as in Devlin et al., 1998). Such approaches, however, require making a
priori assumptions about the particular properties that are relevant to explain a particular pattern
of data (e.g., difference in correlated features between living and non-living entities). An
alternative approach has been to assemble sets of words and decide a priori upon their features.
Such an approach was taken by Hinton & Shallice (1991, also see Plaut & Shallice, 1993) who
created a set of semantic features capturing intuitive properties of common objects, and used an
attractor network to learn the mapping between orthography and semantics. Lesioning this
network produced semantic, visual, and combined visual/semantic errors consistent with patterns
of performance in deep dyslexia. In a similar vein, Plaut (1995) used the same approach to
investigate double dissociations between reading concrete and abstract words, using a
constructed set of semantic features underlying the meanings of concrete and abstract words. One
particular aspect of these representations was that abstract words had fewer features overall; this
broad difference in featural properties between concrete and abstract domains (perhaps in
conjunction with other differences) translated into differential consequences when different
aspects of the model were damaged: abstract words were more impaired when the feedforward
connections were lesioned, while concrete words were more impaired when the recurrent
connections were lesioned.
A common property of these models, however, is the fact that the semantic features used
are chosen a priori by the investigators, and may not reflect the full range of properties of
meaning that may be relevant to the representations of the words in question. It is therefore an
important additional step to assess comparable feature information that is produced by speakers
who are naive about the research questions involved (i.e., feature norms). This allows the
investigation of issues like featural properties, distribution of features across different sensory
modalities, etc. using entirely empirically derived measures. By collecting features of meaning
from multiple naive speakers, we also gain a fine-grained measure of featural salience, as a
feature's relative contribution to a word's meaning can be weighted according to the number of
speakers who produced that feature (while the approaches described above used binary features
which are simply present or absent). Models based directly on such speaker-generated
feature norms have been implemented (McRae, et al., 1997; Vigliocco et al., 2004). These models
differ mainly in their composition (McRae et al. including a substantially larger set of object
nouns; Vigliocco et al. including object nouns but also nouns and verbs referring to actions and
events), and the specific manner in which semantic representations are derived from a set of
feature norms (dimensionality reduction techniques in both cases; McRae et al through attractor
networks, Vigliocco et al through self-organizing maps, Kohonen, 1997). Here we focus upon
Vigliocco et al’s (2004) Featural and Unitary Semantic Space (FUSS) model because it has been
developed to account for the semantic organisation of both words referring to objects and words
referring to events.
In FUSS, conceptual features (of which feature norms are a proxy) are bound into a separate level of
lexico-semantic representations which serves to mediate between concepts and other linguistic
information (syntax and wordform), in line with Damasio et al’s (2001) convergence zones (see
also Simmons & Barsalou, 2003). The organisation at this level arises through an unsupervised
learning process (implemented using self-organising maps; Kohonen, 1997; see Vinson &
Vigliocco, 2002; Vigliocco et al. 2004 for complete details of implementation) which is sensitive
to properties of the featural input, such as the number of features for each concept, how salient a
given feature is for a concept (feature weight), features that are shared among different concepts
and features that are correlated. As such it is not necessary to specify in advance which aspects
of the input should be reflected in lexico-semantic organisation; this process also allows different
properties to exert different influences depending upon the characteristics of a given semantic
field (see Table 2 for resulting differences between objects and actions; see also Cree & McRae,
2003), giving rise, for example, to the different smoothness of the space for objects (organised in
a "lumpy" manner) with semantic field boundaries being well-defined, and events (organised
"smoothly") in which there are no clear boundaries among fields (see Figure 3).
Figure 3
This model, which uses the same representational principles for the organisation of objects and
events, has been shown to account for graded semantic effects in both domains (Vigliocco,
Vinson, Damian & Levelt, 2002; Vigliocco et al., 2004). In these studies graded effects were
assessed by varying the semantic distance between words (operationalised as the Euclidean
distance between the two units in the resulting composite output map that respond best to the
input vectors corresponding to the features of the two words). Importantly, these graded effects
were found both in production (semantic errors and naming latencies in picture word
interference experiments) and word recognition; crucially, they were found both for words
referring to objects and words referring to events (Vigliocco et al., 2004). Finally, especially in
the object domain, these effects were found both within and across categories (see Vigliocco et
al., 2002) suggesting that there are no boundaries between categories. These findings suggest
that some important aspects of words’ meanings can be captured using feature norms and the
same principles across domains, despite the differences between objects and events that we have
discussed above, and the greater context dependency of words referring to events than words
referring to objects.
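The procedure described above can be sketched as follows. This is not the FUSS implementation: the feature vectors, map size and training schedule are invented, but it shows a self-organising map trained on featural input, with semantic distance operationalised as the Euclidean distance between best-matching units.

```python
# Sketch of a self-organising map (Kohonen, 1997) over featural input,
# with semantic distance measured between best-matching map units.
# Feature vectors, map size and training schedule are invented.
import math, random

GRID = 6  # 6x6 map of units

def best_matching_unit(weights, vec):
    """Map unit whose weight vector is closest to the input vector."""
    return min(weights, key=lambda u: sum((a - b) ** 2
                                          for a, b in zip(weights[u], vec)))

def train_som(data, epochs=60, seed=0):
    rng = random.Random(seed)
    dim = len(next(iter(data.values())))
    weights = {(x, y): [rng.random() for _ in range(dim)]
               for x in range(GRID) for y in range(GRID)}
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                      # decaying learning rate
        radius = max(1.0, GRID / 2 * (1 - t / epochs))   # shrinking neighbourhood
        for vec in data.values():
            bx, by = best_matching_unit(weights, vec)
            for (x, y), w in weights.items():
                d = math.hypot(x - bx, y - by)
                if d <= radius:
                    h = math.exp(-d * d / (2 * radius * radius))
                    for i in range(dim):
                        w[i] += lr * h * (vec[i] - w[i])
    return weights

def semantic_distance(weights, data, w1, w2):
    """Euclidean distance between the two words' best-matching units."""
    u1 = best_matching_unit(weights, data[w1])
    u2 = best_matching_unit(weights, data[w2])
    return math.hypot(u1[0] - u2[0], u1[1] - u2[1])

features = {
    "cucumber": [1.0, 0.9, 0.0, 0.1],
    "celery":   [1.0, 1.0, 0.0, 0.0],
    "hammer":   [0.0, 0.1, 1.0, 0.9],
}
som = train_som(features)
d_within = semantic_distance(som, features, "cucumber", "celery")
d_across = semantic_distance(som, features, "cucumber", "hammer")
```

With toy input of this kind, words with overlapping feature vectors typically settle on the same or neighbouring map units, so within-category distances come out smaller than across-category ones, yielding the graded, boundary-free effects described above.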
How do these various models compare with each other? The results of experiments by
Barsalou et al. (2003); Glenberg and Kaschak (2002) and Richardson, et al. (2003) demonstrate
the importance of grounding representations in experience in some manner, and the evidence
from cognitive neuroscience (e.g. Hauk, et al., 2004; Martin & Chao, 2001; Tettamanti, et al.,
2004; Vigliocco et al, submitted) shows that brain areas related to sensory-motor modalities are
activated even in the most abstract linguistic tasks. But beyond this call for grounding, models
can also be directly contrasted with each other in the extent to which their semantic
representations predict performance on behavioural tasks involving the same items, provided the
models are implemented in enough detail to yield such predictions.
Although most models are not implemented to the extent which would allow such
comparisons (e.g. most network models, semantic field theory, and the embodied views of
Glenberg and Barsalou), this is possible with Wordnet, global co-occurrence models and featural
models. Vigliocco et al (2004) contrasted similarity measures extracted from three models, one
derived from each class (Wordnet, LSA, FUSS), assessing the extent to which similarity
measures from each model predicted a range of semantic effects (semantic
substitution errors in speeded picture naming; semantic interference effects on picture naming
latencies from distracter words; semantic priming in lexical decision), for both words referring to
objects and words referring to events. In these studies, the feature-based model (FUSS)
consistently performed the best in predicting semantic effects. LSA measures of similarity were
also strong predictors in general, though not to the extent of FUSS. Wordnet’s performance for
objects was also strong, but for events it performed particularly poorly, as the limited degree of
hierarchical organisation it assumes for verbs did not allow it to distinguish closely from
loosely-related events. Within the current implementations, these findings highlight the
advantages of featural representations over purely relational ones.
We have said above that most research has focused primarily or exclusively on the
representation of words referring to objects, and that only very few studies have addressed the
semantic organisation of words from other content domains, such as events and properties. More
generally, however, even when events and properties have been addressed, much of the work
has been limited to explore the organisation of concrete concepts/words. Notable exceptions are
studies in neuropsychology that have documented a double dissociation between concrete and
abstract words (Breedin, et al., 1994), and have offered an account for the greater degree of
impairment of abstract rather than concrete words in terms of differences in featural richness of
semantic representations, with concrete words having richer representations and therefore being
more resistant to damage than abstract words, whose representations, instead, would be sparser.
On the basis of the properties speakers produce when describing abstract concepts such as
“freedom” and “invention”, Barsalou and Wiemer-Hastings (2005) suggest that an important
distinction between abstract and concrete words is which situations are more salient for the two
types of words. Whereas for objects attention focuses on the specific object against a
background, for abstract notions attention focuses on social context, events and introspective
properties. For example, for “true”, the focus would include the speaker’s claim, the listener’s
representation of the claim and the listener’s assessment of the claim, rendering the content of
abstract words more complex and more variable. A different proposal has been
developed within Cognitive Linguistics by Lakoff and colleagues (Lakoff, 1992; Lakoff &
Johnson, 1980; see also Coulson, 2000, Turner & Fauconnier, 2000). Abstract knowledge is
viewed as originating in conceptual metaphors (i.e., the use of a concrete conceptual domain of
knowledge to describe an abstract conceptual domain). For example, English consistently uses
language concerning “throwing” and “catching” to describe communication of ideas; and words
such as “in front” and “behind” to describe the passage of time. It has been argued that these
patterns of metaphorical language use reflect the manner in which we think about abstract
concepts (Boroditsky, 2001). In this view, learning and representation of abstract concepts in the
mind/brain is grounded in the learning and representation of concrete knowledge, which in turn
is grounded in our bodily experience of the world (see also Taub, 2001 who presents an excellent
discussion of iconicity in American Sign Language as a window into the concrete dimensions
expressed in conceptual metaphors). Some initial evidence for a role of conceptual metaphors in
representing abstract words comes from the study by Richardson et al (2003) presented in
Section 1 who showed that speakers were sensitive to the direction of motion implied in abstract
words such as “respect” (upward motion, linked to the metaphor of “looking up” to someone
whom one respects). This hypothesis suggests continuity between the representation of concrete
and abstract words, both of which would be grounded in our experience of the physical world.
An alternative possibility, however, is that the meaning of abstract words is more highly
dependent on language than the meaning of concrete words: concepts corresponding to
concrete things and events could develop in a manner that is to a large extent driven by innate
predispositions and direct experience with the world (and then generalisation of these
experiences). Abstract words, however, are learnt later and are learnt primarily via language.
These facts invite the hypothesis that implicit learning mechanisms such as those underlying
models such as LSA and HAL (i.e., inferences about meaning made from the fact that words
sharing similar meanings tend to be found in similar sentences), but also
mechanisms such as syntactic bootstrapping (e.g., Gleitman, Cassidy, Nappa, Papafragou &
Trueswell, 2005), underlie the development and thus the representation of abstract words more
than concrete words. We are not aware of any empirical study that has addressed this possibility.
It is interesting to note, however, that in the model comparison carried out by Vigliocco et al
(2004) and described above in Section 2.3, LSA, a global co-occurrence model whose
representations were developed solely from linguistic input, provided better predictions of
graded semantic effects for words referring to events (which are more abstract) than for words
referring to objects.
Of course, the two possibilities outlined above are not necessarily mutually exclusive and
whereas conceptual metaphors may be part of the semantic representation for at least some
words, implicit learning mechanisms based on extracting similarity in meaning from similarity in
linguistic context may also play an important part. As abstract words also appear to be more
susceptible to cross-linguistic variability (and cross-cultural variability too) than concrete words,
universal conceptual biases may interact with language-specific factors in determining the
organisation of the semantic system. It is a challenge for future research to explore these issues
in a systematic manner.
Acknowledgments: While writing this chapter, Gabriella Vigliocco and David Vinson were
4. REFERENCES
Brysbaert, M., Fias, W., & Noel, M.P. (1998). The Whorfian hypothesis and numerical
cognition: Is 'twenty-four' processed in the same way as 'four-and-twenty'? Cognition, 66,
51-77.
Burgess, C. & Lund, K. (1997). Modeling parsing constraints with high-dimensional context
space. Language and Cognitive Processes, 12, 177-210.
Cappa, S.F., Frugoni, M., Pasquali, P., Perani, D., & Zorat, F. (1998). Category-specific
naming impairment for artifacts: A new case. Neurocase, 4, 391-397.
Caramazza, A. (1997). How many levels of processing are there in lexical access? Cognitive
Neuropsychology, 14, 177-208.
Caramazza, A. & Hillis, A.E. (1990). Where do semantic errors come from? Cortex, 26, 95-
122.
Chierchia, G. & McConnell-Ginet, S. (2000). Meaning and grammar. Cambridge, MA: MIT
Press.
Collins, A. M. & Loftus, E.F. (1975). A spreading-activation theory of semantic processing.
Psychological Review, 82, 407-428.
Collins, A. M. & Quillian, M.R. (1969). Retrieval time from semantic memory. Journal of
Verbal Learning and Verbal Behavior, 8, 240-247.
Coulson, S. (2000). Semantic leaps: Frame-shifting and conceptual blending in meaning
construction. Cambridge, UK: Cambridge University Press.
Cree, G. S., McRae, K. & McNorgan, C. (1999). An attractor model of lexical conceptual
processing: Simulating semantic priming. Cognitive Science, 23, 371-414.
Cree, G.S. & McRae, K. (2003). Analyzing the factors underlying the structure and
computation of the meaning of chipmunk, cherry, chisel, cheese and cello (and many other
such concrete nouns). Journal of Experimental Psychology: General, 132, 163-201.
Cruse, D. A. (1986). Lexical semantics. Cambridge: Cambridge University Press.
Damasio, H., Grabowski, T.J., Tranel, D., Hichwa, R.D. & Damasio, A.R. (1996). A neural
basis for lexical retrieval. Nature, 380, 499-505.
Damasio, H., Grabowski, T.J., Tranel, D., Ponto, L. L. B., Hichwa, R.D. & Damasio, A.R.
(2001). Neural correlates of naming actions and of naming spatial relations. Neuroimage,
13, 1053-1064.
Damasio, H., Tranel, D., Grabowski, T.J., Adolphs, R. & Damasio, A.R. (2004). Neural
systems behind word and concept retrieval. Cognition, 92, 179-229.
Davidoff, J., Davies, I., & Roberson, D. (1999). Colour categories in a stone-age tribe. Nature,
398, 203-204.
Devlin, J.T., Gonnerman, L.M., Andersen, E.S. & Seidenberg, M.S. (1998). Category-specific
semantic deficits in focal and widespread brain damage: A computational account. Journal
of Cognitive Neuroscience, 10, 77-94.
Dowty, D. R. (1979). Word meaning and Montague Grammar: The semantics of verbs and
times in Generative Semantics and in Montague's PTQ. Dordrecht: D. Reidel Publishing
Co.
Farah, M. J. & McClelland, J.L. (1991). A computational model of semantic memory
impairment: Modality-specificity and emergent category specificity. Journal of
Experimental Psychology: General, 120, 339-357.
Federmeier, K.D. & Kutas, M. (1999). A rose by any other name: Long-term memory structure
and sentence processing. Journal of Memory and Language, 41, 469-495.
Fisher, C. (1994). Structure and meaning in the verb lexicon: Input for a syntax-aided verb
learning procedure. Cognitive Psychology, 5, 473-517.
Fodor, J. (1976). The language of thought. Sussex: Harvester Press.
Fodor, J. A., Garrett, M.F., Walker, E.C.T. & Parkes, C.H. (1980). Against definitions.
Cognition, 8, 263-367.
Fodor, J. D., Fodor, J. A., & Garrett, M. F. (1975). The psychological unreality of semantic
representations. Linguistic Inquiry, 6, 515-535.
Gallese, V. & Lakoff, G. (2005). The brain's concepts: The role of the sensory-motor system in
conceptual knowledge. Cognitive Neuropsychology, 22, 455-479.
Garrett, M. F. (1984). The organization of processing structure for language production:
Applications to aphasic speech. In D. Caplan, A. R. Lecours & A. Smith (Eds.), Biological
perspectives on language (pp. 172-193). Cambridge, MA: MIT Press.
Garrett, M. F. (1992). Lexical retrieval processes: Semantic field effects. In E. Kittay & A.
Lehrer (Eds.), Frames, fields and contrasts: New essays in semantic and lexical
organization. Hillsdale, NJ: Erlbaum.
Garrett, M. F. (1993). Errors and their relevance for models of language production. In G.
Blanken, J. Dittmann, H. Grimm, J. Marshall & C. Wallesch (Eds.), Linguistic disorders
and pathologies. Berlin: de Gruyter.
Gentner, D., & Goldin-Meadow, S. (2003). Language in mind: Advances in the study of
language and thought. Cambridge, MA: MIT Press.
Glaser, W. R. & Düngelhoff, R.J. (1984). The time course of picture-word interference. Journal
of Experimental Psychology: Human Perception and Performance, 10, 640-654.
Gleitman, L.R. (1990). The structural sources of verb learning. Language Acquisition, 1, 3-55.
Gleitman, L.R., Cassidy, K., Nappa, R., Papafragou, A. & Trueswell, J.C. (2005). Hard Words.
Language Learning and Development, 1, 23-64.
Glenberg, A. & Kaschak, M. (2002). Grounding language in action. Psychonomic Bulletin &
Review, 9, 558-565.
Gonnerman, L. M., Andersen, E. S., Devlin, J. T., Kempler, D., & Seidenberg, M. S. (1997).
Double dissociation of semantic categories in Alzheimer's disease. Brain and Language,
57, 254-279.
Graesser, A. C., Hopkinson, P.L. & Schmid, C. (1987). Differences in interconcept
organization between nouns and verbs. Journal of Memory and Language, 26, 242-253.
Grimshaw, J. (1991). Argument structure. Cambridge, MA: MIT Press.
Grosjean, F. (1998). Studying bilinguals: Methodological and conceptual issues. Bilingualism:
Language and Cognition, 1, 131-149.
Gross, D., Fischer, U. & Miller, G. A. (1989). Antonymy and the representation of adjectival
meanings. Journal of Memory and Language, 28, 92-106.
Hart, J., Berndt, R.S., & Caramazza, A. (1985). Category-specific naming deficit following
cerebral infarction. Nature, 316, 439-440.
Hart, J., & Gordon, B. (1992). Neural subsystems for object knowledge. Nature, 359, 60-64.
Hauk, O., Johnsrude, I. & Pulvermüller, F. (2004). Somatotopic representation of action
words in human motor and premotor cortex. Neuron, 41, 301-307.
Hinton, G. E. & Shallice, T. (1991). Lesioning an attractor network: Investigations of acquired
dyslexia. Psychological Review, 98, 74-95.
Humphreys, G.W., Price, C.J., & Riddoch, M.J. (1999). From objects to names: A cognitive
neuroscience approach. Psychological Research, 62, 118-130.
Huttenlocher, J. & Lui, F. (1979). The semantic organization of some simple nouns and verbs.
Journal of Verbal Learning and Verbal Behavior, 18, 141-179.
Jackendoff, R. (1983). Semantics and cognition. Cambridge, MA: MIT Press.
Jackendoff, R. (1990). Semantic structures. Cambridge, MA, MIT Press.
Jackendoff, R. (1992). Languages of the mind. Cambridge, MA: MIT Press.
Jackendoff, R. (2002). Foundations of language. Oxford: Oxford University Press.
Jespersen, O. (1965). A modern English grammar based on historical principles. London: Allen
& Unwin.
Johnson-Laird, P.N., Herrmann, D.J., & Chaffin, R. (1984). Only connections: A critique of
semantic networks. Psychological Bulletin, 96, 292-315.
Keil, F. C. (1989). Concepts, kinds, and cognitive development. Cambridge, MA, MIT Press.
Kelly, M. H., Bock, J. K., & Keil, F. C. (1986). Prototypicality in a linguistic context: Effects
on sentence structure. Journal of Memory and Language, 25, 59-74.
Kita, S., & Özyürek, A. (2003). What does cross-linguistic variation in semantic coordination
of speech and gesture reveal?: Evidence for an interface representation of spatial thinking
and speaking. Journal of Memory and Language, 48, 16-32.
Kittay, E. F. (1987). Metaphor: Its cognitive force and linguistic structure. Oxford: Clarendon
Press.
Kohonen, T. (1997). Self-organizing maps. New York: Springer.
Lakoff, G. (1987). Women, fire, and dangerous things: What categories reveal about the mind.
Chicago: University of Chicago Press.
Lakoff, G. (1992). The contemporary theory of metaphor. In A. Ortony (Ed.), Metaphor and
thought (2nd ed.). Cambridge: Cambridge University Press.
Lakoff, G. & Johnson, M. (1980). Metaphors we live by. Chicago: University of Chicago Press.
Landauer, T.K. & Dumais, S.T. (1997). A solution to Plato's problem: The Latent Semantic
Analysis theory of acquisition, induction and representation of knowledge. Psychological
Review, 104, 211-240.
Lehrer, A. (1974). Semantic fields and lexical structure. Amsterdam: North-Holland
Publishing.
Levelt, W. J. M. (1989). Speaking: From intention to articulation. Cambridge, MA: MIT Press.
Levelt, W. J. M., Roelofs, A. & Meyer, A.S. (1999). A theory of lexical access in speech
production. Behavioral and Brain Sciences, 22, 1-38.
Levin, B. (1993). English verb classes and alternations: A preliminary investigation. Chicago:
University of Chicago Press.
Levinson, S.C. (1996). Frames of reference and Molyneux’s question: Crosslinguistic evidence.
In P. Bloom, M. Peterson, L. Nadel, & M. Garrett (Eds.), Language and space, pp. 109-
169. Cambridge, MA: MIT Press.
Levinson, S.C. (2003). Language and mind: Let’s get the issue straight! In D. Gentner and S.
Goldin-Meadow (Eds.), Language in Mind: Advances in the Study of Language and
Thought, pp. 25-46. Cambridge, MA: MIT Press.
Lucy, J.A. (1992). Grammatical categories and cognition: A case study of the Linguistic
Relativity Hypothesis. Cambridge, UK: Cambridge University Press.
Lyons, J. (1977). Semantics (Vol. 2). Cambridge, UK: Cambridge Univ. Press.
Malt, B., Sloman, S., Gennari, S., Shi, M., & Wang, Y. (1999). Knowing versus naming:
Similarity and linguistic categorization of artifacts. Journal of Memory and Language, 40,
230-262.
Marr, D. (1982). Vision. San Francisco: W. H. Freeman and Co.
Martin, A. & Chao, L.L. (2001). Semantic memory and the brain: Structure and processes.
Current Opinion in Neurobiology, 11, 194-201.
Masson, M. E. J. (1995). A distributed memory model of semantic priming. Journal of
Experimental Psychology: Learning, Memory and Cognition, 21, 3-23.
McNeill, D. (1992). Hand & mind. Chicago: University of Chicago Press.
McRae, K. & Boisvert, S. (1998). Automatic semantic similarity priming. Journal of
Experimental Psychology: Learning, Memory and Cognition, 24, 558-572.
McRae, K., & Cree, G. S. (2002). Factors underlying category-specific semantic deficits. In E.
M. E. Forde & G. W. Humphreys (Eds.), Category-Specificity in Brain and Mind (pp. 211-
250). East Sussex, UK: Psychology Press.
McRae, K., de Sa, V. & Seidenberg, M.S. (1997). On the nature and scope of featural
representations of word meaning. Journal of Experimental Psychology: General, 126, 99-
130.
Meyer, D.E. & Schvaneveldt, R.W. (1971). Facilitation in recognizing pairs of words:
Evidence of a dependence between retrieval operations. Journal of Experimental
Psychology, 90, 227-234.
Miller, G. A. & Fellbaum, C. (1991). Semantic networks of English. Cognition, 41, 197-229.
Minsky, M. (1975). A framework for representing knowledge. In P. H. Winston (Ed.), The
psychology of computer vision (pp. 211-277). New York: McGraw-Hill.
Morris, M. W. & Murphy, G.L. (1990). Converging operations on a basic level in event
taxonomies. Memory and Cognition, 18, 407-418.
Murphy, G.L. (2002). The big book of concepts. Cambridge, MA: MIT Press.
Neely, J. H. (1991). Semantic priming effects in visual word recognition: A selective review of
current findings and theories. In D. Besner & G. W. Humphreys (Eds.), Basic processes in
reading: Visual word recognition. Hillsdale, NJ: Erlbaum.
Norman, D. A. & Rumelhart, D. E. (1975). Explorations in cognition. San Francisco: Freeman.
Pinker, S. (1989). Learnability and cognition: The acquisition of argument structure.
Cambridge, MA, MIT Press.
Plaut, D. C. (1995). Semantic and associative priming in a distributed attractor network.
Proceedings of the Seventeenth Annual Conference of the Cognitive Science Society, 17,
37-42.
Slobin, D. I. (1996a). From "thought and language" to "thinking for speaking". In J. Gumperz
& S. Levinson (Eds.), Rethinking linguistic relativity (pp. 70-96). Cambridge, UK:
Cambridge University Press.
Slobin, D. I. (1996b). Two ways to travel: Verbs of motion in English and Spanish. In M.
Shibatani & S. A. Thompson (Eds.), Grammatical constructions: Their form and meaning.
Oxford: Clarendon Press.
Smith, E. E. & Medin, D.L. (1981). Categories and concepts. Cambridge, MA: Harvard
University Press.
Smith, E. E., Shoben, E.J. & Rips, L.J. (1974). Structure and process in semantic memory:
Featural model for semantic decisions. Psychological Review, 81, 214-241.
Talmy, L. (1986). Lexicalization patterns: Semantic structure in lexical forms. In T. Shopen
(Ed.), Language typology and syntactic description, Vol. III: Grammatical categories and
the lexicon (pp. 57-150). Cambridge: Cambridge University Press.
Taub, S. F. (2001). Language from the body: Iconicity and metaphor in American Sign
Language. Cambridge: Cambridge University Press.
Tettamanti, M., Buccino, G., Saccuman, M.C., Gallese, V., Danna, M., Scifo, P., Fazio, F.,
Rizzolatti, G., Cappa, S.F. & Perani, D. (2005). Listening to action-related sentences
activates fronto-parietal motor circuits. Journal of Cognitive Neuroscience, 17, 273-281.
Tranel, D., Adolphs, R., Damasio, H. & Damasio, A.R. (2001). A neural basis for the retrieval
of words for actions. Cognitive Neuropsychology, 18, 655-670.
Trier, J. (1931) Der Deutsche Wortschatz im Sinnbezirk des Verstandes. Heidelberg: Winter.
Turner, M. & Fauconnier, G. (2000). Metaphor, metonymy, and binding. In A. Barcelona (Ed.),
Metonymy and Metaphor at the Crossroads. New York: Walter de Gruyter.
Tyler, L.K., Moss, H.E., Durrant-Peatfield, M., & Levy, J. (2000). Conceptual structure and the
structure of categories: A distributed account of category-specific deficits. Brain and
Language, 75, 195-231.
Vigliocco, G. & Filipovic, L. (2004). From mind in the mouth to language in the mind. Trends
in Cognitive Sciences, 8, 5-7.
Vigliocco, G., Vinson, D.P., Damian, M.F., & Levelt, W. (2002). Semantic distance effects on
object and action naming. Cognition, 85, B61-B69.
Vigliocco, G., Vinson, D.P, Lewis, W. & Garrett, M.F. (2004). The meanings of object and
action words. Cognitive Psychology, 48, 422-488.
Vigliocco, G., Vinson, D.P., Paganelli, F. & Dworzynski, K. (in press) Grammatical gender
effects on cognition: Implications for language learning and language use. Journal of
Experimental Psychology: General.
Vigliocco, G., Warren, J., Arciuli, J., Siri, S., Scott, S., & Wise, R. (submitted). The role of
semantics and grammatical class in the neural representation of words.
Vinson, D. P. & Vigliocco, G. (2002). A semantic analysis of noun-verb dissociations in
aphasia. Journal of Neurolinguistics, 15, 317-351.
Vinson, D. P., Vigliocco, G., Cappa, S.F. & Siri, S. (2003). The breakdown of semantic
knowledge: insights from a statistical model of meaning representation. Brain and
Language, 86: 347-365.
Warrington, E.K. & McCarthy, R. (1983). Category specific access dysphasia. Brain, 106,
859-878.
Warrington, E.K. & McCarthy, R. (1987). Categories of knowledge: Further fractionations and
an attempted integration. Brain, 110, 1273-1296.
Warrington, E. K. & Shallice, T. (1984). Category specific semantic impairments. Brain, 107,
829-854.
Whorf, B. (1956). Language, thought, and reality: Selected writings of Benjamin Lee Whorf
(J.B. Carroll, Ed.). Cambridge, MA: MIT Press.
Table 1. Sample of speaker-generated features for two nouns referring to objects and two verbs
referring to actions. Weights reflect the number of speakers (max = 20) who generated that
feature for a given word (from Vinson & Vigliocco, 2002).
[Table body largely unrecoverable; surviving entries: healthy 2; violent 3; use-hammer 3.]
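Feature norms of the kind sampled in Table 1 are commonly converted into vectors whose entries are the speaker-generated weights, with semantic proximity between two words measured as the cosine of their feature vectors. A minimal sketch of that computation (the words and weights below are invented for illustration, not drawn from the actual norms):

```python
from math import sqrt

# Hypothetical speaker-generated feature norms in the style of Table 1:
# each weight is the number of speakers (out of 20) who listed that feature.
norms = {
    "hammer": {"tool": 18, "handle": 12, "hit": 10, "metal": 9},
    "chisel": {"tool": 17, "handle": 11, "metal": 10, "use-hammer": 3},
    "cherry": {"fruit": 19, "red": 15, "sweet": 10, "healthy": 2},
}

def cosine(u, v):
    """Cosine similarity between two weighted feature vectors (dicts)."""
    dot = sum(w * v[f] for f, w in u.items() if f in v)
    norm = sqrt(sum(w * w for w in u.values())) * sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

print(cosine(norms["hammer"], norms["chisel"]))  # high: many shared tool features
print(cosine(norms["hammer"], norms["cherry"]))  # zero: no shared features
```

Words from the same semantic field share many weighted features and so cluster together in such a space, which is the intuition behind the field-by-field analyses reported in the tables and figures.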
[Table 2: columns for Objects and Actions; rows list feature types. Table body unrecoverable.]
Note a: Shared features were defined as those features shared by 15% or more of the exemplars
within a semantic field. The values given here are the average summed weight of all the
"shared" features per word in the test set.
Note b: Average correlation of feature pairs for each word in the test set, calculated on the
basis of correlations with all feature pairs across the entire set of 456 words.
Figure Captions:
Figure 1: Distribution of features of different types across some semantic fields: nouns
referring to objects and verbs referring to events. Black sections depict features related to
the visual modality (e.g. <red>); white sections depict features related to other perceptual
modalities (e.g. <loud>, <hot>); grey sections depict features related to motion (e.g.
<fast>, <awkward>).
Figure 2: Schematic representation of the differences between holistic views (left panel),
featural views based upon abstract conceptual features (centre panel) and featural views
based upon embodied features (right panel).
Figure 3: … (from Vinson & Vigliocco, 2002; Vigliocco et al., 2004) illustrating the different
clustering tendencies among some object nouns (left panel) and action verbs (right panel).
[Figure 1: bar charts of % of total feature weight by feature type (visual, other perceptual,
motor) for nouns referring to objects (animals, fruit/veg, tools, clothing, vehicles, body) and
verbs referring to events (body, tool-action, cook, motion, noise).]
[Figure 2: three panels labelled Holistic views, Abstract Features and Embodied Features;
features shown include stripes, roars, moves fast (etc.); other labels: 3-dimensional
representation, Abstract conceptual representations.]