UALL 2004
PSYCHOLINGUISTICS
LECTURE 7
(Diagram: selecting a word draws on semantics, syntax, and phonology.)
Selecting Words
❑ Stepping-stone Model
❑ Waterfall/Cascade Model
❑ Interactive/Spreading Activation Model
Stepping-stone Model
Two major stepping stones: (1) meaning and word class, (2) sounds.
First, a person picks a meaning and a word class, and only then the sound
structure.
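As a rough illustration only (the toy lexicon and meaning labels below are invented), the stepping-stone idea can be sketched as two strictly ordered look-ups: meaning and word class are settled first, and only then is the sound structure retrieved.

```python
# Toy lexicon (invented): meaning + word class on one "stone", sounds on the next.
LEXICON = {
    "sheep": {"meaning": "woolly farm animal", "word_class": "noun", "sounds": "/ʃiːp/"},
    "goat":  {"meaning": "horned farm animal", "word_class": "noun", "sounds": "/ɡəʊt/"},
    "graze": {"meaning": "eat grass",          "word_class": "verb", "sounds": "/ɡreɪz/"},
}

def stepping_stone_select(target_meaning, target_class):
    # Stone 1: pick a word purely on meaning and word class.
    candidates = [w for w, entry in LEXICON.items()
                  if entry["meaning"] == target_meaning
                  and entry["word_class"] == target_class]
    chosen = candidates[0]                  # the decision is complete here
    # Stone 2: only now is the sound structure looked up.
    return LEXICON[chosen]["sounds"]

print(stepping_stone_select("woolly farm animal", "noun"))  # -> /ʃiːp/
```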
Waterfall/Cascade Model (McClelland, 1979)
There was a need for a model to indicate that a person still thinks
about the meaning as he/she selects a sound; the waterfall/cascade model
shows this.
In this model, all the information activated at the first stage is still
available at the next stage.
So, word selection is not just a case of following one word through
from beginning to end.
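In contrast to the stepping-stone sketch above, the waterfall/cascade idea can be sketched like this (a toy simulation with made-up activation values): partial activation keeps flowing from the meaning stage into the sound stage on every cycle, so semantic information is still available while the sound form is being chosen.

```python
# Made-up activation levels for candidates in the semantic stage.
semantic = {"sheep": 0.6, "goat": 0.5, "graze": 0.1}
phonological = {word: 0.0 for word in semantic}

for cycle in range(3):
    # The semantic stage has not finished deciding yet ...
    semantic["sheep"] += 0.2
    # ... but whatever activation exists already cascades on to the sound forms.
    for word, activation in semantic.items():
        phonological[word] += 0.5 * activation

print(phonological)   # every candidate's sound form carries some activation
```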
Example:
The dialogue shows information can flow both ways: particular sounds
can enable a speaker to activate meanings, just as meanings activate
sounds.
Interactive/Spreading Activation Model
In speech production, the current is normally initiated in the semantic
component, where a semantic field will be aroused and then narrowed
down.
The current flows to the phonological ‘area-code’ before the final choice
is made; many words will be triggered there, and those activated will
feed back into semantics (information flows backwards and forwards).
All the links between activated sections will metaphorically be lit up, with
electric current rushing backwards and forwards, exciting more and more
related words.
The related words are stimulated strongly while the unrelated ones fade.
Phonological activation follows the same pattern, activating any word form
that fits (often more than one, e.g. several animal names); these are later
matched with the semantically activated meaning to narrow down the choice.
If a person doesn’t pay much attention, the wrong choice can be made.
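A small spreading-activation sketch (the network, link weights, and words are all invented for illustration): activation flows backwards and forwards along the links between semantic and phonological units, related words are boosted, and unrelated ones fade through decay.

```python
# Symmetric links, so activation can rush "backwards and forwards".
links = {
    ("sem:sheep",  "phon:sheep"): 0.8,
    ("sem:goat",   "phon:goat"):  0.8,
    ("sem:sheep",  "sem:goat"):   0.4,   # same semantic field (farm animals)
    ("phon:sheep", "phon:sheet"): 0.3,   # phonologically similar neighbour
}
activation = {node: 0.0 for pair in links for node in pair}
activation["sem:sheep"] = 1.0            # the meaning is aroused first

for _ in range(5):
    incoming = {node: 0.0 for node in activation}
    for (a, b), weight in links.items():            # spread in both directions
        incoming[a] += weight * activation[b]
        incoming[b] += weight * activation[a]
    activation = {node: 0.6 * activation[node] + incoming[node]  # old activation
                  for node in activation}                        # decays away

winner = max((n for n in activation if n.startswith("phon:")), key=activation.get)
print(winner)    # phon:sheep ends up most active among the sound forms
```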
Basic Problems:
❑ Lexical decision (selection): accumulating sensory input continues to map
onto the subset of activated word candidates until the intended lexical entry
is eventually selected.
WORD RECOGNITION
❑ Word recognition is assumed to involve a set of processing units that
receive input from spoken or written modalities and fire when their activated
inputs exceed some criterion level (threshold).
❑ Thus, stored representations of words that do not fit into the evolving
context are activated anyway as long as they match the acoustic
properties of the word stimulus.
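A minimal sketch of such threshold units (all the numbers below are invented): each word unit simply sums whatever input activates it and fires once the criterion level is crossed, which is why an acoustically matching word fires even when it does not fit the evolving context.

```python
THRESHOLD = 1.0

def fires(inputs):
    # A unit fires when its accumulated, activated inputs exceed the criterion level.
    return sum(inputs) > THRESHOLD

# Evidence reaching three stored word representations for the spoken stimulus
# "bank" in a money-related context (values are illustrative only).
word_units = {
    "bank (money sense)": [1.2, 0.3],   # strong acoustic match + contextual support
    "bank (river sense)": [1.2],        # strong acoustic match alone: fires anyway
    "tank":               [0.4],        # weak acoustic match: stays below threshold
}
for word, inputs in word_units.items():
    print(word, "-> fires" if fires(inputs) else "-> silent")
```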
Cohort Model (Marslen-Wilson, 1987; Marslen-Wilson & Welsh, 1978;
Marslen-Wilson & Tyler, 1980, 1981)
❑ Words that fit better into the context will have an advantage over words
that do not fit, especially in cases where the bottom-up input is
ambiguous between two or more stored word candidates.
❑ The model allows for minor adjustments to the recognition point based on
semantic or syntactic requirements imposed by the context, so words that are
highly predictable in context are recognized faster than less predictable ones.
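A toy illustration of that advantage (the weights and scores are made up): when the bottom-up input is ambiguous between two stored candidates, a small boost from contextual fit is enough to let the predictable word win, so it is effectively recognized sooner.

```python
def candidate_score(acoustic_fit, context_fit):
    # Bottom-up evidence dominates; context only makes a minor adjustment.
    return acoustic_fit + 0.3 * context_fit

# Ambiguous onset after "The farmer sheared the ...": both words still fit the sounds.
scores = {
    "sheep": candidate_score(acoustic_fit=0.5, context_fit=0.9),   # predictable here
    "sheet": candidate_score(acoustic_fit=0.5, context_fit=0.1),
}
print(max(scores, key=scores.get))   # -> sheep
```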
A word’s recognition point is reached when two conditions are met:
(1) There has to be positive evidence for the presence of the word (e.g. the
input ‘tres’ provides clues that ‘trespass’ is the matching word target).
(2) The input has to rule out the presence of other words (e.g. the onset
‘tr’ rules out the possibility that the matching word target is ‘tap’, ‘top’,
‘table’, or any other word that does not begin with ‘tr’).
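A sketch of both conditions over a small made-up lexicon: the recognition point is the first point in the input at which the target is positively supported and every other word has been ruled out.

```python
LEXICON = ["trespass", "tress", "trestle", "tap", "top", "table"]

def recognition_point(target):
    for i in range(1, len(target) + 1):
        prefix = target[:i]                                    # positive evidence so far
        cohort = [w for w in LEXICON if w.startswith(prefix)]  # words not yet ruled out
        if cohort == [target]:
            return prefix            # every competitor has been eliminated
    return target

print(recognition_point("trespass"))
# -> 'tresp': 'tres' still leaves 'tress' and 'trestle' in the cohort,
#    but 'tresp' rules them out.
```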
TRACE Model (McClelland & Rumelhart, 1981)
Phonemes affect the activation of word units, and word units in turn feed
activation back to the units that represent phonemes, so information flows
both bottom-up and top-down.
The model offers a good explanation of the “word superiority” effect that
indicates that we have an easier time recognising and processing letters
and phonemes when they appear in the context of a word than when they
occur by themselves or in the context of a string of letters that does not
make up a real word.
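A toy interactive-activation sketch (two invented words, arbitrary weights, nothing like the real TRACE parameter set): phoneme units excite the words that contain them, and word units feed activation back down to their phonemes, so a phoneme embedded in a real word ends up more active than the same phoneme in a non-word string.

```python
WORDS = {"cat": ["k", "a", "t"], "cap": ["k", "a", "p"]}
BOTTOM_UP, TOP_DOWN = 0.5, 0.3            # made-up connection weights

def probe_phoneme(input_phonemes, probe, cycles=5):
    phonemes = {p for ps in WORDS.values() for p in ps}
    phon = {p: (1.0 if p in input_phonemes else 0.0) for p in phonemes}
    word = {w: 0.0 for w in WORDS}
    for _ in range(cycles):
        for w, ps in WORDS.items():       # bottom-up: phonemes excite their words
            word[w] += BOTTOM_UP * sum(phon[p] for p in ps)
        for w, ps in WORDS.items():       # top-down: words boost their own phonemes
            for p in ps:
                phon[p] += TOP_DOWN * word[w]
    return phon[probe]

print("'t' heard inside 'cat'   :", round(probe_phoneme(["k", "a", "t"], "t"), 2))
print("'t' heard inside non-word:", round(probe_phoneme(["x", "z", "t"], "t"), 2))
```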
Cohort and TRACE Models
Distributed Feature Models

Simple Recurrent Network Model (Jeff Elman, 2004)
The context units store a copy of the activations in the hidden units
between processing cycles.
The network would respond not just to the current state of the input units
but also to recent events, as reflected in the activity of the context units.
The explicit task that the network performed was to predict the upcoming
word in an utterance.
In this system, word identities can be represented as an activation
pattern among the hidden units.
The patterns split neatly into two classes, corresponding to nouns and
verbs; within each class, the word representations are subdivided further
into subclasses, with similar representations assigned to words that we
would judge as being close in meaning.
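A minimal numpy sketch in the spirit of a simple recurrent network (the tiny vocabulary, sentences, layer sizes, and learning details are invented, and only the output weights are trained here, unlike Elman's full backpropagation setup): the context units hold a copy of the previous hidden state, and the network's explicit task is to predict the upcoming word.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["boy", "girl", "sees", "chases", "dog", "cat"]
V, H = len(vocab), 8                       # vocabulary size and hidden-layer size

W_in  = rng.normal(0, 0.5, (H, V))         # input units   -> hidden units
W_ctx = rng.normal(0, 0.5, (H, H))         # context units -> hidden units
W_out = rng.normal(0, 0.1, (V, H))         # hidden units  -> predicted next word

def one_hot(word):
    v = np.zeros(V)
    v[vocab.index(word)] = 1.0
    return v

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# A made-up miniature corpus of noun-verb-noun utterances.
sentences = [["boy", "sees", "dog"], ["girl", "chases", "cat"], ["dog", "sees", "boy"]]

for epoch in range(300):
    for sentence in sentences:
        context = np.zeros(H)                           # context units start empty
        for current, upcoming in zip(sentence, sentence[1:]):
            hidden = np.tanh(W_in @ one_hot(current) + W_ctx @ context)
            predicted = softmax(W_out @ hidden)
            error = predicted - one_hot(upcoming)       # prediction error signal
            W_out -= 0.1 * np.outer(error, hidden)      # adjust output weights only
            context = hidden.copy()                     # copy the hidden state into
                                                        # the context units

# Probe: after hearing "girl" at the start of an utterance, what is expected next?
hidden = np.tanh(W_in @ one_hot("girl") + W_ctx @ np.zeros(H))
print(vocab[int(np.argmax(W_out @ hidden))])            # most likely: "chases"
```

In Elman's actual simulations all of the weights are trained by backpropagation, and it is those learned hidden-unit patterns that cluster into the noun and verb classes described above.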