
This module will focus on mental grammar and the mental lexicon.

The mental lexicon is the internal vocabulary that we use when we produce and perceive speech. Mental grammar stores the rules that we use when we build and recognize word forms, as well as when we produce and perceive whole sentences. First of all, I want to say a few words about how research on speech production and research on speech perception compare in this field. When a speaker or writer wants to convey some thought, they encode it with linguistic means, and the task of the reader or listener is, in effect, to decode this message and get to the meaning. In that sense, production is primary. However, most research is devoted to perception. Why is that? Think about what it takes to run a good experiment: we need to control the stimulus material very precisely and measure the participant's responses very accurately. In the case of perception, we have every opportunity to do this. We can select words very precisely, balance them on various parameters, and then measure all sorts of things the participant does: how fast they read the text, how fast they press buttons; we can also record event-related brain potentials. In the case of production, we know that a person goes through several stages before speaking or writing a phrase: first they decide that they want to say or write something at all, then they select the words, put them in the right forms, and choose the right sentence structure. However, we usually see only the finished result. That is why there are so few studies of production compared to the number devoted to perception.
So, let's start with how we perceive words. What is the mental lexicon? In an ordinary dictionary the words are arranged in some order, usually alphabetical, although there are, for example, thematic dictionaries. The mental lexicon, by contrast, is a gigantic web in which words that are related in meaning and in sound are connected to each other.
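As a toy illustration, such a web can be sketched as a graph whose edges mark semantic or sound relatedness (the words and links below are invented for illustration):

```python
# A toy fragment of a mental lexicon: a graph whose edges link words
# related in meaning ("sem") or in sound ("phon"). All entries here
# are invented for illustration.
lexicon = {
    "doctor":   {"nurse": "sem", "doctrine": "phon"},
    "nurse":    {"doctor": "sem", "purse": "phon"},
    "purse":    {"nurse": "phon", "wallet": "sem"},
    "wallet":   {"purse": "sem"},
    "doctrine": {"doctor": "phon"},
}

def related(a, b):
    """True if the two words are directly connected in the web."""
    return b in lexicon.get(a, {})

print(related("doctor", "nurse"))   # True: related in meaning
print(related("doctor", "window"))  # False: no link in the web
```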
How do we know this? The main method for studying the mental lexicon is the lexical decision task. In this experiment, a person sees strings of letters on a screen and must press one button if the string is a word and another button if it is not. This lets us measure the reaction time to a word, that is, how fast it is recognized, and find out what factors affect it. First of all, it is influenced by the word's length and frequency. There is a somewhat more complicated version of this technique, in which a person first sees one letter string and then another. If the first string is related to the second in meaning or in sound, the response to the second word is faster. This suggests that words related in sound and in meaning are connected in the mental lexicon: if one of them is activated, the other is recognized much faster. So, we have just seen how lexical decision experiments work, and we have seen evidence that words that are related in meaning or similar in sound and spelling really are connected to each other in the mental lexicon. The main experiment for studying the production of words is the so-called naming task. Let's look at an experiment carried out on Dutch material by Schriefers, Meyer and Levelt.
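Before turning to that experiment, here is a sketch of how a primed lexical decision experiment might be scored: we compare mean reaction times for word targets preceded by related versus unrelated primes (all trial data and reaction times below are invented):

```python
# Toy analysis of a primed lexical decision experiment. Each trial
# records the prime, the target letter string, whether the target is a
# real word, and the reaction time in ms (all numbers are invented).
trials = [
    {"prime": "doctor", "target": "nurse", "is_word": True,  "rt": 520},
    {"prime": "bread",  "target": "nurse", "is_word": True,  "rt": 600},
    {"prime": "cat",    "target": "dog",   "is_word": True,  "rt": 510},
    {"prime": "chair",  "target": "dog",   "is_word": True,  "rt": 590},
    {"prime": "doctor", "target": "blick", "is_word": False, "rt": 700},
]

related_pairs = {("doctor", "nurse"), ("cat", "dog")}

def mean_rt(related):
    """Mean RT over word trials with related (or unrelated) primes."""
    rts = [t["rt"] for t in trials if t["is_word"]
           and ((t["prime"], t["target"]) in related_pairs) == related]
    return sum(rts) / len(rts)

# A positive difference means related primes speed up the decision.
priming_effect = mean_rt(related=False) - mean_rt(related=True)
print(priming_effect)  # 80.0
```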
this experiment, people saw pictures on a screen. Look at the slide. There is a picture of a crocodile. The
subject's task is to name the picture, that is, to say the word "crocodile". At the same time, people heard the
words in the headphones at the same time. Sometimes these words were heard 150 milliseconds before the
picture appeared, sometimes at the same time as the picture appeared, and sometimes 150 milliseconds after
the picture appeared on the screen. Some of the words were related to the picture in meaning. Well, for
example, in this case, such a word was the word "behemoth". Some words sounded similar to the word that
the subjects were supposed to say. Let's say the word "crocus" sounds similar to the word "crocodile". And
some words were not connected with the picture either in sound or in meaning. This is called a control
condition, and an example of such a word in this case would be, say, the word "wall". Schrifers, Meyer, and
Levelt wanted to see how different types of words would affect how quickly people called what was shown
in the picture. They managed to establish that if a word sounds that is related to the picture in meaning, then
if it sounds 150 milliseconds before the appearance of the picture or simultaneously with its appearance,
people call the picture longer. If it sounds 150 milliseconds after the picture appeared, this does not affect
the time of its naming in any way. If the word is connected with the picture with the word in the picture by
sound, then, accordingly, if it sounds 150 milliseconds before the picture appears, or simultaneously with the
appearance of the picture, then it does not affect the naming speed in any way. If it sounds 150 milliseconds
after the appearance of the picture, then people name the picture faster. What does this tell us? This
experiment allows us to isolate the different stages that we go through when looking up a word in our mental
lexicon. That is, first a person determines the meaning, looking for the concept that corresponds to the one
shown in the picture. If shortly before or at the same time we activate a similar concept that belongs to the
same semantic field, this prevents us from naming what is shown in the picture. Then the person must
understand how to pronounce the word that corresponds to the chosen concept.
And here it helps us if at the same time a word similar in sound was sounded and, accordingly,
similar programs were activated for pronouncing certain sounds, which we can also use when we call the
animal shown in the picture. Thus, Schrifers, Meyer and Levelt succeeded in identifying the various stages
that we go through in the generation of a word. Well, as one of the most striking, I would like to single out
the study of polysemantic words and homonymous words. Let's say we take a word like "boron". Boron is
both a forest and a dentist's tool. When we hear the word "boron", say, in isolation or within a sentence, do
we activate both meanings or only one of them? How fast is all this happening? What factors affect this?
These and other topics are addressed by those psycholinguists who study the storage of words in the mental
lexicon and access to these words in the generation and perception of speech. VIDEO FROM UTUBE
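The interference and facilitation pattern from the naming experiment above can be summarised compactly. Here is a sketch that encodes, for each distractor type and SOA (the onset of the distractor word relative to the picture, in milliseconds), the effect on naming time as described in this lecture:

```python
# Effect of an auditory distractor word on picture naming time, by
# distractor type and SOA (distractor onset relative to picture onset,
# in ms), following the pattern described above for the Schriefers,
# Meyer and Levelt design. "slower" marks semantic interference,
# "faster" phonological facilitation, and None no effect relative to
# the unrelated control condition.
effects = {
    ("semantic",     -150): "slower",
    ("semantic",        0): "slower",
    ("semantic",      150): None,
    ("phonological", -150): None,
    ("phonological",    0): None,
    ("phonological",  150): "faster",
}

# Example: a semantically related word heard 150 ms before the picture
print(effects[("semantic", -150)])  # slower
```

The two non-overlapping time windows are exactly what lets the experiment separate the earlier concept-selection stage from the later phonological-encoding stage.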

We will now talk about some of the theories and ideas within theoretical linguistics and the other cognitive sciences that have played a leading role in the experimental study of grammar. In the middle of the 20th century, important changes took place across the cognitive sciences; they are generally called the cognitive revolution. In linguistics, the leading role in them was played by the American Noam Chomsky, who heads one of the leading schools of linguistics, generative grammar. In 1957, he published his first book, Syntactic Structures, in which he laid the foundations of this approach. And in 1959, he wrote a review of a book, Verbal Behavior, by the behaviorist psychologist Burrhus Frederic Skinner. Skinner worked on learning and, in particular, wrote about how people learn language. His idea was that we hear pairs of phrases and the actions with which people react to those phrases. Say, mom says "close the window", and dad goes and closes the window. When we hear this many times, we memorize these pairs of phrases and actions, and we ourselves begin to use such phrases when we want to achieve the corresponding effect. Chomsky, examining this theory, showed that it is in fact completely untenable. After all, the number of words in a language is large but still finite, and the number of rules by which we can combine these words into sentences is, in fact, quite small. Nevertheless, we can produce and perceive a potentially infinite number of sentences.
Chomsky drew attention to this infinite potential of the grammar of a language, and since then all schools of linguistics, both those that follow Chomsky and his ardent opponents in linguistic theory, agree that the most interesting thing about language is precisely this ability of the speaker to produce and understand an infinite number of sentences. The second important point Chomsky made has to do with how a child acquires language. He noted that at an age when children cannot yet master the simplest rules of logic and cannot add, say, two and two, they master their native language perfectly. Somewhere around the age of two, or a little later, they begin to speak in sentences, first of three words; then the number of words grows, and in just a few months they master the entire basic grammar of their native language. Chomsky was the first to point out how amazing this is, and he posed it as a question, which different linguistic schools have since answered in very different ways.
Chomsky's own answer to how a child does this turned out to be rather non-trivial. He suggested that each of us has an innate language faculty, and that this is what helps us master our native language so quickly in childhood. He also had a hypothesis about how exactly this innate faculty works. He proposed that people have in their heads what he called a universal grammar: a system of principles, rules common to all languages, together with parameters on which languages differ from one another. The child's task is then simply to set the right values of these parameters, and in this way the child masters all the basic grammar of its language. That language acquisition is a miracle, and that we must pay special attention to the mental lexicon and mental grammar, is agreed on by all modern linguistic schools. However, not everyone agrees with the answers Chomsky offered to these questions: he has both ardent supporters and staunch opponents. And this has played a huge role in experimental studies of morphology and syntax, because some researchers have tried to find confirmation of Chomsky's ideas, while others, on the contrary, have stubbornly tried to refute them. One idea in particular appealed to a great many linguists, especially supporters of Chomsky's generative grammar: since Chomsky emphasized that language is a special faculty that does not depend on our general intellectual abilities, say on how developed a person's logical reasoning is, but relies on an innate system of knowledge, it seemed logical to many researchers to assume that language is a module, or a system of modules. In psycholinguistics, testing this idea of modularity turned, first of all, into testing how autonomous the different systems within language are, and whether they interact with each other.
One of the key issues discussed in experimental morphology concerns morphologically complex words, that is, words that consist of several morphemes. For example, the Russian word "kniga" ("book") consists of a root and an ending, while "knizhka" ("little book") consists of a root, a suffix and an ending. The question is how such words are stored in the mental lexicon: as wholes, or broken into separate morphemes? Do we assemble them piece by piece when we produce them, and do we decompose them into their component parts when we perceive them?

Two main approaches can be distinguished here: one is called dual-system, the other single-system. Within the dual-system approach, the key idea is that all forms can be divided into regular and irregular. It is assumed that regular forms are generated by a rule, while irregular forms are stored as a list in the mental lexicon; so we see that there are two modules inside this approach. Within the single-system approach, forms are not divided into regular and irregular: it is assumed that all forms are stored, produced and perceived by a single system, and that we do this mainly by analogy. The first tests of the single-system and dual-system approaches were conducted on English material, so let us now look at the slide for some examples of how the past tense of different verbs is formed in English. As we see on the slide, according to the dual-system approach we have a rule: add the ending -ed to the verb stem. And this rule always applies, unless we find the required form in the list of irregular verbs; that is, if the form is stored in the mental lexicon, the rule is blocked. Within the single-system approach, it is assumed that all forms are produced and perceived in the same way, by analogy. Say we have one group of verbs, the largest, which includes the so-called regular verbs like work - worked, try - tried, cook - cooked, and so on. There are other, smaller groups: one includes verbs like think - thought and bring - brought, another includes verbs like sing - sang, ring - rang, and so on.
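The dual-system account just described amounts to a simple procedure: look the verb up in the stored list of irregulars first, and only if it is absent apply the rule. A minimal sketch (the irregular list is truncated and the spelling adjustments are simplified for illustration):

```python
# Dual-route sketch of the English past tense: a stored list of
# irregular forms blocks the default "add -ed" rule. The list is
# truncated and the spelling rules are simplified.
IRREGULAR = {"think": "thought", "bring": "brought",
             "sing": "sang", "ring": "rang", "go": "went"}

def past_tense(verb):
    if verb in IRREGULAR:               # stored form found: rule is blocked
        return IRREGULAR[verb]
    if verb.endswith("e"):              # bake -> baked
        return verb + "d"
    if verb.endswith("y") and verb[-2] not in "aeiou":
        return verb[:-1] + "ied"        # try -> tried
    return verb + "ed"                  # the default rule

print(past_tense("work"))  # worked
print(past_tense("try"))   # tried
print(past_tense("sing"))  # sang
```

On the single-system view there is no such lookup-then-rule split: one associative system handles work - worked and sing - sang alike, generalising by analogy to similar-sounding stored forms.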
From the single-system point of view, there is no fundamental difference between these groups. But as we can all notice, English is very simple in this respect, so data from other languages studied after English, including very interesting studies on Russian material, have forced adjustments to both the dual-system and the single-system approach. For example, if we look at Russian, we will see that there is no single "regular" class of verbs; instead there are several frequent models by which forms are built for a large number, many thousands, of verbs. So it is worth emphasizing here that in experimental studies of grammar it is very important to take data from a large number of languages that are diverse in this respect. What is also interesting is that the experiments were carried out not only with adult native speakers: this debate has also drawn on child speech data, computer simulations, clinical data and neurophysiological data, some of which we will cover in other modules of this course. In the next part, we will move on to experimental studies of syntax.
In this part, we will talk about how we perceive sentences. When we read or hear a sentence, we build its syntactic structure step by step; this is called parsing. To answer the question of how we do this, scientists put the question this way: suppose we have already read some piece of a sentence and built some fragment of syntactic structure, and now we see or hear a new word. How exactly do we integrate it into the already existing structure? Since this process is completely unconscious, it is not easy to answer this question, so scientists turn to ambiguous forms. The earliest studies were carried out mainly on English material, where there are far more ambiguous forms than, for example, in Russian, so we will also look at an English example now. Look at the slide. Here we see the sentence "The man returned to his house was very happy". The form "returned" is ambiguous: on the one hand, it can be a passive participle, and on the other hand, it can be a past tense form. If we compare this sentence with another, "The man who was returned to his house was very happy", we can see that in the first sentence people slow down considerably when they get to the word "was", and this slowdown continues over several subsequent words. What does this slowdown mean? It shows that people initially choose an analysis of the form "returned" that is not the one they will arrive at in the end: they decide that it is a past tense form, not a participle. By examining this kind of ambiguous sentence, of which there are a great many in English, scientists have tried to understand exactly what principles guide us when we make this kind of choice. The first theories put forward were fully modular: they assumed that syntax works first, that is, that our initial decision is always driven by syntactic considerations alone, and only afterwards can we bring in arguments from semantics, the wider context, and so on. In this case, the argumentation went as follows: if we choose the option where the form "returned" is a past tense form, we get a syntactically simpler structure. That is, we have a subject, we have a predicate, and the sentence may be about to end. If we choose the option with the participial clause, then clearly we first have to get through the participial clause, then the predicate will appear, and only then can the sentence end. This structure is obviously more complicated.
However, numerous subsequent experiments showed that in different cases we can draw on more than syntactic simplicity in the analysis: we can also rely on which variant is more likely given the context, or, say, if a given verb form is more frequent as a passive participle, we more often choose that option rather than the past tense, and so on. That is, the original models that followed the strict principle of modularity proved untenable. Modern models can be divided into two groups. Models of the first kind completely reject the idea of modularity: they assume that all sources of information take part in the analysis from the very beginning and interact with each other. In models of the second kind, one can speak of what is called weak modularity: they assume that syntax supplies a certain set of options that are possible in principle, and this is its important primary role; only afterwards, when choosing among those options, can all the components of language interact. That is, we can rely on syntactic simplicity, on which option is more likely given the context, and on properties of individual words, for example, on the fact that one word is more frequent as a participle while another is more frequent as a past tense form. We have said that a certain fragment in one sentence is read faster than the same fragment in another sentence. How do we know this, and how can we measure it? The main method used in this area is the self-paced reading task. Look at the screen. When this method is used, we take a sentence and replace all its letters with hyphens, say, or with some other symbols; the spaces between words and the punctuation marks are preserved. Then the subject reads the sentence by pressing a key on the computer.
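The moving-window masking can be sketched as follows; each successive frame is what the screen shows after one more key press (a toy sketch, using hyphens as the mask):

```python
import re

def mask(word):
    # Replace letters and digits with hyphens; keep punctuation intact.
    return re.sub(r"\w", "-", word)

def frames(sentence):
    """One display frame per key press: only the current word is visible."""
    words = sentence.split(" ")
    for i in range(len(words)):
        yield " ".join(w if j == i else mask(w) for j, w in enumerate(words))

for frame in frames("The man returned home."):
    print(frame)
# The --- -------- ----.
# --- man -------- ----.
# --- --- returned ----.
# --- --- -------- home.
```

In a real experiment, each key press that advances the window is timestamped, giving a per-word reading time.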
On the first key press, the first word appears; on the second press, the first word is masked by hyphens again and the second word appears, and so on. Modern programs, which can be installed on practically any laptop, allow us to measure reaction time to an accuracy of about one millisecond, that is, with very high precision. In this way, we can record the reading times of various sentences and check, say, whether the fragment "was very happy" in the first sentence on the previous slide is read, on average, much more slowly than in the second sentence on that slide. In addition, there is another method that measures how quickly or slowly a person reads the words in a sentence; it is based on the recording of eye movements and will be discussed in the next module of this course. In general, we can say that in this part we have learned a little about how real-time parsing works. It is worth adding that there are many other interesting topics in this area, to which many experiments are devoted. For example, psycholinguists study agreement: say, how the subject and the verb in a sentence agree in number. They study anaphora: after all, if there are pronouns in a sentence, we need to understand what exactly they refer to. There are many studies of word order and of many other grammatical phenomena. As we said at the beginning of this module, it is quite difficult to study the production of sentences; most existing experiments are devoted to the choice of one construction or another.
The earliest and perhaps most famous works are devoted to the choice between the active and the passive construction in English. Let's look at the slide. It gives an example from the review paper by Myachikov, Garrod and Scheepers, in which subjects see a picture that they must describe. The picture shows a policeman and a boxer. Depending on what the subjects' attention was drawn to, what text they heard beforehand, or, say, what the experimenter pointed to when asking the subjects to describe the picture, people more often use either the active construction, "A policeman is punching a boxer", or the passive construction, "A boxer is being punched by a policeman". The choice between a number of other constructions available in production is studied in the same way. That is, various experiments show how, depending on, say, the context, or on which information is given or new, more or less salient, we choose among various sentences that are all, one way or another, suitable for describing a given situation, but that emphasize different aspects of it, different pieces of information, while pushing other information into the background. The final part of this module will focus on clinical research related to mental grammar and the mental lexicon.