Language Evolution
and Developmental Impairments

Arild Lian
University of Oslo
Oslo, Norway
Acknowledgments

This book is the result of notes I have taken and discussions I have had
with colleagues and friends after my retirement from the University
of Oslo, and during the years that I worked as a volunteer at Bredtvet
Resource Center (the Norwegian national resource center for special edu-
cation, located in Oslo). By interacting closely with special education
psychologists and therapists in this institution, I gained first hand experi-
ence of children with developmental language impairments and those
with special educational needs. Among the many psychologists I worked
with at this center, Ernst Ottem is responsible for part of my education
in the field of speech and language disorders, and also became a source
of inspiration for the present work. I express my sincere gratitude for his
role in my professional development in recent years!
I also express my thanks to Dr. Arnold Glass at Rutgers University,
New Jersey, USA, who read and returned very instructive comments on
a previous version of the manuscript. I also thank Dr. Glass for inspir-
ing cooperative research works that served to strengthen my general aca-
demic development.
In a different area of my work, I received general advice and important
assistance in formatting and editing my files from Bernt Andersen, Chief
Sales and Marketing Officer at RiksTV AS. I am deeply grateful to Bernt
for his efforts, without which this work would not have been completed in
its current form.
Last, but not least, I thank my wife, Jorunn Schwencke, who sup-
ported me from the beginning to the end of my project. I appreciate her
considerate way of protecting my work; without her assistance, this book
may not have been written.
Arild Lian
Drammen, Norway
December 4, 2015
Contents

1 Introduction
Index
1 Introduction
The present work addresses the ability to acquire and make use of a
language, an ability which is demonstrated by children throughout the
world. The acquisition of language shows that children are endowed with
a cognitive apparatus which is necessary for linguistic communication,
and thereby for sustenance of the human species. Language is gener-
ally learned without noticeable efforts and without formal instruction.
However, there are children who do not acquire language this easily and
whose language remains impaired years into adulthood. In
Chaps. 2 and 8, I will discuss the diagnostic criteria, etiology and treat-
ment of the language impairment of this group of children. In agreement
with commonly used terminology, I shall exclude cases of recorded brain
pathology, and instead refer to this disorder as developmental language
impairment, in contrast to acquired language impairment or aphasia due
to neural damage or brain disease.
The research literature recently published on developmental lan-
guage impairments is considerable, and much of it will be reviewed in
Chaps. 2 and 8. The other chapters will deal with aspects of language
evolution which I think are relevant for a reevaluation of developmental
Syntax This is the next level at which we can describe the structure of
a language. In general, syntax is said to deal with the combination of
words into sentences; however, the lowest level of syntactic structure is
made up of morphemes, both bound and free-standing. The more gen-
eral term "grammar" includes both morphology and syntax. Thus, a dis-
tinction is sometimes made between grammar and syntax. Morphology
deals with the internal economy of words, whereas syntax deals with the
external economy of words (linguistics.stackexchange.com). Moreover,
in syntax operating units are phrases, for instance, a noun phrase like
“the old man” can be combined with the verb phrase “grew a beard” to
create a sentence. Specific rules apply for the combination of phrases into
sentences, and the meaning of a sentence is a complex function of these
structures. Phrases can be embedded within other phrases; thus, a noun
phrase can be embedded within another noun phrase, and structures can
be recursively generated. A complete sentence therefore forms a hierar-
chical structure of syntactic units.
Semantics This is the study of meaning in language; that is, a field which
is shared between linguistics and philosophy. The problem of what is the
“meaning” of meaning was raised by Grice (1957), who distinguished
between “natural” and “nonnatural” relationships between signs and
objects. Later, Lyons (1977) suggested that "meaning" in semantics be
used in the sense of "the meaning of lexemes" (vocabulary words). I shall
return to his discussion of the term in Chap. 5, but for the moment I shall
assume a linguistic frame of reference and talk about the meaning of words, phrases
or sentences. However, in formal semantics, the study of meaning in lan-
guage has revolved around the truth value of propositions, where propo-
sitions are functions that map possible worlds to truth values, for example,
“Boko Haram abducted 120 girls from the city of Maradi.” The conditions
under which this proposition is right or wrong express meaning in a differ-
ent way than the way we can talk about meaning of artistic performances,
In this book, I will make use of Fitch’s three S’s as a referential frame-
work, both when dealing with general issues of evolution and when
Interactions between the three S’s will be discussed in several parts of the
book. I will return to developmental language impairment in Chap. 2,
where I will discuss several major conceptual issues and the implications of
taking an evolutionary approach. The other chapters of the present work
are briefly described in the outlines below.
In the following two sections (1.3 and 1.4), I will present a theoreti-
cally oriented description of evolutionary biology, and give an introduc-
tory presentation of contemporary research that has had a major impact
on theories of language evolution.
“Any normal child will learn language(s), based on rather sparse data in the
surrounding world, while even the brightest chimpanzee, exposed to the
same environment, will not. Why not? What are the specific cognitive
mechanisms that are present in the human child and not in the chimpan-
zee? What are their neural and genetic bases? How are they related to simi-
lar mechanisms in other species? How, and why, did they evolve in our
species and not in others?” (p. 15).
example is the jaw, which is said to have developed from the bony gill sup-
ports in fish. The change of function has adaptive value to the extent that it
greatly improves the organism’s ability to produce surviving progeny. Varney
(2002) argued that development of the ability to read can be explained as a
result of pre-adaptation. In evolution there has been little (zero evolution-
ary) time for the development of an ability to read, yet reading can be taught
in all cultures independent of previous knowledge of written characters.
Therefore, the acquisition of reading must be supported by neural structures
which were developed to do something else; the skills that pre-adapted for
reading were gestural communication and tracking of animals in the hunt.
These are radical ideas which will be discussed in Chap. 6 on Literacy and
Language. In one sense, the concept of pre-adaptation can be a misleading
one: there has been no “plan” to evolve a jaw or to acquire reading skill in the
first place; that is, evolution does not show an instance of “foresight” in such
cases. Therefore, contemporary researchers have replaced pre-adaptation
with the new term exaptation, meaning exactly the same; that is, evolved
traits which change their functions into new ones.
However, traits may also evolve automatically as a byproduct in the
evolution of other structures. These new traits are therefore named “span-
drels” in analogy with some design constraints in architecture (e.g., the
triangular space between the outer curve of an arch and the rectangular
frame or mold enclosing it [Webster's New Dictionary]). Exaptations dif-
fer from spandrels in that exaptations previously had a different function,
whereas spandrels originally had none. In total, the terms adaptation,
exaptation and spandrels are all applicable to theories of the evolution of
language, although their relevance differs for the various subcomponents
of language and is the subject of ongoing debate in the research litera-
ture. Thus, while Tomasello (1999) and Lieberman (2000) considered
syntax to be a spandrel (i.e., a byproduct of other adaptations), Fitch
(2012) argued for an exaptationist view on the evolution of syntax.
biology advanced in the 1990s (see Goodman & Coughlin, 2000) and more
recently discussed by Fitch (2010, 2012). He warned against the fallacy that
every trait, including language, is an adaptation, and advocated a multicompo-
nent view of language which instead emphasizes a close interaction between
selection and constraints. The evolution of language involves a number of
phylogenetic and historical constraints; the latter interact with natural selec-
tion and therefore "restrict, limit, or scaffold the course of evolution
and the nature of the evolved trait" (Fitch, 2012, p. 614).
The evo-devo principle depends on the synthesis between evolutionary
theory and genetics. This may be said to have taken place in two steps:
First, neo-Darwinism took into account Mendel’s experiments, which
were unknown until after Darwin’s death. The mechanisms of inheri-
tance had not yet been clarified, and at the time Darwin believed in the
Lamarckian principle of inheritance of acquired characteristics, a prin-
ciple which is essentially incorrect. He assumed that phenotypically the
offspring would be an intermediate between the two parents. As a result,
new organisms would be a “good fit” within their local environment, and
in this way, Darwin "used up" variance, which is a prerequisite to adapta-
tion by natural selection. After Mendel, the concept of genes, and the
distinction between dominant and recessive genes, meant that a trait can
reappear in new generations and thus maintain the variance apparently
lost in the first place. Therefore, the marriage between Darwinism and
genetics (Neo-Darwinism) meant that "population thinking" replaced
"typological" or "essentialist" thinking.
However, Neo-Darwinism does not warrant an interaction between
selection and constraints, which is the essence of the evo-devo principle
that formed a second step in the synthesis of evolutionary theory and
genetics. The evo-devo approach is connected with the growth of epigenetics,
the study of gene–environment interactions. Until the late 1980s, it was
commonly assumed that genes played strict roles in the development
of bodily structures; therefore, anatomical and physiological complexity,
and possibly also cognitive complexity, would depend on the number of
genes possessed by the species. However, genome sequencing showed that
this number does not differ much between most animals and humans. The
expression of genes varied considerably, making complexity dependent
on gene–environment interactions. Bickerton (2014) points out that
be found among many birds (parrots) and some mammals (talking seals)
that have a high resting position of the larynx.
To understand the observations mentioned above, we should take
notice of the complex relationship between form (the anatomical position
of the larynx) and behavior (speech). Whereas form is largely controlled
by genes, behavior is not. Thus Bickerton (2014) points out that:
Behavior is considerably further from direct genetic control than form is.
This can be shown by simply considering the nature of behavior. Suppose
we have a species X with a behavior Y. Capacity for behavior inescapably
depends on having the necessary form, a big enough brain, sufficiently
developed organs of sense, limbs in the right places, whatever—and bio-
logical factors, genetic or epigenetic, mandate that form in all normal
members of X. In other words, being a member of X mandates a capacity
to perform Y. But capacity to perform Y does not mandate that Y will be
performed. (p. 52)
is, the perception and processing of linguistic signals, which may have
preceded other subsystems (e.g., semantics) in the evolution of language.
Thus Corballis (2010) commented on the new discoveries by arguing that
mirror neurons do not necessarily mediate the extraction of meaning, in
the linguistic sense. Nonetheless, a continuity position on the evolution of
language gained strength around the turn of the century.
Arbib (2009) made some interesting notes on the biological and social
mechanisms that mediated language evolution. He advocated a pre-
adaptationist view and argued that the first creatures with a mirror neu-
ron system and the functional expression of linked brain regions did not
have language, and yet these creatures were equipped with a language-
ready brain. This assumption is equivalent to the claim that our distant
ancestors had brain structures that could support reading long before the
invention of writing (see Sect. 1.1 above). The language-ready brain was a
product of biological evolution of the hominids, whereas language itself
may have evolved incrementally through cultural evolution. Thus, the
transition from the protolanguage of our distant ancestors to the full lan-
guage capability of human beings today is a product of both biological
and social mechanisms that support language. (Important insights into
the latter type of mechanisms can be gained by studying the historical
processes that have mediated the rise and fall of particular linguistic soci-
eties [Dixon, 1997]).
Research on the mirror neuron system has placed an emphasis on the
motor action component of language. At the time when this system was
discovered, there was a tendency among several researchers to think that
language as a whole can be explained within the fold of motor action (e.g.,
Rizzolatti and Craighero, 2004). The new discoveries led to the assump-
tion that the protolanguages of our distant ancestors were gestural lan-
guages and actions of manual praxis. Consequently, there must have been
a shift from gestural language to vocally based speech. Corballis (2010)
discussed whether there was such a shift, and whether it was a sudden or
incremental transition to speech. In my view, such a shift, if it really hap-
pened, may have reflected a selection of articulators, not a major trend
in the evolution of language. Thus, language may have evolved towards a
modality-independent capacity, not a refinement of speech.
The analogue development of signed and spoken languages shows that,
in motor terms, at the expense of lexical meaning. This does not mean
that the problem of lexical meaning was entirely overlooked, however,
because it also led to further discussions on the brain substrates of action
understanding, in particular on the semantics of action verbs (Hauk,
Johnsrude, and Pulvermuller, 2004). However, this research trend still
served to downgrade, or to overlook, the classical distinction between
form and meaning in language (de Saussure, 1916): Thus, some forms are
highly specific to a linguistic society or a local group of people, whereas
meaning relates to cultures across linguistic communities. For example,
the English word cat and the French word chat are different in form, but
represent the same meaning.
Apparently, Corballis (2010) took a more optimistic view on the
tenability of a neurobiological approach. He pointed out an important
difference between the monkey mirror system and the mirror system of
humans: Brain-imaging studies have shown that mirror neurons in the
former system respond to transitive, not to intransitive acts. In humans,
however, mirror neurons respond to both transitive and intransitive acts,
and therefore the human mirror system is said to form a substrate for
the understanding of acts that are symbolic rather than object-related
(Corballis, 2010; Fadiga, Fogassi, Pavesi, & Rizzolatti, 1995). Perhaps it
is the evolution of this system that has made humans the symbolic species,
and triggered the growth of a declarative memory system.
To some extent, the limits of a cognitive neuroscience approach which
I have expressed above seem to have been acknowledged by contemporary
researchers. Even Corballis (2010), who was otherwise optimistic about a
"mirror system approach," admitted that mental time travels, as expressed
in human language, challenge a mirror system interpretation of language
evolution. Other researchers, however, have argued that mental time
travels depend on a substrate outside the classical regions involved in lan-
guage processing (Schacter, Addis, & Buckner, 2008), and that perhaps
the mirror system has no part in the processing of images across space and
time. There are a number of aspects of modern languages which defy a
mirror system interpretation. My point is that synonymy, homonymy, and
mental time travels, as well as communication about impossible objects,
all require a different approach with a prime focus on concepts and cat-
egorization. The emergence of these characteristics of language cannot
sounds and possibly also manual signs in sign language. We can say it has
contributed to an understanding of the first S in Fitch’s three component
description of language: Signal-Structure-Semantics. However, it remains
to be seen how mirror neurons are actually being used in language, and
how this mechanism mediates attribution of meaning to signals.
In Sect. 1.5, I will give a preliminary discussion of meaning in lan-
guage; Chap. 5 will give a more comprehensive discussion of this matter.
Now the question is whether infants can distinguish language-like stimuli
from other stimuli in the ambient environment; that is, stimuli with no
or a low level of meaning which is "comprehended" prior to the mapping
of signals onto particular objects or events. Are infants tuned to "language-
like” stimuli prior to the development of semantic knowledge? Learning
constraints which attune the infant to the ambient environment of lin-
guistic stimuli may have an evolutionary origin, and therefore serve as
a basis of early language acquisition. Vouloumanos and Werker (2004)
showed that two-month-old infants listened longer to speech sounds
than to sinusoidal waves, which track the center frequencies of natural
speech. They concluded that infants are tuned to speech sounds, and that
speech therefore has a privileged status for young infants. Later, Krentz
and Corina (2008) strongly objected to this conclusion. They showed
that, in a paired-comparison, preferential-looking paradigm, six-month-
old hearing infants preferred to watch unfamiliar signs (from ASL) over
nonlinguistic pantomime. Therefore, they concluded that infants are not
specifically tuned to speech, but to human language in general.
In Chap. 7, I will discuss Krentz and Corina’s research in more detail,
because their work provides a strong argument for a modality-independent
capacity of language. Although this capacity is part of the infant’s behav-
ioral potentialities, we still find in development a modality-specific attun-
ement to linguistic stimuli (see more of this discussion in Chap. 7).
Also, we will find that language-related stimuli in all modalities have
behavioral precedence in relation to other types of stimuli within the
same modality which are not language-related. However, as a premise for
the following discussion, I assume that infants are capable of making the
more general distinction between linguistic and nonlinguistic signals or
events independent of modality. This distinction may form a develop-
mental basis from which further language development takes place, and
is the cri du chat syndrome, which shows the deleterious effects on com-
municative interactions in early childhood when a basic language mode
is lacking in the child’s vocal activities. Most infants, however, do pro-
duce language-like vocal or manual expressions (in infants with signing
parents) that are taken by caregivers as evidence of normal language
development.
Cries that are not language-like lack the important features which are
commonly observed in all natural languages and which form the most
general manifestation of linguistic signals. Let me repeat, these signals
are not pre-specified, but are subject to learning constraints which will be
discussed later. First, I will briefly review some classical cognitive theo-
ries which focus on the role of linguistic signals in the general cognitive
apparatus.
The evolutionary significance of language-like stimuli also means
that these stimuli will most likely affect other cognitive processes such
as attention and working memory. Apparently, human subjects have a
specific sensitivity to stimuli which I have called pre-semantic linguis-
tic stimuli, and which can also be produced as pseudo-words or other
speech-like stimuli by adults. The sensitivity to such stimuli is implied
in the phonological loop; that is, a component in the Baddeley and Hitch
(1974) model of verbal working memory. This component has also
been described as a language-learning device (Baddeley, 2007; Baddeley,
Gathercole, & Papagno, 1998). In the same research tradition, it has
been demonstrated that speech sounds, in addition to being processed by
a separate mechanism, also serve as effective suppressor stimuli in verbal
short-term recall tasks. Similarly, “babble noise” interferes more effec-
tively with speech perception and verbal short-term memory, compared
to white noise. Together these observations demonstrate that speech
sounds are processed differently from other nonspeech sounds. Specific
interpretations of this difference are made explicit in the motor theory of
speech perception (Liberman et al., 1967) and in recent research which
relates to this theory (see Chap. 3, Sect. 3.7). It may also be discussed
whether research on hemispheric specialization gives support to the
assumption of a general “language mode” of processing information. The
right-ear superiority for syllabic stimuli in dichotic listening experiments
has been interpreted as evidence of left hemispheric specialization for
unknown English words and told to match it to one of the visual stimuli
(auditory-visual matching-to-sample). She consistently avoided choos-
ing known comparisons, and by exclusion she selected a photograph or
lexigram whose name was unknown. The fact that learning by exclusion
occurs in children as well as in different species of animals means that the
process has evolutionary significance, and strengthens Kaminsky et al.'s
conclusion about a general learning and memory mechanism, which I
think may serve as a possible pre-adaptation to language.
Pre-adaptations like the one underlying learning by exclusion are gen-
erally beneficial for children in their early attempts to learn the words of
their local language. However, learning by exclusion does not prevent an
idiosyncratic labeling of objects and events, and therefore the principle
may also give rise to forms of communication which are incomprehen-
sible to others. Idiosyncratic labeling tends to survive in isolated families
and small communities, but may be broken and replaced by new labels
in an extended community. Social mobility may thus give rise to new
languages, where the meaning of words is based more on explicit rather
than implicit learning.
The learning of a new sign language by deaf children in Nicaragua pro-
vides an example where implicit learning in language acquisition gradu-
ally changed into explicit learning of a well-structured language. Prior
to 1979, when the Sandinistas overthrew the Somoza government, there
were no educational opportunities for deaf children in the country, and
deaf children were generally kept isolated within their families. Linguistic
interactions between members of these families have been described as a
system of gestures commonly known as “home signs” (Emmorey, 2002;
Senghas, Kita, & Özyürek, 2004). It seems that siblings learned these
signs through their own efforts, automatically and without an "explicit"
comprehension of meaning. This system was incomprehensible to any-
one outside the family, was idiosyncratic and action-based, and lacked
most names of everyday objects commonly present in spoken languages.
A single gesture covered a range of concepts. The system had no gestures for
emotions, and did not represent tense. Home signs were context-dependent
and did not generalize to other social settings, and therefore we may still
consider them as the results of implicit learning. Actually, they may be
said to form a “time window” into the early evolution of meaning in
language. Since home signs were implicitly learned concepts, they may
also be related to experimental cognitive research on concepts by animals
and humans (Smith et al., 2012). However, we cannot tell whether the
operational definitions proposed for implicit learning of concepts and
categories in this tradition apply to the phenomena of home signs in
Nicaragua. (See more discussion of recent research on concepts and cat-
egories in Chap. 5, Sect. 5.4.2).
Systems of home signs have been found as far apart as Taiwan and
North America, and Senghas (2005) reported a similar scenario in the
emergence of a new Bedouin sign language in the Negev region of Israel.
These systems have not been considered to be languages, and the deaf
children soon exchanged the home signs for a form of "pidgin" sign lan-
guage once they started to interact with deaf children from other families.
The pidgin sign language has been said to fall between Protolanguage and
Modern languages in Jackendoff’s (1999) steps in the evolution of lan-
guage. (I shall present more information about pidgin languages below.)
In Nicaragua, the final transition from home signs to a standardized sign
language, Nicaraguan Sign Language (NSL), took place when
the Sandinistas opened a primary school for deaf children in Managua,
where deaf children from the whole country were admitted. The chil-
dren were taught Spanish, not any of the sign languages, and the teachers
made use of finger spelling to teach them the Spanish alphabet. The edu-
cational program was not a success, and few children learned any Spanish
words. Instead, the children developed a creole sign language on their own.
The words and their connected actions were said to constitute a lan-
guage game that was complete in itself. Words which are not connected
with motor actions are not part of the language game. The words “block”,
“pillar”, “slab” and “beam” could be any distinguishable expressions (signs
or vocal expressions) as long as they were action-connected and were parts
of a rule-based game. Obviously, we will consider such a language to be
incomplete, even in relation to the task of building a primitive house.
However, the incompleteness of a language game is not only a question
of language complexity. To make sense of it, I think Wittgenstein's language
game might serve as a hypothetical example of procedural language skills.
Therefore, it also differs from modern languages which are also based on
declarative knowledge and may be consciously recollected. I shall have
more to say about the language game in Chap. 4, which deals with dialogues
as procedural skills.
Since the work of Squire, Knowlton, and Musen (1993), the two types
of knowledge are said to depend on separate brain systems with their
own particular functions. The declarative system is specialized for one-
trial learning, is sensitive to interference and is prone to retrieval failure.
The procedural system is phylogenetically the older one, and is generally
considered to be reliable and consistent, while it also provides the myriad
of nonconscious ways of responding to the world (Eysenck and Keane,
2000; see also my presentation of the two memory systems in Chap. 3).
In my view, Wittgenstein’s language game may be compared to a pid-
gin language between home signs and a creole language. However, the
language game (and perhaps pidgin languages) cannot describe itself; that
is, it does not serve communication about one's own communication. In this
context, semantic meaning is implicit in the communicative actions; it
cannot be comprehended explicitly, either by outsiders or by partici-
pants of the game.
In many ways, some ancient languages may have evolved as systems
which have characteristics like language games. In particular, the implicit
form of communication may have been present in small and isolated
groups of people, while the transition to well-structured and standardized
languages required a certain aggregation of people in larger communities.
Among other examples of new languages that evolved within the time
window of one generation are the pidgin languages mentioned above.
It seems to me that these have been based on the procedural knowledge
in specific communities, and represented transient linguistic forms, after
home signs but preceding the form and structure of modern languages.
Pidgin is a contact language that arose as a means of communication
between speakers of different languages. Although pidgin can be under-
stood as a transient stage in language evolution, as a contact language
it can also be discussed within the conceptual framework of language
change. The best-known examples are the now creolized Hawaiian pid-
gins that arose as a mixture of traditional Hawaiian dialects and English,
Japanese, Portuguese and other languages of traders in the Pacific islands.
Russenorsk is another example of a dual-source pidgin that arose in an
interaction between fishermen and traders in northern Norway and the
Russian Kola peninsula (the Pomor trade). Like the Hawaiian pidgins,
Russenorsk combined elements from existing languages, and therefore this
At the same time, the neologisms in our time may share the procedural
aspects of the languages of our early ancestors.
Could the protolanguage of early humans some 50,000–100,000 years
ago have been a form-based language of this kind? This is supposed to be the lan-
guage used by our last common ancestor, from which known languages
are believed to have evolved in small steps to form a language family. I
think the protolanguage may have been based on implicit rules of com-
munication like those found in pidgin languages. (In Chap. 3, Sect. 3.1.3,
I will also discuss Bickerton's conception of protolanguage.) The status of
a protolanguage may have persisted as long as the rules of the "game"
served the goals of the community, and the group/society did not grow
too large, or become challenged by another group or society that used a
different language; the community might do well without an explicit com-
prehension of word/sign meaning (meta-linguistic knowledge). Societal
growth and differentiation also produce a differentiation of expressive
form, of dialects or new languages. Therefore, cooperation and interac-
tion between groups required humans to transcend the implicit rules of a
“language game.” Actually, within a group or tribe that constantly adapts
to changing conditions of living, there will always be a need to transcend
the rules of the game. In consequence, particular groups of people in
early times developed an understanding of the meaning of signs across
differences in the forms of their production, which may have given rise
to languages with explicit concepts which could consciously be recalled;
that is, linguistic expressions of declarative knowledge.
Extant pidgin languages may be studied within the framework of
cognitive neurobiology, and with an emphasis on the long-term memory
systems. In particular, the balance between nondeclarative and declarative
memory systems will be an important objective of research. (In Chap. 3,
Sect. 3.3, I will discuss Ullman’s research approach, which focuses on the
procedural and declarative memory systems.)
Evolution of semantic meaning requires some flexibility in the use
of linguistic signals, which is a consequence of the arbitrary relation
between form and meaning. First, linguistic symbols, whose meaning
involves explicit or declarative knowledge, conform to a law of replace-
ment, which means that a sign may be replaced by another sign that differs
in form of production, but has the same meaning. Also, the synonymy
1982; Parry, 1971). The experts who have studied ancient oral traditions
seem to agree that the conserved texts may serve as important clues to
an understanding of pre-literate languages, but they also stress that these
languages were not in any sense “primitive” or fundamentally different from
languages in modern societies. On the contrary, Lyons (1981) argued
that neither global nor historical comparisons reveal any evidence of
“primitive” languages: “no correlations have yet been discovered between
the different stages of cultural development through which societies have
passed and the type of language spoken at these stages of cultural develop-
ment” (p. 28). However, there were differences between oral (pre-literate)
and modern languages.
Ong (1982) challenged his reader to imagine a culture “where no one
has ever ‘looked up’ anything” (p. 31). In this culture, words did not exist
visually; words were evanescent sounds or events. They did not constitute
tools in the recitation of a narrative, as in a literate culture. Rather, words
were motor events or actions, and recitation of the narrative was a perfor-
mance, and hence subject to the structural laws of formulary expressions.
In an oral culture, therefore, language was strongly affected by mnemonic
constraints, favoring rhythmic patterns, repetitions, alliterations, and so
on. Because the structural form of a message was emphasized, it may have been
difficult to distinguish linguistic form from semantic meaning. On this
account, it may have been difficult to decode new events into formulary
expressions of the oral culture; the capacity to tell or report “new” events
may have been rare. Instead, mimetic and recollective functions of lan-
guage may have dominated human communication, compared to gen-
erative aspects, which to a greater extent have served inventive thought
and action in modern languages.
If words are stressed as motor events or actions, it may seem that words
could only exist in the medium of sound. Could words be conceivable
independent of this medium; for instance, as visual gestures or visual
characters? Lyons (1977), in his classical work on semantics, stated that
medium transferability of language is as important a design feature as the
one Hockett called learnability:
and naturally manifest; and, as we have seen, written languages already have
some degree of independence as one of man’s principal means of communi-
cation (p. 87).
References
Arbib, M. A. (2009). Evolving the language ready brain and the social mecha-
nisms that support language. Journal of Communication Disorders, 42,
263–271.
Baddeley, A. (2007). Working memory, thought, and action. Oxford: Oxford
University Press.
Baddeley, A. D., Gathercole, S. E., & Papagno, C. (1998). The phonological
loop as a language learning device. Psychological Review, 105, 158–173.
Baddeley, A. D., & Hitch, G. J. (1974). Working memory. In G. H. Bower
(Ed.), The psychology of learning and motivation (Vol. 8). London: Academic
Press.
Beran, M. J. (2010). Use of exclusion by a Chimpanzee (Pan troglodytes) during
speech perception and auditory-visual matching to sample. Behavioural
Processes, 83, 287–291.
Bickerton, D. (2003). Symbol and structure: A comprehensive framework for
language evolution. In M. H. Christiansen & S. Kirby (Eds.), Language evo-
lution: The states of the art. Oxford: Oxford University Press.
Bickerton, D. (2014). More than nature needs: Language, mind and evolution.
Cambridge, MA: Harvard University Press.
Chomsky, N. (1972). Language and mind. New York: Harcourt Brace Jovanovic.
Chomsky, N. (1980). Rules and representations. New York: Columbia University
Press.
Chomsky, N. (1988). Language and problems of knowledge. The Managua
Lectures. Cambridge, MA: MIT Press.
Corballis, M. C. (2010). Mirror neurons and the evolution of language. Brain
& Language, 112, 25–35.
Creanza, N., Fogarty, L., & Feldman, M. W. (2012). Models of cultural niche
construction with selection and assortative mating. PLoS, 7, e42744.
de Saussure, F. (1916). Cours de linguistique générale. Paris: Payot. See also the
1969 translation by Wade Baskin: Course in general linguistics. New York:
McGraw-Hill.
Dennett, D. C. (1983). Intentional systems in cognitive ethology: The ‘Pan-
glossian paradigm’ defended. Behavioral and Brain Sciences, 6, 343–390.
Di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., & Rizzolatti, G. (1992).
Understanding motor events: A neurophysiological study. Experimental Brain
Research, 91, 176–180.
Dixon, R. M. W. (1997). The rise and fall of languages. Cambridge, UK:
Cambridge University Press.
Efron, R. (1990). The decline and fall of hemispheric specialization. Hillsdale, NJ:
Lawrence Erlbaum Associates.
Emmorey, K. (2002). Language, cognition, and the brain: Insights from sign lan-
guage research. Mahwah, NJ: Lawrence Erlbaum Associates.
Eysenck, M. W., & Keane, M. T. (2000). Cognitive psychology: A student’s hand-
book. Hove: Psychology Press.
Fadiga, L., Fogassi, L., Pavesi, G., & Rizzolatti, G. (1995). Motor facilitation
during action observation: A magnetic stimulation study. Journal of
Neurophysiology, 73, 2608–2611.
Fay, N., Garrod, S., & Roberts, L. (2008). The fitness and functionality of cul-
turally evolved communication systems. Philosophical Transactions of the
Royal Society B-Biological Sciences, 363, 3553–3561.
Fay, N., Garrod, S., Roberts, L., & Swoboda, N. (2010). The interactive evolu-
tion of human communication systems. Cognitive Science, 34, 351–386.
Fitch, W. T. (2010). The evolution of language. Cambridge: Cambridge University
Press.
Fitch, W. T. (2012). Evolutionary developmental biology and human language
evolution: Constraints and adaptation. Evolutionary Biology, 39, 613–637.
Goodman, C. S., & Coughlin, B. (2000). The evolution of Evo-Devo biology.
Proceedings of the National Academy of Science, 97, 4424–4425.
Grice, H. P. (1957). Meaning. Philosophical Review, 66, 377–388.
Hauk, O., Johnsrude, I., & Pulvermuller, F. (2004). Somatotopic representation
of action words in human motor and premotor cortex. Neuron, 41,
301–307.
Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The language faculty:
What is it, who has it, and how did it evolve? Science, 298, 1569–1579.
Hockett, C. F. (1960). The origin of speech. Reprint from Scientific American,
603.
1 Introduction 45
Rizzolatti, G., & Sinigaglia, C. (2008). Mirrors in the brain. How our minds share
actions and emotions. Oxford: Oxford University Press.
Rodriguez-Caballero, A., Torres-Lagares, D., Rodriguez-Perez, A., Serrera-
Figallo, M. A., Hernández-Guisado, J. M., & Machuca-Portillo, G. (2010).
Cri du chat syndrome: A critical review. Medicina Oral Patologia Oral y
Cirugia Bucal, 15, e473–e478.
Ryle, G. (1949). The concept of mind. London: Hutchinson.
Saffran, J. R. (2002). Constraints on statistical language learning. Journal of
Memory and Language, 47, 172–196.
Saffran, J. R. (2003). Statistical language learning: Mechanisms and constraints.
Current Directions in Psychological Science, 12, 110–114.
Saffran, J., Hauser, M., Seibel, R., Kapfhamer, J., Tsao, F., & Cushman, F.
(2008). Grammatical pattern learning by human infants and cotton-top tam-
arin monkeys. Cognition, 107, 479–500.
Schacter, D. L., Addis, D. R., & Buckner, R. L. (2008). Episodic simulation of
future events. Annals of the New York Academy of Sciences, 1124, 39–60.
Senghas, A. (2005). Language emergence: Clues from a new Bedouin Sign
Language. Current Biology, 15, 463–465.
Senghas, A., Kita, S., & Özyürek, A. (2004). Children creating core properties
of language: Evidence from an emerging sign language in Nicaragua. Science,
305, 1779–1782.
Shanker, S. G., & King, B. J. (2002). The emergence of a new paradigm in ape
language research. Behavioral and Brain Sciences, 25, 605–656.
Smith, J. D., Crossley, M. J., Boomer, J., Church, B. A., Beran, M. J., & Ashby,
F. G. (2012). Implicit and explicit category learning by capuchin monkeys
(Cebus apella). Journal of Comparative Psychology, 126, 294–304.
Smith, K. (2004). The evolution of vocabulary. Journal of Theoretical Biology,
228, 127–142.
Squire, L. R., Knowlton, B., & Musen, G. (1993). The structure and organiza-
tion of memory. Annual Review of Psychology, 44, 453–495.
Tomasello, M. (1999). The cultural origins of human cognition. Cambridge, MA:
Harvard University Press.
Toni, I., de Lange, F. P., Noordzij, M. L., & Hagoort, P. (2008). Language
beyond action. Journal of Physiology – Paris, 102, 71–79.
Turella, L., Pierno, A. C., Tubaldi, F., & Castiello, U. (2009). Mirror neurons in
humans: Consisting or confounding evidence? Brain and Language, 108,
10–21.
occur at any time in the life span of the individual, whereas develop-
mental language impairments arise in early childhood and tend to have
long-term consequences for the child. Both types of impairment may be
studied from an evolutionary point of view, but in the present work I will
deal only with developmental language impairment. A number of other
terms have been used about language impairment that arises in develop-
ment, for example, the DSM-5 term “language disorder;” whereas other
terms are “primary language impairment” or “language learning impair-
ment.” In this work I prefer “developmental language impairment” as the
default term. By including “developmental,” the diagnostic label indicates
impairments which are related to general developmental processes; for
example, early infant–caregiver interactions, babbling, and the critical period
of language acquisition. In a report on contemporary debate about diag-
nostic terms, Reilly, Bishop, and Tomblin (2014) indicate that the few
objections raised against this term have stressed that “developmental”
makes it inappropriate for older children and adults. The main argument
in favor of the term is that “developmental” marks a contrast to “acquired,”
which is the main reason why I prefer to use this term. In the following,
however, the SLI term will be used in reviews of works where this term is
a central one. Otherwise, developmental language impairment will be the
default term in the present work. This will be used until we can come up
with a new term that can be linked to causal factors in the human brain. It
is also important that a new term can be interpreted within an evolution-
ary frame of reference (see discussion in Chap. 8, Sect. 8.6).
Bishop (2014) pointed out that diagnoses of language impairments,
in contrast to Down syndrome, cannot be based on a “clear dividing
line between normality and abnormality in its aetiology.” Lacking a firm
research basis for diagnoses, a number of false positives and false negatives
may be expected, and therefore the use of any diagnostic label may cause
tensions between clinicians and parents. Actually, Bishop asked whether
diagnostic labels should be abandoned and replaced by terms such as
“special educational needs” or a nonspecific term such as “speech,
language and communication needs.” She admitted, however, that this
solution would hamper research, and therefore the idea was rejected.
A diagnostic category with explicit criteria for inclusion and exclusion in
experimental groups is needed. Hence, she retained the commonly used
2 Developmental Language Impairment... 51
The discrepancy criterion captured the notion that the impairment was
unexpected and unexplained: whereas there was an assumption that
language deficits were unsurprising in a child who had more global intel-
lectual difficulties. However, this rationale has not been supported by
evidence in either language or literacy problems. While it is true that
verbal and nonverbal impairments often co-occur, it is not the case that
nonverbal ability sets a limit on language development….Indeed, it is
possible to find children whose performance on language tests is much
mother and deaf father. His English was assessed by way of the CELF,
and his signing abilities were assessed with the British Sign Language
Receptive Skills Test (BSL-RST) (Herman et al., 1999). In both tests,
JA scored age-appropriately on vocabulary, but very low on signed sen-
tences and on comprehension of sentences in English; that is, impairments
of a similar kind in the two languages. His erratic profile on the items
in both tests showed that his performance was atypical and not due to a
general language delay.
Paul’s vocabulary was assessed with a nonstandardized BSL version
of British Picture Vocabulary Scale (BPVS), and sentence comprehen-
sion was tested with the BSL-RST. Like JA, Paul showed a normal sign
vocabulary, but had great difficulties in understanding complex signing
(1.3 standard deviations below the mean). Morgan et al. (2007) argued
that Paul’s low “performance could not be characterized as a slow learner
as by failing early items and passing more difficult ones his performance
appeared random rather than like a younger child” (p. 101). Expressive
language was documented by video recordings of Paul’s signing in BSL
with his parents, teachers, and therapist. These recordings revealed that
his expressive language “was restricted to small sentences made up of one
or two signs with very limited grammar” (p. 102).
The two cases were similar in some important respects. Both showed
a normal vocabulary, but subnormal comprehension and production of
signed sentences. Moreover, both showed an erratic and atypical perfor-
mance which differed from that of late learners or second-language learn-
ers. Also, JA’s language difficulties in speech and BSL were similar. His
problems, which showed up in both modalities, although representing
similar linguistic domains, may have been caused by a general deficit of
symbolic reference. Due to the similar pattern of difficulties for Paul and
JA, the author believed that both may have suffered from this general
linguistic deficit.
According to Morgan (Morgan, 2005; Morgan et al., 2007), JA and
Paul represented two cases of SLI by users of sign language. Later, Mason
et al. (2010) reported sign language impairments among 13 signing
deaf children aged 5–14 years. They argued that the significant language
delay found in this group could not be explained by poor exposure to
BSL. Scores on the BSL-RST and the BSL Production Test showed that
most aspects of language were affected. These results have clear impli-
cations for theory and practice in the field of developmental language
impairment, in particular for our interpretation of the SLI term. As
pointed out above, it is not clear what is specific in SLI in hearing chil-
dren exposed to speech. If this diagnosis is extended to include difficulties
in acquiring sign language by deaf children as well, one can no longer
maintain that the deficit is specific. Therefore, Morgan’s studies
have given rise to further critique of the SLI term, in particular to the
discrepancy criteria in the definition of this term.
The fact that JA had similar difficulties in spoken and signed language,
and also that the two boys had similar signing difficulties, can be inter-
preted as a dysfunction of a modality-independent capacity of language.
Can we likewise assume that the two boys had difficulties similar to those of
most hearing children with unexplained language impairments, and that they
all can be classified by one diagnostic term, such as “developmental lan-
guage impairment?” Based on the conception of language as a modality-
independent capacity (see Chap. 7), this term may be a viable one for
both deaf and hearing children who have comparable difficulties in their
own language modalities. However, by thus abandoning important dis-
crepancy criteria, we are left with a large, complex and clinically het-
erogeneous group. These children may differ with respect to the type of
interventions/treatment they will benefit from, and therefore they should
not be subsumed in one clinical and diagnostic term. Briscoe, Bishop,
and Norbury (2001) reported that a group of children with mild-to-
moderate hearing loss had language problems which in many ways were
similar to those of hearing children with SLI. The former group, however, ben-
efited from reading instruction, whereas the SLI children did not, or had
severe difficulties in learning to read.
In agreement with Reilly, Bishop, et al. (2014), I will also argue for
diagnostic terms which make it possible to distinguish between chil-
dren with problems which persevere into adulthood and those who
have problems “which are likely to be resolved of their own account.”
Should we therefore distinguish between clinical groups based on pros-
pects of remedial treatments? Reilly et al. suggested building risk mod-
els of early language trajectories. This may require a distinction between
components of language which are differently impaired, and which
relatively old works, which are classic in the sense that they are often
mentioned in discussions of the etiology of developmental language
impairment, and in the final part of the section I will present a recent
work on “critical markers” in the brain structures of children with lan-
guage impairment and reading disability.
In two of the following theories, these markers did not belong to the
language domain, but were “downstream consequences of perceptual
and memory limitations” (Hsu and Bishop, 2011). For example, Tallal
(1976) argued that SLI depended on a deficit in the brain mechanisms
underlying discrimination of speech sounds. She designed the Auditory
Repetition Test (ART) for diagnostic and interventional purposes, and a
training program based on this test has been applied with some degree of
success to children with SLI (Merzenich et al., 1996). Later evaluation of
this program (Gillam, Frome Loeb, and Friel-Patti, 2001) has shown that
positive effects are limited to vocabulary and sentence length, whereas no
effects have been demonstrated on grammatical skills.
Baddeley et al. (1998) argued that SLI depended on subnormal capac-
ity of the phonological loop, an important component in the Baddeley
and Hitch (1974) model of working memory. The phonological loop
includes three subcomponents: (1) Phonological storage, which has a
limited capacity and contains spoken words and nonwords whose mem-
ory traces fade rapidly unless they are rehearsed in (2) an articulatory
buffer. (3) A Grapheme-Phoneme Converter transfers visual inputs into
articulatory movements; hence these inputs are similarly processed in the
articulatory buffer. In this way both written and spoken words gain access
to the phonological storage.
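A purely illustrative toy simulation may make the loop’s dynamics concrete: memory traces fade on every time step, serial subvocal rehearsal refreshes one item per step, and articulatory suppression blocks the rehearsal slot. The class name, decay parameter, and the span limit that emerges are my own assumptions for the sketch, not part of the Baddeley and Hitch model itself.

```python
class PhonologicalLoop:
    """Toy sketch of the phonological loop: traces in the store fade on
    every time step; subvocal rehearsal refreshes one item per step."""

    def __init__(self, decay_per_step=0.25):
        self.decay = decay_per_step
        self.traces = {}   # word -> activation of its memory trace
        self._turn = 0     # rotating index for serial rehearsal

    def encode(self, word):
        self.traces[word] = 1.0            # fresh trace in the store

    def step(self, suppressed=False):
        """One time step. suppressed=True models articulatory suppression
        (e.g., repeating "the-the-the"), which blocks rehearsal."""
        for word in list(self.traces):
            self.traces[word] -= self.decay
            if self.traces[word] <= 0:     # trace has faded beyond recovery
                del self.traces[word]
        if not suppressed and self.traces:
            items = list(self.traces)      # rehearse the next item in turn
            self.traces[items[self._turn % len(items)]] = 1.0
            self._turn += 1

    def recall(self):
        return list(self.traces)           # words whose traces survive

# Rehearsal keeps alive only as many items as the loop can cycle through
# before their traces fade -- an immediate memory span limit.
loop = PhonologicalLoop()
for w in ["cat", "dog", "sun", "map", "pen", "cup"]:
    loop.encode(w)
for _ in range(12):
    loop.step()
print(loop.recall())                       # only part of the list survives

# Under suppression, rehearsal is blocked and every trace fades away.
blocked = PhonologicalLoop()
for w in ["cat", "dog", "sun"]:
    blocked.encode(w)
for _ in range(4):
    blocked.step(suppressed=True)
print(blocked.recall())                    # []
```

The sketch reproduces the qualitative pattern in the text: a capacity limit set by the race between decay and rehearsal, and a reduced span when rehearsal is suppressed.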
In the seminal work of Baddeley et al. (1998), it was argued that the
phonological loop serves as a “language acquisition device.” Its capac-
ity depended crucially on the subvocal rehearsal taking place in the
articulatory buffer. Thus, instruction to repeat particular sounds, for
instance, “the-the-the” while memorizing a series of words blocks rehearsal
and reduces the immediate memory span. Baddeley et al. (1998) stressed
that the function of the phonological loop is not so much the learning
of words that exist in one’s vocabulary, but the learning of new words.
Therefore, their theory rested in part on data from the Children’s Test
of Nonword Repetition (CN REP) (Dollaghan and Campbell, 1998;
of English and those which violated these rules. Which type of nonwords
served as the best predictor of development of a vocabulary? She found
that responses to nonwords of an unknown structure (those violating the
phonotactic rules) served as a good predictor of vocabulary. Responses to
the “familiar” nonwords did not correlate with later vocabulary. Together,
these observations represented an important challenge to Brown and
Hulme’s theory. Among the theories I have presented so far, only Brown
and Hulme’s theory claims that developmental language impairment is
specific to the language domain. Tallal’s theory and Baddeley et al.’s theory
claim that the core problem for the impaired children can be found within
a nonlanguage domain (yet it has major effects on the acquisition and use
of language). On this account, the SLI term is warranted only in view of
Brown and Hulme’s theory. The three theories have gained only limited
support in the literature, and many researchers today are less optimistic
about finding a causal mechanism underlying SLI. Instead, some research-
ers have emphasized the heterogeneity of children with SLI, and suggested
that there may be subgroups of impaired children that differ clinically
and etiologically. Based on standardized language and psychometric tests,
Conti-Ramsden, Crutchley, and Botting (1997) identified subgroups in
a sample of 242 clinically defined seven-year-old children with language
impairments in England. Longitudinal data showed that they could be
classified in three subgroups: expressive SLI, expressive/receptive SLI, and
complex SLI. The latter group consisted of children with lexical, syntac-
tic, semantic and pragmatic difficulties in the absence of any phonologi-
cal difficulties. However, the distinction between expressive and receptive
difficulties has not been commonly acknowledged in the literature. In
any case, it is unlikely that we could define a core problem that is shared
by these groups. Thus, apart from some descriptive characteristics, there
is practically no agreement among contemporary researchers as to what
constitutes the set of inclusion criteria that defines SLI.
What about the neuroanatomical structures which serve language pro-
cessing? Ullman’s declarative procedural model, which will be described
in Chap. 3, postulates different structures underlying declarative and
procedural memory and that the former is linked to the lexical seman-
tic system and the latter to aspects of grammar. Ullman and Pierpont
(2005) raised the idea that critical markers of SLI could be found by
and language skills. In Chap. 3, therefore, I will discuss the role of the
motor system and the different ways memory systems are implicated in
language. Ullman’s (2004) neurobiological model of language acquisi-
tion, which claims that the acquisition of grammar is largely dependent
on substrata underlying the procedural memory system (prime among
these are the basal ganglia, including the neostriatum with the putamen
and the caudate nucleus), whereas vocabulary and semantic knowledge
depends on structures underlying the declarative system (the medial tem-
poral lobe structures such as hippocampus, entorhinal and perirhinal
cortex). Brain imaging studies of the KE family members showed that
the affected members had abnormal basal ganglia (in addition to abnor-
malities in other language-related areas). The basal ganglia are strongly
involved in movement; therefore, these abnormalities could explain dif-
ficulties in producing adequate movements of the lips and tongue. In view of Ullman’s
model, it could also be argued that FOXP2 affects the procedural memory
system. Takahashi, Liu, Hirokawa, and Takahashi (2003) found FOXP2
expression in the striatum, particularly in the caudate nucleus, but not in
the hippocampus. This shows that the critical gene expression takes place
in the nervous mechanisms of the procedural not the declarative system.
Furthermore, the expression was higher in developing tissues than in
adult tissues, showing its relevance to language acquisition.
Ackermann, Hage, and Ziegler (2014) also argued that the basal gan-
glia provide a platform for the evolution of articulate speech in humans.
They suggested a two-step evolution of the mechanisms underlying these
skills: a refinement of projections from premotor cortex to the basal ganglia,
followed by vocal-laryngeal elaboration of this circuitry, a process
which depends on human-specific FOXP2 mutations. In general, genetic
variants of the FOXP2 and its associated molecular networks are involved
in the balance between procedural and declarative strategies. Further sup-
port for the expression of FOXP2 in the procedural system was presented
by Chandrasekaran, Yi, Blanco, McGeary, and Maddox (2015), who
showed that a genetic variant (the GG genotype) mediated enhanced
procedural learning of speech sound categories. This is why polymor-
phism of FOXP2 may be involved in early learning of grammar.
Ullman and Pierpont (2005) argued that basal ganglia abnormali-
ties may arise for reasons other than anomalies of the FOXP2 gene.
Early onsets of intrinsic and extrinsic neural insults may lead to atypi-
cal brain development, and therefore, “procedural language disorder”
(PLD) may depend on a diversity of etiological factors: “It is important
to emphasize that the source of the disorder is expected to vary across
individuals. Some may have mutations in the FOXP2 gene, whereas
many others show no evidence of such mutations…and instead suf-
fer from other etiologies” (p. 407). Moreover, Ullman and Pierpont
added that FOXP2 is not the only gene that is involved in PLD. Their
procedural deficit hypothesis (PDH) explained in more detail the link
between basal ganglia abnormalities and grammar impairments (see
Chap. 3, Sect. 3.3.2).
Although mutation of FOXP2 is a rare cause of language impairment,
variants of this gene and its dependent molecular network are most likely
involved in the etiology of SLI. However, as reported by Bishop (2015),
mutations in one of these genes will rarely follow a Mendelian pattern.
First-degree family members often manifest subthreshold symptoms, for
example, subtle phonological difficulties, and therefore she argued that the
minor impairments in the family members show that they “correspond to
a continuum of impairment, rather than all-or-none diseases” (p. 619).
This continuum means that environmental factors account for a major
source of variance in gene expressions, and therefore a more comprehen-
sive treatment of the etiology of developmental language impairments
must include a discussion of epigenetics.
I don’t know if isolation experiments have ever been carried out on bats or
spiders, but my guess is that if a bat or spider was raised without ever seeing
another bat or spider, it would still be able to echolocate or spin a web as
well as other species members. In contrast, children for whom some acci-
dental circumstance has drastically reduced or eliminated linguistic input
may never speak, or if they do may fall far short of a full adult language
capacity (p. 46).
This is why Chap. 4 is entirely reserved for this arena of learning. The dia-
logue between child and caregiver is the main arena also for the vertical
transmission of language between generations.
In the Introduction, Sect. 1.4.2, I have argued that pre-semantic sig-
nals have temporal structures defined by transition probabilities that are
easily learned by normally developing children. These structures, when
detected, give rise to the segregation of sound sequences into words or
word-like chunks that form the important signals in child–caregiver
interactions. The statistical learning involved in the detection of these
chunks is also involved in the learning of the phrase structures (see Chap.
3, Sect. 3.2), part of which may be established prior to the acquisition of
semantic knowledge.
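The statistical learning mechanism just described can be sketched directly: estimate the transition probability from each syllable to the next, then posit a word boundary wherever that probability dips, since within-word transitions are reliably higher than transitions across word edges. The syllable inventory, the threshold, and the function names below are illustrative assumptions, not materials from the studies cited.

```python
from collections import defaultdict

def transition_probabilities(stream):
    """Estimate P(next syllable | current syllable) from a syllable stream."""
    pair_counts = defaultdict(int)
    first_counts = defaultdict(int)
    for a, b in zip(stream, stream[1:]):
        pair_counts[(a, b)] += 1
        first_counts[a] += 1
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(stream, probs, threshold=0.75):
    """Insert a word boundary wherever the transition probability dips
    below the threshold (a low TP suggests a word edge)."""
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if probs.get((a, b), 0.0) < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Three invented "words" presented in varying order, as in a familiarization
# stream: within-word TP = 1.0, across-boundary TP is at most 2/3.
lexicon = [["bi", "da", "gu"], ["pa", "do", "ti"], ["go", "la", "bu"]]
orders = [0, 1, 2, 1, 2, 0, 2, 0, 1, 0, 2, 1]
stream = [syl for i in orders for syl in lexicon[i]]

probs = transition_probabilities(stream)
print(sorted(set(segment(stream, probs))))  # ['bidagu', 'golabu', 'padoti']
```

Nothing in the stream marks the word boundaries explicitly; the chunks fall out of the transition statistics alone, which is the sense in which such structures are "easily learned" pre-semantically.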
The instinct to learn means that normally developing children have
wired-in sensitivities to temporal structures which are present in natural
languages. These sensitivities are generally also present in their mothers
or caregivers. Therefore, they give rise to “an interactive alignment in
conversation” (Menenti, Pickering, & Garrod, 2012), and accordingly,
infant and caregiver can also change roles. However, this alignment may
fail for a number of reasons: anything from insufficient exposure to lin-
guistic stimuli to full deprivation of language. The damaging effects may
vary depending on the language-related genes in either one of the two
parties. In the population, therefore, early dialogic failure constrains lan-
guage adaptation, and for the child, “unsuccessful” epigenetics will ham-
per language acquisition and cause lasting language impairment. The first
two S’s in Fitch’s formula are insufficiently established, and clinically the
therapist has to deal with a case of “unexplained” language impairment.
Language-impaired children in this category (they may constitute the
majority of cases) form serious challenges for the language therapist; they
lack a basic comprehension of structure at any level of language, from
syllables and words to phrases and sentences. For these children, training tasks
with linguistic materials will not be very helpful, but as argued in Chap.
8, the basic conception of event-structure can be reestablished by training
in domain-general learning tasks.
As mentioned above, early “dialogues” between infant and caregiver will
be extensively treated in Chap. 4. Here I argue that these dialogues form
examples of procedural skills that are controlled by the prefrontal–basal
ganglia circuitry. They also take place, with semantically decoded words,
when conversations are easy (Garrod & Pickering, 2004).
There are three groups of theories which explain how the ASD brain
functions:
1. The first maintains that ASD children lack a ToM, which means that
these children do not understand that other people have independent
mental states; that is, beliefs, desires and goals.
2. Simulation theory claims that ASD children consult their own mind in
order to find out what the beliefs of another person are. They use their
own mind as a model of intentional states, and some proponents of
this theory (Sato, Uono, and Toichi, 2013) also argue that the process
is mediated by the mirror neuron network.
3. Interaction theory stresses dysfunctions in general sensory–motor
behavior and downplays the role of internal representations in
cognition.
More details about these theories can be found in Brown and Elder
(2014) and Gallagher and Varga (2015). I will now focus on the lack
of ToM, the prevailing symptom in most children with this disorder.
The lack of ToM has also been characterized as a form of mindblindness and is
typically present in a group of “high-functioning” patients with
normal intelligence and language skills; this group would be diagnosed
with Asperger syndrome on the autism spectrum, named after the Austrian pediatrician
Hans Asperger, who, in 1944, described a group of children who lacked
nonverbal communication skills, were physically clumsy and lacked an
interest in others. Should we characterize mindblindness as a language
impairment?
A ToM has been considered the most advanced stage in the evolu-
tion of intentional systems (Dennett, 1983): I can apprehend Ted’s
belief about X; furthermore, I believe that Ted believes that I am aware
of his belief about X. The ability to detect beliefs like these has been
tested with false-belief tasks such as the Sally–Anne test:
Sally hides a marble in a basket and leaves the room. While she is away,
Anne moves the marble into a box. Shortly afterwards, Sally re-enters the
room, and the child who has seen an enactment of this event is asked:
“Where will Sally look for the marble?”
Children under the age of three to four years consistently choose the
box; that is, they cannot separate their own knowledge of where the marble
is from Sally’s false belief. Normal children above this age and developmen-
tally disabled children with Down syndrome will generally pass this test,
whereas few autistic children do. These observations, together with the
social and communicative difficulties of ASD children, have been
interpreted as a mind-reading deficit, or mindblindness. Some older chil-
dren with ASD pass the Sally–Anne test, yet they still have trouble
reading the intentions of others in everyday communicative settings.
Therefore, the validity of the Sally–Anne test is limited.
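The logic of the test can be made explicit in a few lines of code (a toy illustration of my own, not a model from the ToM literature; the function name is invented): a reasoner that tracks only the state of the world answers “box,” whereas a reasoner that tracks Sally’s belief, which was not updated while she was away, answers “basket.”

```python
# Toy model of the Sally–Anne task. A reasoner without a theory of mind
# answers from the current state of the world; a reasoner with a theory
# of mind answers from Sally's belief, which went stale while she was away.

def sally_anne(has_theory_of_mind: bool) -> str:
    world = "basket"          # Sally hides the marble in the basket
    sally_belief = world      # Sally saw this, so her belief is accurate
    world = "box"             # Anne moves the marble while Sally is absent;
                              # Sally's belief is NOT updated
    if has_theory_of_mind:
        return sally_belief   # predict where Sally will look
    return world              # conflate own knowledge with Sally's

print(sally_anne(True))   # basket: passes the false-belief test
print(sally_anne(False))  # box: the typical answer under age three to four
```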
Language has evolved to enable humans to talk about, among
other things, intentional states. Certainly, mindblindness does consti-
tute a pragmatic language impairment. The problem is whether it may
also be associated with more general semantic difficulties. Thus, Brown
and Elder (2014) wrote: “these children have the vocabulary and even have
memorized the syntax to pass standardized language screenings, but they
struggle in real world communication settings because they lack under-
standing of meaning” (p. 220). Similarly, some of these children have
revealed a precocious form of reading skill, named “hyperlexia” (Treffert,
2011). These children are capable of relatively fast reading,
whereas their interpretation and understanding of text is poor (see Chap.
6, Sect. 6.5.2).
Similarly, ASD children have difficulties in understanding metaphors,
irony and indirect requests, which may indicate that they use
language merely as an instrumental device and pay less attention to the
meaning and function of words. Gold, Faust, and Goldstein (2010) stud-
ied the semantic integration process in 17 participants with Asperger syn-
drome and 16 control participants (ages ranged from 17 to 31
years) who were presented with 240 pairs of words that conveyed either a
“literal,” “conventional metaphoric,” “novel metaphoric,” or “unrelated”
meaning. The participants were instructed to judge whether the presented
pair conveyed a meaning or not. In an event-related potential (ERP)
task, N400 amplitudes showed that the Asperger patients had greater dif-
ficulties than the control group in comprehending the metaphorically
related word pairs. These difficulties were related to differences
in “linguistic information processing” between the two groups. Thus, general
References
Ackermann, H., Hage, S. R., & Ziegler, W. (2014). Brain mechanisms of acous-
tic communication in humans and nonhuman primates: An evolutionary
perspective. Behavioral and Brain Sciences, 37, 529–546.
Baddeley, A. D., Gathercole, S. E., & Papagno, C. (1998). The phonological
loop as a language learning device. Psychological Review, 105, 158–173.
Baddeley, A. D., & Hitch, G. J. (1974). Working memory. In G. H. Bower
(Ed.), The psychology of learning and motivation (Vol. 8). London: Academic
Press.
Baddeley, A. D., & Wilson, B. (1985). Phonological coding and short-term
memory in patients without speech. Journal of Memory and Language, 24,
490–502.
Bickerton, D. (2014). More than nature needs: Language, mind and evolution.
Cambridge, MA: Harvard University Press.
Bishop, D. V. (1997). Uncommon understanding. Development of disorders of lan-
guage comprehension in children. East Sussex, UK: Psychology Press.
Bishop, D. V. (2010). Overlaps between autism and language impairment:
Phenomimicry or shared etiology. Behavior Genetics, 40, 618–629.
Bishop, D. V. (2014). Ten questions about terminology for children with unex-
plained language problems. International Journal of Language &
Communication Disorders, 49, 381–415.
Bishop, D. V. (2015). The interface between genetics and psychology: Lessons
from developmental dyslexia. Proceedings of the Royal Society B: Biological
Sciences, 282(1806), 20143139. doi:10.1098/rspb.2014.3139.
Bishop, D. V., North, T., & Donlan, C. (1995). Genetic basis of specific lan-
guage impairment: Evidence from a twin study. Developmental Medicine and
Child Neurology, 37, 56–71.
Botting, N., & Conti-Ramsden, G. (2001). Non-word repetition and language
development in children with specific language impairment (SLI).
International Journal of Language & Communication Disorders, 36, 421–432.
Briscoe, J., Bishop, D. V., & Norbury, C. F. (2001). Phonological processing,
language, and literacy: A comparison of children with mild-to-moderate sen-
sorineural hearing loss with specific language impairment. Journal of Child
Psychology and Psychiatry, 42, 329–340.
Brown, B. B., & Elder, J. H. (2014). Communication in autism spectrum dis-
order: A guide for pediatric nurses. Pediatric Nursing, 40, 219–225.
Brown, G. D. A., & Hulme, C. (1996). Nonword repetition, STM, and age-of-
acquisition versus pronunciation-time limits in immediate recall for
forgetting-matched acquisition: A computational model. In S. E. Gathercole
(Ed.), Models of short-term memory. Hove, UK: Psychology Press.
Chandrasekaran, B., Yi, H. G., Blanco, N. J., McGeary, J. E., & Maddox, W. T.
(2015). Enhanced procedural learning of speech sound categories in a genetic
variant of FOXP2. The Journal of Neuroscience, 35, 7808–7812.
Conti-Ramsden, G., Crutchley, A., & Botting, N. (1997). The extent to which
psychometric tests differentiate subgroups of children with SLI. Journal of
Speech, Language, and Hearing Research, 40, 765–777.
Conway, C. M., Gremp, M. A., Walk, A. D., Bauernschmidt, A., & Pisoni,
D. B. (2014). Can we enhance domain-general learning abilities to improve
language function? In P. Rebuschat & J. N. Williams (Eds.), Statistical learn-
ing and language acquisition. Berlin: De Gruyter Mouton.
Conway, C. M., & Pisoni, D. B. (2008). Neurocognitive basis of implicit learn-
ing of sequential structure and its relation to language processing. Annals of
New York Academy of Sciences, 1145, 113–131.
Treffert, D. A. (2011). Hyperlexia III: Separating ‘Autistic-like’
behaviors from autistic disorder: Assessing children who read early or speak
late. WMJ, 110, 281–286.
Dennett, D. C. (1983). Intentional systems in cognitive ethology: The ‘Pan-
glossian paradigm’ defended. Behavioral and Brain Sciences, 6, 343–390.
Dollaghan, C., & Campbell, T. F. (1998). Nonword repetition and child lan-
guage impairment. Journal of Speech, Language, and Hearing Research, 41,
1136–1146.
Gallagher, S., & Varga, S. (2015). Conceptual issues in autism spectrum disor-
ders. Current Opinion in Psychiatry, 28, 127–132.
Garrod, S., & Pickering, M. J. (2004). Why is conversation so easy? Trends in
Cognitive Sciences, 8, 8–11.
Gathercole, S. E. (1995). Is nonword repetition a test of phonological memory
or long-term knowledge? It all depends on the nonwords. Memory &
Cognition, 23, 83–94.
Gathercole, S. E., & Baddeley, A. D. (1990). Phonological memory deficits in
language disordered children: Is there a causal connection? Journal of Memory
and Language, 29, 336–360.
Gathercole, S. E., Tiffany, C., Briscoe, J., Thorn, A., & The ALSPAC Team.
(2005). Developmental consequences of poor phonological short-term mem-
ory function in childhood: A longitudinal study. Journal of Child Psychology
and Psychiatry, 46, 598–611.
Gervain, J., & Mehler, J. (2010). Speech perception and language acquisition in
the first year of life. Annual Review of Psychology, 61, 191–218.
Gillam, R. B., Frome Loeb, D., & Friel-Patti, S. (2001). A summary of five
exploratory studies of Fast ForWord. American Journal of Speech-Language
Pathology, 10, 269–273.
Girbau-Massana, D., Garcia-Marti, G., Marti-Bonmati, L., & Schwarz, R. G.
(2014). Grey-white matter and cerebrospinal fluid volume differences in chil-
dren with specific language impairment and/or reading disability.
Neuropsychologia, 56, 90–100.
Gold, R., Faust, M., & Goldstein, A. (2010). Semantic integration during meta-
phor comprehension in Asperger syndrome. Brain & Language, 113,
124–134.
Herman, R., Holmes, S., & Woll, B. (1999). Assessing British Sign Language
Development: Receptive Skills Test. UK: Forest Bookshop.
Hill, E. L. (2001). Non-specific nature of specific language impairment: A
review of the literature with regard to concomitant motor impairments.
International Journal of Language & Communication Disorders, 36, 149–171.
Hsu, H. J., & Bishop, D. V. (2011). Grammatical difficulties in children with
specific language impairment: Is learning deficient? Human Development, 55,
264–277.
Lai, C. S. L., Fisher, S. E., Hurst, J. A., Vargha-Khadem, F., & Monaco, A. P.
(2001). A novel forkhead-domain gene is mutated in a severe speech and
language disorder. Nature, 413, 519–523.
Leonard, L. B. (1998). Children with specific language impairment. Cambridge,
MA: MIT Press.
Mason, K., Rowley, K., Marshall, C. R., Atkinson, J. R., Herman, R., Woll, B.,
et al. (2010). Identifying specific language impairment in deaf children
acquiring British Sign Language: Implications for theory and practice. British
Journal of Developmental Psychology, 28, 33–49.
Menenti, L., Pickering, M. J., & Garrod, S. (2012). Toward a neural basis of
interactive alignment in conversation. Frontiers in Human Neuroscience, 6,
185.
Merzenich, M. M., Jenkins, W. M., Johnston, P., Schreiner, C. E., Miller, S. L.,
& Tallal, P. (1996). Temporal processing deficits of language-learning
impaired children ameliorated by training. Science, 271, 77–80.
Morgan, G. (2005). Biology and behavior: Insights from the acquisition of sign
language. In A. Cutler (Ed.), Twenty-first century psycholinguistics. Four cor-
nerstones. Mahwah, NJ: Lawrence Erlbaum.
Morgan, G., Herman, R., & Woll, B. (2007). Language impairments in sign
language: Breakthroughs and puzzles. International Journal of Communication
Disorders, 42, 97–105.
Reilly, S., Bishop, D. V., & Tomblin, B. (2014). Terminological debate over
language impairment in children: Forward movement and sticking points.
International Journal of Language & Communication Disorders, 49, 452–462.
Reilly, S., Tomblin, B., Law, J., McKean, C., Mensah, F., Morgan, A., et al.
(2014). Specific language impairment: A convenient label for whom?
International Journal of Language & Communication Disorders, 49, 416–451.
Sato, W., Uono, S., & Toichi, M. (2013). Atypical recognition of dynamic
changes in facial expressions in autism spectrum disorders. Research in Autism
Spectrum Disorders, 7, 906–912.
Sciberras, E., Mueller, K., Efron, D., Bisset, M., Anderson, V., Schilpzand, E. J.,
et al. (2014). Language problems in children with ADHD: A community-
based study. Pediatrics, 133, 793–800.
Takahashi, D. Y., Narayanan, D. Z., & Ghazanfar, A. A. (2013). Coupled oscil-
lator dynamics of vocal turn-taking in monkeys. Current Biology, 23,
2162–2168.
Takahashi, K., Liu, F.-C., Hirokawa, K., & Takahashi, H. (2003). Expression of
Foxp2, a gene involved in speech and language, in the developing and adult
striatum. Journal of Neuroscience Research, 73, 61–72.
Tallal, P. (1976). Rapid auditory processing in normal and disordered language
development. Journal of Speech, Language, and Hearing Research, 9,
182–198.
The SLI Consortium. (2002). A genomewide scan identifies two novel loci
involved in specific language impairment. The American Journal of Human
Genetics, 70, 384–398.
Ullman, M. T. (2004). Contributions of memory circuits to language: The
declarative/procedural model. Cognition, 92, 231–270.
Ullman, M. T., & Pierpont, E. I. (2005). Specific language impairment is not
specific to language: The procedural deficit hypothesis. Cortex, 41,
399–433.
van Balkom, H., Verhoeven, L., & van Weerdenburg, M. (2010). Conversational
behaviour of children with developmental language delay and their caretak-
ers. International Journal of Language & Communication Disorders, 37,
295–319.
Weismer, S. E., & Kover, S. T. (2015). Preschool language variation, growth,
and predictors in children on the autism spectrum. Journal of Child Psychology
and Psychiatry, 56, 1327–37. doi:10.1111/jcpp.12406.
3 The Problem of Continuity in Time and Across Domains
1. Is Homo sapiens sapiens the only species which acquires and makes use
of linguistic symbols? The symbolic species theory (Deacon, 1997) deals
with language as an emergent capacity unparalleled in the animal
kingdom. In discussing aspects of this theory, I will review classical
and some more recent works on symbol learning by human and non-
human subjects. By comparing the communicative skills of bees and
ants with humans’ ability to talk about things that are not physically
present, I will discuss Bickerton’s proposition that displacement is a
road to language. Finally, I will discuss whether we have “living fossils”
which provide “windows” to the protolanguage of man.
2. A continuity position will require an account of vertical transmission
of languages, and in my view, Saffran’s constrained statistical learning
framework is useful in dealing with this problem. I will argue that her
works (Saffran, 2003; Saffran et al., 2008) support what I have termed
“an access code to early dialogues.”
3. Ullman (2004) called attention to “the existence of biological and com-
putational substrates that are shared between language on the one hand
and nonlanguage domains on the other” (p. 232). By focusing on
the symbolic level, and argued that there is a logical leap from icons and
indexes on the one side and symbols on the other: “the symbolic thresh-
old.” Is there any evidence that nonhuman primates have crossed this
threshold, and does the acquisition of grammar depend on it?
Considering the great leap from indexical to symbolic representation,
many researchers have addressed the question of whether apes are able
to cross the symbolic threshold. In particular, the now-classic study by
Savage-Rumbaugh and Rumbaugh on chimpanzees’ efforts to learn a rudi-
mentary form of language (Savage-Rumbaugh, 1986) has been the target
of extensive discussions (e.g., Shanker and King, 2002). Two of these
chimps, Sherman and Austin, showed a special talent for symbolic com-
munication, and the way they progressed towards skillful use of a system
of lexigrams was thoroughly analyzed by Deacon. Initially, the chimps
were trained to associate the lexigrams with a large number of food objects
and activities. Then they were trained to make use of lexigram pairs in a
simple verb–noun relationship; for example, a sequence glossed as “give-
banana” would cause a dispenser to deliver the reward. In a simple combinato-
rial system of two “verbs” and four “nouns,” there are 720 possible pair
sequences, most of which are nonsensical or illicit combinations. After
a long training session with selective reinforcements, most of these were
extinguished. As a result, the two chimps were capable of producing the
correct lexigram string on every trial, which may be said to constitute a
grammatical skill (i.e., manipulation of symbols by compositional rules).
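The selective-reinforcement regime described above can be sketched as a toy simulation (my own illustration, not the actual training protocol; the lexigram glosses, trial counts and learning rates are invented): only verb-then-noun pairs are rewarded, so their response strengths grow while illicit combinations are extinguished.

```python
import random

random.seed(1)
VERBS = ["give", "pour"]                    # hypothetical glosses
NOUNS = ["banana", "juice", "ball", "chow"]
LEXIGRAMS = VERBS + NOUNS

def licit(pair):
    """Only verb-then-noun pairs are reinforced (the dispenser fires)."""
    return pair[0] in VERBS and pair[1] in NOUNS

# One response strength per ordered pair of distinct lexigrams;
# reinforcement strengthens a pair, nonreward weakens (extinguishes) it.
weights = {(a, b): 1.0 for a in LEXIGRAMS for b in LEXIGRAMS if a != b}

for trial in range(5000):
    pair = random.choices(list(weights), weights=weights.values())[0]
    if licit(pair):
        weights[pair] *= 1.05   # reward
    else:
        weights[pair] *= 0.9    # extinction

best = max(weights, key=weights.get)
print(licit(best))  # True: the surviving high-strength pairs are verb–noun
```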
Deacon argued that the shift from word-object associations and asso-
ciative predictions to symbolic predictions involves a change in mne-
monic strategy. Lexigrams, which are known in one way, may now be
recoded in another way. They become re-represented in a system of
token-token relationship, and hence they are known “both from bottom
up, indexically, and top down symbolically.” A mental transformation
has taken place. “It is a way of offloading redundant details from working
memory, by recognizing a higher-order regularity in the mess of associa-
tions, a trick that can accomplish the same task without having to hold
all the details in mind” (Deacon, 1997, p. 89). The same strategy also
leads to recoding of symbolic tokens to create new representational pos-
sibilities. A good example is the “syntactic writing” found on a
tablet from Ur, c. 2960 BC: Rather than representing numbers by simple one-
to-one correspondences, the old Sumerians replaced the four tokens for
sheep with two tokens, one for sheep and one for the abstract number of
tallies (Schmandt-Besserat, 1986). However, the case of “syntactic writ-
ing” reveals a conceptual development by the early Sumerians that may
have surpassed the cognitive and communicative abilities underlying the
protolanguages thousands of years earlier.
Deacon’s interpretation of the communicative skills acquired by
Sherman and Austin did not fully agree with Savage-Rumbaugh’s
description of the chimps’ learning process. Rather than showing the
ability to learn word combinations or sentences, she said that the proj-
ect was intended to show “what does a word mean to a chimpanzee”
(Savage-Rumbaugh and Lewin, 1994, p. 49). Later, Shanker and King
(2002) commented on this (apparent) disagreement between Deacon
and Savage-Rumbaugh and argued that the two researchers had taken
irreconcilable positions. Deacon, who explained the chimps’ language
acquisition as a “radical transformation in the[ir] mode of representation”
(p. 87), was considered an exponent of an information-processing
paradigm, whereas Savage-Rumbaugh’s position was said to be highly
resonant with a dynamic-systems paradigm. This latter paradigm was
presented as a new one for ape language research by Shanker and King;
that is, a research paradigm they explicated by way of a dance metaphor.
According to this metaphor, Sherman and Austin acquired communi-
cative skills due to “interactional synchrony,” “mutual attunement” and
“affective resonance between participants.”
I shall not take issue with Shanker and King’s advocacy of a dance
metaphor for language acquisition, but I will quote two of their peer
commentators, Rendall and Vasey (2002), on the matter. They argued
that the emphasis on “mutual attunement between participants seri-
ously limits the scope of their proposal to situations in which the motives
and interactive goals of communicating parties are largely coincident”
(p. 637). I fully agree with this commentary on Shanker and King’s target
article. Thus, the birth of early languages, as well as the birth of languages
in recent history, may have taken place in social encounters where “affec-
tive resonance” is lacking, and where the interactive parties are involved in
negotiating behavior to avoid serious conflict. Therefore, I think Shanker
and King’s dance metaphor is inadequate for a description of language
identical and one odd call, the AAB pattern or, contrarily, the BAA pattern.
Following habituation to the former pattern, rhesus monkeys showed
significantly more orienting responses to the BAA strings. Similarly, more
responses were given to the AAB pattern after habituation to the BAA
pattern. The results indicate a capacity to extract distributional infor-
mation in entirely new sequences of vocalized calls, and this capacity
also provides a basis for development or change of communicative prac-
tice among the animals. More studies of sequential pattern learning are
reported within the research traditions of statistical and artificial learn-
ing. In Sect. 3.2 below, I will show that such patterns can be learned by
monkeys only when they do not exceed a critical level of complexity.
In the early years of this century we saw a growing conviction that non-
human subjects were capable of symbolic communication. Hence, it was
assumed that the origins of language may be found in animal communi-
cation; thus, continuity was stressed instead of a late emergence of lan-
guage in Homo sapiens. Ribeiro, Loula, de Araújo, Gudwin, and Queiroz
(2007) argued that alarm calls by African vervet monkeys satisfy the
Peircean definition of linguistic symbols. The acquisition of vocal sym-
bols in vervet monkeys was simulated in a computer program showing
that symbol learning was heavily dependent on tutor reliability, whereas
auditory noise had little effect on the rates of learning. The study was
based on a minimal brain model which “was designed to satisfy very basic
neurobiological constraints, common in principle to any animal with a
nervous system” (p. 265). However, the four representational domains
(one for each of the visual and auditory modalities, one for secondary
sensory association and one for the generation of behavioral out-
put) were also included in the model. These were selected to comply with
the habitat of vervet monkeys, and therefore they did not apply to “any
animal with a nervous system.” Against this background, the title of their
work (“Symbols are not uniquely human”) seems to be an overstatement.
Rather, the subject matter of this work seems to have been limited to
some communicative aspects of alarm calls by vervet monkeys. Its rele-
vance to species-specific behavior patterns is clear; its relevance to general
symbolic behavior by animals is less so.
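The central finding, that learning depends on tutor reliability, can be illustrated with a toy cross-situational learner (my own sketch, not Ribeiro et al.’s minimal brain model; the predator labels are illustrative and auditory noise is not modeled): the learner simply counts call–predator co-occurrences and adopts, for each predator, the call it has heard most often.

```python
import random

random.seed(3)
PREDATORS = ["leopard", "eagle", "snake"]

def train(tutor_reliability, n_trials=2000):
    """Toy co-occurrence learner. The tutor emits the correct call with
    probability `tutor_reliability`, otherwise a random one. Returns the
    fraction of predators for which the learned call is correct."""
    counts = {p: {c: 0 for c in PREDATORS} for p in PREDATORS}
    for _ in range(n_trials):
        predator = random.choice(PREDATORS)
        if random.random() < tutor_reliability:
            call = predator                     # reliable tutoring
        else:
            call = random.choice(PREDATORS)     # unreliable tutoring
        counts[predator][call] += 1
    # Learner's lexicon: the call most often heard with each predator.
    lexicon = {p: max(c, key=c.get) for p, c in counts.items()}
    return sum(lexicon[p] == p for p in PREDATORS) / len(PREDATORS)

print(train(0.9))  # 1.0: a reliable tutor yields a fully correct lexicon
print(train(0.0))  # with no reliable tutoring, the mapping is arbitrary
```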
Ribeiro et al. (2007) relied heavily on an analysis of alarm calls in rela-
tion to Peircean semiotics. A main concern was therefore the distinction
between alarm calls as indexes and alarm calls as symbols. As in previous
playback experiments, the model permitted presentation of alarm calls in
the absence of a corresponding predator view. Because these calls none-
theless mediated “the representation of a class of predators,” they could
not be interpreted as indexes in the Peircean classification of signs. I am
not convinced that this is a critical distinction that follows from Peircean
semiotics, and if it does, it may be necessary to specify the conditions
under which the alarm calls continually produce the specific effect. Also,
a conditioned stimulus in a Skinnerian type of conditioning takes on
referential power, and given sufficient resistance to extinction, it will con-
tinue to do so over several trials. However, it does not qualify as a linguistic
symbol in Peircean semiotics. According to Deacon (1997), similarity does
not produce iconicity, and neither “physical connection nor involvement in
some conventional activity dictates that something is indexical or symbolic.”
Granted that “symbols are not uniquely human,” hominids, and maybe
even lower species, may have been capable of communicating sym-
bolically. Language capacity may then be traced back to times before the
appearance of Homo sapiens, and consequently there could not have been
a symbolic threshold to cross for early man. In my view, the arguments
from ape language research are not very strong. Moreover, arguments
from this research are entirely based on Peircean semiotics and other fields
of modern linguistics, whose relevance for a theory of language evolu-
tion may be questioned. The conceptual framework chosen by Ribeiro
and others necessarily favors the notion that symbolism preceded syn-
tax in evolution (Bickerton, 2003), a position that is less in agreement
with the work of Hauser and Glynn (2009) discussed above and previous
works on human infants and cotton-top tamarins (Saffran et al., 2008,
see Sect. 3.2 in this chapter). These works give support to the assump-
tion that the capacity to extract patterns of sequential stimuli is part of
primate competence, even though these patterns are not included in the
natural communicative repertoire of the monkeys.
Contrary to Bickerton’s position, it is therefore possible to argue
that grammar precedes symbolism in evolution. In Sect. 3.3, I will give
further arguments for the priority of grammar. I shall call attention to
another problem that complicates a grammar priority position based on
rule/grammar learning by the hominids. In studies of grammar learning,
Both bees and ants are extractive foragers (omnivorous ones, in the case of
ants). Both exploit food sources that are often large and relatively short-
lived (patches of flowering plants in the case of bees, dead organisms in the
case of ants) and that could not be fully exploited by lone individuals.
These factors make it necessary to recruit nest mates by imparting informa-
tion about the whereabouts and in some cases the nature and quality of the
food sources. The fact that the latter are normally at a distance from where
the information is transmitted forces displaced communication (Bickerton,
2014, p. 83).
Early humans lived in the arid grassland of East Africa, where the quest
for meat was strong among all primate species. The hunting strategies of
chimpanzees could not easily be adopted by early man, who instead became
involved in scavenging behavior. They had to take carcasses of animals
that had died a natural death or had been killed by other animals; in both
cases they met with fierce competition from other predators. “Only if
they were able to recruit numbers large enough to drive away competitors
could they hope to gain first access to most carcasses” (p. 85).
Hymenoptera had found ways of informing their conspecifics about
distant sources of food, and for humans “the first small handful of sig-
nals would have brought tangible and immediate benefits” (Bickerton,
2014, p. 89). Therefore, despite vast phyletic differences, similarities
in their ecologies have led to convergent evolution of displacement in
hymenoptera and man. In the former, the critical signals were
produced by instinct, while in humans they were products of learning.
Therefore, displacement signals show great variance among humans, and in
the time from Homo erectus to Homo sapiens their informational speci-
ficity has increased. The different ways of expressing displacement in
humans show that this feature cannot be separated from arbitrariness,
and as argued by Bickerton, both depend on semanticity; all three are men-
tioned as separate design features in Hockett’s list.
for language. In any case, the conditions which either favor or arrest
the learning of displacement form the epigenetics underlying language
acquisition.
3.1.3 Protolanguage
yet associated with lexical meaning. However, Graf Estes, Evans, Alibali,
and Saffran (2007) also showed that infants can map meaning to newly
segmented words. Infants were able to learn object labels when the
labels were words newly segmented from a stream of continuous speech
with only TB cues to word boundaries. They did not learn labels that
were novel syllable sequences or sequences with low internal probabilities.
This shows that the computation of TBs, and hence statistical
learning, is also involved in higher-level acquisition of language (Romberg
and Saffran, 2010).
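The boundary statistic at issue in this line of work, the transitional probability between adjacent syllables, can be computed directly. The sketch below is my own toy example in the style of Saffran’s stimuli; the nonword syllables are invented. Within a word the next syllable is fully predictable, while across a word boundary it is not, which is exactly the dip a statistical learner can exploit.

```python
import random
from collections import Counter

def transitional_probs(syllables):
    """TP(x -> y) = frequency of the pair xy / frequency of x."""
    pairs = Counter(zip(syllables, syllables[1:]))
    firsts = Counter(syllables[:-1])
    return {(x, y): n / firsts[x] for (x, y), n in pairs.items()}

# A continuous stream built by concatenating three invented "words":
random.seed(0)
words = ["bi-da-ku", "pa-do-ti", "go-la-bu"]
stream = [s for w in random.choices(words, k=200) for s in w.split("-")]

tp = transitional_probs(stream)
# Within-word transitions are fully predictive; cross-boundary ones are not.
print(tp[("bi", "da")])                 # 1.0
print(tp[("da", "ku")])                 # 1.0
print(tp.get(("ku", "pa"), 0.0) < 1.0)  # True: "ku"-"pa" spans a boundary
```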
The learning constraints studied by Saffran and her co-workers imply
that certain statistical properties of language are easily detected and
learned by human infants, and moreover, these constraints may have
shaped the languages (giving rise to linguistic universals). Saffran argued
that natural languages are characterized as predictive (P) languages, in
which predictive dependencies mark phrase units. In contrast, nonpre-
dictive (NP) languages lack these dependencies; they are uncharacter-
istic of natural languages, but nevertheless form rule-based grammars.
Artificial grammars of P and NP languages may be defined on a vocabu-
lary of nonwords, and the use of the two statistical properties may be
compared in an implicit learning task.
The P languages introduced in one of her works (Saffran et al., 2008)
contain predictive dependencies between form classes according to the
following formula:
S → AP BP CP
AP → A D
BP → CP F
CP → C G
The formula represents not only the within-phrase structure, but also the
hierarchical structure of phrases within a sentence. Sentence exemplars were
constructed from classes of nonwords in such a way that the within-phrase
conditional probabilities always equaled 1.0.
The NP languages, lacking predictive dependencies, could be described
according to the following formula:
S → AP BP
AP → A D (must contain at least one)
BP → CP F
CP → C G (must contain at least one)
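To make the contrast concrete, here is a toy generator for a P-type language (my own sketch: the nonword vocabularies and the 0.5 rate of optional dependents are invented, and treating D and G as optional dependents follows Saffran’s published grammars). Because a dependent such as D never occurs without its head A, the dependent fully predicts its head, which is the predictive dependency an infant learner can exploit.

```python
import random

random.seed(2)

# Invented nonword vocabularies for each form class (illustrative only).
classes = {"A": ["biff", "hep"], "C": ["cav", "lum"],
           "D": ["klor", "neb"], "F": ["jux", "sig"], "G": ["pell", "rud"]}

def phrase(head, dep, p_dep=0.5):
    """A head word, optionally followed by a dependent word. The dependent
    never occurs without its head, so the dependent fully predicts the head."""
    out = [random.choice(classes[head])]
    if random.random() < p_dep:
        out.append(random.choice(classes[dep]))
    return out

def sentence():
    # S -> AP BP CP ;  AP -> A (D) ;  BP -> CP F ;  CP -> C (G)
    ap = phrase("A", "D")
    bp = phrase("C", "G") + [random.choice(classes["F"])]
    cp = phrase("C", "G")
    return ap + bp + cp

print(" ".join(sentence()))
```

In an NP-type language, by contrast, D could also occur on its own, so hearing a D word would carry no information about an upcoming or preceding A word.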
relevant knowledge, not just to those parts of the system underlying the
learning of new memories” (Ullman, 2004, p. 237).
In contrast to the declarative memory system, the procedural system
has the following characteristics:
and Watson (2002) also showed that the dorsal striatum is involved in
mental rotation, and Meck and Benson (2002) showed its part in timing
and rhythm; that is, apparently disparate functions which are nonetheless
assumed to be intimately related.
The dependence on the neostriatum and the basal ganglia is the rea-
son why the procedural system is considered to be phylogenetically older
than the declarative system. At the same time, the basal ganglia are widely
interconnected with multiple cortical areas, as well as highly interconnected
among themselves. They receive input projections from
frontal cortex as well as the medial temporal lobe. Output connections
via thalamus form segregated circuits/closed loops which are implicated
in the learning and control of motor programs; for example, the sequenc-
ing of motor gestures or speech sounds in language.
Among the cortical regions critical for procedural memory
are the supplementary motor area (SMA) and area F5. In
the macaque monkey, F5 is the well-established ventral premotor region
that includes mirror neurons and that is assumed to be the homologue
of BA 44 in Broca’s area in humans. The linguistic function of this area
in man is well known, but also in nonhuman primates its homologue is
clearly implicated in the learning of abstract and potentially hierarchical
structures (Conway & Christiansen, 2001). As part of the procedural sys-
tem, it is also critical for the functional maintenance of these structures.
Finally, it should be mentioned that the cerebellum is strongly impli-
cated in the coordination of skilled movements. Also, imagined hand
movements are highly dependent on the cerebellum, in particular activity
within the dentate nucleus.
both systems are undamaged they may supplement each other, particu-
larly in the learning of temporal structures. The declarative system may
sometimes start the learning of new knowledge, and at a certain level of
performance the procedural system may take over the learning process. In
that case, the procedural system learns the same or analogous knowledge,
but the retrieval of this knowledge will be different depending on which
system is activated. The two systems may also interact competitively, and
a dysfunction in one system may enhance learning in the other (see also
Chap. 8, Sect. 8.2, on interactions between the two systems and their
methodological implications for designing learning tasks).
The procedural system serves the learning and practicing of skills. More
specifically, Ullman explained that this system served “the learning of new,
and the computation of already-learned, rule-based procedures that gov-
ern the regularities of language–particularly those procedures related to
combining items into complex structures that have precedence (sequen-
tial) and hierarchical relations. Thus, the system is hypothesized to have
an important role in rule-governed structure building; the sequential and
hierarchical combination—“merging” or concatenation—of stored
forms and abstract representations into complex structures” (p. 245).
There are wide-ranging empirical demonstrations showing that the
procedural system is involved in the learning of grammar. These com-
prise the learning of sequential structures of stimuli and classification
3 The Problem of Continuity in Time and Across Domains 107
neurons in the monkey and human brain, and which is targeted at the
neural mechanisms of the “cognitive construction of action.” The lin-
guistic stimuli—the sounds and signs—are events that can be decoded
as motor actions. This decoding process requires that production and
perception are linked as expressed in the motor theory of speech per-
ception (Liberman, Cooper, Shankweiler, and Studdert-Kennedy, 1967).
Moreover, this linkage between production and perception most proba-
bly applies to all symbolic systems independent of the sensory modalities.
As will be shown below, the discovery of the so-called mirror neurons in
the ventral premotor cortex (area F5) of the macaque monkey has given
rise to claims that a substrate for this linkage does exist in the hominid
brain (Rizzolatti and Arbib, 1998): The F5 neurons discharge during
both active movements of the hand and mouth, and observation of a
similar gesture made by the experimenter. Transcranial magnetic stimula-
tion (TMS) and positron emission tomography (PET) studies also indi-
cate that systems for recognition of voluntary actions exist in humans and
involve the left hemisphere. Therefore, the development of a production/
perception system may be associated with a left-hemispheric specializa-
tion for language.
A number of research works I have reviewed deal with neural sub-
strates of cognitive and linguistic functions in adult human participants.
Now, the question is how the brains of our hominid ancestors were pre-
pared for language, and moreover, whether their brains in any way were
comparable to the brains of newborn infants today. As mentioned in the
Introduction, research on the mirror neurons and equivalent systems in
the human brain has called attention to a neural mechanism which seems
to form one of the preconditions for the use of language. The mirror neu-
ron system (MNS) may not form the complete mechanism underlying
language, but in some respects this system is shared by monkeys and
humans. Therefore, this research has testified to continuity in time (lan-
guage evolution), but also across domains (perception/action to linguistic
interactions), and in the following I shall extend the presentation started
in the Introduction and review some main findings and spot the main
theoretical issues.
The mirror neurons were first located in the convexity of the arcuate
sulcus within the premotor cortex (area F5) of the macaque monkey brain
114 Language Evolution and Developmental Impairments
According to Rizzolatti and Arbib (1998), a language system could have evolved “atop” a pre-
linguistic grammar of actions.
The arguments of homology have been strongly contradicted by Toni,
de Lange, Noordzij, and Hagoort (2008). When a feature occurs in two
related species, there exists a relation of homology if it can be shown
that the feature has been inherited from the latest common ancestor
of the two species. Homology according to this criterion has not been
confirmed; thus, Toni et al. argued that “given the lack of evidence for
the presence of mirror neurons in a premotor region in any common
ancestor of humans and macaques, it appears at least premature to claim
an evolutionary homology between macaque area F5c (the specific por-
tion of area F5, where mirror neurons are localized in macaques …) and
human BA 44-45” (p. 74).
Cytoarchitectonically, Broca’s area consists of two regions: Brodmann
areas (BA) 44 and 45. In a PET study of these regions, Horwitz et al.
(2003) showed that area 44 was activated by complex hand movements,
and controlled sensory-motor learning and integration. Area 45, however,
was activated by language output, whether spoken or signed. It may be
that only BA 44 is the true analogue of area F5c in the macaque monkey,
whereas BA 45 is a more recent structure in hominid brain evolution.
Research on the mirror system in monkey brains offered a serious
challenge to theories holding that language had evolved from vocal calls
in nonhuman primates. Instead, several researchers argued for a ges-
tural origin of language (Armstrong and Wilcox, 2007; Corballis, 2010;
Rizzolatti and Arbib, 1998). More specifically, Armstrong and Wilcox
(2007) even argued that signed languages were the original and proto-
typical languages. In line with these assumptions, the mirror system for
matching of gestures observed and gestures executed was considered a
substrate for imitation (Buccino et al., 2004). It is commonly assumed,
however, that monkeys do not imitate, although some imitation has been
observed in macaques and chimpanzees after repeated exposures to sim-
ple behaviors. As a rule, however, these are behaviors that already are in
the monkey’s repertoire (Ferrari et al., 2006).
In my view, theories of language evolution have tended to overlook a
distinction between the emergence of a general symbolic capacity and the
selection of channels of communication. Thus, according to Armstrong
Probably, therefore, this area has not evolved with the sole purpose of
serving speech, but for the production and comprehension of symbolic
communication. (See also my discussion of the gestural theory of lan-
guage evolution in the beginning paragraphs of Chap. 7.)
More recently, the controversial inclusion of Broca’s area as homolo-
gous to F5 has been challenged by Cerri et al. (2015). Despite the con-
troversies mentioned above, the MNS in humans was commonly said to
include the inferior frontal gyrus (BA44/45) in addition to the inferior
parietal lobe, the intraparietal sulcus, and the superior temporal sulcus.
They assessed the “mirror” properties of the component parts of MNS
(the premotor [vPM/BA6] and primary motor [M1] cortices in addition
to Broca’s area) in an fMRI study. Participants executed three tasks in
both observation and execution conditions, designed to test the “mir-
ror” criteria. In the execution conditions, instruction was given by object
presentation, which means that no action was imitated and no verbal
instruction was given. Activation of a language production system was
identified in a fluency task, when subjects were told to covertly think
about words beginning with a presented “phoneme.” Moreover, Cerri
et al. undertook an intraoperative neurophysiological investigation with
10 glioma-affected patients who were candidates for awake surgery. This
study gave a unique opportunity to apply direct electrical stimulation to
their exposed brains and to compare the motor output of Broca’s area
with the premotor and primary motor cortices.
The experimental tasks in the fMRI study were designed to test the
“mirror” requirement (activation during both observation and execu-
tion) and the “language” requirement (activation during phonological
fluency). The results showed that vPM/BA6 met these requirements. No
“mirror” activation was reported from BA44/Broca’s area. The intraopera-
tive study showed that vPM/BA6 and Broca’s area behaved differently.
Direct electrical stimulation of Broca’s area had no direct effect on the
phono-articulatory processes, and yet halted the naming process. This
event was interpreted as cognitive rather than motor interference, in contrast
to the speech arrest that followed stimulation of the BA6 area. The
authors concluded the two studies this way:
…the same system involved in speech production overlaps in BA6 with the
neural premotor circuit involved in the control of hand/arm actions and
belonging to the MNS, suggesting that the role of the MNS in language
may concern more the representation of motor than the semantic compo-
nents of language (p. 1025).
rejected or downplayed; the dual route model has been limited to the
processing of sound: The ventral pathway is involved in mapping sound
to meaning, while the dorsal pathway is involved in mapping sound to
articulation (Saur et al., 2008); thus, recent studies of the two pathways
have provided new insight into the neural basis of speech perception. Hickok
and Poeppel (2015) have reviewed a number of studies which relate to
sound processing in the two pathways. Some of these have addressed the
comprehension deficits in patients with Wernicke’s aphasia, and some in
subjects whose left hemisphere has been deactivated by the Wada pro-
cedure. Neuroimaging studies have shown that listening to speech acti-
vates the superior temporal gyrus, a target region in the ventral pathway.
Both types of studies have given some support to a bilateral processing
of speech, while other studies have demonstrated computational asym-
metries for the two hemispheres; that is, a left hemisphere selectivity for
temporal and a right hemisphere selectivity for spectral resolution. The
dual-route model also holds that phonological processing depends on the
superior temporal sulcus, and that lexical-semantic access depends on a
focal system which relates phonological to conceptual information,
namely the anterior temporal lobe. Other studies reviewed by Hickok
and Poeppel show that mapping from sound to action (the dorsal stream)
is not bilaterally represented but depends on a left-dominant region in
the Sylvian fissure at the temporal-parietal boundary. This region is not
speech-specific, but appears to be motor-effector-selective, and damage
to this region is associated with conduction aphasia (phonemic errors
despite good comprehension of speech sounds).
The dorsal stream in the dual-route model is clearly associated with the-
ories of MNS, because both claim that motor control is involved in speech
perception, and that a sensory-motor link is critical in comprehension of
language. However, the dual-route model, without being speech-specific,
is nonetheless modality-specific. As indicated above, the very distinction
between ventral and dorsal pathways arose in research on the neural bases
of visual perception, but the model described by Hickok and Poeppel deals
with “mapping from sound to meaning” and “mapping from sound to
action” and is therefore restricted to the auditory modality. Although the
dorsal pathway is not speech-specific, the dual-route model has given rise
to important research works on the neural basis of speech perception.
Fig. 3.2 Second formant transitions (F2) of the /d/ phoneme followed by dif-
ferent vowel sounds. Reproduced with permission from J. Acoust. Soc. Am.
27, 769 (1955). Copyright 1955, AIP Publishing LLC
nism which links perception and action in language behavior. This mecha-
nism, which has been identified as mirror neurons in the monkey and
human brain, complements the research on statistical learning by infants
and monkeys. Together they show how early vertical transmission of lan-
guage may have taken place. The research on mirror neurons has given new
attention to the role of the motor system in language, and subsequently to
the status of the classical motor theory of speech perception. Because
acoustic patterns map onto motor commands at the level of form, not
meaning, and because consonants can be identified even when their
acoustic properties are transduced into vibro-tactile patterns on the skin,
I conclude that the motor system, despite its importance in linguistic
expression, has no special or critical role in language.
The statistical learning constraints demonstrated by Saffran and others,
together with a mirror neuron mechanism, may have formed a language
facility relatively independent of socio-cultural evolution, and may have
invited and facilitated dialogues between child and caregiver throughout
the times of human evolution. Dialogues between infant and caregiver
have both served the vertical transmission of language and the strength-
ening of a basic grammatical structure.
References
Arbib, M. A. (2009). Evolving the language ready brain and the social mecha-
nisms that support language. Journal of Communication Disorders, 42,
263–271.
Ardila, A. (2011). There are two different language systems in the brain. Journal
of Behavioral and Brain Science, 1, 23–36.
Armstrong, D. F., & Wilcox, S. E. (2007). The gestural origin of language. Oxford:
Oxford University Press.
Aziz-Zadeh, L., Wilson, S. M., Rizzolatti, G., & Iacoboni, M. (2006). Congruent
embodied representations for visually presented actions and linguistic phrases
describing actions. Current Biology, 16, 1818–1823.
Bickel, B., Witzlack-Makarevich, A., Choudhary, K. K., Schlesewsky, M., &
Bornkessel-Schlesewsky, I. (2015). The neurophysiology of language process-
ing shapes the evolution of grammar: Evidence from case marking. PLoS One,
10, e0132819. doi:10.1371/journal.pone.0132819.
Ferrari, P. F., Visalberghi, E., Paukner, A., Fogassi, L., Ruggiero, A., & Suomi,
S. J. (2006). Neonatal imitation in rhesus macaques. PLoS Biology, 4,
1501–1508.
Fitch, W. T. (2010). The evolution of language. Cambridge: Cambridge University
Press.
Fogassi, L., Ferrari, P. F., Gesierich, B., Rozzi, S., Chersi, F., & Rizzolatti, G.
(2005). Parietal lobe: From action organization to intention understanding.
Science, 308, 662–667.
Galantucci, B., Fowler, C. A., & Turvey, M. T. (2006). The motor theory of
speech perception reviewed. Psychonomic Bulletin and Review, 13,
361–377.
Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). Action recognition
in the premotor cortex. Brain, 119, 593–609.
Goodale, M. A. (2000). Perception and action in the human visual system. In
M. S. Gazzaniga (Ed.), The new cognitive neurosciences (pp. 365–378).
Cambridge, MA: MIT Press.
Graf Estes, K., Evans, J. L., Alibali, M. W., & Saffran, J. R. (2007). Can infants
map meaning to newly segmented words? Statistical segmentation and word
learning. Psychological Science, 18, 254–260.
Hamzei, F., Rijntjes, M., Dettmers, C., Glauche, V., Weiller, C., & Büchel, C.
(2003). The human action recognition system and its relationship to Broca’s
area: An fMRI study. NeuroImage, 19, 632–637.
Hauk, O., Johnsrude, I., & Pulvermuller, F. (2004). Somatotopic representation
of action words in human motor and premotor cortex. Neuron, 41,
301–307.
Hauser, M. D., & Glynn, D. (2009). Can free ranging rhesus monkeys (Macaca
mulatta) extract artificially created rules comprised of natural vocalizations?
Journal of Comparative Psychology, 123, 161–167.
Hickok, G., & Poeppel, D. (2015). Neural basis of speech perception. Handbook
of Clinical Neurology, 129, 149–159.
Horwitz, B., Amunts, K., Bhattacharyya, R., Patkin, D., Jeffries, K., Zilles, K.,
et al. (2003). Activation of Broca’s area during the production of spoken and
signed language: A combined cytoarchitectonic mapping and PET analysis.
Neuropsychologia, 41, 1868–1876.
Hsu, H. J., & Bishop, D. V. (2014). Sequence-specific procedural learning in
children with specific language impairment. Developmental Science, 17, 352–365.
Kemény, F., & Lukács, Á. (2010). Impaired procedural learning in language
impairment: Results from probabilistic categorization. Journal of Clinical and
Experimental Neuropsychology, 32, 249–258.
Lashley, K. S. (1951). The problem of serial order in behavior. In L. A. Jeffress
(Ed.), Cerebral mechanisms in behavior: The Hixon symposium. New York:
John Wiley.
Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Studdert-Kennedy, M.
(1967). Perception of the speech code. Psychological Review, 74, 431–461.
Lieberman, P. (2015). Language did not spring forth 100 000 years ago. PLoS
Biology, 13, e1002064. doi:10.1371/journal.pbio.1002064.
Lum, J. A., Conti-Ramsden, G., Morgan, A. T., & Ullman, M. T. (2014).
Procedural learning deficits in specific language impairment (SLI): A meta-
analysis of serial reaction time task performance. Cortex, 51, 1–10.
Lyon, C., Nehaniv, C. L., & Saunders, J. (2012). Interactive language learning
by Robots: The transition from babbling to word forms. PLoS One, 7, e38236.
Masson, M. E. J., & Graf, P. (1993). Introduction: Looking back and into the
future. In P. Graf & M. E. J. Masson (Eds.), Implicit memory: New directions
in cognition, development and neuropsychology. Hillsdale, NJ: Lawrence
Erlbaum Associates Inc.
Meck, W. H., & Benson, A. M. (2002). Dissecting the brain’s internal clock:
How frontal-striatal circuitry keeps time and shifts attention. Brain and
Cognition, 48, 195–211.
Milner, A. D., & Goodale, M. A. (2006). The visual brain in action (2nd ed.).
Oxford: Oxford University Press.
Nieder, A. (2009). Prefrontal cortex and the evolution of symbolic reference.
Current Opinion in Neurobiology, 19, 99–108.
Petersson, K. M., Folia, V., & Hagoort, P. (2010). What artificial grammar learn-
ing reveals about the neurobiology of syntax. Brain & Language. doi:10.1016/j.
bandl.2010.08.003.
Podzebenko, K., Egan, G. F., & Watson, J. D. G. (2002). Widespread dorsal
stream activation during a parametric mental rotation task, revealed with
functional magnetic resonance imaging. NeuroImage, 15, 547–558.
Rauschecker, J. P. (1998). Parallel processing in the auditory cortex of primates.
Audiology and Neurootology, 2–3, 86–103.
Rendall, D., & Vasey, P. (2002). Metaphor muddles in communication theory
(p. 637). Commentary to S. G. Shanker & B. J. King: The emergence of a
new paradigm in ape research. Behavioral and Brain Sciences, 25, 637.
Ribeiro, S., Loula, A., de Araújo, I., Gudwin, R., & Queiroz, J. (2007). Symbols
are not uniquely human. Biosystems, 90, 263–272.
Rice, M. L., & Oetting, J. B. (1993). Morphological deficits in SLI children:
Evaluation of number marking and agreement. Journal of Speech and Hearing
Research, 36, 1249–1256.
Rizzolatti, G., & Arbib, M. A. (1998). Language within our grasp. Trends in
Neurosciences, 21, 188–194.
Romberg, A. R., & Saffran, J. R. (2010). Statistical learning and language acqui-
sition. Wiley Interdisciplinary Reviews: Cognitive Science, 1, 906–914.
Ruhlen, M. (1995). Linguistic evidence for human prehistory. Cambridge
Archaeological Journal, 5, 268–271.
Ryle, G. (1949). The concept of mind. London: Hutchinson.
Saffran, J. R. (2002). Constraints on statistical language learning. Journal of
Memory and Language, 47, 172–196.
Saffran, J. R. (2003). Statistical language learning: Mechanisms and constraints.
Current Directions in Psychological Science, 12, 110–114.
Saffran, J., Hauser, M., Seibel, R., Kapfhamer, J., Tsao, F., & Cushman, F.
(2008). Grammatical pattern learning by human infants and cotton-top tam-
arin monkeys. Cognition, 107, 479–500.
Saur, D., Kreher, B. W., Schnell, S., Kümmerer, D., Kellmeyer, P., Vry, M. S.,
et al. (2008). Ventral and dorsal pathways for language. Proceedings from the
National Academy of Sciences, 105, 18035–18040.
Savage-Rumbaugh, E. S., & Lewin, R. (1994). Kanzi: The ape at the brink of the
human mind. New York: John Wiley.
Schmandt-Besserat, D. (1986). Tokens: Facts and interpretations. Visible
Language, 20, 250–272.
Scoville, W. B., & Milner, B. (1957). Loss of recent memory after bilateral hip-
pocampal lesions. Journal of Neurology, Neurosurgery & Psychiatry, 20, 11–21.
Senghas, A. (2005). Language emergence: Clues from a new Bedouin Sign
Language. Current Biology, 15, 463–465.
Senghas, A., Kita, S., & Özyürek, A. (2004). Children creating core properties
of language: Evidence from an emerging sign language in Nicaragua. Science,
305, 1779–1782.
Shanker, S. G., & King, B. J. (2002). The emergence of a new paradigm in ape
language research. Behavioral and Brain Sciences, 25, 605–656.
Squire, L. R., Knowlton, B., & Musen, G. (1993). The structure and organiza-
tion of memory. Annual Review of Psychology, 44, 453–495.
Squire, L. R. (1993). The organization of declarative and nondeclarative mem-
ory. In T. Ono, L. R. Squire, M. E. Raichle, D. I. Perrett, & M. Fukuda
(Eds.), Brain mechanisms of perception and memory. From neuron to behavior
(pp. 219–227). New York: Oxford University Press.
Squire, L. R., & Alvarez, P. (1995). Retrograde amnesia and memory consolida-
tion: A neurobiological perspective. Current Opinion in Neurobiology, 5,
169–177.
Fig. 4.1 Marmoset monkeys (Callithrix jacchus) are small animals about 40
cm in length, weighing about 350 g, that live up to 16 years. They have
relatively small brains, but are closely related to humans in terms of structure,
behavior, and physiology. They are endemic to the Atlantic forest of north-
eastern Brazil, live in extended family groups, and share with humans a coop-
erative breeding strategy. Their temporal coordination of vocal responses
resembles vocal interactions in human linguistic dialogues. By permission of
Inbound TeleSales. iStockphoto.com.
4 Dialogues as Procedural Skills 137
room where the animals were placed in opposite corners and separated by
an opaque curtain to prevent visual contact.
Phee calls from the two monkeys that were separated by no more than
30 seconds of silence were defined as “contingent exchange calls.” There
was zero overlap among these exchange calls, which agrees with general
observations of interacting humans. By exchanging the time series of
one animal in a dyad with the time series of a randomly selected animal
in another dyad, they tested the hypothesis that zero overlap was due to
dependent vocal interactions, and not an artifact of very low rates of
responding. They found that “marmosets wait for the vocal exchange
partner to finish calling before responding” (p. 2162). The consistent
waiting period of 5–6 s was discussed as a possible effect of a monkey
resetting some planned interval when it hears the call of another.
However, “the call interval duration of an individual is, on average,
significantly shorter (median = 5.63 s) during vocal exchanges than
when the same subject produces calls without hearing an intervening
call from another individual (median = 11.53 s, p value < 0.001)”
(p. 2163). They concluded that the marmosets take turns, that one waits
until the other has finished its call, and that it then responds after an
interval which cannot be explained by a resetting of its natural rhythm.
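The logic of this shuffle control can be made concrete in a short sketch. The data format (onset/offset pairs in seconds) and all numbers below are my own illustrative assumptions, not Takahashi et al.’s data.

```python
import random

def overlap_fraction(calls_a, calls_b):
    """Fraction of A's calls that overlap in time with any of B's calls;
    each call is an (onset, offset) tuple in seconds."""
    def overlaps(x, y):
        return x[0] < y[1] and y[0] < x[1]
    if not calls_a:
        return 0.0
    hits = sum(any(overlaps(a, b) for b in calls_b) for a in calls_a)
    return hits / len(calls_a)

def shuffle_test(dyads, n_perm=100, seed=1):
    """Compare the overlap observed within each dyad against the overlap
    obtained by pairing an animal with a randomly chosen partner from
    another dyad -- the control that rules out low call rates as the sole
    reason for zero overlap."""
    rng = random.Random(seed)
    observed = [overlap_fraction(a, b) for a, b in dyads]
    null = []
    for _ in range(n_perm):
        for i, (a, _) in enumerate(dyads):
            j = rng.choice([k for k in range(len(dyads)) if k != i])
            null.append(overlap_fraction(a, dyads[j][1]))
    return sum(observed) / len(observed), sum(null) / len(null)
```

With perfectly antiphase dyads, the within-dyad overlap is zero while the shuffled pairing shows substantial overlap, which is the pattern supporting genuinely dependent vocal exchanges.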
To explain the dynamics of turn-taking, Takahashi et al. tested a model
of an oscillator-like mechanism by measuring the interval between mar-
moset 1’s first call and marmoset 2’s first call, second call, third call,
and so on. This procedure was then repeated for marmoset 1’s second
call, and by calculating the cross-correlation between the two call time
series a degree of coupling was assessed. This correlation peaked at
regular intervals, showing both that marmoset 1 produced his calls at
consistent intercall intervals and that marmoset 2’s calls fell between
marmoset 1’s calls, also at a consistent intercall interval. These results
supported the coupled oscillator model and showed that calls were
produced in between the other marmoset’s calls (antiphase) at intervals
of ≈12 s. Hence it is likely that the periodicity of one marmoset’s calls
can be modulated by the other marmoset’s calls.
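The lag analysis can be illustrated with a toy cross-correlation over binned call onsets. The call times, bin width, and durations below are invented for illustration, not the study’s data.

```python
def binarize(onsets, duration, dt=1.0):
    """Turn a list of call onset times (s) into a 0/1 series with bin width dt."""
    n = int(duration / dt)
    series = [0] * n
    for t in onsets:
        i = int(t / dt)
        if 0 <= i < n:
            series[i] = 1
    return series

def cross_correlation(x, y, max_lag):
    """Raw cross-correlation of two equal-length 0/1 series at bin lags
    0..max_lag; a peak at lag L means y's calls tend to follow x's by L bins."""
    n = len(x)
    return [sum(x[i] * y[i + lag] for i in range(n - lag))
            for lag in range(max_lag + 1)]

# Synthetic antiphase callers: marmoset 1 calls every 12 s, and marmoset 2
# calls 6 s after each of marmoset 1's calls (illustrative numbers only).
m1 = binarize([0, 12, 24, 36, 48], duration=60)
m2 = binarize([6, 18, 30, 42, 54], duration=60)
cc = cross_correlation(m1, m2, max_lag=24)
```

For this synthetic pair the correlation peaks at a 6 s lag, the antiphase point of a 12 s cycle, which is the qualitative signature the coupled-oscillator account predicts.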
surprising. However, the Bornstein et al. study supported the view that
key features of turn-taking are universally present in maternal–infant
interactions, and because these key features are also observed in vocal
turn-taking in monkeys, they may be interpreted as vestiges of the evolu-
tionary origins of language.
Turn-taking has also been observed in deaf children who are exposed
to sign language from birth (Emmorey, 2002). The “speaker” signs a
few words and the addressee similarly signs his/her answer, and, like
turn-taking by hearing babies, they follow a “minimal gap minimal over-
lap” norm. However, signed turn-taking differs from vocal turn-taking in
the way that the “speaker” cannot start the conversation unless he makes
sure that the addressee can visually attend to his behavior. The hearing
baby can initiate a conversation independently of visual contact, and
therefore starting a dialogue seems easy for typically developing children.
(As shown in the next section, the problem is more complex than it
first appears.)
Leclère et al. (2014) examined a number of mother–child interaction
studies by focusing on the concept of synchrony. Turn-taking is only one
of the terms used to refer to synchrony in mother–child interactions.
Other terms are mutuality, reciprocity, rhythmicity, harmonious
interaction, and shared affect. As in studies of turn-taking, they
focused on the interactive partnership between child and caregiver with
the dyad as the unit of analysis. They examined 61 selected works
published between 1977 and 2013 and showed that synchrony has been assessed
by 1) global interaction scales for dyads, 2) specific synchrony scales, and
3) microcoded time-series analysis. For clinicians working with language-
impaired children, it may be worthwhile to take a look into these assess-
ment tools. They are mentioned here because the focus on synchrony as
defined by Leclère et al. does add something to my discussion of turn-
taking. Thus, verbal behavior, whether spoken or signed, has a particular
rhythmicity. In Chap. 7, Sect. 7.2, you will see that hand movements
which conform to sign language have a frequency close to 1 Hz, whereas
random, nonlinguistic motor activity in infants has a much higher
frequency, around 2.5 Hz. In speech, humans generally produce syllables at a
frequency of 3 to 8 Hz. Rates above 8 Hz are generally incomprehensible
(Fujii & Wan, 2014). Due to differences in units (hand movements vs.
syllables), the natural speed of production differs between the two modalities; however,
both have a selected rhythm. Therefore, mutual adjustment of spoken or
signed frequencies may also be considered as an aspect of synchrony in
linguistic dialogues. However, the content of this term is not new; thus,
in the preceding chapter I mentioned Shanker and King, who interpreted
communicative learning by chimpanzees as the result of “interactional
synchrony,” and in Sect. 4.7 you will see that Garrod and Pickering (2004)
use “interactive alignment” to explain why some dialogues are easy.
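The frequency bands just mentioned (≈1 Hz for signing, ≈2.5 Hz for nonlinguistic infant movement, 3–8 Hz for syllables) can be recovered from a movement or amplitude envelope with a simple spectral peak estimate. The sketch below applies a crude discrete Fourier transform to a synthetic 4 Hz envelope; the signal and sampling rate are my own illustrative assumptions, not data from the studies cited.

```python
import math

def dominant_frequency(samples, rate):
    """Frequency (Hz) of the largest non-DC peak in a naive DFT of a
    real-valued envelope sampled at `rate` Hz."""
    n = len(samples)
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    # bin index k corresponds to frequency k * rate / n
    return best_k * rate / n

# A synthetic syllable-like envelope oscillating at 4 Hz, sampled at 100 Hz:
envelope = [math.sin(2 * math.pi * 4 * i / 100) for i in range(200)]
```

Run on a real signing envelope, one would expect the peak to fall near 1 Hz; on a speech envelope, within the 3–8 Hz syllable band.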
Scott-Phillips et al. (2009) argued that previous research has generally avoided the problem of how
humans achieved a capacity to signal signalhood. First, previous research-
ers have had a tendency to predefine the communication channel, a solu-
tion which begs the question because “participants know that any inputs
that come to them via the communication channel are (almost certainly)
communicative in nature” (p. 226). Second, the roles of signaler and
receiver may be predefined, and thereby the receiver will easily be primed
to interpret any behavior from the signaler as communicative. Finally,
complete avoidance of the problem takes place when the possible forms of
a communicative signal are pre-specified by the researcher. Alternatively,
Scott-Phillips et al. argued that there are two logically acceptable ways
of explaining the capacity to signal “signalhood”: either it emerged from
noncommunicative behavior or it was created de novo.
To study the way people may signal signalhood in advance of a suc-
cessful dialogue, they presented “the embedded communication game”
on networked computers. In this game, there are two players, each of
whom is presented with a “stick man” in a box containing 2 × 2 quad-
rants colored red, blue, green, or yellow, and each of the
two players can move the “stick man” around from one quadrant to the
center of any of the other quadrants. The players have no interactions
with each other, and they lack shared information, except that they see
both boxes as well as the movements made by the other player, but each
player can only see the colors of his/her own box. The players press the
space bar to finish, whereupon the colors of both boxes are revealed to
both players. If they have finished on identical colors, they earn a score
of one point.
When both players press the space bar again, a new round begins. The
colors are now differently assigned to the four quadrants, but at least one
of the four colors appears in both boxes to make possible a score of one
point in the next round. The highest number of points scored in succes-
sion defines the pair’s final score. In this situation, the participants need
not only to agree on what behavior corresponds to what meaning, but
also to find a way to signal that a certain movement is a signal. Many
pairs failed to communicate; thus, the low incidence of success showed
that it was extremely difficult to co-opt their movements for the purpose
of communication.
The fact that some pairs were eventually able to score a point in every
round shows that signaling of signalhood was possible. Thus, some pairs
converged upon a system of movements that made possible the selection
of a default color whenever available. Scott-Phillips et al. (2009) said
that “this strategy is not communicative, but it does allow pairs, once
they have converged on the same default color, to score at above chance
levels” (p. 239). In those cases when one of the players did not have the
default color, he/she performed some unexpected movements, like
oscillating sideways or looping around in the box. These movements did
not have a specific meaning, but the recipient easily interpreted them as
“no default color,” whereupon the pair settled on one of the other
colors. Hence, these movements may be said to have served to change the
direction of attention in order to initiate a dialogue.
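The “default color” strategy and its advantage over blind guessing can be sketched as a toy simulation. The box-dealing rule, the shared color ranking, and all parameters below are my own assumptions, not details reported by Scott-Phillips et al.

```python
import random

COLORS = ["red", "blue", "green", "yellow"]

def deal_boxes(rng):
    """Color the four quadrants of each player's box at random, redealing
    until at least one color appears in both boxes (as the game guarantees)."""
    while True:
        a = [rng.choice(COLORS) for _ in range(4)]
        b = [rng.choice(COLORS) for _ in range(4)]
        if set(a) & set(b):
            return a, b

def random_pick(rng, box_a, box_b):
    """Baseline: each player finishes on a random quadrant of his/her own box."""
    return rng.choice(box_a) == rng.choice(box_b)

def ranked_default_pick(rng, box_a, box_b, ranking=COLORS):
    """Converged strategy: each player finishes on the highest-ranked color
    present in his/her own box. Not itself communicative, yet enough to
    score above chance once both players share the same ranking.
    (rng is unused; kept for a uniform strategy signature.)"""
    def top(box):
        return min(set(box), key=ranking.index)
    return top(box_a) == top(box_b)

def success_rate(strategy, trials=2000, seed=0):
    """Estimate the per-round success probability of a strategy."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        a, b = deal_boxes(rng)
        wins += strategy(rng, a, b)
    return wins / trials
```

In this toy model the shared-ranking strategy succeeds markedly more often than random picking, mirroring the quoted observation that a converged default color lets pairs score above chance without yet being communicative.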
Scott-Phillips et al. concluded that the players, when successful, solved
the problem of signaling signalhood by “a bootstrapping process, and
that this process influences the final form of the communication system”
(p. 226). Similarly, it may be assumed that early humans found different
ways of initiating communication by trial and error in a bootstrapping
fashion.
In natural languages, there are other means of signaling signalhood,
both among early hominids, and among humans today. The way we
address another person in order to initiate a dialogue, or just to ask a
question or make a short statement, is a way of signaling signalhood in
an everyday setting. For some children, this may be an overly demanding task
that prevents important communication. In most linguistic societies,
there seems to be a social “address code” that must be learned in order to
participate in a dialogue, and the dialogue itself may include a number of
skills that are the products of enduring community practice. I believe that
dialogues in prehistorical times, and in particular settings also in modern
times, may have been ritualistic and served religious practices. Also, I will
add that “small talk” may include a number of implicit rules that govern
interactions among humans today.
In an asymmetric relationship, such as the one between mother and
child, it may seem that one party, the mother, initiates the dialogue. In
other words, in dialogues where there exists a state of nonparity between
the interlocutors, initiation may be the effect of a conscious decision on
the adult’s part. This means that the vertical transmission of language
is entirely the responsibility of the adult members of the community.
However, this is also an oversimplification, because the mechanisms
underlying communicative interactions between child and caregiver
mean that signaling signalhood may take place both ways, from caregiver
to child and vice versa. The gestural and vocalizing behavior of the child/
infant may “invite” the caregiver to join the dialogue, but this process is
subject to certain constraints, mentioned in the preceding chapter, Sect.
3.2. Both parties must possess what I have called an access code to early
dialogues.
conditions in isolated deaf families where the children lack normal exposure to speech or sign language. They formed a kind of pre-linguistic dialogue, developed through the family members’ own efforts and certainly without formal instruction. Hence there may have been wired-in abilities that worked to their advantage in learning to communicate with their hands. It is the seemingly easy way of developing early
dialogues that makes these observations from Nicaragua important. Of
course, there are other easily learned dialogues, which I have described
above; for example, early vocal interactions between mother and child (in
a hearing family). Other dialogues within specific behavioral domains,
like simple types of bartering, may require greater effort to learn, but
once acquired they are easily practiced by the interlocutors. In general,
there are dialogues which run with a certain degree of automaticity, and which therefore draw little support from declarative memory. Instead, they exemplify procedural skills.
Some years ago, Garrod and Pickering (2004) set out to explain
why “conversation is so easy.” In view of what has been said about SLI
children, I think Garrod and Pickering’s statement may be changed to
assert that “conversation is so easy for typically developing children.”
In their view, dialogues are so easy because of the “processing mecha-
nism that leads to alignments of linguistic representations between
partners” (p. 8). They say that conversational partners generate their
utterances on the basis of what they have just heard, and by asking a
question, the speaker has already specified “the high level goal for his
addressee’s next utterance” (p. 9). Garrod and Pickering’s description
of interactive alignment is comparable to Selten and Warglien’s coordination game, wherein one player adjusts the code to the other player.
The latter work, however, is more specific on the learning of a common
code, which is described as incremental and rule-governed. Therefore,
I will argue that it is the building of dialogues as procedural skills that
makes them so easy for typically developing children and adults. In
dialogues, therefore, partners build an implicit common ground for
communicative interactions, ensuring parity between input and output messages. In a more recent work, Menenti, Pickering, and Garrod (2012) argue that interlocutors prime each other at different levels of representation.
References
Anderson, J. R. (1976). Language, memory and thought. Hillsdale, NJ: Lawrence
Erlbaum Associates Inc.
Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
Bickerton, D. (2014). More than nature needs: Language, mind and evolution.
Cambridge, MA: Harvard University Press.
Borjon, J. I., & Ghazanfar, A. A. (2014). Convergent evolution of vocal coop-
eration without convergent evolution of brain size. Brain, Behavior and
Evolution, 84, 93–102.
Bornstein, M. H., Putnick, D. L., Cote, L. R., Haynes, O. M., & Suwalsky, J. T.
D. (2015). Mother-infant contingent vocalizations in 11 countries.
Psychological Science, 26(8), 1272–1284. doi:10.1177/0956797615586796.
Buccino, G., Vogt, S., Ritzl, A., Fink, G. R., Zilles, K., Freund, H.-J., et al.
(2004). Neural circuits underlying imitation learning of hand actions:
An event-related fMRI study. Journal of Cognitive Neuroscience, 16,
114–126.
Corballis, M. C. (2010). Mirror neurons and the evolution of language. Brain
& Language, 112, 25–35.
Emmorey, K. (2002). Language, cognition, and the brain: Insights from sign lan-
guage research. Mahwah, NJ: Lawrence Erlbaum Associates.
Fujii, S., & Wan, C. Y. (2014). The role of rhythm in speech and language reha-
bilitation: The SEP hypothesis. Frontiers in Integrative Neuroscience, 8, 777.
Garrod, S., & Pickering, M. J. (2004). Why is conversation so easy? Trends in
Cognitive Sciences, 8, 8–11.
Hudson, S., Levickis, P., Down, K., Nicholls, R., & Wake, M. (2015). Maternal
responsiveness predicts child language at ages 3 and 4 in a community-based
sample of slow-to-talk toddlers. International Journal of Language &
Communication Disorders, 50, 136–142.
This chapter takes up an array of problems that are among the most difficult in all fields of research related to language. Meaning in language belongs
to the subcomponent of semantics and has been discussed within differ-
ent conceptual frameworks. Within formal semantics, it is argued that
meaning in language is propositional; for example, the truth value of “the glaciers in Greenland are melting” determines the meaning of the
proposition. A proposition links the “world” to the truth value in the
mind of the speaker; thus, formal semantics has provided a system for
analyzing propositions to deal with problems of meaning in language.
It may be expected that a chapter about meaning in language should deal in more detail with formal semantics and propositional meaning.
Thus, Fitch (2010) stressed that “propositional meaning is another dis-
tinct design feature of language: a central component of semantics that
had to evolve for language in its modern sense to exist” (pp. 121–122).
He argued that music possesses both “phonology” and “syntax,” but can-
not express propositional meaning. This is a feature which belongs to
human language only.
The problem is whether analysis of propositions as described in formal
semantics presupposes a metalinguistic ability which is associated with
It has been commonly assumed that only verbal stimuli are subject to cat-
egorical perception, which is therefore an example of categorization that
takes place in a language domain only. As I will show later, this assumption lacks support in contemporary research. First, however, I will
briefly present a reminder of what categorical perception is.
The expressive form of a linguistic symbol—that is, the exact form of
manual or vocal articulation—will differ between people, and for the
same individual it may also differ from time to time. The articulatory and
acoustical expression of the English word pen will differ between individuals, as will the exact manual expression of the sign for pen
in sign language. These differences are within-category variations that do
Phonemes and colors, which are the products of categorical perception, are
specific examples of concepts studied in cognitive psychology. I will now
turn to the general study of concepts in cognitive psychology. In this field,
Thus, neural substrates underlying the implicit system are very similar to
the neural basis of the procedural system. Also, the neural substrates of the
explicit system largely coincide with the substrates underlying the declara-
tive system (see Chap. 2, Sect. 2.2). However, there are important differ-
ences, in particular since the declarative system mediates verbal expression
by humans; that is, linguistic behavior which has not been demonstrated
in explicit categorization by macaques and capuchin monkeys. Moreover,
the procedural system is generally involved in serial and skill learning, yet
category learning of the type studied by Smith, Crossley, et al. (2012) may
share similar mechanisms with procedural learning.
Implications for the evolution of concept or category learning are
important. Studies by Smith et al. mentioned above demonstrate a pos-
sible line in evolution from the nonanalytic vertebrate categorization in
pigeons to the explicit dimensional analysis in nonhuman primates, and
to the declarative categorization by human subjects. However, nonhuman
primates, in spite of their commitment to dimensional analysis, do not
show declarative categorization. Their capacity for explicit categorization
can be interpreted as a pre-adaptation for the declarative learning of con-
cepts or categories by humans. The evolutionary basis of categorization
or concept learning in humans shows itself in the way that we are capable
of both explicit and implicit categorization. The question is how it has
been possible for humans to capitalize on the neural mechanisms that
emerged in primate evolution. The studies of Smith et al. reported above
demonstrated some important mechanisms for explicit categorization, but did not account for the declarative aspects of concept learning, which may be the final attainment in human cognitive evolution. The nonanalytic
vertebrate categorization and the explicit dimensional analysis in nonhu-
man primates are all nondeclarative capacities, which is why I consider
them to be pre-semantic forms of “meaning”; that is, meaning is implicit in
the act of categorization. Lexical meaning, as studied in the tradition of
Lyons, is associated with declarative memory. Thus, the transition from
nondeclarative to declarative memory also involves a major leap in the evolution of meaning in language, and in my view this transition has been
facilitated by the invention of writing. Therefore, language in preliter-
ate/oral cultures may have represented a transitional stage between pre-
semantic and semantic forms of linguistic communication.
5 Evolving Meaning in Language 175
control have also renewed interest in the cognitive and linguistic role of
the prefrontal cortex. I think both have substantial relevance to the brain
substrates underlying lexical meaning in language.
In this work, I place less emphasis on the distinction between a “disembodied” and an “embodied” framework. What matters is a cognitive
neurobiological approach, which deals with semantic meaning in terms
of category learning and conceptual knowledge. In the following, I will
discuss the role of particular neural substrata for the acquisition and use
of such knowledge.
abstract concept. Friederici and Singer (2015) point out that neurons respond to categories, not to specific members of that category. However, recognition of a person in a picture is information-specific and therefore refers to a specific member. Friederici and Singer called this process
sparse coding, and the probability of encountering such units by chance
is small. It is made possible by “iterative recombination of feature-specific
responses” along different pathways which originate in the MTL. For
the brain it is like answering “20 questions” (actually many more) in a
few milliseconds. Combinations and recombination of feature-specific
responses run according to the same principles underlying concept for-
mation in both language and nonlanguage domains, and may therefore
be said to serve as a pre-adaptation to language.
Arbib (2009) has argued that “the first creatures who had a language-ready brain did not yet have language” (p. 264). Thus, the critical substrata for comprehension of lexical meaning may have been in place in early hominids and early Homo sapiens, but due to insufficient epigenesis these substrata may have remained inoperative. Sociocultural evolution may have provided a type of environmental exposure which, at some point in the history of mankind, made these substrata operative.
Semantic selection and integration studied by Samson and others may
not have been in place before this point in history. As long as the study of
underlying processes required verbal expressions, they could not be demonstrated in animals. However, methodological constraints may not have been the only reason why executive control of semantic processing cannot be demonstrated in nonhuman subjects. Maybe semantic declarative
meaning in language has more aspects which are not explicitly addressed
in the neurobiological studies mentioned above. Here, we may easily run
into some speculations. However, I assume that competent users of lan-
guage today have a metalinguistic capacity which enables them to treat
linguistic signals as “objects” in their own right, and due to this capacity
they can also apprehend the reciprocity aspect of language.
In the following, I will deal with some aspects of cultural evolution
that may cast some light on the growth of new communicative systems
and the emergence of lexical semantic meaning in language. I cannot
say for sure to what extent these aspects also are involved in the rise
of metalinguistic knowledge, but I assume that they form part of the
critical preconditions for such knowledge. First of all, I will call attention
to some important community factors; prime among them are the size
of linguistic communities and the frequency of interactions within and
between such communities. In short, these factors contributed to the
diversity of expression that is a prerequisite to symbolic/lexical meaning.
never directly interacted” (Fay et al., 2010, p. 361). In the isolated pair
condition each participant interacted with the same partner throughout
the game.
By studying drawing similarities in the two conditions, Fay et al.
(2010) could test the different predictions made by the individual-
istic and collaborative models. A certain alignment of drawings was
expected across games; thus, in the isolated pair condition the drawings
in Round 7 should be more similar than the drawings in Round 1. In
the community condition, however, the “target” of alignment was the
community, rather than an individual partner; therefore, interaction with different community members would be crucial for the establishment of a shared communicative system. Fay et al. then compared
the degree of alignment across noninteracting community members
(persons from the same community who did not interact in Round
1 and Round 7) with the degree of alignment across noninteracting
isolated pairs. At Round 7 drawings among noninteractive community
members had become increasingly similar, whereas drawings among
noninteractive isolated pairs had become increasingly dissimilar.
Members of the isolated pair condition had established a local sign
system, whereas members of the community condition had established
a global sign system.
The diversity of interactions among participants in the community
condition was critical for the establishment of a new communicative
system. On this account, it seems that the number of interacting com-
munity members is a critical factor. The community of deaf children in
the Managua primary school for special education grew rapidly from
50 to 200 and more in the early 1980s. During this decade a new sign
language emerged with a highly developed vocabulary and grammatical
structure. Skills in signing varied with year of entry into the community (more complex signing was observed among children who entered after 1983). Taking this into consideration, Senghas et al. (2004) found that
the younger group signed more rapidly and produced a richer and gram-
matically more complex language. The community of deaf people among
the Al-Sayyid Bedouins was smaller and grew from 10 to 150 over three
generations. Therefore, BSL emerged more slowly, and has been around
about twice as long as NSL (cf. Senghas, 2005).
as well as the older sign languages. As pointed out above, the prosodic and
paralinguistic features determine the illocutionary force of an utterance;
thus, this also occurs in sign language. Children are generally very sensi-
tive to these aspects of language, and therefore I assume that nonmanual
components have effectively influenced communication among the early
Nicaraguan and Bedouin signers, as well as among language users in ancient
history. But this is not to say that illocutionary force has been conceptu-
ally distinguished from the lexical meaning of the utterance.
As far as I know there is nothing in the reports about the two sign
languages which indicates an ability among the community members to
decontextualize the signed lexemes; that is, a metalinguistic ability to deal
with the new language as an object of reflection. In the early days of the
new sign languages, illocutionary force, despite its effect on behavior, may
not have been properly understood apart from the “literal meaning” of
the signs. Metalinguistics, and hence acknowledgment of reflexivity, both
in spoken and sign languages, belong to an advanced stage of evolution
that emerged with the development of writing.
References
Arbib, M. A. (2009). Evolving the language ready brain and the social mechanisms
that support language. Journal of Communication Disorders, 42, 263–271.
Ashby, F. G., Alfonso-Reese, L. A., Turken, A. U., & Waldron, E. N. (1998). A
neuropsychological theory of multiple systems in category learning.
Psychological Review, 105, 442–481.
Bunge, S. A., Wendelken, C., Badre, D., & Wagner, A. D. (2005). Analogical
reasoning and prefrontal cortex: Evidence for separable retrieval and integra-
tion mechanisms. Cerebral Cortex, 15, 239–249.
Cardillo, E. R., Aydelott, J., Matthews, P. M., & Devlin, J. T. (2004). Left infe-
rior prefrontal cortex activity reflects inhibitory rather than facilitatory prim-
ing. Journal of Cognitive Neuroscience, 16, 1552–1561.
Clifford, A., Franklin, A., Davies, I. R. L., & Holmes, A. (2009).
Electrophysiological markers of categorical perception of color in 7-month-old infants. Brain and Cognition, 71, 165–172.
Deacon, T. (1997). The symbolic species: The co-evolution of language and the
brain. London: Penguin Books.
Devlin, J. T., Matthews, P. M., & Rushworth, M. F. (2003). Semantic process-
ing in the left inferior prefrontal cortex: A combined functional magnetic
resonance imaging and transcranial magnetic stimulation study. Journal of
Cognitive Neuroscience, 15, 71–84.
Emmorey, K. (2002). Language, cognition, and the brain: Insights from sign lan-
guage research. Mahwah, NJ: Lawrence Erlbaum Associates.
Eysenck, M. W., & Keane, M. T. (2000). Cognitive psychology: A student’s handbook. Hove: Psychology Press.
Fay, N., Garrod, S., & Roberts, L. (2008). The fitness and functionality of cul-
turally evolved communication systems. Philosophical Transactions of the
Royal Society B-Biological Sciences, 363, 3553–3561.
Fay, N., Garrod, S., Roberts, L., & Swoboda, N. (2010). The interactive evolu-
tion of human communication systems. Cognitive Science, 34, 351–386.
Fitch, W. T. (2010). The evolution of language. Cambridge: Cambridge University
Press.
Fodor, J. A. (1983). The modularity of mind. Cambridge, MA: MIT Press.
Franklin, A., Pilling, M., & Davies, I. R. L. (2005). The nature of infant colour
categorization: Evidence from eye-movements on a target detection task.
Journal of Experimental Child Psychology, 91, 227–248.
Friederici, A. D., & Singer, W. (2015). Grounding language processing on basic
neurophysiological principles. Trends in Cognitive Sciences, 19, 329–338.
Fuster, J. M. (2002). Frontal lobe and cognitive development. Journal of
Neurocytology, 3–5, 373–385.
Galantucci, B. (2005). An experimental study of the emergence of human com-
munication systems. Cognitive Science, 29, 737–767.
Grice, H. P. (1957). Meaning. Philosophical Review, 66, 377–388.
Hoffman, P., Lambon Ralph, M. A., & Rogers, T. T. (2012). Semantic diversity:
A measure of semantic ambiguity based on variability in the contextual usage
of words. Behavior Research Methods, 45, 718–730.
Jung-Beeman, M. (2005). Bilateral brain processes for comprehending natural
language. Trends in Cognitive Sciences, 9, 512–518.
Kemmerer, D., & Gonzalez-Castillo, J. (2010). The two-level theory of word
meaning: An approach to integrating the semantics of action with the mirror
neuron theory. Brain and Language, 112, 54–76.
Lyons, J. (1977). Semantics (Vol. 1). Cambridge: Cambridge University Press.
Manns, J. R., & Eichenbaum, H. (2006). Evolution of declarative memory.
Hippocampus, 16, 795–808.
Mareschal, D., & Quinn, P. C. (2001). Categorization in infancy. Trends in
Cognitive Sciences, 5, 443–450.
Ong, W. (1982). Orality and literacy: The technologizing of the word. London:
Methuen.
Parry, A. (1971). Introduction. In A. Parry (Ed.), The making of Homeric verse: The collected papers of Milman Parry. Oxford: Clarendon Press.
Pylyshyn, Z. (1984). Computation and cognition. Cambridge, MA: MIT
Press.
Quian Quiroga, R., Reddy, L., Kreiman, G., & Fried, I. (2005). Invariant visual
representation by single neurons in the human brain. Nature, 435,
1102–1107.
Samson, D., Connolly, C., & Humphreys, G. W. (2007). When “happy”
means “sad”: Neurophysiological evidence for the right prefrontal cortex
contribution to executive semantic processing. Neuropsychologia, 45,
896–904.
Scott-Phillips, T. C. (2015). Meaning in animal and human communication.
Animal Cognition, 18, 801–805.
Seger, C. A. (1994). Implicit learning. Psychological Bulletin, 115, 163–196.
Senghas, A. (2005). Language emergence: Clues from a new Bedouin Sign
Language. Current Biology, 15, 463–465.
Smith, J. D., Ashby, F. G., Berg, M. E., Murphy, M. S., Spiering, B., Cook,
R. G., et al. (2011). Pigeons’ categorization may be exclusively nonanalytic.
Psychonomic Bulletin and Review, 18, 414–421.
Smith, J. D., Berg, M. E., Cook, R. G., Murphy, M. S., Boomer, J., Spiering, B.,
et al. (2012). Implicit and explicit categorization: A tale of four species.
Neuroscience and Biobehavioral Reviews, 36, 2355–2369.
Smith, J. D., Crossley, M. J., Boomer, J., Church, B. A., Beran, M. J., & Ashby,
F. G. (2012). Implicit and explicit category learning by capuchin monkeys
(Cebus apella). Journal of Comparative Psychology, 126, 294–304.
Thiel, A., Haupt, W. F., Habedank, B., Winhuisen, L., Herholtz, K., Kessler, J.,
et al. (2005). Neuroimaging-guided rTMS of the left inferior frontal gyrus
interferes with repetition priming. NeuroImage, 25, 815–823.
Toni, I., de Lange, F. P., Noordzij, M. L., & Hagoort, P. (2008). Language
beyond action. Journal of Physiology – Paris, 102, 71–79.
Wagner, A. D., Pare-Blagoev, E. J., Clark, J., & Poldrack, R. A. (2001).
Recovering meaning: Left prefrontal cortex guides controlled semantic
retrieval. Neuron, 31, 329–338.
Whorf, B. L. (1956). Language, thought and reality: Selected writings of Benjamin Lee Whorf. New York: John Wiley.
Wilkinson, K. M., & Hennig, S. (2007). The state of research and practice in
augmentative and alternative communication for children with developmen-
tal/intellectual disabilities. Mental Retardation and Developmental Disabilities
Research Reviews, 13, 58–69.
6
Literacy and Language
describe the major writing systems where the written characters are said
to represent different levels of language. Here the question to be dis-
cussed is whether there exists an optimal writing system/orthography.
1. Drawings
2. Ideographs
3. Logographs
4. Syllabic scripts
5. Alphabets
the Big Bird, had just said. In the paraphrase trials, Teddy’s task was to
say what the other character wanted, and in these trials it did not matter
whether he used the same words or not. Practice trials were given, and the
order of verbatim and paraphrase trials was counterbalanced. Children below the age of four were unable to answer both the verbatim and the paraphrase trials correctly. Three-quarters of the four- and five-year-olds correctly judged the paraphrase trials but failed on the verbatim trials; only children of six years or older were capable of judging both types of trials correctly. Olson (1998) concluded that the youngest children
showed a “conflation of what is said with what is meant” (p. 127).
In the Torrance et al. study, age and formal schooling co-varied. Thus,
we cannot say what caused the ability to distinguish verbatim from
intended meaning, but this distinction is nonetheless a prominent char-
acteristic of most literate ways of thinking. Is it a universal characteristic
of literacy and thus independent of signaling modality? I do not know
of any studies of sign users that address the same problem. A distinction
between what is a “verbatim” message of signs and what is meant by the
message will be equally important among users of a sign language. In sign
language, the “verbatim” message will correspond to the specific sign-
expression, while the intended meaning may be a different one. We may,
therefore, talk about a general distinction between the verbatim meaning
associated with the form of expression on the one side and the intended
meaning as an abstraction from the form of expression on the other. The
ability to make this distinction is a cognitive achievement that is observed
once the child is old enough and has been adequately exposed to language, and this exposure may even require reading instruction. The distinction is also implicit in Lyons’s description of the reciprocity of language, and furthermore I find it to be a functional prerequisite to
the acquisition of a ToM.
Literacy may have affected the ability to comprehend metaphoric
and figurative language, but the mechanisms for this effect are largely
unknown. Historical changes in language capacity are mainly a matter
of speculation. Thus, we have no direct evidence for the way language
changed as an effect of the introduction of writing in ancient Greece,
but classical literacy studies of the transition from an oral to a written
culture (Goody and Watt, 1968; Havelock, 1976, 1982; Ong, 1982)
have initiated an interesting debate on the issue (see also Olson, 1998),
parts of which were presented in the preceding chapter, Sect. 5.3. Here
I argued that the translatability of languages depended on the capacity
to read and write. This capacity means that language can be treated as a constellation of “objects,” rather than as vocal (or manual) performances.
In more recent years, the quest for other empirical evidence has trig-
gered new research within psychological, educational and biological sci-
ences. In line with the issue raised by Ong and others, modern research
has addressed the problem of the cognitive consequences of illiteracy.
Performance on a number of cognitive tests, along with brain scanning data, from illiterate persons has been compared to similar data from literate persons. The problem with these studies is the definition of illiteracy. Commonly, illiteracy has been defined by lack of formal schooling
(Kosmidis, Tsapkini, Folia, Vlahou, and Kiosseoglou, 2004; Scribner &
Cole, 1981). In Kosmidis et al.’s work, the illiterate group consisted of
elderly women (M = 71.95 years) who never attended school due to liv-
ing in a poverty-stricken, agrarian society. They were able to name a few
letters, and according to their self-reports, illiteracy did not prevent satisfactory integration in the local community. Healthy individuals who are illiterate, but have lived in a literate society for years, and who have
been able to take care of themselves in nondemanding manual work, may
yet have been exposed to a literate world directly or indirectly in interac-
tions with other community members. The question is whether illiteracy
should be defined merely by educational criteria, or by educational crite-
ria in combination with socio-cultural characteristics.
In principle, there are two reasons for illiteracy in contemporary societies. 1) Social reasons: Poverty (as with the illiterate group in Kosmidis et al.), absence of schools, sociocultural factors that cause disapproval of
education, child labor, and so on. 2) Personal reasons: Intellectual disabil-
ity, motor and sensory disorders, various central nervous system patholo-
gies that interfere with learning and language acquisition. Ardila et al.
(2010) in a recent literature review, argued that the “two main classes of
reasons for illiteracy present potential confounders for research” (p. 690):
People that are illiterate due to social reasons generally belong to a lower
socioeconomic class, have more health problems and are less exposed to
media of communication. Those who are illiterate due to personal reasons
is missing. Reis, Petersson, Castro-Caldas, and Ingvar (2001) did not find any difference in the ability to name real objects, but illiterates performed more poorly in a task of naming photographs and showed an even stronger disadvantage in naming drawings. Similarly, illiterates have great difficulties in copying Bender drawings. In general, visually guided hand motor
behavior seems to depend on the acquisition of literacy.
Do differences in vocabulary size rest on a language learning disorder in illiterate persons? In Chap. 2, I mentioned Baddeley, Gathercole,
and Papagno (1998) who argued that the phonological loop, which is a
component of the Baddeley and Hitch working memory model, serves
as a language learning device. The loop has three subcomponents: one
of them is the phonological store; spoken words or pseudo-words have
direct access to this store which holds the memory traces for a few sec-
onds. Words are therefore soon forgotten unless they are refreshed in a
subvocal rehearsal system, another subcomponent of the loop which also
receives input from a grapheme-phoneme transformation unit. Thus,
memory of visually presented words and pseudo-words also depends on the subvocal rehearsal system, as well as on O – P mapping, which is not learned by illiterate people. The capacity of the phonological
loop is commonly assessed by the short-term memory span (for words
and digits), but more directly by the nonword repetition test. Auditory
presentation of stimuli may not require O – P mapping, and therefore
illiterates should not be disadvantaged. However, learning of O – P map-
ping may affect verbal short-term memory regardless of the modality
of the presented stimuli. Both Castro-Caldas et al. and Kosmidis et al.
have demonstrated that literacy influences the capacity of the phono-
logical loop, as measured by nonword repetition tasks. It does not matter
whether we emphasize literacy or schooling because both imply the learn-
ing of O – P mapping.
Kosmidis, Zafiri, and Politimou (2011) administered five tests of working memory and attention span to four groups of participants: illiterate, functionally illiterate, self-educated literate, and school-educated literate. The literate groups outperformed the illiterate groups in the digit span forward and backward, sentence span, and spatial span backward tests,
whereas the literate and illiterate participants did not significantly differ
on the spatial span forward and the “Remembering a New Route” tasks.
In the literate groups, schooling gave an advantage only in the digit span
backward test, whereas “illiterate and functionally illiterate groups were
indistinguishable from each other.” The authors therefore concluded that
differences in working memory performance can be attributed to literacy
per se and not the effects of schooling.
The studies reviewed in this section give important information about
functional differences on a number of cognitive measures, and on some
brain scanning and neurobiological measures between literates and illit-
erates. [For a more detailed discussion of this research, see Ardila et al.
(2010)]. The studies reviewed here do not show any historical effects of literacy on the evolution of language; however, they cast light on the
cognitive changes which are most likely the results of learning to read.
The effects of literacy in the community also depend on strategies in
reading education; that is, on cultural conceptions of what reading are
(Sects. 6.5.2 and 6.6 below).
Olson (1998) stressed that written languages are models of spoken lan-
guages, but they are also communicational systems in their own right.
Hence the purpose of writing will be to communicate semantic meaning
to the reader of written texts. However, writing is also a technology by
way of which spoken language is transformed into visual characters and
vice versa (O – P mapping). For the schooled literate, it generally makes
no difference whether we talk about written language as a communica-
tional system or a technology; he/she is making use of both aspects of
reading.
To understand the development of literacy, historically and as learn-
ing achievements by children, it may be wise to keep the two aspects of
written languages apart. The technological aspect (grapheme–phoneme
conversion) is commonly the first skill taught in schools. In some cultural
and religious contexts, this skill is considered to be the main target for
reading instruction. This is the reason why reading aloud may have been
encouraged, and may sometimes have become a necessity. Islamic fami-
lies in the West (and also in the Middle East) often send their children
to Quran schools where they are instructed to read the Holy Scripture
in Arabic. In many cases, the families and hence their children speak a
different language themselves, and may be ignorant of Arabic. The child
may still be taught to read verses from the Quran aloud. He/she may not
understand what the verses say (reading without interpretation), but the
Imam teacher tells him/her about the meaning of the text. This shows that it
is quite possible to teach a child to read a text aloud in a different and
incomprehensible language. When the verses are spoken with the right
voice, that is, with the right prosodic features, and perhaps also with the
right rhythmic movements of the body, the reading is valued as a sacred act. (Of course,
reading without understanding may also take place when the text is based
on the child’s own language.) This performance shows that the child has
acquired the technological aspects of reading without interpretation; that
is, the O – P mapping runs practically without error.
Also, historically the mastery of reading as a technological skill has
been important. The Christian Bible tells the story of the Ethiopian
eunuch who was busy reading the prophet Isaiah. When asked by Philip,
the evangelist, whether he could understand the text, he replied, “How
can I unless someone guides me?” This example shows that reading in
a technological sense may have preceded interpretation. The history of
Christianity, Judaism, and Islam is full of examples of reading practices
wherein technological proficiency has been a target of learning in its own
right, or a skill that has been appreciated on a par with understanding of
the text.
Illiterate people today may, despite their lack of reading competence,
understand the general communicative function of writing, and they may
positively evaluate the importance of reading. Illiteracy today is mostly
due to poverty and lack of educational opportunities. In the early days of
writing, however, written texts may also have been looked upon as magic,
and few people may have understood their communicative function. For
centuries thereafter, some people, regardless of educational opportunities,
may still have failed to understand the idea of writing. When adopted,
after maybe years of apprenticeship, writing was seen by many as an exten-
sion of speech. The fact that written texts were generally read aloud, for
example, by monks reading the holy texts in the medieval monasteries,
shows that writing was taken as a representation of speech. Classical lit-
erature includes some counterexamples though. Thus, in St. Augustine’s
Confessions, the famous bishop Ambrose of Milan was said to read by
scanning the page rapidly with his eyes while his tongue remained silent.
This observation apparently surprised and impressed Augustine, because
scholars at that time generally read aloud. The problem of whether texts
were read silently or aloud in antiquity is thoroughly discussed by Knox
(1968). In any case, the misconception of writing as a representation of
speech lived on among linguists until the modern era (Bloomfield, 1933).
However, representation of speech at the level of phonemes or syllables is
only partly obtained in most systems of writing. In this way, logographic
writing as in Chinese is different from alphabetic writing. The level and
form of representation defines the technology of writing, not its function
as a system of communication. As long as writing was seen as an exten-
sion of speech, it also came with the same authority as speech, and when
presented as the words of God in the great religions, writing created a
feeling of awe and total submission. Writing was considered as conserved
6 Literacy and Language 211
speech, and therefore, the messages it contained did not wane, but had
“eternal validity.”
The invention of writing as a technology, and subsequently reading
as a technological skill, did not change language. This took place when
written languages became communicational systems in their own right,
fully capable of conveying semantic meaning. Writing made languages
translatable, and thereby literacy also affected the evolution of language.
This is why I consider the distinction between writing as a technology
and writing as a linguistic/communicational system so important. Does
this distinction apply to written languages only, or is it equally applicable
to a discussion of spoken languages?
The examples I have mentioned above—one from reading of the
Quran and one from reading of the Christian Bible—show that historically
“technical” reading may have preceded interpretation. The concept
of reading technology may not be applicable to speech, yet spoken lan-
guages include procedural skills that form the preconditions for commu-
nication about semantic meaning. In preliterate societies, however, we
may speak about “oral literature” in the sense discussed by Ong (1982).
When recited in public this literature is being “read,” and the act of reci-
tation is itself an art, highly valued in the oral cultures.
In oral cultures the meaning of a recited poem may not have been
apprehended apart from the expressive form of recitation. The recitation
did not necessarily involve interpretation; rather, interpretation tended to
be a matter for others, say the chief of the tribe, the priest, the Imam, the
elders of the group, or even the extended community. The indigenous
people of Rapa Nui tell how important agreements, transactions, and so on
were conserved by public announcement in the extended group, and when
recited by a member of the group, consensus was required for assigning
an interpretation.
The transition to literacy has been difficult due to both cultural precon-
ceptions of reading and to constraints and cognitive deficits in the
individual. The clinical term generally used for the latter type of difficulties
genes. The candidate genes mentioned above are involved in fetal brain
development, in particular neuronal migration processes; DYX1C1 also
affects cognitive skills like “one minute reading,” “digit rapid naming” and
“nonword repetition.” This means that the candidate genes are related to
the control of behavioral domains which extend beyond reading and co-
develop with reading ability.
Hyperlexia. The most severe forms of reading difficulties are generally
associated with dyslexia. Could there be nondyslexic forms of reading
difficulties? In reading there is always a trade-off between speed and
accuracy. In other words, there is a trade-off between the speed of O – P
matching (technologically correct reading) and O – S mapping (reading
with interpretation). Thus, we have fast readers who do not grasp much
of the meaning of the text, and we have slow readers who understand
meaning very well. We will also find a number of transitions between the
two extreme cases.
The examples I have described above show that reading without
interpretation is a phenomenon which in some communities has been
socially and culturally accepted (and maybe even encouraged). However,
reading without interpretation may also take place among children in a
clinical setting; that is, children who may have developmental delays or
belong to the autism spectrum. Silberberg
and Silberberg (1967) were the first researchers to describe these cases as
hyperlexia; that is, decoding ability that is out of proportion with comprehension
ability. Also, hyperlexia often exemplifies cases of precocious reading
by children who have been obsessed by letters and numbers from an
early age. Because precocious reading without lexical comprehension has
been associated with autism, the researchers have disagreed on whether
to consider hyperlexia a disability or a superability. Without going into
this discussion I shall briefly mention the work of Grigorenko, Klin, and
Volkmar (2003) who reviewed the literature available at that time, and
who concluded that “hyperlexia is a superability demonstrated by a very
specific group of individuals with developmental disorders, rather than a
disability exhibited by a portion of the general population” (p. 1079). As
far as I have seen, the clinical status of hyperlexia still remains undecided.
The observation that decoding ability by hyperlexic persons is out of
proportion with comprehension ability is a matter which deserves careful
heavily taxed than the brain regions which control the O – P mapping.
In Chinese, however, the division of labor between the two systems is
more equitable (Zhao et al., 2014). Does this mean that hyperlexia is
more likely found among readers of alphabetic writing systems com-
pared to readers of logographic writing systems?
Although decoding ability is sometimes out of proportion with com-
prehension ability, the two will develop in parallel in typical educational
settings. The two abilities will also be mutually dependent in the estab-
lishment of literate competence; however, these abilities may have differ-
ent origins in the evolution of language. The ability of O – P mapping
(the technological management of reading) is a procedural skill which
therefore may be dissociated from the comprehension ability (O – S
mapping) and declarative memory. Historically, and in some educational
settings today, the two abilities may have been confused and therefore
caused disagreement about what reading is.
Literate persons from the classical era and the medieval ages continued
to put into texts sayings from oral cultures, or from other writings
that depended on narratives or legends transmitted between generations
of people. They did not bear witness to Olson’s conceptual revolutions
with literate culture; rather their texts have generally been formed on the
premises of an oral culture. Thus, writing supported a conservative mindset;
it reinforced sayings which may have been repeated over and over
again, and consequently the orally based literature did not serve as reports
on novel events. However, as argued by Ong, texts based on oral culture
did not lack “originality of their own kind.” The canonical Gospels seem
to show that new elements in old stories have been added. An analogue
version of the birth and life of Jesus Christ, as depicted in the Gospels,
can be found in the ancient Hindu text the Bhagavad Gita, composed
sometime between the fifth and second centuries BC. Here Krishna, like
Christ, was said to be the son of God, and both acted as healers and mira-
cle workers. The similarities between the Christian and Hindu texts show
that neither of them bears reliable evidence of novel events. This does not
mean that they lacked aspects of novelty; their narrative framework still
permitted innovations of the story.
According to the Gospel of Matthew, Jesus was born of the Virgin
Mary. This means that Mary was worshiped as a goddess, or a virgin
“creatrix.” However, the worship of the virgin and her child was com-
mon in the East and the Middle East centuries before the birth of Christ.
Thus, mythological texts indicate that the Egyptian Madonna Isis was
a virgin while giving birth to Horus, and it is still debated whether
Krishna was born of a virgin. It is commonly assumed that Krishna was
the eighth son of Devaki, yet she has been given the status of Virgin
Goddess. Also Greek mythology has presented a threefold description
of Aphrodite: Aphrodite the virgin, Aphrodite the wife, and Aphrodite
the whore.
The reasons for writing the classical texts of the kind mentioned above
were not to report historical events, but to establish and reinforce a con-
servative mindset. Intellectual experimentation was not a characteristic
of early literate texts. The discussion of philosophical, social and political
problems was a literate innovation which contrasted with the orally based
literature in the classical era. The “media” by which new information is
distributed is a recent conception in human history.
Creanza et al. (2012) present a model which involves both gene-
culture and culture-culture interactions. The latter applies specifically to
literacy, which is a cultural invention that has affected the evolutionary
dynamics of other cognitive and linguistic traits. It has enforced rules
of transmission, for instance, (formal) instruction and schooling, and it
has involved forms of social control and power. In consequence, literacy has
become a major force of selection, in particular because vertical transmission
of this trait has involved assortative mating.
However, I find it difficult to apply the Creanza et al. model directly
in the case of literacy and language. This model presupposes two defini-
tions, one of a recipient trait T (which determines a cultural phenotype)
and one of a niche constructing trait N (which determines selection and
assortative mating). Each has two possible states (T: T, t and N: N, n),
and by combinations, these give rise to four possible phenotypes. It may
be possible to conceive of a cultural phenotype of literacy, and an inter-
acting constraint in the literate world as the niche constructing trait, but
further application of the model will run the risk of an unavoidable over-
simplification. Creanza et al. themselves applied the model in relation
to religion and fertility, not to literacy and language. Maybe some major
adjustments of the model could be made to deal with the role of writing/
literacy in the evolution of language.
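The combinatorial structure of the Creanza et al. model, as described above, can be sketched in a few lines. This is only an illustrative enumeration of the four phenotypes, not the authors' actual simulation code (which also models selection and assortative mating); the variable names are my own.

```python
from itertools import product

# Illustrative sketch only: the recipient trait T and the niche-constructing
# trait N each have two possible states, written here as upper case (T, N)
# and lower case (t, n), following the notation in the text.
recipient_states = ["T", "t"]  # recipient trait: determines a cultural phenotype
niche_states = ["N", "n"]      # niche-constructing trait: determines selection
                               # and assortative mating

# Combining the states of the two traits gives the four possible phenotypes.
phenotypes = [r + n for r, n in product(recipient_states, niche_states)]
print(phenotypes)  # ['TN', 'Tn', 'tN', 'tn']
```

The full model then tracks how the frequencies of these four phenotypes change under transmission, selection, and assortative mating; the enumeration above only makes the state space explicit.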
As pointed out above, the arguments of assortative mating and selec-
tion pressures apply to literate cultures. Therefore, an important task
will be to develop a formal/explicit model of niche construction in the
case of literacy and language evolution. Literacy, including classical as
well as computer-based technologies of writing, has more than any other
historical event formed the ecology of the human mind. It can be com-
pared to Deacon’s concept of “the other evolution” (see Chap. 5, Sect.
5.8). Human beings today, from young children to elderly persons, are
exposed to an ambient environment of letters, characters, acronyms, texts
and other literate symbols, to which adaptation becomes important. This
literate ecology of mind shapes our use and conception of language, and
determines the survival or death of linguistic communities. In fact, the
adjustment to the literate world is a major condition for the develop-
ment and survival of cultures, and finally for the reproductive capacity
of individuals.
societies, and eventually the neglect of declarative learning may affect the
language of future generations.
The development of computer skills, but also the use of mobile phones,
iPhones, iPads, and so on will affect people’s vocabularies, creating similar
content words in otherwise different languages. Furthermore, the use of
computer-mediated communication has improved the quality and effi-
ciency of second language (L2) learning. Thus, communication in face-
to-face settings encourages multidirectional interaction. Many teachers
have observed higher rates of peer-to-peer talk (but also higher rates of
human–machine interactions) and less dependence on student–teacher
interactions, in classrooms with high-tech solutions for language learning.
In short, computer-mediated communication affects the social orga-
nization and mobility of people, and this mobility has always been an
important factor in language evolution and change. Also, this mobility
and associated interactions with different ethnic and linguistic groups
will increase with the growth of computer-mediated communication.
References
Arbib, M. A. (2009). Evolving the language ready brain and the social mecha-
nisms that support language. Journal of Communication Disorders, 42,
263–271.
Ardila, A., Bertolucci, P. H., Braga, L. W., Castro-Caldas, A., Judd, T., Kosmidis,
M. H., et al. (2010). Illiteracy: The neuropsychology of cognition without
reading. Archives of Clinical Neuropsychology, 25, 689–712.
Baddeley, A. D., Gathercole, S. E., & Papagno, C. (1998). The phonological
loop as a language learning device. Psychological Review, 105, 158–173.
Bishop, D. V. (1997). Uncommon understanding. Development of disorders of lan-
guage comprehension in children. East Sussex, UK: Psychology Press.
Bishop, D. V., & Snowling, M. J. (2004). Developmental dyslexia and specific
language impairment: Same or different? Psychological Bulletin, 130, 858.
doi:10.1037/0033-2909.130.6.858.
Bloomfield, L. (1933). Language. New York: Holt, Rinehart & Winston.
Carreiras, M., Seghier, M. L., Baquero, S., Estevez, A., Lozano, A., Devlin, J. T.,
et al. (2009). An anatomical signature for literacy. Nature, 461, 983–986.
Castro-Caldas, A., Petersson, K. M., Reis, A., Askelof, S., & Ingvar, M. (1998).
Differences in inter-hemispheric interactions related to literacy, assessed by
PET. Neurology, 50, A43.
Catts, H. W., Adlof, S. M., Hogan, T. P., & Ellis Weismer, S. (2005). Are specific
language impairment and dyslexia distinct disorders? Journal of Speech,
Language, and Hearing Research, 48, 1378–1396.
Coe, M. D. (1992). Breaking the Maya Code. London: Thames & Hudson. ISBN
0-500-05061-9.
Coe, M. D. (2002). The Maya (6th ed.). London: Thames & Hudson. ISBN
0500050619.
Creanza, N., Fogarty, L., & Feldman, M. W. (2012). Models of cultural niche
construction with selection and assortative mating. PLoS ONE, 7, e42744.
Ehlich, K. (1983). Development of writing as social problem solving. In
K. Ehlich & F. Coulmas (Eds.), Trends in linguistics. Studies and monographs.
Writing in focus. Berlin: Mouton Publishers.
Eslinger, P. J., & Grattan, L. M. (1993). Frontal lobe and frontal-striatal sub-
strates for different forms of human cognitive flexibility. Neuropsychologia,
31, 17–28.
Farquharson, K., Centanni, T. M., Franzluebbers, C. E., & Hogan, T. P. (2014).
Phonological and lexical influences on phonological awareness in children
with specific language impairment and dyslexia. Frontiers in Psychology, 5,
838.
Ferris, S. P. (2002). Writing electronically: The effects of computers on tradi-
tional writing. Journal of Electronic Publishing, 8(1).
Gelb, I. J. (1963). A study of writing (2nd ed.). Chicago: University of Chicago
Press.
Gold, R., Faust, M., & Goldstein, A. (2010). Semantic integration during meta-
phor comprehension in Asperger syndrome. Brain & Language, 113,
124–134.
Goody, J., & Watt, I. (1968). The consequences of literacy. In J. Goody (Ed.),
Literacy in traditional societies. Cambridge: Cambridge University Press.
Grigorenko, E. L., Klin, A., & Volkmar, F. (2003). Annotation: Hyperlexia:
Disability or superability? Journal of Child Psychology and Psychiatry, 44,
1079–1091.
Havelock, E. (1976). Origins of Western literacy. Toronto: OISE Press.
Havelock, E. (1982). The literate revolution of Greece and its cultural consequences.
Princeton, NJ: Princeton University Press.
Henderson, L. (1984). Writing systems and reading processes. In L. Henderson
(Ed.), Orthographies and reading. Perspectives from cognitive psychology, neuro-
psychology and linguistics. Hillsdale: Lawrence Erlbaum Associates.
Knox, B. M. W. (1968). Silent reading in Antiquity. Greek, Roman, and Byzantine
Studies 9/4, Winter.
Kosmidis, M. H., Tsapkini, K., Folia, V., Vlahou, C. H., & Kiosseoglou, G.
(2004). Semantic and phonological processing in illiteracy. Journal of the
International Neuropsychological Society, 10, 818–827.
Kosmidis, M. H., Zafiri, M., & Politimou, N. (2011). Literacy versus formal
schooling: Influence on working memory. Archives of Clinical Neuropsychology,
26, 575–582.
Lecours, A. R., Mehler, J., Parente, M. A., Beltrami, M. C., Canossa de Tolipan,
L., Cary, L., et al. (1988). Illiteracy and brain damage. 3: A contribution to
the study of speech and language disorders in illiterates with unilateral brain
damage (initial testing). Neuropsychologia, 26, 575–589.
Lim, C. K., Ho, C. S., Chou, C. H., & Waye, M. M. (2011). Association of the
rs3743205 variant of DYX1C1 with dyslexia in Chinese children. Behavioral
and Brain Functions, 7, 16. doi:10.1186/1744-9081-7-16.
Linell, P. (2005). The written language bias in linguistics. London: Routledge.
Newbury, D. F., Monaco, A. P., & Paracchini, S. (2014). Reading and language
disorders: The importance of both quantity and quality. Genes (Basel), 5,
285–309.
Olson, D. R. (1998). The world on paper. The conceptual and cognitive implica-
tions of writing and reading. Cambridge: Cambridge University Press.
Ong, W. (1982). Orality and literacy: The technologizing of the word. London:
Methuen.
Reis, A., Petersson, K. M., Castro-Caldas, A., & Ingvar, M. (2001). Formal
schooling influences two- but not three-dimensional naming skills. Brain and
Cognition, 47, 397–411.
Sasanuma, S. (1974). Kanji versus kana processing in alexia with transient
agraphia: A case report. Cortex, 10, 84–97.
Schmandt-Besserat, D. (1987). Oneness, twoness, threeness: How ancient accoun-
tants invented numbers. New York: New York Academy of Sciences.
Scribner, S., & Cole, M. (1981). The psychology of literacy. Cambridge, MA:
Harvard University Press.
Silberberg, N., & Silberberg, M. (1967). Hyperlexia: Specific word recognition
skills in young children. Exceptional Children, 34, 41–42.
Siok, W. T., Niu, Z., Jin, Z., Perfetti, C. A., & Tan, L. H. (2008). A structural-
functional basis for dyslexia in the cortex of Chinese readers. Proceedings of
the National Academy of Sciences of the United States of America, 105,
5561–5566.
Siok, W. T., Perfetti, C. A., Jin, Z., & Tan, L. H. (2004). Biological abnormality
of impaired reading is constrained by culture. Nature, 431, 71–76.
Tan, L. H., Laird, A. R., Li, K., & Fox, P. T. (2005). Neuroanatomical correlates
of phonological processing of Chinese characters and alphabetic words.
Human Brain Mapping, 25, 83–91.
Torrance, N., Lee, E., & Olson, D. R. (1985). Oral and literate competencies in
the early school years. In D. R. Olson, N. Torrance, & A. Hildyard (Eds.),
Literacy, language, and learning: The nature and consequences of reading and
writing (pp. 256–284). Cambridge: Cambridge University Press.
Turkeltaub, P. E., Flowers, D. L., Verbalis, A., Miranda, M., Gareau, L., &
Eden, G. F. (2004). The neural basis of hyperlexic reading: An FMRI case
study. Neuron, 41, 11–25.
Tzeng, O. J. L., & Wang, W. S.-Y. (1983). The first two R’s. American Scientist, 71,
238–243.
Varney, N. R. (2002). How reading works: Considerations from prehistory to
the present. Applied Neuropsychology, 9, 3–12.
Zhao, J., Wang, X., Frost, S. J., Sun, W., Fang, S.-Y., Mencl, W. E., et al. (2014).
Neural division of labor in reading is constrained by culture: A training study
of reading Chinese characters. Cortex, 53, 90–106.
7
The Modality-Independent Capacity of Language: A Milestone of Evolution
Finally, I also use the term for signed and spoken languages, which represent
different language modalities. I hope context will reveal the intended
meaning of modality.
How does this conception of language agree with theories of language
evolution? Apparently, Hockett (1960) may have used a different conception
of language and argued as if speech were the ultimate goal of language
evolution. In that case, does sign language represent a more primitive
form of linguistic communication, or did language evolve as a modality
independent and abstract capacity of symbolic representation? Are signed
and spoken languages equal expressions of a modality-independent
capacity of symbolic representation?
According to the gestural theory of language evolution, intentional
communication by our hominid ancestors was based on manual and
other bodily gestures. Vocal communication belongs to an evolutionary
recent period in the history of mankind. Corballis (2010) mentioned two
arguments for this theory: 1) Only a few species, such as elephants, seals,
killer whales and some birds, are capable of vocal learning, a prerequisite
to spoken language. Among the primates, only humans are vocal learners.
These observations are contrasted with the extensive use of bodily
gestures for communicative purposes among chimpanzees and bonobos.
2) It has not been possible to teach vocal language to the great apes.
The most successful attempts to teach intentional communication (not
vocal) were made by Savage-Rumbaugh and Rumbaugh, who trained two
chimps to communicate with lexigrams (see Chap. 3). However, Kanzi
(described in Chap. 2) learned to follow spoken instructions in sentences
up to seven or eight words. This example may be interpreted as evidence
of “fast mapping,” generally considered to be a capacity of human
infants. Corballis seemed to dismiss the Kanzi case as evidence of
speech comprehension. He assumed that words served as discriminative
stimuli which triggered behavior, and in any case, Kanzi never learned to
speak by taking part in dialogues with a human partner.
According to the gestural theory of language evolution there must have
been a switch from primarily gestural to primarily vocal communication.
Corballis (2010) also discussed whether this switch took place gradually
or whether it occurred suddenly, in one saltational shift. In agreement
with several other researchers, he believed that this shift was gradual and
Krentz and Corina. (To prepare for the ensuing discussion I shall briefly
repeat them here.) The former researchers argued for a privileged status of
speech, because they observed that two-month-old infants listened lon-
ger to speech (monosyllabic nonsense words) than nonspeech analogues.
Krentz and Corina showed that their six-month-old hearing babies pre-
ferred to look at unfamiliar visual signs (from ASL) over nonlinguistic
pantomime. Therefore, these researchers claimed that infants, instead of
being tuned to speech, have a “language-general bias” (p. 1).
The position taken by Krentz and Corina, which I will follow here, is
now commonly accepted in the research literature. Still, this position needs
a few comments: Within each of the two modalities, hearing and vision,
stimuli differ with respect to their language relatedness. Therefore, typi-
cally developing infants are most likely tuned to speech sounds as well as to
signs, when these have features which “signal” their relevance for language.
The critical features depend on frequency of modulations, rhythmicity and
statistical characteristics, for example, transition probabilities. In Chap. 3,
I have discussed learning constraints related to statistical characteristics of
the stimulus materials, and below, Sect. 7.3, I will also deal with frequency
of modulations as a factor in linguistic pre-semantic interaction.
The capacity of symbolic reference is a prerequisite to speech, yet the
two may have co-evolved in ancient history. Speech may also be the result
of selection pressures that did not similarly apply to all forms of symbolic
communication. Symbolic reference depends on a general-purpose mech-
anism which serves social and communicative interactions in a variety of
sensory-motor conditions. Speech, however, represents a specific adapta-
tion to communicative needs (for example, communication in darkness).
Both speech and other forms of symbolic communication involve a com-
plex use of signs that is commonly referred to as symbolic reference. As a
general-purpose mechanism, symbolic reference is not dependent on the
use of particular articulators; for example, vocal-auditory signs or manual
signs. It may also evolve with other types of communicative signs (tactual-
kinesthetic sign). I therefore consider symbolic reference to be a universal
feature of language that has triggered the development of more specific
communicative skills. Deacon (1997) made the same point by arguing
that “the evolution of vocal abilities might more accurately be seen as a
consequence rather than the cause of the evolution of language” (p. 255).
signing children produce the first signs at about 8.5 months, whereas hear-
ing children produce their first words between 10 and 13 months. Thus,
it has been commonly assumed that deaf children produce their first signs
earlier than hearing children produce their first words. However, Emmorey
(2002) pointed out that the first signs by deaf children are not actually sym-
bolic signs but “prelinguistic communicative gestures” that are produced
by both deaf and hearing children. When we take symbolic and referential
criteria into account, we find that the first signs and the first words appear
around the first birthday.
We also find similar and analogous trends in the acquisition of phonology
by hearing and deaf children. “Baby signs” are produced by altering
and simplifying the adult form, for example, by substituting one handshape
used by the adult signer with another that does not require the same
degree of motor control. Similarly, hearing children acquiring speech
often replace fricatives and liquids with stop consonants.
Motherese means that adults who speak to a child modify their speech
by using a higher-pitched voice, a wider range of prosodic contours, longer
pauses, emphatic stress, and so on. An equivalent of the motherese used
by parents of hearing children also occurs among signing parents of deaf
children. Signs produced for children are generally longer in duration,
contain more repetitions, and are made with larger and more distinct
movements. Thus, comparable milestones have been observed in the
acquisition of language for both deaf and hearing children, and these
milestones have been reached at the same developmental ages by the two
groups of children.
to language. (Note the famous case of Genie, who was isolated in her
home until the age of 13 [Curtiss, 1977] and received her first linguistic
training when the window of opportunity may have already been shut.)
The devastating effects of isolation from a linguistic community will vary
depending on the duration of deprivation in early childhood. Do deaf
children similarly depend on a critical period for the learning of sign lan-
guage? Programs for the detection of deafness among infants have only
recently been provided in developed countries, which means that some
deaf children have suffered from a period of deprivation before they were
systematically exposed to sign language. The length of this period may
vary from child to child. Therefore, the study of language acquisition by
deaf children raised within loving families offers a special opportunity to
test Lenneberg’s hypothesis.
Newport (1991) compared skills in ASL of deaf people who fell into one of three groups: 1) native learners, who were exposed to ASL from birth; 2) early learners, who were first exposed to ASL when they entered school at the age of 4–6 years; and 3) late learners, who were not exposed to ASL before the age of 12. Participants in all three groups had practiced ASL
for at least 30 years. Newport found that age of acquisition had no effect
on basic word order in ASL. This finding supports a common assumption
that word order is a robust property of language that can be learned after
puberty. On the other hand, scores on tests of ASL morphology correlated between −.60 and −.70 with age of acquisition. Thus, participants who acquired ASL
early in childhood outperformed those who learned this language at later
ages. Other researchers (Mayberry, 1995; Mayberry & Eichen, 1991) have
found that phonological processing is particularly vulnerable to a late start.
To my knowledge, there are no analogous studies of the effect of a
delayed start of speech acquisition by hearing children. Therefore, we
cannot tell whether the “window of opportunity” is the same for deaf and
hearing children. Yet, the studies of late deaf starters mentioned above
do support a general formulation of Lenneberg’s hypothesis: There is a
critical period for the development of a modality-independent capacity
of language. On this account, we should expect sign language and speech acquisition processes to be affected by the maturation of the
same neuroanatomical and neurophysiological substrata.
7 The Modality-Independent Capacity of Language … 241
By the end of the last century, the neural structures underlying both
linguistic forms were considered isomorphic; thus, commonalities were stressed by most researchers. At the beginning of the present century, more evidence of cross-linguistic differences was reported
(see Corina, Lawyer, & Cates, 2013, for a critical review). Also, a growing
awareness that human language may have bi-hemispheric representations
gave rise to more research on the role of the right hemisphere in linguistic
processing. Perhaps sign languages depend on right hemisphere resources
to an extent that is not observed for spoken languages. Thus, neuroimaging studies have shown that comprehension of particular grammatical constructions in ASL and BSL depends on activation of right posterior-parietal regions in a way that has not been reported for spoken languages.
Several researchers have therefore speculated that sign languages involve
more processing of spatial relationships, which permits a coordinated
control of both hands. In this language modality, relations such as “on,” “above,” and “under” need no specific lexical item, but may be depicted by the configured movement of one hand in relation to the shape of the other hand.
In short, contemporary research may indicate a conflict between com-
monalities and cross-linguistic differences between the two modalities of
language. The specific coupling of sensory inputs and linguistic articu-
lators in both forms of language has necessarily affected the outcomes
of neuroimaging studies as well as case studies of the aphasias. While
acknowledging the possibility that linguistic competence requires spe-
cialized and language-specific neural mechanisms, Corina et al. (2013)
concluded with the following dilemma: “The broader point is whether
aphasic deficits should be solely defined as those that have clear homolo-
gies to the left hemisphere maladies that are evidenced in spoken lan-
guages, or whether the existence of signed languages will force us to
consider the conception of linguistic deficits such as aphasia and open
the possibility that there may be multiple ways in which the human brain
may manifest linguistic abilities” (last para of e-pub issue).
I am fully cognizant of the existence of modality-specific neural mech-
anisms, and yet the observed homologies may be interpreted as the more
abstract representations of a modality-independent capacity of language.
244 Language Evolution and Developmental Impairments
These homologies do not mean that signed and spoken languages are
unconstrained developmental options. Thus, despite the functional and
structural similarities between speech and sign language, they may also
compete for limited resources to an extent that is not found between
same-modality languages. This will be the problem addressed in the fol-
lowing section.
related to the general plasticity of the human brain, which means that
visual cortex may similarly respond to spoken language in blind children
(see Bedny, Richardson, and Saxe, 2015).
Because the “colonization of the auditory cortex” after prolonged exposure to sign language complicates the acquisition of speech, Teoh et al. also argued that educational programs for cochlear implant (CI) users that stress oral communication may potentially reduce the “cortical colonization” phenomenon and are therefore preferable to programs that stress “total communication.” Thus, educational programs that
include use of signs, in combination with oral exercise, may support the
processing of visually evoked signals in the auditory cortex. The question
is whether the two modalities of communication, in the long run, may
mutually interfere, and consequently make the full proficiency of sign-
speech bilingualism more or less impossible. Teoh et al.’s discussion
of the consequences of the “cortical colonization” phenomenon is highly
relevant for the post-operative support for children with CI. The options
regarding language planning for these children in the twenty-first century
were discussed by Knoors and Marschark (2012). These writers did not
discuss the educational and remedial consequences of the colonization
phenomenon, yet they wisely concluded that “language planning and language policy should be revisited in an effort to ensure that they are appropriate for the increasingly diverse population of deaf children” (p. 291).
Experimental works which relate to the effects of sign-speech (bimodal)
bilingualism are needed. The frequency-lag hypothesis (Gollan et al.
2011) claims that lexical retrieval is disadvantaged in bilinguals due to
a “frequency lag” in use of the two languages, in particular in the use
of the nondominant language. Emmorey, Petrich, and Gollan (2013)
reported the results of a picture-naming task with three groups of par-
ticipants: 1) hearing ASL-English bimodal bilinguals, 2) monolingual deaf signers, and 3) English-speaking monolinguals. The bimodal bilinguals showed a larger frequency effect; that is, they were slower and less accurate when naming pictures in ASL (their nondominant language), both compared with naming in English and compared with monolingual deaf signers. Picture naming in English showed no difference in naming latencies,
error rates or frequency effects when bimodal bilinguals were compared
with monolinguals.
other. Perhaps the viabilities of the two types of languages, speech and
sign languages, differ in some important respects.
Let me revert to the development of a new sign language in Nicaragua;
that is, the NSL (see Introduction, Sect. 1.3). In 1981, after the Sandinistas
had taken power in Nicaragua, a new vocational school for the deaf was
opened in Managua. Deaf children had previously been raised in isolated
families with mainly nonsigning parents, and in this context deaf children
learned a rudimentary form of communication with manual gestures.
They developed a small “vocabulary” of gestures, and to some extent a
strategy for communicating longer sentences (also characterized as a pid-
gin sign language). Arbib (2009) stressed that these skills resulted from
the collective efforts of the family to communicate. However, the gestures
were not standardized, and therefore they were commonly labeled “home
signs,” because they were completely unintelligible to people outside the
family. (As mentioned in the Introduction, these were gradually abandoned and replaced, via a pidgin sign language, by a new and well-structured creole sign language.)
With the establishment of the vocational school in Managua, a new
situation for deaf adolescents and young adults emerged. They were
encouraged to look upon themselves as social actors who collectively cre-
ated their own identity. In other words, they became a new linguistically
defined peer group whose cohesiveness depended on the standardization
and adjustment of signs. After having met with other deaf children and
adults, their home signs were transformed into a pidgin and later into a
rather arbitrary articulation of signs agreed upon in the new community
of deaf people; the birth of a new language had taken place. This process,
however, depended strongly on teachers or administrators who provided
the community with the idea of a language. Yet, just as home signs had been created by the collective efforts of the family, the development of NSL
was made possible by the collective efforts of the community of students
(see the role of collaborative structures in Chap. 5, Sect. 5.6.1).
The social mechanisms that operated during the development of NSL
have most probably affected the emergence of any language from prehis-
toric times to the present. It should be stressed, however, that the lan-
guage communities that are created by these mechanisms are defined by
a particular modality and form of expression. Therefore, there are great
schools. Some reports have shown that these efforts have been successful
(see, for example Antia, Jones, Reed, and Kreimeyer, 2009). However,
Rydberg, Gellerstedt, and Danemark (2010) present a less optimistic pic-
ture of the level of educational attainment. They studied 2144 people
born between 1941 and 1980 who attended a special education program
for the deaf in Sweden. These were compared to randomly chosen hear-
ing people who were born in the same period. They concluded that “the
educational reforms have not been sufficient to reduce the unequal level
of educational attainment between deaf and hearing people” (p. 313).
It may be argued that the observed differences in educational attain-
ment are due to the fact that deaf students work on the premises of the
spoken language culture. Therefore, it seems to be an impossible task to
raise the literacy rate among the deaf on an equal level with the hearing
population. Skills that are based on the comprehension and production
of speech will of course set the deaf at a disadvantage. All barriers and
inequalities that disfavor the deaf in educational settings testify to the
dominance of spoken language in society.
Could it be otherwise? The social mechanisms underlying the creation
of any language have served the sign languages as well as the spoken
languages. However, written languages have been invented and developed for the spoken languages. Sign languages, despite various attempts to build an alphabet of signs, have not similarly been endowed with a written form. This may explain why sign languages have been less viable, compared to spoken languages, in the development of modern societies.
References
Antia, S. D., Jones, P. B., Reed, S., & Kreimeyer, K. H. (2009). Academic status
and progress in communication in deaf and hard-of-hearing students in gen-
eral education classrooms. Journal of Deaf Studies and Deaf Education, 14,
293–311.
Arbib, M. A. (2009). Evolving the language ready brain and the social mecha-
nisms that support language. Journal of Communication Disorders, 42,
263–271.
Bedny, M., Richardson, H., & Saxe, R. (2015). “Visual” cortex responses to
spoken language in blind children. The Journal of Neuroscience, 35,
11674–81.
Bolhuis, J. J., Tattersall, I., Chomsky, N., & Berwick, R. C. (2015). Language:
UG or not to be, that is the question. PLoS Biology, 13, e1002063.
doi:10.1371/journal.pbio.1002063.
Corballis, M. C. (2010). Mirror neurons and the evolution of language. Brain
& Language, 112, 25–35.
Corina, D. P. (1998). Studies of neural processing in deaf signers: Toward a
neurocognitive model of language processing in the deaf. Journal of Deaf
Studies and Deaf Education, 3, 35–48.
Corina, D. P., Lawyer, L. A., & Cates, D. (2013). Cross-linguistic differences in
the neural representation of human language: Evidence from users of signed
languages. Frontiers in Psychology, 3, 587. doi:10.3389/fpsyg.2012.00587.
Corina, D. P., McBurney, S. L., Dodrill, C., Hinshaw, K., Brinkley, J., &
Ojemann, G. (1999). Functional roles of Broca’s area and supramarginal
gyrus: Evidence from cortical stimulation mapping in a deaf signer.
NeuroImage, 10, 570–581.
Curtiss, S. (1977). Genie: A psycholinguistic study of a modern day “wild child”.
New York: Academic Press.
de Boysson-Bardies, B. (1999). How language comes to children: From birth to two
years (M. DeBevoise, Trans.). Cambridge, MA: MIT Press.
Deacon, T. (1997). The symbolic species: The co-evolution of language and the human brain. London: Penguin Books.
Dolata, J. K., Davis, B. L., & Macneilage, P. F. (2008). Characteristics of the
rhythmic organization of vocal babbling: Implications for an amodal linguis-
tic rhythm. Infant Behavior & Development, 31, 422–431.
Emmorey, K. (2002). Language, cognition, and the brain: Insights from sign lan-
guage research. Mahwah, NJ: Lawrence Erlbaum Associates.
Emmorey, K., Petrich, J. A., & Gollan, T. H. (2013). Bimodal bilingualism and
the Frequency-Lag Hypothesis. Journal of Deaf Studies and Deaf Education,
18, 1–11.
Fogassi, L., Ferrari, P. F., Gesierich, B., Rozzi, S., Chersi, F., & Rizzolatti, G.
(2005). Parietal lobe: From action organization to intention understanding.
Science, 308, 662–667.
Fujii, S., & Wan, C. Y. (2014). The role of rhythm in speech and language reha-
bilitation: The SEP hypothesis. Frontiers in Integrative Neuroscience, 8, 777.
Ghazanfar, A. A., & Takahashi, D. Y. (2014). Facial expressions and the evolu-
tion of the speech rhythm. Journal of Cognitive Neuroscience, 26,
1196–1207.
Gollan, T. H., Slattery, T. J., Goldenberg, D., Van Assche, E., Duyck, W., &
Rayner, K. (2011). Frequency drives lexical access in reading but not in
speaking: The frequency-lag hypothesis. Journal of Experimental Psychology.
General, 140, 186–209.
Hockett, C. D. (1960). The origin of speech. Reprint from Scientific American,
603.
Klima, E. S., & Bellugi, U. (1979). The signs of language. Cambridge, MA:
Harvard University Press.
Knoors, H., & Marschark, M. (2012). Language planning for the 21st century:
Revisiting bilingual language policy for deaf children. Journal of Deaf Studies
and Deaf Education, 17, 291–305.
Kovelman, I., Mashco, K., Millott, L., Mastic, A., Moiseff, B., & Shalinsky,
M. H. (2012). At the rhythm of language: Brain bases of language-related
frequency perception in children. Neuroimage, 60, 673–682.
Krentz, U. C., & Corina, D. P. (2008). Preference for language in early infancy:
The human language bias is not speech specific. Developmental Science, 11(1),
1–9.
Lenneberg, E. (1967). Biological foundations of language. New York: Wiley.
Lieberman, P. (2000). Human language and our reptilian brain: The subcortical
bases of speech, syntax and thought. Cambridge, MA: Harvard University Press.
Lieberman, P. (2015). Language did not spring forth 100 000 years ago. PLoS
Biology, 13, E1002064. doi:10.1371/journal.pbio.1002064.
MacNeilage, P. F., & Davies, B. L. (2000). On the origin of internal structure of
word forms. Science, 288, 527–531.
Mayberry, R. (1995). Mental phonology and language comprehension or What
does that sign mistake mean? In K. Emmorey & J. Reilly (Eds.), Language,
gesture, and space (pp. 355–370). Mahwah, NJ: Lawrence Erlbaum.
Mayberry, R., & Eichen, E. (1991). The long-lasting advantage of learning sign
language in childhood. Another look at the critical period for language acqui-
sition. Journal of Memory and Language, 30, 486–512.
Newport, E. L. (1991). Contrasting conceptions of the critical period for lan-
guage. In S. Carey & R. Gelman (Eds.), The epigenesis of mind: Essays on biology and cognition (pp. 111–130). Hillsdale, NJ: Lawrence Erlbaum
Associates.
Nyström, P. (2008). The infant mirror neuron system studied with high density
EEG. Social Neuroscience, 3(3-4), 334–347.
Oller, D. K., & Eilers, R. E. (1988). The role of audition in baby babbling.
Child Development, 59, 441–449.
Petitto, L. A., Holowka, S., Sergio, L. E., Levy, B., & Ostry, D. J. (2004). Baby
hands that move to the rhythm of language: Hearing babies acquiring sign
languages babble silently on the hands. Cognition, 93, 43–73.
Petitto, L. A., & Marentette, P. F. (1991). Babbling in the manual mode:
Evidence for the ontogeny of language. Science, 251, 1483–1496.
Pinker, S., & Bloom, P. (1990). Natural language and natural selection.
Behavioral and Brain Sciences, 13, 707–784.
Rizzolatti, G., & Arbib, M. A. (1998). Language within our grasp. Trends in Neurosciences, 21, 188–194.
Rydberg, E., Gellerstedt, L. C., & Danemark, B. (2010). The position of the
deaf in the Swedish labor market. American Annals of the Deaf, 155, 68–77.
Teoh, S. W., Pisoni, D. B., & Miyamoto, R. T. (2004). Cochlear implantation
in adults with prelingual deafness. Part 1. Clinical results. Laryngoscope, 114,
1536–1540.
Thelen, E. (1991). Motor aspects of emergent speech: A dynamic approach. In
N. A. Krasnegor, D. M. Rumbaugh, R. L. Schiefelbush, & M. Studdert-
Kennedy (Eds.), Biological and behavioral determinants of language develop-
ment (pp. 329–362). Hillsdale, NJ: Lawrence Erlbaum.
Vouloumanos, A., & Werker, J.F. (2004). Tuned to the signal: the privileged
status of speech for young infants. Developmental Science 7(3), 270–276.
Vouloumanos, A., & Werker, J. F. (2007). Listening to language at birth:
Evidence for a bias for speech in neonates. Developmental Science, 10(2),
159–171.
8
Developmental Language Impairment:
Perspectives of Etiology and Treatment
words and phrases, but is “fallible in the sense that it is sensitive to inter-
ference and prone to retrieval failure” (Squire et al., 1993, p. 486). The
procedural system is less flexible, which means that dysfunctions of the
frontal/basal ganglia circuitry may have lasting consequences, whereas
failure of the phylogenetically more recent system may be more corrigible.
8.3.1 SRT
In SRT tasks, participants are shown four boxes or circles arranged hori-
zontally across a computer screen or ordered in a diamond configuration.
Whenever a stimulus appears in one of the four boxes, the participant is
told to press the button on the response pad that matches the location of the
visual stimulus. Participants are not told that the stimuli are presented in
a fixed sequence, usually 10 items long, for example, 4,2,3,1,3,2,4,3,2,1,
where each stimulus presentation corresponds to a particular location on
the screen. Sequence learning is measured as improvements in accuracy
and/or reaction time (RT) compared to a randomly ordered sequence.
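The logic of an SRT block and its scoring can be sketched in a few lines of Python. This is an illustrative sketch only; the function names and the RT values are hypothetical, not code or data from any of the studies cited. Only the 10-item sequence is taken from the example above.

```python
import random
from statistics import mean

# The 10-item sequence of screen locations from the example above.
FIXED = [4, 2, 3, 1, 3, 2, 4, 3, 2, 1]

def make_block(n_trials, fixed=True, rng=None):
    """Return the stimulus locations (1-4) for one block of trials."""
    if fixed:
        # Repeat the 10-item pattern until the block is filled.
        return [FIXED[i % len(FIXED)] for i in range(n_trials)]
    rng = rng or random.Random(0)
    return [rng.randint(1, 4) for _ in range(n_trials)]

def sequence_learning_score(rts_sequenced, rts_random):
    """RT cost of switching to random order; larger = more learning."""
    return mean(rts_random) - mean(rts_sequenced)

# Hypothetical per-trial RTs (ms): responses speed up in the sequenced block.
print(sequence_learning_score([420, 400, 390, 385], [470, 465, 472, 468]))
# → 70.0
```

A participant who has implicitly learned the sequence shows a large positive score, because RTs rise again when the predictable structure is removed.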
Typical performance by participants with normal language (NL) devel-
opment is an initially rapid decrease in RT followed by an asymptote. In
Tomblin, Mainela-Arnold, and Zhang (2007) adolescents with SLI were
able to learn the sequences, but only after significantly more trials com-
pared to TD adolescents. Also, the SLI participants did not approach an
asymptote at the end of training.
Later, Lum, Gelgic, and Conti-Ramsden (2010) compared 15 chil-
dren with SLI with nonimpaired children in a different version of the
SRT task. They measured procedural learning by subtracting RT in a
fourth block from RT in a pseudo-random ordered fifth block. The SLI
children were not able to learn the sequences at levels comparable to
the nonimpaired children. Lum et al. (2010) also tested the participants’ explicit knowledge of the presented sequences. They confirmed that “none of the children participating in the study was able to recall the ten-item sequence pattern” (p. 101).
Gabriel, Maillart, Guillaume, Stefaniak, and Meulemans (2011) ran a
probabilistic version of the SRT task with 15 SLI children and 15 TD
controls. The RT difference between the final block and a subsequent con-
trol block did not differ significantly between the two groups. Children
with SLI were as fast as the controls, and hence, the authors concluded
that children with SLI “do not display global procedural system deficits.”
Explicit knowledge of the presented pattern was not examined.
The disparate results of the last two studies may have to do with the relative numbers of grammar-impaired children and children without grammar impairment within the broader language-impaired SLI groups. I believe a further analysis of the
data based on a re-categorization of the impaired children into grammar-
impaired (GI) and normal grammar (NG) will be needed. Finally, the
presentation rates of stimuli and manner of responding (touching the
screen rather than a keyboard) in the two studies may have caused a dif-
ferent involvement of working memory, and because explicit memory
was not examined in the Gabriel et al. study, we do not know whether
declarative knowledge may have contributed to the disparate results in
the two studies.
Hedenius et al. (2011) presented some important contributions to
the understanding of procedural learning by language-impaired chil-
dren. Their approach is innovative in at least two ways: First, the group
introduced the Alternating Serial Reaction Time (ASRT) task. A ran-
dom block that follows the fixed sequence of items is replaced by ran-
dom items that are interspersed with the pattern throughout the task;
for example, 1-r-2-r-4-r-3 (numbers correspond to specific locations and r corresponds to random locations). This procedure elicits no declarative
knowledge, and makes possible continuous examination of procedural
learning. Secondly, the Hedenius group extended the ASRT task to study
consolidation and retention of sequence knowledge (long-term learning),
an extension that is warranted by previous observations of dyslexic chil-
dren who perform well in initial training of mirror drawing but suffer a
setback on the same task one day later compared with the performance
of TD children. In the Hedenius et al. (2011) study, both SLI children
and TD children showed evidence of initial-sequence learning. The two
groups did not differ with respect to long-term learning, but only the TD
children showed clear evidence of consolidation. To show whether defi-
cits of sequence learning are associated specifically with grammar impair-
ment rather than broadly defined language impairments, all children
participating in the study were re-categorized into GI and NG groups.
Based on the Clinical Evaluation of Language Fundamental-3 (CELF-3)
Word Structure, Recalling Sentences and Sentence Structure subtests for
children 7–8 years, and CELF-3 Formulated Sentences and Recalling
Sentences subtests for children 9–14 years, they constructed a composite
grammar test. Z-scores at or below −1.14 were defined as GI, and those
above −1.14 were defined as NG. Both GI and NG children showed evi-
dence of initial-sequence learning, but only NG children demonstrated
clear evidence of consolidation and long-term learning.
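The alternating structure of the ASRT task, with pattern elements interleaved with random locations as in the 1-r-2-r-4-r-3 example, can be sketched as a small generator. The function and its details are assumptions for illustration, not the Hedenius group's actual procedure.

```python
import random

def asrt_stream(pattern, n_elements, rng=None):
    """Interleave a fixed pattern of locations (even positions) with
    random locations (odd positions), as in 1-r-2-r-4-r-3."""
    rng = rng or random.Random(0)
    stream = []
    for t in range(n_elements):
        if t % 2 == 0:
            stream.append(pattern[(t // 2) % len(pattern)])  # patterned
        else:
            stream.append(rng.randint(1, 4))                 # random filler
    return stream

print(asrt_stream([1, 2, 4, 3], 8))  # pattern 1, 2, 4, 3 at even indices
```

Because the random fillers run throughout the task, learning can be measured continuously as the RT difference between patterned and random trials, with no separate random block to cue participants.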
Recently, Lum, Conti-Ramsden, Morgan, and Ullman (2014) pre-
sented a meta-analysis of eight studies where SRT tasks have been used to
test the PDH in children with SLI. The results of 186 participants with
SLI and 203 TD children were examined using a meta-regression analy-
sis. The increase in RT in the random block, which is taken as a measure of sequence learning, was compared between SLI and TD children in the sample of eight studies. They found an average effect size of .328, which is significant, showing that the PDH is supported in the meta-analysis. They
also found that effect sizes varied as a function of the age of participants
and characteristics of the SRT task.
The great challenge for the young child who is about to learn a first lan-
guage is to comprehend and make use of a hierarchical phrase structure.
This structure involves nonadjacent dependencies, as can be illustrated
in the sentence: The man on the sofa has aching legs (i.e., the man, not
the sofa, has aching legs). In Chap. 3, I reviewed some experiments by
Saffran et al. (2008), who showed that 12-month-old children are able
to learn predictive dependencies simulating the complex phrase struc-
ture of natural languages. It may be that the learning of such dependen-
cies is very difficult for some children who are language-impaired. I have
therefore argued that detection of the statistical dependencies in natu-
ral language utterances may provide an access-code to early dialogues, a
code that may be insufficiently “wired-in” in some children who turn out to have language-learning difficulties. The learning of such dependencies can be studied by means of AGL tasks comprising series of nonsense
syllables/words.
Both adjacent and nonadjacent dependencies are learned by the TD
individual. Plante, Gomez, and Gerken (2002) presented sentence strings
that showed adjacent dependencies like the word order constraints of
a finite-state grammar. Participants made grammaticality judgments of
novel strings, and after only a 5-minute exposure to the language, TD
adults performed above chance, whereas adults with language impair-
ments did not exceed chance level performance. Nonadjacent depen-
dencies are generally considered more difficult, because the learning of
such dependencies requires subjects to ignore considerable variation in
intervening elements. In fact, however, the likelihood of detecting non-
adjacent dependencies increases with the variability of intervening ele-
ments. Thus Gomez (2002) presented children with strings of three nonsense words, A-X-B, where A and B were always the same, and X represented
a set of 3, 12, or 24 words. It turned out that children in a listening time
test could only discriminate between grammatical and ungrammatical
strings in the high-variability condition (24 words).
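The A-X-B manipulation can be made concrete with a short sketch. The nonsense words below are invented for illustration and are not Gomez's actual stimuli; the point is that enlarging the X set multiplies the surface forms while the nonadjacent dependency A … B stays constant.

```python
def axb_strings(a, b, middles):
    """All strings for one nonadjacent dependency A ... B, with the
    middle item X drawn from a given set of intervening words."""
    return [f"{a} {x} {b}" for x in middles]

for set_size in (3, 12, 24):
    middles = [f"x{i}" for i in range(set_size)]     # placeholder X words
    strings = axb_strings("pel", "rud", middles)
    # The dependency holds in every string, whatever the middle item:
    assert all(s.startswith("pel ") and s.endswith(" rud") for s in strings)
    print(set_size, len(strings))  # more surface variability, same rule
```

High variability of X makes the constant A–B frame stand out against the changing middles, which is presumably why only the 24-word condition supported discrimination.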
Grunow, Spaulding, Gómez, and Plante (2006) adapted the Gómez task in a study of AGL, and college students with and without language-learning difficulties served as participants. They listened to
members of this group also benefitted more from the high variability
condition. The authors concluded that “these findings demonstrate that
rapid learning of grammatical forms can be achieved for individuals with
language-learning disabilities, if the language input is structured in ways
that facilitates rapid, unguided learning” (p. 625).
Hsu and Bishop (2011) examined evidence that language-impaired
persons have particular problems in extracting statistical dependencies,
and argued that due to these problems the language-impaired child or
adult becomes more dependent on rote learning (exemplar-based learn-
ing). In a previous AGL experiment by Hsu et al. (2008), token frequency
was varied independent of variability in an A-X-B paradigm. Because the
test strings were all heard during training, token frequency was as high
as 72 in the set size = 2 condition with only 6 different sentence strings.
In set size = 12 there were 36 different sentences with a token frequency
of 12, and in set size = 24, there were 72 different sentences each with
a token frequency of 6. Thus variability was negatively correlated with
token frequency. Among the TD participants, the proportion who reached 100 % accuracy in at least one nonadjacent pair was highest in the high-variability condition, as expected. In contrast, 15 % of the language-impaired
participants reached this level of performance in the same condition, and
25 % in the other variability conditions. Thus more language-impaired
participants reached the 100 % level of performance when variability was
low and token frequency high. These results agree with clinical observa-
tions showing that overlap of utterances produced by SLI children with
those produced by their caregiver is greater than with those produced by
their siblings. Thus language acquisition in this group relies on exemplar-based, rather than rule-based, learning, and therefore becomes more dependent on rote learning. This observation is clinically relevant,
but is not informative about the etiology of grammar impairments.
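The arithmetic of the Hsu et al. design can be checked directly. Both constants below are inferred from the figures quoted above (6, 36, and 72 strings imply three A-X-B frames; the frequencies imply a fixed pool of 432 training tokens per condition); they are assumptions, not reported parameters.

```python
# Inferred design constants: 3 A-X-B frames, 432 training tokens in all.
N_FRAMES, TOTAL_TOKENS = 3, 432

for set_size in (2, 12, 24):
    n_strings = N_FRAMES * set_size          # distinct sentence strings
    token_freq = TOTAL_TOKENS // n_strings   # repetitions of each string
    print(set_size, n_strings, token_freq)
# prints: 2 6 72 / 12 36 12 / 24 72 6
```

With total exposure held constant, variability and token frequency are necessarily traded off against each other, which is exactly the negative correlation the text describes.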
The literature that presents major support to the PDH has emphasized
statistical learning, and the experimental tasks are described in terms
of procedural learning. Perruchet and Pacton (2006), who emphasized
only after prolonged exposure to the speech stream. This result also sup-
ports other studies showing that SLI children perform poorly in AGL
tasks under nonoptimal conditions. Also, a 42-minute tone condition
turned out to be very difficult for the language-impaired children. Evans
et al. (2009) constructed a tone stream out of 11 pure tones from the
same octave (starting at middle C). These were combined into groups
to form “tone words,” which were not separated by any form of acous-
tic markers. The only clues to the beginning and end of a “tone word”
were the transitional probabilities between tones. Again the children were
occupied with a drawing task while listening to the tone stream for 42
minutes. After the implicit learning session, the children were presented
with 36 test-pairs each consisting of a “word” and “nonword.” They were
then asked to choose the sound sequence that sounded most familiar.
Again, the performance of the control group was significantly different
from chance, while the performance of the language-impaired children
did not differ from chance. These studies show that the learning of linguistic signals, words, or “basic chunks” is most likely mediated by the procedural, not the declarative, system.
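The transitional-probability cue described above can be illustrated with a toy computation (the stream and "tone words" here are invented, not the Evans et al. stimuli): within a "tone word," each element predicts the next with probability 1, while transitions across word boundaries are less predictable.

```python
from collections import Counter

def transitional_probabilities(stream):
    """P(next | current) for adjacent elements: the only cue to 'word'
    boundaries in an unsegmented stream like the one described above."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy stream built from two 'tone words', AB and CD, concatenated:
stream = list("ABCDABABCDCDAB")
tp = transitional_probabilities(stream)
# Within-word transitions are certain; cross-boundary ones are not.
assert tp[("A", "B")] == 1.0 and tp[("C", "D")] == 1.0
assert tp[("B", "C")] < 1.0
```

A learner tracking these statistics can posit a boundary wherever the transitional probability drops, segmenting the stream into its "words" without any acoustic markers.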
Counter-evidence to the PDH. According to the PDH, people with
basal ganglia dysfunction will have problems in learning AG tasks. In
addition, dysfunctions of the cerebellum, in particular the dentate
nucleus, will interfere with AG learning. However, Witt, Nühsma, and
Deuschl (2002) have shown that patients with advanced Parkinson’s dis-
ease can accomplish AG learning. This observation provides important counter-evidence to the PDH. However, whereas people with grammar impairment tend to have abnormalities in the basal ganglia and/or cerebellar structures, not all people with abnormalities in these structures have grammar impairments. The particular interconnections
between these structures and parts of the frontal cortex influence the
way neural abnormalities might interfere with grammar development.
Thus Ullman and Pierpont (2005) argued that not all frontal regions
are involved in procedural memory. The most important parts are the
Supplementary Motor Area and in part Broca’s area containing BA 44
and 45. More research is needed to show the critical nerve circuitry
underlying early grammar learning.
Notice also that anomalies of the brain structures underlying the pro-
cedural system also predict phonological problems. Phonological representations of new words, in particular words whose sound structure is hard to memorize, may not be established, or may be learned only with great effort. Thus repeated exposure with guided listening and talking
is often necessary for new word learning. However, frequent words may
be spared.
Language-impaired children have great difficulties in tasks which
require repetition of nonwords. This problem has been taken as a diag-
nostic marker of language impairments (see Chap. 2, Sect. 2.3). Also,
it has been shown that one of the affected members of the KE family
acquired the phonological structures of English only at an extremely
delayed rate (Fee, 1995). Phonological structures are sequential struc-
tures, the learning of which depends on the neural system underlying
the procedural memory. Therefore, phonological difficulties will be cor-
related with problems in the learning of AG.
retrieval. Thus Ullman and Pierpont have argued that both declarative
and procedural systems are involved in vocabulary learning; which of
the systems will be most heavily taxed depends on the methods of assess-
ment (see Sect. 8.2 above). In the other task, the participants were told
to match a complex nonverbal sound with a visual pattern; that is, a task
which was said to depend on the declarative system without any demand
for phonological analysis. In this way, they could compare declarative learn-
ing on verbal and nonverbal paired associate learning tasks.
An errorless learning procedure was followed in both tasks. The child
heard a target word and was told to select the corresponding picture by
clicking on it. After the child’s selection, the robot removed the picture
of the animal, and when the choice was correct it also said the target
word. When incorrect, the robot said nothing and the child was told to
try again until the correct picture was selected. The same procedure was
followed in the other task, with visual patterns and meaningless sounds.
No spoken responses were needed, and the errorless procedure was
adopted to minimize demands on working memory.
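The logic of such an errorless trial can be sketched as follows (a simplified illustration; the function and the simulated learner are hypothetical, not the actual Bishop and Hsu software):

```python
import random

def errorless_trial(target: str, pictures: list, choose) -> int:
    """One errorless-learning trial: the learner keeps choosing pictures
    until the target is selected; feedback (the spoken word) comes only
    with a correct choice, so no error is ever reinforced.
    Returns the number of attempts (1 = first-try success)."""
    attempts = 0
    while True:
        attempts += 1
        if choose(target, pictures) == target:
            return attempts  # the "robot" would now say the target word

# A guessing learner faced with four pictures on the screen.
rng = random.Random(7)
print(errorless_trial("dog", ["dog", "cat", "bird", "fish"],
                      lambda t, opts: rng.choice(opts)))
```

The point of the design is visible in the code: the trial can only ever terminate on a correct selection, so the association that gets rehearsed and rewarded is always the right one.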
In the vocabulary task, the age-matched TD children outperformed
the other groups. Their level of performance at the start was higher,
whereas their rate of improvement was the same as in the other two
groups; only the intercepts of the learning curves differed. In
the nonverbal paired associate task, there were no reliable differences
between the groups. Because the results showed spared declarative
learning by the language-impaired group, they were said to be consis-
tent with the PDH.
Bishop and Hsu gave considerable attention to the intact declarative
system in the cross-modal associate learning task. This finding suggests
that the declarative system may be used more effectively in treat-
ment. Declarative failure may still be found among language-
impaired children as a consequence of grammatical difficulties, but
the relative sparing of declarative abilities may be exploited in attempts
to develop alternative methods of treatment. The balance between the
procedural and declarative systems may tip in favor of the latter,
but this does not mean that language-impaired children have no lexical/
semantic problems.
6. The new term means that language impairments share some charac-
teristics with other neurodevelopmental disorders.
7. Other labels for unexplained language problems generally do not
have a link to evolutionary theory.
8. The consequences of the “lack of agreed terminology” are severe. To
avoid misunderstanding and “doubts of reality” the new term also
needs “marketing” in the field of public health.
9. The new term, PLD, implies that impaired children “should also
undergo an evaluation to identify areas of strength: activities they
may enjoy and have the possibility of succeeding at” (Bishop, 2014,
p. 390).
10. The proposed term, PLD, is the answer.
are treated; the second one makes use of computer games which form a
“family” of methods with mostly nonlinguistic materials.
Semantic coaching. This method is relevant for most children with
language problems, because in many cases they also struggle with social
and emotional problems which accompany their language difficulties.
Therefore, the solution to these problems requires the creation of an edu-
cational setting where the teacher gains the child’s trust while awakening
a curiosity for words. This is of course a task for the devoted teacher or
clinician skilled in special education, and cannot be outlined in detail here.
Its objective is a dialogue about the meaning of words: incite the
child to talk, or to take an active part in dialogues about concepts/events/
objects, while the same words are repeatedly used in different linguis-
tic contexts. The face-to-face dialogic setting is important, but semantic
training may as well be undertaken in small (selected) groups of children.
Different institutions or resource centers have gained practical and clini-
cal experience in organizing this form of treatment; that is, professional
experience that may easily be shared with others.
For children with a low vocabulary, we should also take into consid-
eration Fay et al.’s (2010) research on the evolution of new communi-
cative systems (see Chap. 5, Sect. 5.6.1). These researchers stressed the
importance of interactions in a community setting where communica-
tion between new partners takes place. In consequence, therapists should,
as part of a coaching program, encourage communication between
same-generation members. Semantic coaching by teachers or clinical
workers alone is thus not enough, and may sometimes even be contra-
indicated. In addition to semantic coaching in special schools or clinics,
it is important to provide conditions for interactions with other children.
Has the child attended kindergarten or nursery school, and what has the
quality of interactions been in those institutions? Does the child have
same-age friends, and to what extent has the child attended a peer group
in school? If not, it is essential to change the environmental conditions to
make the most out of language learning in peer groups of other children.
Some language-impaired children may also perform poorly on cogni-
tive, nonlinguistic tasks, and some may have a symptomatology of co-
morbidity with other cognitive and behavioral disorders. In these cases,
282 Language Evolution and Developmental Impairments
tasks), and the WPT. A relatively smaller proportion of the SLI children
showed evidence of learning in the two sequence learning tasks com-
pared to the TD children. In contrast to their previous study (Kemény
& Lukács, 2010), there was an equal proportion of learners in the two
groups on the WP task. (Notably, this task can also be solved by declarative strategies.) The
two sequence learning tasks were not directly comparable to the adaptive
training procedures used in the Conway et al. study; however, I agree that
they may be linked to a domain-general mechanism of learning.
Gabay, Thiessen, and Holt (2015) have also reported impaired statisti-
cal learning by children with developmental dyslexia (DD). These children
performed significantly more poorly than a control group on a statisti-
cal learning task with both linguistic and nonlinguistic stimuli. Gabay
et al. therefore concluded that the reading problems of the DD children
did not arise from a phonological impairment but from a “more general
procedural learning deficit.” Does this mean that dyslexia and develop-
mental language impairment are similar disorders? It may be that PLD,
expressed through different developmental trajectories, gives rise to
superficially different impairments that are nonetheless etiologically the
same disorder.
In summary, I find SSP training to be the most adequate method of
treatment for children (and adults) with PLD. SSP training as defined in
the Conway et al. and Smith et al. studies represents a remarkable improve-
ment in treatment methodology, because it applies to groups which show
superficially different impairments (reading difficulties, delayed language
in hard-of-hearing people). However, much research remains to be done
to define the specific mechanisms involved in SSP; that is, the distinctive
factors for typical versus anomalous development of language.
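To make the idea of adaptive SSP training concrete, here is a minimal sketch of a one-up/one-down sequence-reproduction session (the staircase rule, stimuli, and parameters are illustrative assumptions, not the exact procedures of the Conway et al. or Smith et al. programs):

```python
import random

def adaptive_ssp_session(reproduce, n_trials: int = 10,
                         start_len: int = 2, rng=None) -> list:
    """One-up/one-down adaptive training: present a colored-button
    sequence, ask the learner to reproduce it, then adjust the length.
    Returns the sequence length used on each trial."""
    rng = rng or random.Random()
    buttons = ["red", "green", "blue", "yellow"]
    length, lengths = start_len, []
    for _ in range(n_trials):
        sequence = [rng.choice(buttons) for _ in range(length)]
        lengths.append(length)
        if reproduce(sequence) == sequence:
            length += 1                  # success: make the next trial harder
        else:
            length = max(2, length - 1)  # failure: ease off
    return lengths

# A "perfect" learner climbs steadily from the starting length.
print(adaptive_ssp_session(lambda seq: list(seq), n_trials=5))  # → [2, 3, 4, 5, 6]
```

The staircase keeps each learner near the edge of his or her current span, which is the feature that made the training procedures adaptive rather than fixed in difficulty.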
In view of the research reviewed in the present chapter, I will address
policy-making in the field: institutions which offer remedial work for
children with developmental disorders, in particular children with PLD,
cannot improve practice unless they have experts who engage in clinically
oriented research. These are experts who are familiar with most of the
research works reviewed in this chapter, and who are also involved in the
clinical assessment and treatment of children and adults with develop-
mental disorders. The design and testing of new remedial programs will
have to be done stepwise, in constant interaction between research and
clinical practice.
References
Bahl, M., Plante, E., & Gerken, L. A. (2009). Processing prosodic structure by
adults with language based disability. Journal of Communication Disorders,
42, 313–323.
Bickerton, D. (2003). Symbol and structure: A comprehensive framework for
language evolution. In M. H. Christiansen & S. Kirby (Eds.), Language evo-
lution: The states of the art. Oxford: Oxford University Press.
Bishop, D. V. (2014). Ten questions about terminology for children with unex-
plained language problems. International Journal of Language &
Communication Disorders, 49, 381–415.
Bishop, D. V., & Hsu, H. J. (2015). The declarative system in children with
specific language impairment: A comparison of meaningful and meaningless
auditory-visual paired associate learning. BMC Psychology, 3(1), 3.
doi:10.1186/s40359-015-0062-7.
Christiansen, M. H., Conway, C. M., & Onnis, L. (2011). Similar neural cor-
relates for language and sequential learning: Evidence from event-related
brain potentials. Language & Cognitive Processes, 27, 231–256.
Collins, A. M., & Loftus, E. F. (1975). A spreading activation theory of seman-
tic processing. Psychological Review, 82, 407–428.
Conway, A. R. A., Kane, M. J., Bunting, M. F., Hambrick, D. Z.,
Wilhelm, O., & Engle, R. W. (2005). Working memory span tasks: A
methodological review and user’s guide. Psychonomic Bulletin & Review,
12, 769–786.
Conway, C. M., Gremp, M. A., Walk, A. M., Bauernschmidt, A., & Pisoni,
D. B. (2012). Can we enhance domain-general learning abilities to improve
language function? In P. Rebuschat & J. N. Williams (Eds.), Statistical
learning and language acquisition. Berlin: De Gruyter Mouton.
Evans, J. L., Saffran, J. R., & Robe-Torres, K. (2009). Statistical learning in
children with specific language impairment. Journal of Speech, Language, and
Hearing Research, 52, 321–335.
Fay, N., Garrod, S., Roberts, L., & Swoboda, N. (2010). The interactive evolu-
tion of human communication systems. Cognitive Science, 34, 351–386.
Fee, E. J. (1995). The phonological system of a specifically language-impaired
population. Clinical Linguistics and Phonetics, 9, 189–209.
Gabay, Y., Thiessen, E. D., & Holt, L. (2015). Impaired statistical learning in
developmental dyslexia. Journal of Speech, Language, and Hearing Research,
58, 934–945.
Gabriel, A., Maillart, C., Guillaume, M., Stefaniak, N., & Meulemans, T.
(2011). Exploration of serial structure procedural learning in children with
language impairment. Journal of the International Neuropsychological Society,
17, 336–343.
Gathercole, S. E., & Baddeley, A. D. (1990). Phonological memory deficits in
language disordered children: Is there a causal connection? Journal of Memory
and Language, 29, 336–360.
Gomez, R. L. (2002). Variability and detection of invariant structure.
Psychological Science, 13, 431–436.
Grunow, H., Spaulding, T. J., Gómez, R. L., & Plante, E. (2006). The effects of
variation on learning word order rules by adults with and without language-
based learning disabilities. Journal of Communication Disorders, 39,
158–170.
Hedenius, M., Persson, J., Tremblay, A., Adi-Japha, E., Veríssimo, J., Dye,
C. D., et al. (2011). Grammar predicts procedural learning and consolida-
tion deficits in children with specific language impairment. Research in
Developmental Disabilities, 32, 2362–2375.
Hsu, H. J., & Bishop, D. V. (2011). Grammatical difficulties in children with
specific language impairment: Is learning deficient? Human Development, 54,
264–277.
Hsu, H. J., Tomblin, J. B., & Christiansen, M. H. (2008). The effect of vari-
ability in learning nonadjacent dependencies in typically-developing indi-
viduals and individuals with language impairments. In A. Owen (Chair)
(Ed.). The role of input variability on language acquisition and use; Symposium
presented at the XI International Congress for the Study of Child Language
(IASCL); Edinburgh.
Kemény, F., & Lukács, Á. (2010). Impaired procedural learning in language
impairment: Results from probabilistic categorization. Journal of Clinical and
Experimental Neuropsychology, 32, 249–258.
Knowlton, B. J., Squire, L. R., & Gluck, M. A. (1994). Probabilistic category
learning in amnesia. Learning & Memory, 1, 106–120.
Kronenberger, W. G., Pisoni, D. B., Henning, S. C., Colson, B. G., & Hazzard,
L. M. (2011). Working memory training for children with cochlear implants:
A pilot study. Journal of Speech, Language, and Hearing Research, 54,
1182–1196.
Lukács, A., & Kemény, F. (2014). Domain-general sequence learning deficit in
specific language impairment. Neuropsychology, 28, 472–483.
Lum, J. A., Conti-Ramsden, G., Morgan, A. T., & Ullman, M. T. (2014).
Procedural learning deficits in specific language impairment (SLI): A meta-
analysis of serial reaction time task performance. Cortex, 51, 1–10.
Lum, J. A., Gelgic, C., & Conti-Ramsden, G. (2010). Procedural and declara-
tive memory in children with and without specific language impairment.
International Journal of Language and Communication Disorders, 45, 96–107.
Perruchet, P., & Pacton, S. (2006). Implicit learning and statistical learning: One
phenomenon, two approaches. Trends in Cognitive Sciences, 10, 233–238.
Petersson, K. M., Folia, V., & Hagoort, P. (2010). What artificial grammar learn-
ing reveals about the neurobiology of syntax. Brain & Language. doi:10.1016/j.
bandl.2010.08.003.
Plante, E., Bahl, M., Vance, R., & Gerken, L. A. (2010). Children with specific
language impairment show rapid implicit learning of stress assignment rules.
Journal of Communication Disorders, 43, 397–406.
Plante, E., Gomez, R., & Gerken, L. (2002). Sensitivity to word order cues by
normal and language/learning disabled adults. Journal of Communication
Disorders, 35, 453–462.
Saffran, J. R. (2001). The use of predictive dependencies in language learning.
Journal of Memory and Language, 44, 483–515.
Saffran, J., Hauser, M., Seibel, R., Kapfhamer, J., Tsao, F., & Cushman, F.
(2008). Grammatical pattern learning by human infants and cotton-top tam-
arin monkeys. Cognition, 107, 479–500.
Shohamy, D., Myers, C. E., Onlaor, S., & Gluck, M. A. (2004). Role of the basal
ganglia in category learning: How do patients with Parkinson’s disease learn?
Behavioral Neuroscience, 118, 676–686.
Smith, G. N. L., Conway, C. M., Bauernschmidt, A., & Pisoni, D. B. (2015).
Can we improve structured sequence processing? Exploring the direct and
indirect effects of computerized training using a mediational model. PLoS
One, 10, e0127148. doi:10.1371/journal.pone.0127148.
Squire, L. R., Knowlton, B., & Musen, G. (1993). The structure and organiza-
tion of memory. Annual Review of Psychology, 44, 453–495.
Tallal, P., Stark, R., & Mellits, E. (1985). Identification of language-impaired
children on the basis of rapid perception and production skills. Brain and
Language, 25, 314–322.
Tomblin, J. B., Mainela-Arnold, E., & Zhang, X. (2007). Procedural learning in
adolescents with and without specific language impairment. Language
Learning and Development, 3, 269–293.
Ullman, M. T., & Pierpont, E. I. (2005). Specific language impairment is not
specific to language: The procedural deficit hypothesis. Cortex, 41,
399–433.
von Koss Torkildsen, J., Dailey, N. S., Aguilar, J. M., Gómez, R., & Plante, E.
(2013). Exemplar variability facilitates rapid learning of an otherwise unlearn-
able grammar by individuals with language-based learning disability. Journal
of Speech, Language, and Hearing Research, 56, 618–629.
Witt, K., Nühsman, A., & Deuschl, G. (2002). Intact artificial grammar learning
in patients with cerebellar degeneration and advanced Parkinson’s disease.
Neuropsychologia, 40, 1534–1540.
Index

A
Ackermann, H., 64
affective resonance, 85
Aguilar, J.M., 267
Alfonso-Reese, L.A., 172
Alibali, M.W., 95
alphabets, 197, 199
Alternating Serial Reaction Time (ASRT) task, 263
Alvarez, P., 102–3
American Sign Language (ASL), 5, 12, 186, 240
a-modal language rhythm, 241
anarthria, 59
Anderson, J.R., 133
aphasia, 49, 120, 201, 205, 217, 242, 243
Arbib, M.A., 20, 21, 115–7, 182, 212, 248
Ardila, A., 108, 119, 204, 205
Armstrong, D.F., 116
articulatory buffer, 58
artificial grammar learning (AGL), 83, 97, 259, 266–8
artificial language, 146–9
Ashby, F.G., 172
Askelof, S., 205
Asperger syndrome, 71, 220–1
asymmetric relationship, 143
Attention Deficit Hyperactivity Disorder (ADHD), 52
auditory cortex, 244, 245
Auditory Repetition Test (ART), 58
Augustine, Confessions, 149
autism spectrum disorder (ASD), 52, 68–71

B
babbling, 42, 50, 144–6, 236–8
Baby signs, 239

H
Hage, S.R., 64
Hagoort, P., 22, 107, 116, 176
Halle, M., 6
Hamzei, F., 114
Hauser, M.D., 4, 86–7, 88
Hawaiian pidgins, 35
Haynes, O.M., 139
Hazzard, L.M., 282
Headturn Preference Procedure, 96
Hedenius, M., 263, 264
Henderson, L., 200
Henning, S.C., 282
Herman, R., 54
Hickok, G., 120, 121
hippocampus, 64, 102, 103, 133, 177–8

I
ideographics, 197
ideographs, 197, 213
if-then rules, 133, 216
impaired procedural learning, 100
infant-caregiver interactions, 50, 139–41
information–integration (II) tasks, 173
Ingvar, M., 205, 207
instinct to learn, 13, 19, 66, 67, 141, 247
intention, 69, 70, 141–4, 160–1
interactional synchrony, 85, 141
interaction theory, 69
interactive alignment, 67, 141, 153–5

R
reading disability (RD), 58, 61
Recalling Sentences and Sentence Structure, 264
reflexivity, 41, 163–5, 187, 193, 217
Reilly, S., 50, 56
Reis, A., 205, 207
Rendall, D., 85
Ribeiro, S., 87–8
Ritchie, G.R., 141
Rizzolatti, G., 20, 115–7, 154
Roberts, L., 41, 184
Robe-Torres, K., 269
Rogers, T.T., 169
Ruhlen, M., 80
rule-based (RB) category, 34, 106, 172–3
Rumbaugh, E., 84, 86, 230
Russenorsk, 35
Rydberg, E., 251
Ryle, G., 31, 101

S
Saffran, Jenny, 19, 28, 41, 42, 81, 94, 95, 97, 98, 266, 269, 276
Sally–Anne Test, 69, 70
saltations, 13
Samson, D., 180, 181
Sasanuma, S., 201
Saunders, 93, 145
Savage-Rumbaugh, E.S., 84–6, 230
Schlesewsky, M., 110
Schmandt-Besserat, D., 196
Schwarz, R.G., 61
Scott-Phillips, T.C., 141–3, 160
Scoville, W.B., 102
Searle, John, 132
Selten, R., 147–9
semantic coaching, 281–2
semantics, 5, 7–8, 160, 181–2, 258
Senghas, A., 183, 185
Sergio, L.E., 237
Serial Reaction Time (SRT) task, 262–4
sesquipedalian, 164
Shanker, S.G., 85
Shohamy, D., 265
sign–sign relationships, 83, 233
Silberberg, N., 215
simulation theory, 69
Singer, W., 182
Siok, W.T., 201
small talk, 31, 33, 138, 143, 155–6
Smith, K., 173, 174, 286, 288
social disengagement, 279–80
social mobility, 32
spandrels, 14
Spaulding, T.J., 266
special-purpose instrument, 36
specific language impairment (SLI), 40, 42, 50–6, 57–61, 99–100
Squire, L.R., 35, 101–3, 107, 259, 264
Stark, R., 273
statistical/artificial grammar learning, 124–5
Stefaniak, N., 263
Stroop effect, 198
structural impairment, 258
structural sequence processing (SSP), 43, 285–8
Subject-Verb-Object, 92, 110, 111
subsong, 66
supplementary motor area (SMA), 105, 270
Suwalsky, T.J., 139