
Language Evolution and Developmental Impairments

Arild Lian
University of Oslo
Oslo, Norway

ISBN 978-1-137-58745-9
ISBN 978-1-137-58746-6 (eBook)
DOI 10.1057/978-1-137-58746-6

Library of Congress Control Number: 2016944704

© The Editor(s) (if applicable) and The Author(s) 2016


The author(s) has/have asserted their right(s) to be identified as the author(s) of this work in accordance
with the Copyright, Designs and Patents Act 1988.
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and trans-
mission or information storage and retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, express or implied, with respect to the material contained herein or for any
errors or omissions that may have been made.

Printed on acid-free paper

This Palgrave Macmillan imprint is published by Springer Nature


The registered company is Macmillan Publishers Ltd. London

Acknowledgments

This book is the result of notes I have taken and discussions I have had
with colleagues and friends after my retirement from the University
of Oslo, and during the years that I worked as a volunteer at Bredtvet
Resource Center (the Norwegian national resource center for special edu-
cation, located in Oslo). By interacting closely with special education
psychologists and therapists in this institution, I gained first-hand experi-
ence of children with developmental language impairments and those
with special educational needs. Among the many psychologists I worked
with at this center, Ernst Ottem is responsible for part of my education
in the field of speech and language disorders, and also became a source
of inspiration for the present work. I express my sincere gratitude for his
role in my professional development in recent years!
I also express my thanks to Dr. Arnold Glass at Rutgers University,
New Jersey, USA, who read and returned very instructive comments on
a previous version of the manuscript. I also thank Dr. Glass for inspir-
ing cooperative research that served to strengthen my general aca-
demic development.
In a different area of my work, I received general advice and important
assistance in formatting and editing my files from Bernt Andersen, Chief
Sales and Marketing Officer at RiksTV AS. I am deeply grateful to Bernt
for his efforts, without which this work would not have been completed in
its current form.

Last, but not least, I thank my wife, Jorunn Schwencke, who sup-
ported me from the beginning to the end of my project. I appreciate her
considerate way of protecting my work; without her assistance, this book
might not have been written.

Arild Lian
Drammen, Norway
December 4, 2015

Contents

1 Introduction
2 Developmental Language Impairment: Conceptual Issues and Prospects of an Evolutionary Approach
3 The Problem of Continuity in Time and Across Domains
4 Dialogues as Procedural Skills
5 Evolving Meaning in Language
6 Literacy and Language
7 The Modality-Independent Capacity of Language: A Milestone of Evolution
8 Developmental Language Impairment: Perspectives of Etiology and Treatment
Index

List of Figures

Fig. 3.1 Organization of long-term memory
Fig. 3.2 Second formant transitions (F2) of the /d/ phoneme followed by different vowel sounds. Reproduced with permission from J. Acoust. Soc. Amer. 27, 769 (1955). Copyright 1955, AIP Publishing LLC
Fig. 4.1 Marmoset monkeys (Callithrix jacchus) are small animals about 40 cm in length, weighing about 350 grams, that live up to 16 years. They have relatively small brains, but are closely related to humans in terms of structure, behavior and physiology. They are endemic to the Atlantic forest of north-eastern Brazil, live in extended family groups and share with humans a cooperative breeding strategy. Their temporal coordination of vocal responses resembles vocal interactions in human linguistic dialogues. By permission of Inbound TeleSales. iStockphoto.com.

1 Introduction

The present work addresses the ability to acquire and make use of a
language, an ability which is demonstrated by children throughout the
world. The acquisition of language shows that children are endowed with
a cognitive apparatus which is necessary for linguistic communication,
and thereby for the sustenance of the human species. Language is gener-
ally learned without noticeable effort and without formal instruction.
However, there are children who do not acquire language this easily and
who remain hampered by impaired language well into adulthood. In
Chaps. 2 and 8, I will discuss the diagnostic criteria, etiology and treat-
ment of the language impairment of this group of children. In agreement
with commonly used terminology, I shall exclude cases of recorded brain
pathology, and instead refer to this disorder as developmental language
impairment, in contrast to acquired language impairment or aphasia due
to neural damage or brain disease.
The research literature recently published on developmental lan-
guage impairments is considerable, and much of it will be reviewed in
Chaps. 2 and 8. The other chapters will deal with aspects of language
evolution which I think are relevant for a reevaluation of developmental
language impairments. Many theories of the evolution of language do
have implications for the way we deal with such impairments; however,
these implications are rarely stated explicitly. At the same time, theories
on developmental language impairment generally lack an evolutionary
frame of reference.
The study of language evolution—how humans came to speak, use
signs and write—has engaged researchers in a wide range of research
fields, from cognitive neuroscience, linguistics and evolutionary
anthropology to psychology and socio-linguistics. They all address,
directly or indirectly, the problem of whether language emerged as
a wholesale innovation, which made language unique in the natural
world, or whether language evolved continuously as a reconfigura-
tion of cognitive capacities that were present in the pre-human homi-
nids. Also, cognitive capacities that evolved later in humans may have
become integrated with evolutionary early capacities to form language
in the modern era. The present work, which presents a new perspective
on developmental language impairments, also addresses the different
fields of expertise on language evolution and makes an attempt to inte-
grate some influential research and discussion within these fields. In
addition to the prehistory of language, speech and communication, I
will also discuss language evolution in historical time since the inven-
tion of writing.
The literature reviews and discussions presented in this work were all
selected and undertaken to provide a reevaluation of research on develop-
mental language impairments, and in the long run to improve diagnostics
and remedial treatment of such impairments. Let me therefore explain
why a reconsideration of theories of evolution will also serve research on
language impairments: If language did not evolve as a complete inno-
vation (a position that lacks support from most researchers today), but
rather as a continuous establishment of different linguistic capacities that
are ultimately reconfigured to serve the use of language in contemporary
societies, we will deal with evolutionary stages of linguistic competence,
which are linked to different aspects of language and which may be selec-
tively impaired in children. Some of these capacities evolved early, others
belong to a later or recent epoch in the history of mankind. The selective
impairments of capacities may correspond to different subgroups of
language-impaired children, necessitating differentiated diagnostics and
remedial treatment.
The prevalence of developmental language impairments in mod-
ern societies is considerable, yet disagreements exist about diagnoses
as well as remedial treatment. I argue in this work that further prog-
ress of research on developmental language impairments can only be
achieved by making use of new insights about the evolution of lan-
guage, and therefore I aim to cover the combined field of evolution
and development of language. I have written this book hoping
to improve and extend the theoretical basis for clinical work with
language-impaired children.
In Chap. 2, I will explain why the major issues and controversies in
the literature on language impairments may be, to a great extent, resolved
when treated using the perspective of language evolution. The following
chapters will deal with issues in theories of language evolution which
have great relevance for an understanding of developmental language
impairments. I will review and discuss a number of research works within
a cognitive and neurobiological framework, and in Chap. 6 I will also dis-
cuss the growth of literacy since the invention of writing, which I think
is also relevant for a renewed interpretation of developmental language
impairment (see Sect.  1.5 below). Finally, in Chap. 8, I will summarize
the implications of an evolutionary approach to the study of develop-
mental language impairments, and—in agreement with the evolutionary
perspective—I will survey important new methods of diagnostics and
treatment.
Although the main aim of this book is an improved understand-
ing of developmental language impairment, many of the following
chapters will deal with general issues in studies of language evolu-
tion. The book does not aspire to be a comprehensive and up-to-
date treatment of language evolution (for an expert introduction, see
W. Tecumseh Fitch, The Evolution of Language, 2010); however, the
following chapters need preparatory notes on (1) the concept of lan-
guage and its subsystems, and (2) the conceptual framework of evo-
lutionary biology.

1.1 Language and Its Subsystems


The Concept of Language Many fields of research are involved in studies
of language and language evolution. Thus it may be difficult to agree on
a single definition of “language.” The multicomponent approach taken
by Fitch treats language as a “complex system
made up of several independent subsystems.” Each of these subsystems
has different functions, as demonstrated by the effects of brain lesions
and the different maturation rates of language skills. However, no general
agreement exists as to what, precisely, the subsystems of language are,
and how they are organized into one complex system. Some subsystems
are shared with other animals, others are not. Some are shared with other
cognitive domains such as vision, procedural memory and music. To cast
the net widely, Hauser, Chomsky, and Fitch (2002) introduced the term
“Faculty of Language in a Broad sense” (FLB), which forestalls any pre-
conceptions as to whether or not some likely candidates of communica-
tive mechanisms are actually part of language.

In linguistics and neurolinguistics, however, researchers have argued for
more specific mechanisms that are both special to language and unique to
humans. The definition of the “Faculty of Language in a Narrow sense” (FLN)
presupposes an identification of such mechanisms or subsystems. Therefore,
this type of definition is important, because, as indicated in the heading to
this section, we shall shortly deal with the subcomponents of language. At the
same time, it also raises a number of problems that will be explained below.
I consider the faculty of language to be an ability which evolved with
humans, and this ability means that children are generally capable of learn-
ing and practicing the language of their caregivers. However, the ability
is an abstraction from the specific expressions of language use. Therefore,
language may also be considered as a learning potential that is present in
the infant even when sensory and motor mechanisms are impaired. Thus,
although we may consider the auditory–vocal channel to be the default
mode of linguistic communication, other channels of linguistic signaling
and other equipotential means of articulation are generally available. In gen-
eral, deaf and deaf/blind children have a potential for language that is real-
ized on the premise of an adequate linguistic exposure in the environment
(e.g., Helen Keller’s case). Therefore, the definition of the subsystems must
not be modality-biased; rather, each of the subsystems will, in principle,
apply across the sensory and motor modalities. (See Chap. 7 on language as
a modality-independent capacity.)
Hockett (1960) suggested a list of “design features,” also called “language
universals,” wherein the features numbered 1–5 referred to characteristics
of speech, i.e., use of the auditory–vocal channel. Later, contemporary
researchers generally agreed that sign languages such as the American Sign
Language (ASL), are well-structured languages on par with any spoken
language; hence, the first five features in Hockett’s list were no longer
considered to be language universals. Feature 8 in Hockett’s list says that
meaning is arbitrarily related to the expressive form of signals (semantic-
ity). An object may be labeled by signals of any modality, and therefore
this feature may be said to invalidate the first five features, which involved speech
only. In other words, a linguistic signal could be expressed in any modal-
ity, downplaying any role of iconicity. Bickerton (2014) described seman-
ticity as displacement (the ability to talk about things which are not present
here and now), rather than arbitrariness, thereby merging two different
terms in Hockett’s list. In this way, the concept of semanticity/ displace-
ment also provided a link to mental time travels. Productivity/openness,
the concept that an infinite number of sentences can be produced and
understood, and duality of patterning, the concept that meaningless units
can be combined to form meaningful utterances, were also emphasized as
unique characteristics of human language. I shall have more to say about
these features in other sections of the book.

The Subsystems of Language I will make use of a general linguistic
classification of the main subsystems of language, while emphasizing
that each of them is an aspect of a capacity abstracted from the modular
expressions of particular linguistic responses. The classification presented
here may be deemed a superficial one by linguistic researchers, and it may
lack necessary descriptions of the interrelatedness of the described cat-
egories. However, it serves a preliminary and necessary reference for later
discussions; hence, the following categories/subsystems will be addressed:

Signals In contemporary linguistics, signals are generally considered parts
of the phonological system, which means that at one level they are con-
sidered as meaningless segments that can be combined into larger mean-
ingful strings (words), and these larger strings can also be combined into
potentially meaningful utterances (see duality of patterning in Hockett’s list
of design features). This definition of signals, as parts of the phonological
system, also means that they are treated as units in the perception of speech
and can be further analyzed in terms of acoustic and phonetic subunits
(phones). I find this link to phonology and phonetics unsatisfactory because
other stimulus characteristics besides the vocal articulatory features may be
included in the definition of signals in language (see Sect. 1.4.2 below).

Phonology According to Fitch, phonology deals with “generative gram-
mar level one,” that is, the first level of description of the structure of lan-
guage. The generative character of phonology is expressed in the principle
of duality of patterning (see above). Thus, basic units such as the phoneme
are, by themselves, meaningless, but are defined by the way they signal
distinctions of meaning. The phoneme may also be defined by a set of dis-
tinctive features such as voicing, nasality, manner and place of articulation.
Jakobson and Halle (1971) defined a set of 12 articulatorily defined fea-
tures, most of which have survived in contemporary phonological theories.
Meaningful units, such as morphemes and words, can be generated accord-
ing to phonological rules, which are specific for each language. These rules
also permit us to construct pseudo-words (nonwords), which may become
words when systematically used to label new objects and actions. There
are, however, phonotactic constraints that define both the possible
and the impossible combinations of phonemes in each language.
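
To make the notion of a phonotactic constraint concrete, consider the
following minimal sketch in Python. It is illustrative only: the onset
inventory is a hypothetical toy fragment of English, not a complete
description of English phonotactics.

# Toy phonotactic check: is a syllable onset legal in a fragment of English?
# The onset inventory below is a deliberately simplified, hypothetical list.
LEGAL_ONSETS = {"b", "bl", "br", "s", "st", "str", "k", "kl", "kr"}

def has_legal_onset(word: str) -> bool:
    """Return True if the word begins with a listed onset cluster."""
    vowels = "aeiou"
    onset = ""
    for ch in word:              # the onset is everything before the first vowel
        if ch in vowels:
            break
        onset += ch
    return onset in LEGAL_ONSETS

print(has_legal_onset("blick"))  # True: "bl" is a possible English onset
print(has_legal_onset("bnick"))  # False: "bn" violates English phonotactics

Pseudo-words such as “blick” and “bnick” are the standard illustration
that speakers command such constraints even for words they have never heard.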

Syntax This is the next level at which we can describe the structure of
a language. In general, syntax is said to deal with the combination of
words into sentences; however, the lower levels of syntactic structure are
made up of morphemes, both bound and free-standing. The more gen-
eral term “grammar” includes both morphology and syntax. Thus, some-
times a distinction is made between grammar and syntax. Morphology
deals with the internal economy of words, whereas syntax deals with the
external economy of words (linguistics.stackexchange.com). Moreover,
in syntax the operating units are phrases; for instance, a noun phrase like
“the old man” can be combined with the verb phrase “grew a beard” to
create a sentence. Specific rules apply for the combination of phrases into
sentences, and the meaning of a sentence is a complex function of these
structures. Phrases can be embedded within other phrases; thus, a noun
phrase can be embedded within another noun phrase, and structures can
be recursively generated. A complete sentence therefore forms a hierar-
chical structure of syntactic units.
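
The generative and recursive character of phrase structure can be
illustrated with a toy grammar. The following Python sketch is purely
illustrative; the rules and vocabulary are invented and far simpler than
any real grammar.

import random

# A toy context-free grammar: an NP may contain a PP, and a PP contains
# another NP, so hierarchical structures of arbitrary depth can arise.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "PP"]],
    "PP": [["near", "NP"]],                     # recursion: PP embeds an NP
    "VP": [["grew", "a", "beard"], ["slept"]],
    "N":  [["man"], ["house"], ["river"]],
}

def generate(symbol="S"):
    """Expand a symbol by a randomly chosen rule; terminals pass through."""
    if symbol not in GRAMMAR:
        return [symbol]
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(generate(part))
    return words

print(" ".join(generate()))
# e.g., "the man near the house near the river grew a beard"

Because “NP” can reappear inside its own expansion, this finite set of
rules yields an unbounded set of sentences, which is exactly the point of
productivity and recursive embedding.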

Although meaning is a complex function of the generated structures,
the relation between syntax and semantics can be debated. According to
Chomsky, we may easily generate sentences that are meaningless and yet
syntactically correct (e.g., colorless green ideas sleep furiously). Thus,
we may ask how it is possible to distinguish among (a) sentences that
are meaningless but grammatically correct, (b) sentences that are mean-
ingless and grammatically incorrect, (c) sentences that are meaningful
and grammatically incorrect, and (d) sentences that are meaningful and
grammatically correct. Typically developing children will be able to
distinguish among the four types of sentences; perhaps these four types
of sentences can therefore be used as a screening test for language
impairment.

Semantics This is the study of meaning in language; that is, a field which
is shared between linguistics and philosophy. The problem of the
“meaning” of meaning was raised by Grice (1957), who distinguished
between “natural” and “nonnatural” relationships between signs and
objects. Later, Lyons (1977) suggested that “meaning” in semantics
be used as “the meaning of lexemes” (vocabulary words). I shall return to
his discussion of the term in Chap. 5, but for the moment I shall assume
a linguistic frame of reference and talk about the meaning of words, phrases
or sentences. However, in formal semantics, the study of meaning in lan-
guage has revolved around the truth value of propositions, where propo-
sitions are functions that map possible worlds to truth values, for example,
“Boko Haram abducted 120 girls from the city of Maradi.” The conditions
under which this proposition is true or false express meaning in a differ-
ent way than the way we can talk about the meaning of artistic performances,
say music. Thus meaning in language is propositional, whereas meaning in
music is considered to be abstract, relational and emotional.
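
The formal idea that a proposition is a function from possible worlds to
truth values can be made concrete in a few lines of Python (a minimal
sketch with invented toy “worlds”; formal semantics of course uses model
theory, not program code).

# A possible world is modeled as a set of facts; a proposition is a
# function from worlds to truth values. Both worlds are invented toys.
world_1 = {"it_rains", "girls_abducted"}
world_2 = {"sun_shines"}

def girls_were_abducted(world):
    """True exactly in those worlds where the abduction fact holds."""
    return "girls_abducted" in world

print(girls_were_abducted(world_1))  # True
print(girls_were_abducted(world_2))  # False

On this view, the meaning of the sentence just is the set of worlds in
which it comes out true, which is what distinguishes propositional
meaning from the non-propositional meaning of, say, music.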

A long controversy relates to the meaning of concepts and sentences. Do
they represent situations or objects directly, the “realist” position? Or do
they represent objects indirectly via the operation of the human mind, the
“cognitive” position? The issue has occupied linguists, psychologists and
philosophers equally and still remains an unsettled controversy. So-called
truth-conditional semantics will have great problems when propositions
include words with imaginary referents. A realist position will meet with
similar problems, whereas a cognitive position means that a “unicorn” is
a concept in the human mind. Thus, concepts exist pre-linguistically and
serve as models for the examination of the external world. I will not deal
any further with the controversy between a realist and a cognitive position,
which I think is of little consequence for the main task and goal of
the present book. I do, however, take a “cognitive” position on the study of
meaning in language, and I will go into more details about this position in
Sect. 1.5 below and in Chap. 5 on the evolution of meaning in language.

Pragmatics Like semantics, this subcomponent also deals with problems
of meaning in language. However, semantics deals with the meaning of
words and propositions, whereas pragmatics deals with the meanings
intended by speakers. I assume that pragmatic skills are highly dependent
on meta-cognitive and meta-linguistic skills, which are associated with
the acquisition of literacy. The evolution of intentional systems (Dennett,
1983) is said to be a prerequisite for “Theory of Mind” in human sub-
jects, another aspect of pragmatics that will be discussed in Chap. 2, Sect.
2.6 and later in Chap. 6 on Literacy and Language.

In modern languages, the subcomponents mentioned above are
equally developed and functionally interwoven. Competence in the
use of any language requires attention and responsiveness to linguistic
signals, comprehension and use of grammatical structures, and the
conception of meaning in communicative messages. It is difficult to
conceive of a scenario in which one of these “departments” of linguistic
competence has dominated in early languages relative to the others.
Yet their functional significance may have varied in different epochs
of evolution, and some researchers have argued that symbolic refer-
ence/semantics have had priority relative to grammatical competence
(Bickerton, 2003), whereas others (Ullman, 2004) have reversed the
sequence by arguing for the priority of grammar. I will have more to say
about this problem in Chap. 3.
In retrospect, I find it very difficult to define a language domain. The
problem is whether we can define such a domain as independent of, or
without overlap with, a “domain of thinking.” Is the use of metaphors a way
of thinking or a characteristic of language? Do children with Asperger
syndrome, who generally fail to understand metaphors, have language
impairment or a disordered way of thinking? It is difficult to answer these
questions decisively, and therefore it is also difficult to settle with a final
definition of “language.” In view of the complex relationship between lan-
guage and thought, I have extended the research literature to be reviewed,
and thereby the subject matter of this work, to cover some trends in
socio-cultural evolution. Thus, you will find in this book some discus-
sions of preliterate languages and oral culture, while I ask whether these
languages form a late but important stage in the evolution of modern
languages. However, the main issues belong to a cognitive and neurobio-
logical research framework, and when discussing research within these
fields, we shall also find “grey zones,” with great overlap between what is
customarily called a language domain and what belongs to cognitive and
neurobiological domains.

The Three S’s of Language Fitch (2010) summarized his description of
the subcomponents of language by three S’s: signal, structure and seman-
tics. First, there is a large vocabulary of learned signals used in commu-
nication where signalers and perceivers can switch roles. Secondly, there
is a well-established structure mediated by the sequencing and duality of
patterning in phonology and the hierarchical phrase structure of syntax.
Finally, there is the semantics of meaning, which was said to include both
formal semantics and pragmatics (Fitch, 2010).

In this book, I will make use of Fitch’s three S’s as a referential frame-
work, both when dealing with general issues of evolution and when
discussing an evolutionary approach to developmental language impair-
ment. The three S’s can be read as a line of development in the way that
syntax presupposes learning of signals and the structure of phonology,
and that semantics presupposes the learning of structure. However, this
sequence is debatable, and I will add that semantics is also dependent
on the growth of literacy. Finally, I will argue that the first two S’s are both
dependent on the learning of statistical structure; thus, signals and phrase
structure both involve statistical structures (see Chap. 3, Sect. 3.2).

1.2 Developmental Impairments and the Subsystems of Language

Developmental language impairments, in contrast to impairments
caused by brain injuries or disease, arise in development, and may affect
all subsystems of language. They may also show a primary deficit in
one of the subsystems; thus, phonological problems may dominate the
clinical description for some language-impaired children. The dominant
problems for other children may belong to the semantic or the pragmatic
subsystem; therefore, we may ask whether a linguistic classification will
be adequate for clinical work with language-impaired children. I believe
not, because a linguistic system will at best be a descriptive one and may
relate to the surface aspects of language impairments, whereas etiological
factors remain unknown.
The three S’s in the language component analysis mentioned above rest
on a linguistic classification; however, they may serve as a frame of refer-
ence in a preliminary description of developmental language impairments.
This does not involve a linguistic approach to language impairments
because the conception of structure advocated in this book is also based on
cognitive neuropsychology, not primarily linguistics. On a general level,
we may ask whether there are developmental language impairments that
are language-specific with a particular deficit in one of the subsystems,
or whether most of these impairments also affect functions in a non-
language domain. The interdependence between the subsystems means
that the clinical picture of the language-impaired child is a complex one.
1 Introduction 11

Interactions between the three S’s will be discussed in several parts of the
book. I will return to developmental language impairment in Chap. 2,
where I will discuss several major conceptual issues and the implications of
taking an evolutionary approach. The other chapters of the present work
are briefly described in the outlines below.
In the following two sections (1.3 and 1.4), I will present a theoreti-
cally oriented description of evolutionary biology, and give an introduc-
tory presentation of contemporary research that has had a major impact
on theories of language evolution.

1.3 Theoretical Approaches in Evolutionary Biology

This section will not provide a comprehensive description of evolution-
ary biology, but will be restricted to the approaches most relevant for the
evolution of language. Therefore, I will stress the distinction between historical
linguistics and the study of language evolution, and I will present the
general conceptual framework of evolutionary biology. Finally, I shall
present two theoretical positions that have had a major impact on con-
temporary research.

1.3.1 Language Evolution and Language Change

It should be stressed that the evolution of language is primarily an expression
of biological evolution, while also involving language change. Biological
evolution and its relationship to language change have been vigorously
debated since the dawn of evolutionary theory. Language change has been
conceptualized by way of a family-tree model, for example, by describ-
ing the Indo-European language family. In historical linguistics, protolan-
guages have been dated back to about 6000 years ago, but archeologists
and comparative biologists believe that humans may have developed a
language capacity at least 100,000 years ago, and therefore language evo-
lution covers a time scale beyond the scope of historical linguistics.
This book discusses the evolution of language as a human cognitive
capacity, whereas problems of language change and the historical rela-
tionships between particular languages is of minor concern. Language as
an expression of biological evolution shows itself most clearly when we
compare the way humans acquire their mother tongue with the results
of experiments undertaken to teach chimpanzees the use of language.
In contrast to normal human infants, no nonhuman primate has been
shown to spontaneously produce a word of any local language. However,
some communicative competence has been demonstrated by chimpan-
zees when using plastic chips or a system like ASL. Moreover, linguistic
vocalizations are observed in more evolutionarily distant animals, such as
parrots and harbor seals, which are capable of producing some words and
word-like phrases. None of these species have been capable of taking part
in anything but very boring conversations (see Chap. 3, Sect. 3.2).
The contrasts between animals and humans with respect to their capac-
ities for learning language formed the starting point in Fitch’s monumen-
tal work, The Evolution of Language. Fitch argued that apes do not fail to
acquire a language because of a lack of intelligence or a lack of ability to
use tools and to solve problems.

“Any normal child will learn language(s), based on rather sparse data in the
surrounding world, while even the brightest chimpanzee, exposed to the
same environment, will not. Why not? What are the specific cognitive
mechanisms that are present in the human child and not in the chimpan-
zee? What are their neural and genetic bases? How are they related to simi-
lar mechanisms in other species? How, and why, did they evolve in our
species and not in others?” (p. 15).

Researchers have disagreed on whether language is the result of slowly
acting forces of natural selection, or whether it appeared as a discontinu-
ity caused by stochastic mutations. Chomsky (1980) argued that lan-
guage is the result of a specialized organ, found in humans only, which
is endowed with an innate mental grammar capable of combinatorial
manipulations of symbols. Out of a finite set of means, this organ is
capable of producing an infinite set of sentences. The innateness of gram-
mar and the “language instinct” advocated by Pinker (1994) may be
contrasted with the position that language developed incrementally,
through learning and adaptation, in the history of mankind. This
dichotomy, however, is generally considered an oversimplification, and rather than talk-
ing about the innateness of language, many researchers will argue for an
“instinct to learn.” Just as birds have an instinct to learn the song of their
conspecifics, humans have an instinct to learn the language of their care-
takers. The “instinct-to-learn” position can more clearly be formulated
as constraints on language learning, as exemplified in the works of Jenny
Saffran et al. (2002, 2003, 2008), which will be reviewed and discussed
below in Sect. 1.3.3, and in Chap. 3, Sect. 3.2.2.

1.3.2 The Conceptual Framework

The factors underlying natural selection (variance, inheritance and differ-
ential survival) in early human communities permit the conception of
language as an adaptation, attained in small steps over tens of thousands
of years. Fitch (2012) argued that there can be no doubt that language as a
whole is beneficial to man and can therefore be treated as an adaptation; yet there
are aspects, such as phonological restrictions on syllabic structure, which may
not be characterized in this way. For those aspects which may be charac-
terized as adaptations, for example (artificial) grammar, word segregation
and turn taking, specific constraints of learning apply. (I will discuss the
acquisition of these aspects in Chaps. 3 and 4.) The differences in
the adaptability of traits argue for a multicomponent approach to language.
There are limits to adaptation and natural selection, which mean that
we have to deal with discontinuities or sudden leaps in the evolution of
language. The so-called macromutations, well known to Darwin,
threatened the role of gradualism in his original theory of evolution.
The role of such discontinuities, sometimes referred to as “saltations,”
is still an issue of debate, but is mainly resolved in the neo-Darwinian
synthesis of genetics and evolutionary theory (see Sect. 1.3.3 below and
“Evolution: consensus and controversy” in Fitch, 2010).
Some aspects of language may also have evolved as the result of pre-
adaptation. This concept means that a structure which is currently used for
one function had previously developed in support of another. A well-known
example is the jaw, which is said to have developed from the bony gill sup-
ports in fish. The change of function has adaptive value to the extent that it
greatly improves the organism’s ability to produce surviving progeny. Varney
(2002) argued that development of the ability to read can be explained as a
result of pre-adaptation. In evolution there has been little (effectively zero)
time for the development of an ability to read, yet reading can be taught
in all cultures independent of previous knowledge of written characters.
Therefore, the acquisition of reading must be supported by neural structures
which were developed to do something else; the skills that pre-adapted for
reading were gestural communication and tracking of animals in the hunt.
These are radical ideas which will be discussed in Chap. 6 on Literacy and
Language. In one sense, the concept of pre-adaptation can be a misleading
one: there has been no “plan” to evolve a jaw or to acquire reading skill in the
first place; that is, evolution does not show an instance of “foresight” in such
cases. Therefore, contemporary researchers have replaced pre-adaptation
with the new term exaptation, which means the same thing; that is, evolved
traits which change their functions into new ones.
However, traits may also evolve automatically as a byproduct in the
evolution of other structures. These new traits are therefore named “span-
drels” in analogy with some design constraints in architecture (e.g., the
triangular space between the outer curve of an arch and the rectangular
frame or mold enclosing it [Webster’s New Dictionary]). Exaptations dif-
fer from spandrels in that exaptations previously had a different function,
whereas spandrels originally had none. In sum, the terms adaptation,
exaptation and spandrels are all applicable to theories of the evolution of
language, although their relevance differs for the various subcomponents
of language and are the issues of ongoing debates in the research litera-
tures. Thus, while Tomasello (1999) and Lieberman (2000) considered
syntax to be a spandrel (i.e., a byproduct of other adaptations), Fitch
(2012) argued for an exaptationist view on the evolution of syntax.

1.3.3 Evolutionary-Developmental Biology (Evo-Devo)

We now turn to theoretical contributions in evolutionary biology that were
presented after Darwin’s death, and that have greatly influenced contempo-
rary research. The subject is an integrated view of evolutionary-developmental
biology advanced in the 1990s (see Goodman & Coughlin, 2000) and more
recently discussed by Fitch (2010, 2012). He warned against the fallacy that
every trait, including language, is an adaptation, and advocated a multicompo-
nent view of language which instead emphasizes a close interaction between
selection and constraints. The evolution of language is subject to a number of
phylogenetic and historical constraints; the latter interact with natural
selection and therefore “restrict, limit, or scaffold the course of evolution
and the nature of the evolved trait” (Fitch, 2012, p. 614).
The evo-devo principle depends on the synthesis between evolutionary
theory and genetics. This may be said to have taken place in two steps:
First, neo-Darwinism took into account Mendel’s experiments, which
were unknown until after Darwin’s death. The mechanisms of inheri-
tance had not yet been clarified, and at the time Darwin believed in the
Lamarckian principle of inheritance of acquired characteristics, a prin-
ciple which is essentially incorrect. He assumed that phenotypically the
offspring would be an intermediate between the two parents. As a result,
new organisms would be a “good fit” within their local environment, and
in this way, Darwin “used up” variance, which is a prerequisite to adapta-
tion by natural selection. After Mendel, the concept of genes, and the
distinction between dominant and recessive genes, meant that a trait can
reappear in new generations and thus maintain the variance apparently
lost in the first place. Therefore the marriage between Darwinism and
genetics (Neo-Darwinism) meant that “Population thinking” replaced
“typological” or “essentialist” thinking.
However, Neo-Darwinism does not warrant an interaction between
selection and constraints, which is the essence of the evo-devo principle
that formed a second step in the synthesis of evolutionary theory and
genetics. The evo-devo approach is connected with the growth of epi-
genetics, the study of gene–environment interactions. Until the late 1980s, it was
commonly assumed that genes played strict roles in the development
of bodily structures; therefore, anatomical and physiological complex-
ity, and possibly also cognitive complexity, would depend on the num-
ber of genes possessed by the species. However, genome sequencing showed that
this number did not differ much for most animals and humans. The
expression of genes varied considerably, making complexity dependent
on gene–environment interactions. Bickerton (2014) points out that
“developmental changes were powerful determinants of apparent evo-
lutionary novelties.” They gave rise to deep homologies which provide
“links between organisms that might be only distantly related” (p. 52).
However, these links apply to structural forms, not to behaviors.
Epigenetics, and therefore evo-devo, may also be stated in terms of
interactions among adaptation, exaptation and constraints. As an exam-
ple, Fitch (2010, 2012) mentioned that humans, in contrast to most
other mammals, have a low-lying larynx. Darwin knew that this charac-
teristic increased the risk of choking, so what could have been the adap-
tive value of a descended larynx? Until recent years, many researchers
believed this to be an obvious adaptation to speech. The descended larynx
means that the anchor base of the tongue was retracted caudally, which
changed the shape of the vocal tract and thereby the conditions for speech
sound production. It also made possible the closing of the nasal cavity,
thereby preventing the nasalizing of vowel sounds. Subhuman primates
do not have a low-lying larynx, and many researchers believed that this
fact explained why they did not develop speech.
Fitch mentioned a surprising discovery that he made with his colleague
Reby in the beginning of this century: They found that some deer species
had permanently descended larynges. Later, similar observations have
also been made with several gazelles and all of the big cats. Therefore, the
descent of the larynx in humans could not have been a direct adaptation to
speech, although it permitted the production of lowered formant frequen-
cies (low voices). A secondary descent of the larynx takes place in human
males during puberty, which influences the acoustic characteristics of their
speech. Many speculations have been made about the adaptive value of a
permanently descended larynx. Since its resting position correlates with
body size, this fact has given rise to the size exaggeration hypothesis.
Fitch also mentions the possibility that the descent of the larynx, which
takes place in infancy, and which had evolved to exaggerate size, was exapted
for the production of speech sounds in humans. The problem is whether
the second descent that takes place in males during puberty can also be
explained this way, or whether this change has other adaptive values in
the interaction between the sexes. Finally, it should be mentioned that
the production of vowels, and the imitation of human speech, can also
be found among many birds (parrots) and some mammals (talking seals)
that have a high resting position of the larynx.
To understand the observations mentioned above, we should take
notice of the complex relationship between form (the anatomical posi-
tion of larynx) and behavior (speech). Whereas form is largely controlled
by genes, behavior is not. Thus Bickerton (2014) points out that:

Behavior is considerably further from direct genetic control than form is.
This can be shown by simply considering the nature of behavior. Suppose
we have a species X with a behavior Y. Capacity for behavior inescapably
depends on having the necessary form, a big enough brain, sufficiently
developed organs of sense, limbs in the right places, whatever—and bio-
logical factors, genetic or epigenetic, mandate that form in all normal
members of X. In other words, being a member of X mandates a capacity
to perform Y. But capacity to perform Y does not mandate that Y will be
performed. (p. 52)

1.3.4 Niche Construction Theory

A parallel and supplementary development to evo-devo can be found in
niche construction theory (Laland, Odling-Smee, & Gilbert, 2008),
which further emphasizes the role of environmental factors in evolution.
Creanza, Fogarty, and Feldman (2012) presented a model of niche construc-
tion which involved both gene-culture and culture-culture interactions. In
Chap. 6, Sect. 6.8, I discuss the invention of writing as the beginning of
niche construction in historical time from antiquity to the present. Here,
I will merely present the general principles of niche construction theory.
In neo-Darwinism, it has been generally assumed that organisms adapt
to their environment, never vice versa. Thus, the role played by the evolv-
ing organism was highly restricted. However, species living in an environ-
ment which changes abruptly through climate changes or the appearance
of new predators may go extinct, adapt to the new environment, or move
into a new niche. There are plenty of examples where terrestrial animals
have returned to the water, and aquatic animals have come on land; that
is, animals that have played an active role in relation to their environment.
However, organisms have also changed their environment; for exam-
ple, beavers build networks of channels and dams; that is, a
niche to which they soon adapted. In this way, the animals and their
niche mutually influenced each other, and in the long run the organ-
isms transformed the niche and the niche transformed the organisms.
An oft-cited example of niche construction by humans is the intro-
duction of dairy farming in Europe, which affected the frequency of the
allele for lactase persistence. Consequently, more individuals benefited
from drinking milk into adulthood. Thus, human-constructed practices
affected the transmission of genes and hence the general health condi-
tions in the community.
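
A back-of-the-envelope model shows how such a culturally constructed
niche can shift allele frequencies. The Python sketch below is purely
illustrative: the starting frequency, the selection coefficient and the
number of generations are hypothetical, not empirical estimates.

# One-locus selection model: dairying confers a fitness advantage s on
# carriers of the (dominant) lactase-persistence allele A, frequency p.
def next_generation(p, s=0.05):
    q = 1.0 - p
    # Genotype fitnesses: AA = Aa = 1 + s, aa = 1.
    w_bar = (p**2 + 2 * p * q) * (1 + s) + q**2          # mean fitness
    return (p**2 + p * q) * (1 + s) / w_bar              # new frequency of A

p = 0.05                       # rare allele before dairying (hypothetical)
for _ in range(300):           # roughly the generations since early dairying
    p = next_generation(p)
print(round(p, 2))             # the allele has risen close to fixation

Iterating the recursion drives the allele toward fixation: the culturally
invented practice changes the selective environment, and the changed
environment in turn changes the gene pool.
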
When applied to humans, the concept of a niche may be said to overlap with
the concept of “culture.” We might perhaps speak of the cultures of termites
and beavers, but as a rule the concept of “culture” is given a human flavor;
for example, refined and sophisticated works of art or patterns of behavior
based on symbolic reference. In an evolutionary context, however, these
behaviors may be related to the behavioral patterns of many animals. Thus
Bickerton proposed that “if instead of calling it ‘culture’ we regard the whole
range of variable human behaviors as simply an example of niche construc-
tion, we place humans on a continuum that links them with many other
species including some as phylogenetically remote as termites.” (p. 66)

1.4 Neurobiological and Cognitive Research Related to the Evolution of Language

The legacy of Charles Darwin’s works has justified an engagement in
research on language evolution. However, language evolution was long
considered to be a topic beyond serious inquiry in the academic and sci-
entific world. Furthermore, the general impact of the seminal works of
Noam Chomsky (1972, 1980, 1988) may have downplayed the role of
evolution: The fact that linguistic systems of the world share deep simi-
larities was taken as an argument for an innate Universal Grammar, and
hence linguistic universals were not learned but pre-specified in the child’s
linguistic endowments. Such accounts also had vast impacts on concep-
tions of evolution in the way that language was said to have emerged
1 Introduction 19

as a wholesale innovation by human beings. As mentioned above, argu-


ments against this position have been raised by Fitch and others, and
in the beginning of this century, the concept of a “language instinct”
(Pinker, 1994) changed into the concept of an “instinct to learn,” which
has been advocated by several researchers in recent years and discussed
by Bickerton (2014). The issue of innateness has also been discussed on
empirical grounds by Jenny Saffran and colleagues (2003, 2008). She
challenged Chomsky’s position and asked whether learning-oriented
theories can also account for the existence of language universals. She
presented the constrained statistical learning framework. Learners do not
respond to new language exposure in an open-minded way; rather, their
learning is constrained to “calculate some statistics more readily than oth-
ers.” Saffran’s works represented a new approach to both the acquisition
and the evolution of grammar, and to the segregation of words/linguistic
signals. I will therefore return to the general impact of her research in
other parts of the book (Sect. 1.3 and Chap. 3, Sect. 3.1.2); in particular,
I will show why Saffran’s works have an important impact on studies of
developmental language impairments.
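
Saffran’s central measure, the transitional probability between adjacent
syllables, TP(X→Y) = frequency(XY) / frequency(X), is easy to state as a
computation. The Python sketch below uses an invented syllable stream
built from three nonsense “words,” in the spirit of (but not identical to)
the stimuli of the original experiments.

from collections import Counter

# Invented continuous stream of syllables built from three "words":
# bidaku, padoti, golabu. No pauses mark the word boundaries.
stream = ("bi da ku pa do ti go la bu bi da ku go la bu "
          "pa do ti pa do ti bi da ku").split()

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transitional_probability(x, y):
    """TP(x -> y) = freq(xy) / freq(x)."""
    return pair_counts[(x, y)] / first_counts[x]

print(transitional_probability("bi", "da"))  # 1.0: within-word transition
print(transitional_probability("ku", "pa"))  # 0.5: dips at a word boundary

Within-word transitions have high probability, transitions across word
boundaries have low probability, and a learner who segments the stream
at the dips recovers the words. This is the kind of statistic that, on
Saffran’s account, learners are constrained to compute more readily than
others.
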
I will now give a brief presentation of some research contributions
which explicitly or implicitly relate to the evolution of language. My
preliminary discussions in the next three sections will be followed up in
later chapters. It should be noted, however, that the introductory selec-
tion of works is not complete. More research on the brain mechanisms
underlying language will also be discussed in Chaps. 3 and 5. The follow-
ing three sections are short presentations of research which, in my view,
have special relevance to the evolution of language, and which will be
more thoroughly discussed in later chapters.

1.4.1 The Discovery of “Mirror Neurons” in the Monkey Brain: A New Impetus to the Study of Language Evolution

Within cognitive neurobiology, the low interest in research on evolution
prevailed until new techniques and approaches were published in the
early 1990s. These techniques were presented by Di Pellegrino, Fadiga,
Fogassi, Gallese, and Rizzolatti (1992), who discovered an important
brain mechanism linking perception and action in the macaque brain.
It was long recognized that insight into language comprehension, for
example, speech perception, would require the identification of a neural
substrate which served to link perception and action. The F5 area of the
macaque brain was said to form this mechanism, and hence this area
was said to contain “mirror neurons.” These cells discharge both when
the animal grasps or manipulates a certain object and when the animal
observes the experimenter making a similar action. Notice that these cells
did not respond to any kind of motor gesture by the experimenter, only
to object-related actions. Rizzolatti and Arbib (1998) argued that “the
observation/execution matching system provides a necessary bridge from
‘doing’ to ‘communicating,’ as the link between sender and receiver of
each message.” (p. 188)
Furthermore, the F5 area in the macaque brain was supposed to be
homologous to Broca’s area in humans, and therefore, the function of this
area was claimed to be a hominid precursor to language. A putative
human analogue of the mirror neuron system (Rizzolatti & Craighero, 2004)
strengthened this claim, and prompted researchers from a number of other
disciplines to become involved in studies of the origin of language (Fay,
Garrod, and Roberts, 2008; Shanker and King, 2002; Smith, 2004). In the
beginning of this century, therefore, language evolution became a cross-
disciplinary inquiry, but it was soon clear that the optimistic new wave of
research was triggered by the new techniques developed in neurobiology.
The discovery of mirror neurons in the monkey brain had an enormous
impact on theories of language evolution (Ramachandran, 2000); they
were thought to explain diverse phenomena (which could barely be dem-
onstrated in pre-human hominids) such as imitation, theory of mind, and
language. Other researchers (Corballis, 2010; Rizzolatti and Sinigaglia,
2008) took a more sober position and argued that the primary role of the
mirror neurons was action understanding: Actions which are performed
by others could be mapped into actions that can be performed by one-
self, therefore the discovery of mirror neurons supplemented and sup-
ported the now classical motor theory of speech perception (Liberman,
Cooper, Shankweiler, and Studdert-Kennedy, 1967). Hence, the new dis-
covery was said to explain an important new subsystem of language; that
is, the perception and processing of linguistic signals, which may have
preceded other subsystems (e.g., semantics) in the evolution of language.
Thus Corballis (2010) commented on the new discoveries by arguing that
mirror neurons do not necessarily mediate the extraction of meaning, in
the linguistic sense. Nonetheless, a continuity position on the evolution of
language gained strength around the turn of the century.
Arbib (2009) made some interesting notes on the biological and social
mechanisms that mediated language evolution. He advocated a pre-
adaptationist view and argued that the first creatures with a mirror neu-
ron system and the functional expression of linked brain regions did not
have language, and yet these creatures were equipped with a language-
ready brain. This assumption is equivalent to the claim that our distant
ancestors had brain structures that could support reading long before the
invention of writing (see Sect. 1.3.2 above). The language-ready brain was a
product of biological evolution of the hominids, whereas language itself
may have evolved incrementally through cultural evolution. Thus, the
transition from a protolanguage by our distant ancestors to the full lan-
guage capability by human beings today is a product of both biological
and social mechanisms that support language. (Important insights into
the latter type of mechanisms can be gained by studying the historical
processes that have mediated the rise and fall of particular linguistic soci-
eties [Dixon, 1997]).
Research on the mirror neuron system has laid an emphasis on the
motor action component of language. At the time when this system was
discovered, there was a tendency among several researchers to think that
language as a whole could be explained within the fold of motor action (e.g.,
Rizzolatti and Craighero, 2004). The new discoveries led to the assump-
tion that the protolanguages of our distant ancestors were gestural lan-
guages and actions of manual praxis. Consequently, there must have been
a shift from gestural language to vocally based speech. Corballis (2010)
discussed whether there was such a shift, and whether it was a sudden or
incremental transition to speech. In my view, such a shift, if it really hap-
pened, may have reflected a selection of articulators, not a major trend
in the evolution of language. Thus language may have evolved towards a
modality-independent capacity, not a refinement of speech.
The analogous development of signed and spoken languages shows that,
across the modality differences of communicative expressions, there exists
a more general linguistic mechanism (see the discussion of the language
concept above and further discussions in Chap. 7).
A motor action component is involved in both speech and sign lan-
guages, and has of course also been involved in any form of communica-
tive interactions among animals and pre-historic man. However, motor
action does not define a linguistic subsystem. Understanding of language
behavior involves an understanding of motor action, but, as indicated by
Corballis, the reverse is not necessarily true. Motor action, considered as
a serial structure of events, implies a form of grammar or syntax; that is,
the grammar of action. Does this mean that the mirror neuron system
present in subhuman hominids, and presumably also in early man, may
have equipped these individuals with a capacity to understand grammar?
The understanding of actions which are mediated by the mirror neuron
system involves only motor actions which belong to the response rep-
ertoire of the perceiving subject. The grammar of actions in a symbolic
system, for instance, actions in early protolanguages, has most likely had
a novel structure. Therefore, the mirror neuron system could not per se
have mediated grammar in animals or humans. Instead, we have to look
for learning constraints, or predispositions, which serve detection of sta-
tistical structures present in all natural languages today, and which most
probably have been present also in the early protolanguages.
The position taken by several researchers that language can
be understood within the fold of motor action has some merit. First of
all, it means that the statistical and serial structure of language behavior
has been given primary attention, and second, that most researchers have
acknowledged the motor aspect of all linguistic symbols. This position has
also been opposed and critically discussed by Toni, de Lange, Noordzij,
and Hagoort (2008) and Turella, Pierno, Tubaldi, and Castiello (2009).
They all question the general claim that language comprehension requires
the motor system. In Chap. 3, Sect. 3.7, I will present a general discus-
sion of the role of the motor system in language. Here, I will merely
argue that a theoretical emphasis on the role of the motor system may
have caused a neglect of the semantic aspect of language. How did mean-
ing become an important aspect of language? Many researchers seem to
have focused on the form of linguistic expression, which can be described
in motor terms, at the expense of lexical meaning. This does not mean
that the problem of lexical meaning was entirely overlooked, however,
because it also led to further discussions on the brain substrates of action
understanding, in particular on the semantics of action verbs (Hauk,
Johnsrude, and Pulvermuller, 2004). However, this research trend still
served to downgrade, or to overlook, the classical distinction between
form and meaning in language (de Saussure, 1916): some forms are
highly specific to a linguistic society or a local group of people, whereas
meaning relates to cultures across linguistic communities. For example,
the English word cat and the French word chat are different in form, but
represent the same meaning.
Apparently, Corballis (2010) took a more optimistic view of the
tenability of a neurobiological approach. He pointed out an important
difference between the monkey mirror system and the mirror system of
humans: Studies have shown that mirror neurons in the former system
respond to transitive, but not to intransitive, acts. In humans,
however, mirror neurons respond to both transitive and intransitive acts,
and therefore the human mirror system is said to form a substrate for
the understanding of acts that are symbolic rather than object-related
(Corballis, 2010; Fadiga, Fogassi, Pavesi, & Rizzolatti, 1995). Perhaps it
is the evolution of this system that has made humans the symbolic species,
and triggered the growth of a declarative memory system.
To some extent, the limits of a cognitive neuroscience approach that
I have pointed out above seem to have been acknowledged by contemporary
researchers. Even Corballis (2010), who was otherwise optimistic about a
“mirror system approach,” admitted that mental time travel, as expressed
in human language, challenges a mirror system interpretation of language
evolution. Other researchers, however, have argued that mental time
travel depends on a substrate outside the classical regions involved in lan-
guage processing (Schacter, Addis, & Buckner, 2008), and that perhaps
the mirror system has no part in the processing of images across space and
time. There are a number of aspects of modern languages which defy a
mirror system interpretation. My point is that synonymy, homonymy, and
mental time travel, as well as communication about impossible objects,
all require a different approach with a prime focus on concepts and cat-
egorization. The emergence of these characteristics of language cannot
be fully explained within cognitive and biological neuroscience without
transcending a motor-action frame of reference.
In short, neurobiological research on mirror neurons has provided
an important insight into the link between perception and action in lan-
guage, but the arbitrary relation between form of expression and meaning
of linguistic symbols remains to be explained. Thus, apart from words
that are onomatopoeic, and some signs that are iconic, most words are
arbitrarily related to meaning. Corballis (2010) however, argued, con-
trary to both Saussure and Hockett, that arbitrariness is not a necessary
property of language, but “a matter of expedience, and of the constraints
imposed by the language medium. Speech, for example requires that the
information be linearized, squeezed into a sequence of sounds that are
necessarily limited in terms of how they can capture the physical nature
of what they represent” (p. 28). Corballis also pointed out that signed
languages are less constrained and may therefore more easily “mimic the
shapes of real world objects and actions” (p. 28). The reasons why I think
this interpretation is not sufficiently warranted by modern brain research
will be presented in more detail in Chap. 3.
The problem of meaning and the way it evolved in language is so far
unresolved within the new research tradition of neurobiology. In Sect.
1.6, I will discuss the communication of meaning in early pre-literate lan-
guages, and in Chap. 4 I will present a more thorough discussion of the
complexity of problems related to the evolution of meaning in language.
A more detailed discussion of mirror neuron research will be pre-
sented in Chap. 3, Sect. 3.5. There, the focus is on the problem of
whether F5 is the monkey homologue of Broca's area in humans and
whether research on mirror neurons supports a gestural theory of
language evolution.

1.4.2 Pre-semantic Signaling and Its Role in Vertical Transmission of Language

The mechanism which serves as a link between perception and action is a
major prerequisite to language. Mirror neuron research has given rise to
an understanding of a possible mechanism underlying imitation of speech
sounds and possibly also manual signs in sign language. We can say it has
contributed to an understanding of the first S in Fitch's three-component
description of language: Signal-Structure-Semantics. However, it remains
to be seen how mirror neurons are actually being used in language, and
how this mechanism mediates attribution of meaning to signals.
In Sect. 1.5, I will give a preliminary discussion of meaning in lan-
guage; Chap. 5 will provide a more comprehensive discussion of this matter.
Now the question is whether infants can distinguish language-like stimuli
from other stimuli in the ambient environment; that is, stimuli with no
or a low level of meaning, which are “comprehended” prior to the mapping
of signals onto particular objects or events. Are infants tuned to “language-
like” stimuli prior to the development of semantic knowledge? Learning
constraints which attune the infant to the ambient environment of lin-
guistic stimuli may have an evolutionary origin, and therefore serve as
a basis of early language acquisition. Vouloumanos and Werker (2004)
showed that two-month-old infants listened longer to speech sounds
than to sinusoidal waves, which track the center frequencies of natural
speech. They concluded that infants are tuned to speech sounds, and that
speech therefore has a privileged status for young infants. Later, Krentz
and Corina (2008) strongly objected to this conclusion. They showed
that, in a paired-comparison, preferential-looking paradigm, six-month-
old hearing infants preferred to watch unfamiliar signs (from ASL) over
nonlinguistic pantomime. Therefore, they concluded that infants are not
specifically tuned to speech, but to human language in general.
In Chap. 7, I will discuss Krentz and Corina’s research in more detail,
because their work provides a strong argument for a modality-independent
capacity of language. Although this capacity is part of the infant’s behav-
ioral potentialities, we still find in development a modality-specific attun-
ement to linguistic stimuli (see more of this discussion in Chap. 7).
Also, we will find that language-related stimuli in all modalities take
behavioral precedence over other types of stimuli within the same
modality that are not language-related. However, as a premise for
the following discussion, I assume that infants are capable of making the
more general distinction between linguistic and nonlinguistic signals or
events independent of modality. This distinction may form a develop-
mental basis from which further language development takes place, and
in the following I shall refer to this level of linguistic competence as the
“basic language mode” of communication. So what is the evolutionary
significance of this mode of communication? Does it have an evolution-
ary priority in relation to other linguistic skills, and how is it acquired by
the developing child?
A theory of language evolution must account for some mechanism
underlying an effective transmission of language skills between genera-
tions, that is, a mechanism which facilitates the communicative inter-
action between child and caregiver. The basic language mode has an
important role in vertical language transmission, because the linguistic
responses of caregivers, including syllabic nonword utterances, contain
general features which are easily learned and responded to by the grow-
ing infant. These features form the basic level of linguistic signals and are
acquired in a pre-semantic stage of development. Later in development,
linguistic signals will convey more information, become more modality-
specific, and thereby give rise to higher levels of signaling.
I believe that linguistic signals which are learned in a pre-semantic
stage are basic both in a developmental and evolutionary sense. I shall
start by considering their role in language acquisition. These signals may
be conceived of as language-like stimuli which interact with a number of
other cognitive processes, and which share some features with nonlinguis-
tic stimuli and events. The sensitivity to language-like stimuli is a crucial
precondition for language transmittance both by infant and caregiver,
and for the child’s language acquisition. Let us see what happens when
the child is not tuned to “language-like stimuli.” In general, such stim-
uli trigger feelings of empathy and belongingness, and initiate a process
of socialization, both on the part of the infant and the caregiver. These
expressions do not have to be decoded in terms of semantic meaning, and
yet they will form a major premise for the ensuing development of lan-
guage. Some gesticulatory movements by deaf children are sign-language-
like (manual babbling) and which therefore seem to play a similar role in
a deaf community. In the rare cases when language-like articulations are
missing, for example, in infants with a chromosomal deficiency, who pro-
duce cries or vocal responses which are not language-like (e.g., crit de chat,
see Rodriguez-Caballero et al., 2010), parents become severely concerned
for the child’s social, emotional and cognitive development. An example
1 Introduction 27

is the crit de chat syndrome which shows the deleterious effects on com-
municative interactions in early childhood when a basic language mode
is lacking in the child’s vocal activities. Most infants, however, do pro-
duce language-like vocal or manual expressions (in infants with signing
parents) that are taken by the caregivers as witnesses of normal language
development.
Cries that are not language-like lack the important features which are
commonly observed in all natural languages and which form the most
general manifestation of linguistic signals. Let me repeat: these signals
are not pre-specified, but are subject to learning constraints which will be
discussed later. First, I will briefly review some classical cognitive theo-
ries which focus on the role of linguistic signals in the general cognitive
apparatus.
The evolutionary significance of language-like stimuli also means
that these stimuli will most likely affect other cognitive processes such
as attention and working memory. Apparently, human subjects have a
specific sensitivity to what I have called pre-semantic linguistic
stimuli, which can also be produced by adults as pseudo-words or other
speech-like stimuli. The sensitivity to such stimuli is implied
in the phonological loop; that is, a component in the Baddeley and Hitch
(1974) model of verbal working memory. This component has also
been described as a language-learning device (Baddeley, 2007; Baddeley,
Gathercole, & Papagno, 1998). In the same research tradition, it has
been demonstrated that speech sounds, in addition to being processed by
a separate mechanism, also serve as effective suppressor stimuli in verbal
short-term recall tasks. Similarly, “babble noise” interferes more effec-
tively with speech perception and verbal short-term memory, compared
to white noise. Together these observations demonstrate that speech
sounds are processed differently from other nonspeech sounds. Specific
interpretations of this difference are made explicit in the motor theory of
speech perception (Liberman et al., 1967) and in recent research which
relates to this theory (see Chap. 3, Sect. 3.7). It may also be discussed
whether research on hemispheric specialization gives support to the
assumption of a general “language mode” of processing information. The
right-ear superiority for syllabic stimuli in dichotic listening experiments
has been interpreted as evidence of left hemispheric specialization for
linguistic stimuli (nonsemantic syllables). However, a right-ear advan-
tage has also been demonstrated for Morse code, both by Morse code
operators and by subjects who did not know this code. Therefore, the
right-ear advantage for linguistic stimuli did not necessarily show left
hemisphere specialization for language per se, but right-ear superiority for
complex temporal microstructures (Efron, 1990). Furthermore, visual
hemifield presentations have shown a left hemisphere advantage for sign
language. However, this effect is not observed when static images of signs
are presented, only when the presentation of signs includes their move-
ments (Emmorey, 2002). Therefore, this observation supports the inter-
pretation of a left-hemisphere specialization for stimuli with a complex
temporal microstructure.
Research on hemispheric specialization shows that basic linguistic sig-
nals may share some features with nonlinguistic stimuli, a fact which
attests to the continuity between language and nonlanguage domains.
Linguistic signals have temporal structures; they constitute events, not
objects. The temporal structures of linguistic signals also involve transi-
tional probabilities, and therefore the learning of basic linguistic signals
has to do with statistical learning.
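To make the notion of transitional probabilities concrete, consider the following sketch in Python (my own illustration; the syllables and “words” are invented, not taken from the studies discussed here). The transitional probability of syllable B given syllable A is the frequency of the pair AB divided by the frequency of A; in a stream of concatenated nonsense words, the within-word transitions come out more predictable than the transitions across word boundaries.

    from collections import Counter

    def transitional_probabilities(syllables):
        # TP(b | a) = freq(a followed by b) / freq(a)
        pair_counts = Counter(zip(syllables, syllables[1:]))
        first_counts = Counter(syllables[:-1])
        return {(a, b): n / first_counts[a]
                for (a, b), n in pair_counts.items()}

    # A made-up stream of three hypothetical "words" (bi-da, ku-pa, go-lu)
    # concatenated without pauses.
    stream = ["bi", "da", "ku", "pa", "go", "lu", "bi", "da",
              "go", "lu", "ku", "pa", "bi", "da"]
    for pair, tp in sorted(transitional_probabilities(stream).items()):
        print(pair, round(tp, 2))
    # Within-word pairs such as ("bi", "da") yield TP = 1.0, whereas
    # between-word pairs such as ("da", "ku") yield TP = 0.5.

On the view discussed here, infants are predisposed to register precisely such differences, and the low-probability transitions serve as cues to boundaries between linguistic units.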
As mentioned above, Saffran argued that human infants are con-
strained to calculate some statistics more easily than others. There exist
structural similarities between all natural languages which correspond
to learning constraints/wired-in abilities in human infants. Saffran et al.
(2008) showed that these similarities can be described in terms of predic-
tive languages (P-languages), which are implicitly learned by infants. Thus
infants rapidly learned statistical predictive patterns, but failed to learn
nonpredictive patterns, while tamarin monkeys only “exploited predictive
patterns when learning relatively simple grammatical structures” (Saffran
et al., 2008). This work supports the claim that predictive dependencies
may have prevailed as important characteristics in the evolution of lan-
guage across differences in nonstatistical modes of expression. In Chap.
3, which addresses the problem of continuity, I will review their work
in more detail; at present, I merely suggest that basic linguistic signals
can be defined in terms of these structural similarities, which also refer
to phrases of manual gestures and speech sounds (see also Chap. 7, Sect.
7.2, where I argue that gestures and vocal utterances are equipotential
articulators in human infants). In this sense, basic linguistic signals are
therefore modality-independent, though we tend to be preoccupied with
the vocally articulated signals.
The predictive dependencies in the artificial grammars studied by
Saffran et al. (2008) constituted temporal structures of speech-like items
and may be interpreted in agreement with the position taken by Efron. He
pointed out that language is inherently a temporal activity, and a deficit in
temporal-order judgments is generally associated with a language deficit.
Thus, an ability to analyze the temporal microstructure of stimuli warrants
an ability to detect and respond to important linguistic stimuli, whether
these are generated in a visual or an acoustic medium. Early detection of
such stimuli will serve to initiate communicative interactions between
infant and caregiver, and thereby to ensure the process of language acquisi-
tion. Saffran et al., however, went further by specifying the temporal struc-
ture of phrases/events into predictive and nonpredictive dependencies, and
by arguing that the predisposition of human infants to learn predictive
patterns in the ambient linguistic environment not only affects language
acquisition, but also has sculpted the general form of natural languages.
In Chap. 3, Sect. 3.2, I suggest that the statistical predictive patterns
of natural languages form an access code to early dialogues. This means
that infants/children who easily detect such patterns, and are capable of
repeating them in their own vocal responses, are also likely to be involved
in early dialogues with their caregivers. Responses to other language-like
stimuli, which I assume are detected due to the same statistical micro-
structures, may have a similar effect. Detection of statistical structures
of linguistic stimuli may also be cast in terms of pre-linguistic abilities
that have an early origin in language evolution. When these abilities are
poorly developed, I assume that children will rarely engage
in dialogues, and will be at a disadvantage in language development
(see more discussions of this problem in Chap. 8). It may be discussed
whether the detection of statistical predictive patterns formed an adapta-
tion, and whether pre-semantic versions of these patterns were included in
the vocabulary of a protolanguage by our last common ancestor (LCA)
(see Chap. 3, Sect. 3.1.3).
In short, language deprivation in infancy—meaning that children are
rarely exposed to basic linguistic signals—will delay language development
and make them vulnerable to language impairments. The awareness of this
risk may have motivated a recent study of children enrolled in the U.S. child
welfare system (Merritt & Klein, 2014). Many of these children suffered
language deprivation and trauma and were therefore vulnerable to devel-
opmental problems. The children who were enrolled in Early Care and
Education (ECE) programs had better language development 18 months
later than those who were not enrolled in ECE. A preschool program
that involves children who differ with respect to semantic development
may still address the risks of early signal deprivation. The most severe type
of deprivation occurs in deaf children in families with no knowledge
of sign language. Such deprivation prevents early communicative dialogues
and thereby the transmission of language between generations.

1.5 Communicating Meaning: An Introductory Discussion
Meaning is said to be the sine qua non of language; thus, it is a major
task for any theory of language evolution to explain how communica-
tion of meaning evolved in the human species. As pointed out above,
many researchers consider truth-conditional semantics to be inadequate
as a complete model of meaning. Thus, it will be difficult to define the
truth values of propositions which include words with imaginary ref-
erents (e.g., unicorn). Rather than dealing with propositions, and the
conditions under which they can be said to be true or false, I will keep to
a simple conception of semantics as the study of the meaning of words
and phrases. Furthermore, I will maintain a cognitive model which
asserts that words link to objects via mental concepts (see the distinction
between a realist and a cognitive model above).
According to the position taken here, meanings are concepts, a
position that complies with ancient and intuitive models in the philoso-
phy of knowledge. Concepts (and categories) are also the subject mat-
ter of research within cognitive psychology, and therefore I find research
approaches within this field highly relevant for discussions of meaning in
language. Here, a main distinction has been made between implicit and
explicit learning of categories and concepts. The former involves learning
of complex information which is not accessible to conscious recall, but
tends to be relatively specific and does not generalize to related tasks.
The latter represents verbalizable knowledge and will more easily transfer
to related tasks. A similar distinction is the one between procedural and
declarative knowledge, which is based on Ryle’s (1949) classical distinc-
tion between “knowing how” and “knowing that.” I will deal with this
latter distinction in more detail in Chap. 3, and I will deal with contem-
porary research on implicit and explicit learning of concepts by animals
and human subjects in Chap. 5.
Does the acquisition of word meaning follow the same trends which
characterize the transition from implicit to explicit meanings of concepts?
The task of guessing the meanings of words is most likely constrained
by a heuristic device which says that words have meanings, and which
is further specified by a whole-object assumption. These constraints allow
children to pick out a particular referent when hearing the word, but the
label may not apply in novel situations. Although this may be a first step
to finding the meaning of nouns, the new skill is generally characterized
by transfer specificity, and may therefore be related to implicit learning.
Similar constraints on word learning have been demonstrated in ani-
mals. Kaminski, Call, and Fischer (2004) reported an experiment with a
Border collie named Rico, who learned the labels of 200 objects, a skill
which is comparable to children’s learning of object names after a single
exposure (fast mapping). Rico was then given a novel label and told to
pick out one object among a set of familiar objects and one novel object.
In 70 % of the trials, he picked out the novel object; thus, he inferred
the name of an object by exclusion, and this skill was demonstrated four
weeks after the initial exposure. Kaminski et al. concluded that “fast map-
ping…appears to be mediated by general learning and memory mecha-
nisms also found in other animals and not by a language acquisition
device that is special to humans” (p. 1683). Later, Beran (2010) showed
that a female chimpanzee called Panzee was able to learn new labels by
exclusion in speech perception and auditory-visual matching to sample.
She had previously learned to associate eight sets of stimuli (photographs
and lexigrams) to a spoken English word, and in the experiment she was
also presented with eight undefined sets of stimuli whose names were
unknown to her. On some trials, she was presented with one of the
unknown English words and told to match it to one of the visual stimuli
(auditory-visual matching to sample). She consistently avoided choos-
ing known comparisons, and by exclusion she selected a photograph or
lexigram whose name was unknown. The fact that learning by exclusion
occurs in children as well as in different species of animals means that the
process has an evolutionary significance, and strengthens Kaminski et al.'s
conclusion about a general learning and memory mechanism, which I
think may serve as a possible pre-adaptation to language.
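The logic of learning by exclusion is simple enough to be stated as a procedure. The sketch below is a hypothetical illustration of my own (the object identifiers and the novel label “dax” are invented, not taken from the studies above): presented with a novel label and a set of candidate objects, the learner excludes every object that already carries a known name and maps the label onto what remains.

    # Label learning by exclusion; the vocabulary and objects are invented.
    known_labels = {"ball": "object_1", "rope": "object_2", "sock": "object_3"}

    def choose_by_exclusion(novel_label, candidates, known_labels):
        # Exclude every candidate that already has a name; the novel
        # label is then mapped onto a remaining (novel) object.
        labeled_objects = set(known_labels.values())
        unlabeled = [obj for obj in candidates if obj not in labeled_objects]
        return unlabeled[0] if unlabeled else None

    candidates = ["object_1", "object_3", "object_9"]  # object_9 is novel
    print(choose_by_exclusion("dax", candidates, known_labels))  # object_9

Note that nothing in this procedure is specific to language, which is exactly the point of Kaminski et al.'s appeal to a general learning and memory mechanism.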
Pre-adaptations like the one underlying learning by exclusion are gen-
erally beneficial for children in their early attempts to learn the words of
their local language. However, learning by exclusion does not prevent an
idiosyncratic labeling of objects and events, and therefore the principle
may also give rise to forms of communication which are incomprehen-
sible to others. Idiosyncratic labeling tends to survive in isolated families
and small communities, but may be broken and replaced by new labels
in an extended community. Social mobility may thus give rise to new
languages, where the meaning of words is based more on explicit
than on implicit learning.
The learning of a new sign language by deaf children in Nicaragua pro-
vides an example where implicit learning in language acquisition gradu-
ally changed into explicit learning of a well-structured language. Prior
to 1979, when the Sandinistas overthrew the Somoza government, there
were no educational opportunities for deaf children in the country, and
deaf children were generally kept isolated within their families. Linguistic
interactions between members of these families have been described as a
system of gestures commonly known as “home signs” (Emmorey, 2002;
Senghas, Kita, & Özyürek, 2004). It seems that siblings learned these
signs by their own efforts, automatically and without an “explicit”
comprehension of meaning. This system was incomprehensible to any-
one outside the family, was idiosyncratic and action-based, and lacked
most names of everyday objects commonly present in spoken languages.
A single gesture covered a range of concepts. The system had no gestures
for emotions, and did not represent tense. Home signs were context-dependent
and did not generalize to other social settings, and therefore we may still
consider them as the results of implicit learning. Actually, they may be
said to form a “time window” into the early evolution of meaning in
language. Since home signs were implicitly learned concepts, they may
also be related to experimental cognitive research on concepts by animals
and humans (Smith et al., 2012). However, we cannot tell whether the
operational definitions proposed for implicit learning of concepts and
categories in this tradition apply to the phenomena of home signs in
Nicaragua. (See more discussion of recent research on concepts and cat-
egories in Chap. 5, Sect. 5.4.2).
Systems of home signs have been found as far apart as Taiwan and
North America, and Senghas (2005) reported a similar scenario in the
emergence of a new Bedouin sign language in the Negev region of Israel.
These systems have not been considered to be languages, and the deaf
children soon exchanged the home signs for a form of “pidgin” sign lan-
guage once they started to interact with deaf children from other families.
The pidgin sign language has been said to fall between Protolanguage and
Modern languages in Jackendoff’s (1999) steps in the evolution of lan-
guage. (I shall present more information about pidgin languages below.)
In Nicaragua the final transition from home signs to a standardized sign
language, such as Nicaraguan Sign Language (NSL), took place when
the Sandinistas opened a primary school for deaf children in Managua,
where deaf children from the whole country were admitted. The chil-
dren were taught Spanish, not any of the sign languages, and the teachers
made use of finger spelling to teach them the Spanish alphabet. The edu-
cational program was no success, and few children learned any Spanish
words. Instead, the children learned a creole sign language by themselves.

Pidgin is a grammatically impoverished contact language which has arisen
between groups who initially do not understand each other but need to
communicate for work or trade purposes. Children of the second genera-
tion develop a creole language with more complex grammar and Subject-
Verb-Object as the ‘default’ word order. This is broken only when an
element is singled out and presented first in the sentence (topicalization),
for example by the second generation signers in the Managua school (see
Emmorey, 2002, pp. 4–7 and 44–46).

The emergence of a standardized new language evidently required a
community of users, which exceeded 400 in Nicaragua at the time of the
Sandinistas' revolution. The children who developed the new language
generally abandoned the system of home signs, and therefore these chil-
dren did not become bilingual in home signs and NSL. Due to its
relevance for theories of language evolution, I shall revert to the develop-
ment of NSL and the characteristics of creole languages in three other
chapters (Chap. 4, Sect. 4.8; Chap. 5, Sect. 5.5; Chap. 6, Sect. 6.8).
The rules controlling the use of home signs were implicit rules; they
may also have been established in other forms of communicative interac-
tions. The development, maintenance and control of a social organiza-
tion may be possible in a limited and relatively isolated society when
signs are used implicitly in very simple languages. Wittgenstein (1958)
may have had this kind of “language” in mind when he described the
concept of a “language game”:

The language is meant to serve for communication between a builder A and
an assistant B. A is building with building-stones: there are blocks, pillars,
slabs and beams. B has to pass the stones, and that in the order in which A
needs them. For this purpose they use a language consisting of the words
“block,” “pillar,” “slab,” “beam.” A calls them out; - B brings the stone which
he has learnt to bring at such-and-such a call. ---- Conceive this as a com-
plete primitive language (Philosophical Investigations, Sec. 2, Part 1, p. 3e).

The words and their connected actions were said to constitute a lan-
guage game that was complete in itself. Words which are not connected
with motor actions are not part of the language game. The words “block”,
“pillar”, “slab” and “beam” could be any distinguishable expressions (signs
or vocal expressions) as long as they were action-connected and were parts
of a rule-based game. Obviously, we will consider such a language to be
incomplete, also in relation to the task of building a primitive house.
However, the incompleteness of a language game is not only a question
of language complexity. To make sense of it, I think Wittgenstein's language
game might serve as a hypothetical example of procedural language skills.
It therefore differs from modern languages, which are also based on
declarative knowledge and may be consciously recollected. I shall have
more to say about the language game in Chap. 4, which deals with dialogues
as procedural skills.
Since the work of Squire, Knowlton, and Musen (1993), the two types
of knowledge are said to depend on separate brain systems with their
own particular functions. The declarative system is specialized for one-
trial learning, is sensitive to interference and is prone to retrieval failure.
The procedural system is phylogenetically the older one, and is generally
considered to be reliable and consistent, while it also provides the myriad
of nonconscious ways of responding to the world (Eysenck & Keane,
2000; see also my presentation of the two memory systems in Chap. 3).
In my view, Wittgenstein’s language game may be compared to a pid-
gin language between home signs and a creole language. However, the
language game (and perhaps pidgin languages) cannot describe itself; that
is, it does not serve communication about communication itself. In this
context, semantic meaning is implicit in the communicative actions; it
cannot be comprehended explicitly, either by outsiders or by partici-
pants in the game.
In many ways, some ancient languages may have evolved as systems
which had characteristics like those of language games. In particular, the implicit
form of communication may have been present in small and isolated
groups of people, while the transition to well-structured and standardized
languages required a certain aggregation of people in larger communities.
Among other examples of new languages that evolved within the time
window of one generation are the pidgin languages mentioned above.
It seems to me that these have been based on the procedural knowledge
in specific communities, and represented transient linguistic forms, after
home signs but preceding the form and structure of modern languages.
Pidgin is a contact language that arose as a means of communication
between speakers of different languages. Although pidgin can be under-
stood as a transient stage in language evolution, as a contact language
it can also be discussed within the conceptual framework of language
change. The best known examples are the now creolized Hawaiian pid-
gins that arose as a mixture of traditional Hawaiian dialects and English,
Japanese, Portuguese and other languages of traders in the Pacific islands.
Russenorsk is another example of a dual-source pidgin that arose in an
interaction between fishermen and traders in northern Norway and the
Russian Kola peninsula (the Pomor trade). Like the Hawaiian pidgins,
Russenorsk combined elements from existing languages, and therefore this
language was not a “new” language like NSL in Nicaragua. Consequently,
the linguistic status of many pidgin languages has been a matter of dis-
cussion (e.g., Jahr, 1996). At the same time, a pidgin language represents
a spontaneous solution to a communicative problem. Thus pidgin
creation has some interesting characteristics that are comparable to home
signs and maybe the early stages of NSL development. As pointed out,
pidgin may also be associated with Wittgenstein’s language game, in par-
ticular when we consider this form of language to be primarily a product
of procedural learning.
Most new languages develop incrementally from other languages.
Language change is incessantly taking place, more or less, in all societies.
It is therefore difficult to decide when something has become a new
language rather than a minor change of an existing language. As pointed out, pidgin is a contact
language that develops in particular communities within one generation
and in a creolized form within two or three generations. Like Russenorsk,
some of them have also become relatively stable languages. On this
ground, I consider pidgin creation to be different from other forms of
language change. As a response to new societal and interactional demands
that explain why we call them contact languages, I still prefer to think of
them as “new” languages. There must be characteristics of pidgin creation
that set this process apart from other forms of language change. One of
these characteristics has to do with the time-scale of change, another has
to do with the instrumentality of pidgin languages. In contrast to lan-
guages that have evolved from other languages over many generations,
pidgin languages may be considered a “special-purpose instrument”
of communication. To some extent, these languages may be considered
domain-specific (e.g., Russenorsk, which served the Pomor trade). I there-
fore assume that the pidgin user is more concerned with the efficacy of this
language than with the way it forms a system of linguistic signs.
At present, the creation of neologisms in particular groups, within
information technology, among data hackers, or in criminal gangs, is similarly ruled
by a consideration of instrumental efficacy, and is most probably acquired
by way of implicit or procedural learning. The result is a language cre-
ation that seems artificial to “outsiders.” Perhaps this is the only way that
linguistic creations take place over relatively short time periods, whereas
awareness of language as a system of signs takes more time to develop.
At the same time, the neologisms in our time may share the procedural
aspects of the languages of our early ancestors.
Could the protolanguage of early humans some 50,000–100,000 years
ago have been a form-based language of this kind? This is supposed to be the lan-
guage used by our last common ancestor, from which known languages
are believed to have evolved in small steps to form a language family. I
think the protolanguage may have been based on implicit rules of com-
munication like those found in pidgin languages. (In Chap. 3, Sect. 3.1.3,
I will also discuss Bickerton’s conception of protolanguage.) The status of
a protolanguage may have remained as long as the rules of the “game”
served the goals of the community, and the group/society did not grow
too large or become challenged by another group or society that used a
different language; the community might do well without an explicit com-
prehension of word/sign meaning (meta-linguistic knowledge). Societal
growth and differentiation also produce a differentiation of expressive
form, of dialects or new languages. Therefore, cooperation and interac-
tion between groups required humans to transcend the implicit rules of a
“language game.” Actually, within a group or tribe that constantly adapts
to changing conditions of living, there will always be a need to transcend
the rules of the game. In consequence, particular groups of people in
early times developed an understanding of the meaning of signs across
differences in the forms of their production, which may have given rise
to languages with explicit concepts which could consciously be recalled;
that is, linguistic expressions of declarative knowledge.
Any extant pidgin languages may be studied within the framework of
cognitive neurobiology, and with an emphasis on the long-term memory
systems. In particular, the balance between nondeclarative and declarative
memory systems will be an important objective of research. (In Chap. 3,
Sect. 3.3, I will discuss Ullman’s research approach, which focuses on the
procedural and declarative memory systems.)
Evolution of semantic meaning requires some flexibility in the use
of linguistic signals, which is a consequence of the arbitrary relation
between form and meaning. First, linguistic symbols, whose meaning
involves explicit or declarative knowledge, conform to a law of replace-
ment, which means that a sign may be replaced by another sign that differs
in form of production, but has the same meaning. Also, the synonymy
of signs/words becomes a critical aspect of all human languages. Next,
homonymy, the property that one sign may have different referents (i.e.,
same expression, different meanings) depending on the context of com-
munication, is another important characteristic of linguistic symbols.
Finally, linguistic symbols do not only refer to events and objects which
are present in a particular setting, but also to distant objects, and to past
events as well as future scenarios. (This is one of the design features of
language called displacement in Hockett’s list, which will be discussed in
Chap. 3, Sect. 3.1.2). It also permits communication about impossibili-
ties and paradoxes. All of these characteristics depend on the learning of
explicit concepts; none of them could be present in languages which are
restricted by the type of signals characterized as home signs.
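These properties can be pictured with a toy lexicon (a hypothetical sketch of my own; the entries, apart from the cat/chat example above, are invented). Synonymy is a many-to-one mapping from forms to meanings, homonymy a one-to-many mapping, and the law of replacement amounts to substituting one form for another that shares a meaning with it.

    # A toy lexicon illustrating synonymy, homonymy, and replacement.
    form_to_meanings = {
        "cat":  ["FELINE"],
        "chat": ["FELINE"],  # same meaning, different form
        "bank": ["RIVERSIDE", "FINANCIAL_INSTITUTION"],  # homonymy
    }

    def replaceable(form_a, form_b, lexicon):
        # Law of replacement: two forms may substitute for one another
        # whenever they share at least one meaning.
        return bool(set(lexicon[form_a]) & set(lexicon[form_b]))

    print(replaceable("cat", "chat", form_to_meanings))  # True
    print(replaceable("cat", "bank", form_to_meanings))  # False

None of this machinery captures how such mappings are acquired, of course; it merely shows that the properties in question are relational facts about the lexicon rather than about any single sign.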

1.6 Language-Culture Interactions


To some extent, language mirrors the ways of thinking in the culture.
As pointed out, therefore, it is difficult to distinguish clearly between
the “domain of language” and the “domain of thought.” Trends in con-
temporary culture constrain language. Thus, it is commonly agreed that
cultural development has shaped the semantics of language, in particular
the involvement of explicit concepts and declarative knowledge. At the
same time, language evolution may also have affected cultural change; for
example, the invention of writing, which eventually changed language,
and which has given rise to civilizations and further cultural and techno-
logical innovations.
Studies of language–culture interactions address a relatively recent
period in the evolutionary history of language. This is the period of oral
culture before the invention of writing and the growth of literacy; that
is, a period of equal interest for studies of language change and language
evolution. Linguistic aspects of languages have changed, but has lan-
guage as a human cognitive capacity changed in the last 6000 years since
the invention of writing?
Can we characterize languages before the invention of writing? We
have only indirect evidence of preliterate languages through studies of
the Homeric poems and other texts from ancient oral traditions (Ong,
1982; Parry, 1971). The experts that have studied ancient oral traditions
seem to agree that the conserved texts may serve as important clues to
an understanding of pre-literate languages, but they also stress that these
languages were not in any sense “primitive” or fundamentally different from
languages in modern societies. On the contrary, Lyons (1981) argued
that neither global nor historical comparisons reveal any evidence of
“primitive” languages: “no correlations have yet been discovered between
the different stages of cultural development through which societies have
passed and the type of language spoken at these stages of cultural develop-
ment” (p. 28). However, there were differences between oral (pre-literate)
and modern languages.
Ong (1982) challenged his reader to imagine a culture “where no one
has ever ‘looked up’ anything” (p. 31). In this culture, words did not exist
visually; words were evanescent sounds or events. They did not constitute
tools in the recitation of a narrative, as in a literate culture. Rather, words
were motor events or actions, and recitation of the narrative was a perfor-
mance, and hence subject to the structural laws of formulary expressions.
In an oral culture, therefore, language was strongly affected by mnemonic
constraints, favoring rhythmic patterns, repetitions, alliterations, and so
on. With this emphasis on the structural form of a message, it may have been
difficult to distinguish linguistic form from semantic meaning. On this
account, it may have been difficult to decode new events into formulary
expressions of the oral culture; the capacity to tell or report “new” events
may have been rare. Instead, mimetic and recollective functions of lan-
guage may have dominated human communication, compared to gen-
erative aspects, which to a greater extent have served inventive thought
and action in modern languages.
When words are stressed as motor events or actions, it may seem that they
could only exist in the medium of sound. Could words be conceivable
independent of this medium; for instance, as visual gestures or visual
characters? Lyons (1977), in his classical work on semantics, stated that
medium transferability of language is as important a design feature as the
one Hockett called learnability:

The point is that languages, or at least the verbal component of languages,
can be considered independently of the medium in which they are primarily
and naturally manifest; and, as we have seen, written languages already have
some degree of independence as one of man’s principal means of communi-
cation (p. 87).

Writing, as well as the development of sign languages, supports the
principle of medium transferability of language. This principle also
involves a distinction between speech and language; the latter is con-
ceived of as a modality-independent capacity which can be expressed by
different kinds of articulators (see Chap. 7).
Comparative studies of pre-literate and post-literate languages show
that language as a cognitive capacity may have changed in the modern
era. However, the specific ways language has changed remain to be seen.
The brief description of the pre-literate languages given above may also
indicate that the process of language acquisition, despite similarities,
may have differed from language acquisition by modern infants.
Our conception of “typical language development” is to some extent
time- and culture-specific and should be taken into consideration when “con-
trol groups” in research on language acquisition and language impair-
ment are defined.

1.7 Outlines of the Present Work


The contents of the next chapters were selected according to the follow-
ing considerations: First, they cover important issues and discussions
in theories of language evolution. Second, they are related to language
acquisition and language impairments. However, issues of language evo-
lution may dominate in a greater number of chapters, and to keep a bal-
ance between the two objectives, Chaps. 2 and 8 deal exclusively with
developmental language impairment. Chapter 2 concerns what devel-
opmental impairment is; hence, it addresses the terminological discus-
sion among researchers and clinicians in the field. Should we abandon
the term specific language impairment (SLI), and, if so, what term should
replace it? This chapter also has a section on the genetic etiology of devel-
opmental language impairment, and another section on the role of early
interactions between child and caregiver. Both will demonstrate the main
objectives of the chapter; namely, to show the prospects and importance
of an evolutionary approach.
Chapter 3 addresses the problem of continuity in language evolu-
tion. Although it recognizes relative gradualism, it does not reject the
importance of macromutations in evolution. The chapter distinguishes
between continuity in evolutionary time (from subhuman primates to
humans) and continuity across behavioral domains (does language share
a neural substrate with nonlinguistic domains of behavior?). Studies of
communicative/linguistic abilities in subhuman species are discussed
(continuity in time), and Ullman’s procedural declarative (PD) model
and Saffran’s constrained statistical learning paradigm are presented, as
they both relate particularly to the continuity-across-domain problem.
The chapter also presents more recent research on mirror neurons and
discusses whether the F5 area in the monkey brain is a homologue of
Broca's area in humans. It also raises the problem of what may have
served as pre-adaptations for grammar in language, and it discusses the
question of whether the motor system has a special role in language.
Chapter 4 explains why some dialogues are procedural skills (they are
learned implicitly and have the characteristics of the procedural mem-
ory system). Turn-taking by marmoset monkeys and human infants is a
precursor to procedural dialogues in children and adults. The chapter
addresses the problem of how the intention to communicate can be signaled
to others, and some models of language acquisition in dyads are discussed.
Procedural dialogues are related to pidgin languages, and the chapter
explains why such dialogues are easy for typically developing children
and hard for the language-impaired child.
Chapter 5 addresses the problem of how meaning in language evolved.
It distinguishes between meaning as intention and meaning as knowledge.
Whereas the former interpretation involves a temporary state of affairs,
the latter involves “stored” meaning of words and objects. The chapter
presents and comments on Lyons’ classical discussion of the “meaning
of meaning.” Meaning in pre-literate languages is discussed in relation
to Lyons’ concept of reflexivity of language. The cognitive approach as
represented by contemporary studies of categories and concepts and the
prospects of a neurobiology of lexical meaning are discussed. With refer-
ence to the studies of Fay, Garrod, Roberts, and Swoboda (2010), it is
argued that collaborative structures are important in the acquisition of
meaning in language.
Chapter 6 (Literacy and Language) may not be said to cover main-
stream discussions of language evolution, but is included here because
it can be argued that language may have changed more rapidly since
the invention of writing. It presents an outline of writing systems of the
world and discusses whether there is an optimal level of representation of
language in written languages. The chapter presents a review of cognitive
research on the effects of illiteracy and raises the question of how writing
may have changed language and the human brain. It also describes the
difficult transition to literacy in modern times, both because of develop-
mental impairments in individuals and because of cultural preconcep-
tions of reading. Finally, the chapter discusses the invention of writing as
niche construction.
Chapter 7 addresses the difference between speech and language and
argues for a modality-independent ability of language and against the
gestural theory of language evolution. Speech and sign language are both
expressions of a general language capacity; thus, acquisition of the two
language modalities shows a number of similarities, for example, charac-
teristics of vocal babbling by hearing babies and manual babbling by deaf
babies. I review research in which it is argued that vocal and manual
responses are equipotential articulators at birth. The chapter also dis-
cusses similarities and differences in the neural representation of speech
and sign language, and the consequences of long-term sound deprivation
in deaf children who receive cochlear implants. Also, the chapter dis-
cusses reasons for the global dominance of spoken languages.
Chapter 8 summarizes the main lessons in the preceding chapters
about an evolutionary perspective on developmental language impair-
ment. These lessons have their origin in two research frameworks:
Ullman’s procedural declarative model and Saffran’s constrained statisti-
cal learning paradigm; both address the mechanisms underlying the first
two S's (signal and structure) of language. The review of research pre-
sented in this chapter gives considerable support to Ullman and Pierpont's
procedural deficit hypothesis, which motivates a change of term from
specific language impairment (SLI) to Ullman’s procedural language dis-
order (PLD).
The reviewed literature argues for a domain-general processing mecha-
nism rather than a language-specific mechanism. The former is said to be
the one underlying structural sequence processing (SSP), which is opera-
tionally well-defined in a number of recent experiments. Therefore, SSP
has been applied in several attempts to enhance language functions, and
I end the chapter by discussing the prospects of improved treatment of
developmental language impairment based on this mode of processing.

References
Arbib, M. A. (2009). Evolving the language ready brain and the social mecha-
nisms that support language. Journal of Communication Disorders, 42,
263–271.
Baddeley, A. (2007). Working memory, thought, and action. Oxford: Oxford
University Press.
Baddeley, A. D., Gathercole, S. E., & Papagno, C. (1998). The phonological
loop as a language learning device. Psychological Review, 105, 158–173.
Baddeley, A.  D., & Hitch, G.  J. (1974). Working memory. In G.  H. Bower
(Ed.), The psychology of learning and motivation (Vol. 8). London: Academic
Press.
Beran, M. J. (2010). Use of exclusion by a Chimpanzee (Pan troglodytes) during
speech perception and auditory-visual matching to sample. Behavioural
Processes, 83, 287–291.
Bickerton, D. (2003). Symbol and structure: A comprehensive framework for
language evolution. In M. H. Christiansen & S. Kirby (Eds.), Language evo-
lution: The states of the art. Oxford: Oxford University Press.
Bickerton, D. (2014). More than nature needs: Language, mind and evolution.
Cambridge, MA: Harvard University Press.
Chomsky, N. (1972). Language and mind. New York: Harcourt Brace Jovanovic.
Chomsky, N. (1980). Rules and representations. New York: Columbia University
Press.
Chomsky, N. (1988). Language and problems of knowledge. The Managua
Lectures. Cambridge, MA: MIT Press.
Corballis, M. C. (2010). Mirror neurons and the evolution of language. Brain
& Language, 112, 25–35.
Creanza, N., Fogarty, L., & Feldman, M. W. (2012). Models of cultural niche
construction with selection and assortative mating. PLoS ONE, 7, e42744.
de Saussure, F. (1916). Cours de linguistique générale. Paris: Payot. See also the
1969 translation by Wade Baskin: Course in general linguistics. New  York:
McGraw-Hill.
Dennett, D.  C. (1983). Intentional systems in cognitive ethology: The ‘Pan-
glossian paradigm’ defended. Behavioral and Brain Sciences, 6, 343–390.
Di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., & Rizzolatti, G. (1992).
Understanding motor events: A neurophysiological study. Experimental Brain
Research, 91, 176–180.
Dixon, R.  M. W. (1997). The rise and fall of languages. Cambridge, UK:
Cambridge University Press.
Efron, R. (1990). The decline and fall of hemispheric specialization. Hillsdale, NJ:
Lawrence Erlbaum Associates.
Emmorey, K. (2002). Language, cognition, and the brain: Insights from sign lan-
guage research. Mahwah, NJ: Lawrence Erlbaum Associates.
Eysenck, M. W., & Keane, M. T. (2000). Cognitive psychology: A student's hand-
book. Hove: Psychology Press.
Fadiga, L., Fogassi, L., Pavesi, G., & Rizzolatti, G. (1995). Motor facilitation
during action observation: A magnetic stimulation study. Journal of
Neurophysiology, 73, 2608–2611.
Fay, N., Garrod, S., & Roberts, L. (2008). The fitness and functionality of cul-
turally evolved communication systems. Philosophical Transactions of the
Royal Society B-Biological Sciences, 363, 3553–3561.
Fay, N., Garrod, S., Roberts, L., & Swoboda, N. (2010). The interactive evolu-
tion of human communication systems. Cognitive Science, 34, 351–386.
Fitch, W. T. (2010). The evolution of language. Cambridge: Cambridge University
Press.
Fitch, W. T. (2012). Evolutionary developmental biology and human language
evolution: Constraints and adaptation. Evolutionary Biology, 39, 613–637.
Goodman, C. S., & Coughlin, B. (2000). The evolution of Evo-Devo biology.
Proceedings of the National Academy of Science, 97, 4424–4425.
Grice, H. P. (1957). Meaning. Philosophical Review, 66, 377–388.
Hauk, O., Johnsrude, I., & Pulvermuller, F. (2004). Somatotopic representation
of action words in human motor and premotor cortex. Neuron, 41,
301–307.
Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The faculty of language:
What is it, who has it, and how did it evolve? Science, 298, 1569–1579.
Hockett, C. D. (1960). The origin of speech. Reprint from Scientific American,
603.
Jackendoff, R. (1999). Possible stages in the evolution of the language capacity.
Trends in Cognitive Sciences, 3, 272–279.
Jahr, E. H. (1996). On the pidgin status of Russenorsk. In E. H. Jahr & I. Broch
(Eds.), Language contact in the Arctic: Northern pidgins and contact languages.
Berlin/New York: De Gruyter Mouton.
Jakobson, R., & Halle, M. (1971). Fundamentals of language. The Hague:
Mouton.
Kaminski, J., Call, J., & Fischer, J. (2004). Word learning in a domestic dog:
Evidence for fast mapping. Science, 304, 1682–1683.
Krentz, U. C., & Corina, D. P. (2008). Preference for language in early infancy:
The human language bias is not speech specific. Developmental Science, 11(1),
1–9.
Laland, K. N., Odling-Smee, J., & Gilbert, S. F. (2008). Evo-Devo and niche
construction: Building bridges. Journal of Experimental Zoology Part B:
Molecular and Developmental Evolution, 310B, 549–566.
Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Studdert-Kennedy, M.
(1967). Perception of the speech code. Psychological Review, 74, 431–461.
Lieberman, P. (2000). Human language and our reptilian brain: The subcortical
bases of speech, syntax and thought. Cambridge, MA: Harvard University Press.
Lyons, J. (1977). Semantics (Vol. 1). Cambridge: Cambridge University Press.
Lyons, J. (1981). Language and linguistics: An introduction. Cambridge:
Cambridge University Press.
Merritt, D. H., & Klein, S. (2014). Do early care and education services improve
language development for maltreated children? Evidence from a national
child welfare sample. Child Abuse & Neglect, 39, 185–196. doi:10.1016/j.
chiabu. pii: S0145-2134(14)00344-5.
Ong, W. (1982). Orality and literacy: The technologizing of the word. London:
Methuen.
Parry, A. (1971). Introduction. In M. Parry (Ed.), The making of Homeric Verse:
The collected papers of Adam Parry. Oxford: Clarendon Press.
Pinker, S. (1994). The Language Instinct. New York, NY: William Morrow and
Company.
Ramachandran, V. S. (2000). Mirror neurons and imitation learning as the driv-
ing force behind “the great leap forward” in human evolution. Edge, 69(29).
Rizzolatti, G., & Arbib, M. A. (1998). Language within our grasp. Trends in
Neurosciences, 21, 188–194.
Rizzolatti, G., & Craighero, L. (2004). The mirror neuron system. Annual
Review of Neuroscience, 27, 169–192.
Rizzolatti, G., & Sinigaglia, C. (2008). Mirrors in the brain. How our minds share
actions and emotions. Oxford: Oxford University Press.
Rodriguez-Caballero, A., Torres-Lagares, D., Rodriguez-Perez, A., Serrera-
Figallo, M. A., Hernández-Guisado, J. M., & Machuca-Portillo, G. (2010).
Cri du chat syndrome: A critical review. Medicina Oral Patologia Oral y
Cirugia Bucal, 15, e473–8.
Ryle, G. (1949). The concept of mind. London: Hutchinson.
Saffran, J.  R. (2002). Constraints on statistical language learning. Journal of
Memory and Language, 47, 172–196.
Saffran, J. R. (2003). Statistical language learning: Mechanisms and constraints.
Current Directions in Psychological Science, 12, 110–114.
Saffran, J., Hauser, M., Seibel, R., Kapfhamer, J., Tsao, F., & Cushman, F.
(2008). Grammatical pattern learning by human infants and cotton-top tam-
arin monkeys. Cognition, 107, 479–500.
Schacter, D. L., Addis, D. R., & Buckner, R. L. (2008). Episodic simulation of
future events. Annals of the New York Academy of Sciences, 1124, 39–60.
Senghas, A. (2005). Language emergence: Clues from a new Bedouin Sign
Language. Current Biology, 15, 463–465.
Senghas, A., Kita, S., & Özyürek, A. (2002). Children creating core properties
of language: Evidence from an emerging sign language in Nicaragua. Science,
305, 1779–1782.
Shanker, S. G., & King, B. J. (2002). The emergence of a new paradigm in ape
language research. Behavioral and Brain Sciences, 25, 605–656.
Smith, J. D., Crossley, M. J., Boomer, J., Church, B. A., Beran, M. J., & Ashby,
F. G. (2012). Implicit and explicit category learning by capuchin monkeys
(Cebus apella). Journal of Comparative Psychology, 126, 294–304.
Smith, K. (2004). The evolution of vocabulary. Journal of Theoretical Biology,
228, 127–142.
Squire, L. R., Knowlton, B., & Musen, G. (1993). The structure and organiza-
tion of memory. Annual Review of Psychology, 44, 453–495.
Tomasello, M. (1999). The cultural origins of human cognition. Cambridge, MA:
Harvard University Press.
Toni, I., de Lange, F.  P., Noordzij, M.  L., & Hagoort, P. (2008). Language
beyond action. Journal of Physiology – Paris, 102, 71–79.
Turella, L., Pierno, A. C., Tubaldi, F., & Castiello, U. (2009). Mirror neurons in
humans: Consisting or confounding evidence? Brain and Language, 108,
10–21.
Ullman, M. T. (2004). Contributions of memory circuits to language: The declarative/procedural model. Cognition, 92, 231–270.
Varney, N. R. (2002). How reading works: Considerations from prehistory to
the present. Applied Neuropsychology, 9, 3–12.
Vouloumanos, A., & Werker, J.  F. (2004). Listening to language at birth:
Evidence for a bias for speech in neonates. Developmental Science, 10(2),
159–171.
Wittgenstein, L. (1958). Philosophical investigations. (The English text of the
third edition). Englewood Cliffs, NJ: Prentice Hall.
2
Developmental Language Impairment: Conceptual Issues and Prospects of an Evolutionary Approach

As mentioned in the Introduction, I make a distinction between speech
and language. This is a commonly accepted distinction, but the reason
it is stressed here is the amodal concept of language presented in this
book. Thus, speech depends on a vocal auditory channel of communi-
cation, whereas language is a modality-independent capacity (see Chap.
7). Therefore I also make a distinction between speech disorders and
language impairments. These may be related impairments; for example,
speech sound disorder (phonological difficulties) may be one of several
symptoms occurring in a language-impaired child. However, a produc-
tion error such as a speech sound confusion tends to be accompanied by
other difficulties. A production error by itself does not qualify as language
impairment. Similarly, it should be noted that many language-impaired
children have reading difficulties; however, dyslexia and language impair-
ments are generally treated as nosologically different impairments.
Furthermore, in language pathology a major distinction is made
between acquired impairments due to brain damage or disease, and
developmental language impairments, which occur in the absence of any
diagnosed brain pathology. The former type of impairments, which are
generally caused by a stroke, and commonly referred to as aphasia, may
occur at any time in the life span of the individual, whereas develop-
mental language impairments arise in early childhood and tend to have
long-term consequences for the child. Both types of impairment may be
studied from an evolutionary point of view, but in the present work I will
deal only with developmental language impairment. A number of other
terms have been used about language impairment that arises in develop-
ment, for example, the DSM-5 term “language disorder;” whereas other
terms are “primary language impairment” or “language learning impair-
ment.” In this work I prefer “developmental language impairment” as the
default term. By including "developmental," the diagnostic label indicates
impairments which are related to general developmental processes; for
example, early infant–caregiver interactions, babbling, and the critical
period of language acquisition. In a report on contemporary debate about diag-
nostic terms, Reilly, Bishop, and Tomblin (2014) indicate that the few
objections raised against this term have stressed that “developmental”
makes it inappropriate for older children and adults. The main argument
in favor of the term is that “developmental” marks a contrast to “acquired,”
which is the main reason why I prefer to use this term. In the following,
however, the SLI term will be used in reviews of works where this term is
a central one. Otherwise, developmental language impairment will be the
default term in the present work. This will be used until we can come up
with a new term that can be linked to causal factors in the human brain. It
is also important that a new term can be interpreted within an evolution-
ary frame of reference (see discussion in Chap. 8, Sect. 8.6).
Bishop (2014) pointed out that diagnoses of language impairments,
in contrast to Down syndrome, cannot be based on a “clear dividing
line between normality and abnormality in its aetiology.” Lacking a firm
research basis for diagnoses, a number of false positives and false negatives
may be expected, and therefore the use of any diagnostic label may cause
tensions between clinicians and parents. Bishop actually asked whether
diagnostic labels should be abandoned and replaced by terms such as
"special educational needs" or a nonspecific term such as "speech,
language and communication needs." She admitted, however, that this
solution would hamper research, and therefore rejected it. A diagnostic
category with explicit criteria for inclusion and exclusion in
experimental groups is needed. Hence, she retained the commonly used
term "specific language impairment" (SLI) but suggested that "'specific'
means idiopathic (i.e., of unknown origin) rather than implying there
are no other problems beyond language.” In the following section I shall
comment on the terminological discussion raised by Bishop (2014).

2.1 Diagnostic Labels and the Problem of Specificity of Impairment

I fully acknowledge Bishop’s reasons for retaining SLI for research purposes;
however, I will add that “specific” has more connotations than “idiopathic.”
(It may be argued that her interpretation of the SLI term is itself an “idio-
pathic” one.) In particular we will contrast “specific” with “general” as in
“general domain of behavior,” and this is why some objections may be raised
against the SLI term in the context of language evolution. Despite Bishop’s
proposal to equate "specific" with "idiopathic," I think that "specificity of
impairment" is an unavoidable connotation of the term. On this assumption
I will give more arguments for why SLI is a problematic term, both in view
of recent research, and in relation to theories of language evolution.
Let me recapitulate that the study of language evolution has revealed a
number of commonalities between language and nonlanguage domains:
Contemporary research on mirror neurons has demonstrated a mechanism
for the linking of perception and motor action, which is a prerequisite for
both cognitive and linguistic skills. Also, linguistic signals, which are both
learned and symbolic, depend on learning constraints, which serve pre-lin-
guistic acquisition of concepts, and are present in both animals and humans.
Moreover, the acquisition of grammar in language has some features in com-
mon with the learning of a “grammar” of actions by animals. In general,
language depends on brain systems that also mediate other functions, and
from a biological point of view it is difficult to define a sharp distinction
between language and nonlanguage domains. In Chap. 3, which deals with
the continuity problem in language evolution, I will also discuss the prob-
lem of continuity across behavioral domains. Therefore, this section, as well
as the next one, will bear upon the problem of specificity of impairment.
In an evolutionary perspective, a dysfunction of any of the mechanisms
underlying language acquisition is likely to affect not only components
of language, but also nonlinguistic cognitive functions. Language impair-
ments tend to occur in comorbid contexts with other disorders. Thus,
many children with autism spectrum disorders are also language-impaired
or remain minimally verbal at age five (Weismer & Kover, 2015), many
children with Attention Deficit Hyperactivity Disorder (ADHD) also have
language problems (Sciberras et al., 2014), and language impairments are
linked to working-memory disorder (Botting and Conti-Ramsden, 2001),
motor impairments (Hill, 2001) and temporal processing deficits (Leonard,
1998). Therefore, it is difficult to decide what is "specific" in SLI.
SLI is defined by a set of criteria for inclusion and exclusion. The for-
mer type of criteria is generally based on standardized language tests such
as the Clinical Evaluation of Language Fundamentals (CELF), and the test for
receptive syntactic-language abilities (Test for Reception of Grammar;
TROG). Some researchers have also studied “critical markers” of devel-
opmental language impairments with the objective of defining new crite-
ria of inclusion. In Sect. 2.3 below, I will discuss some of this research and
discuss possible criteria for inclusion; first I will turn to the criteria for
exclusion. These involve other sensory-motor or neurological disorders,
and nonverbal IQ below a critical level. By excluding children with these
disorders, the study group may be said to form "the residual variance," which
remains when other causal factors are controlled. In other words, SLI is
defined by an observed discrepancy between impaired language function
and normal nonverbal ability by excluding cases below a critical nonver-
bal IQ. Does this mean that few children with language problems con-
form to the definition of SLI, and that we therefore run the risk of using
a diagnostic category which is nearly an empty one? Arguments against
the use of a discrepancy criterion have been raised by several researchers,
but perhaps the strongest one is formulated by Bishop (2014):

The discrepancy criterion captured the notion that the impairment was
unexpected and unexplained: whereas there was an assumption that
language deficits were unsurprising in a child who had more global intel-
lectual difficulties. However, this rationale has not been supported by
evidence in either language or literacy problems. While it is true that
verbal and nonverbal impairments often co-occur, it is not the case that
nonverbal ability sets a limit on language development….Indeed, it is
possible to find children whose performance on language tests is much
2 Developmental Language Impairment... 53

better than performance on nonverbal tests–the opposite pattern to what


is seen in SLI. Furthermore, inclusion discrepancy criteria in diagnostic
formulations can be a barrier to progress in studies of aetiology. For
instance Bishop (2014) found that twin data were more interpretable if
children were categorized according to language deficits, regardless of
nonverbal ability, than if a conventional diagnosis of SLI were used. In
short, where low nonverbal ability accompanies poor language skills, it
should be seen as a correlate rather than an explanation (p. 388).
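
To make the logic of these criteria concrete, the following sketch (with purely hypothetical thresholds and field names, not a clinical instrument) encodes a conventional SLI decision: a language composite well below the mean, a nonverbal IQ above a cutoff, and no other diagnosed disorder. It also shows how the discrepancy criterion excludes children whose low nonverbal ability may be a correlate rather than an explanation:

    from dataclasses import dataclass

    @dataclass
    class ChildProfile:
        language_z: float      # composite language score (z-scaled), e.g. from CELF/TROG
        nonverbal_iq: int      # standardized nonverbal IQ
        other_disorder: bool   # any diagnosed sensory-motor or neurological disorder

    def meets_conventional_sli(p, language_cutoff=-1.25, iq_cutoff=85):
        """Inclusion: language well below the mean.
        Exclusion: low nonverbal IQ or another diagnosed disorder."""
        return (p.language_z <= language_cutoff
                and p.nonverbal_iq >= iq_cutoff
                and not p.other_disorder)

    # A child with poor language but a nonverbal IQ of 80 is excluded,
    # even if the low IQ is a correlate rather than an explanation:
    print(meets_conventional_sli(ChildProfile(-2.0, 80, False)))  # False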

There are high rates of co-occurrence between language problems and
neurodevelopmental disorders, which means that a child with SLI is likely
to have other developmental problems as well. In Sect. 2.2 below, I will give
further arguments against an uncritical use of the discrepancy criterion by
reviewing some cases of language impairments in deaf children. The prob-
lem is whether the SLI term should be omitted in the diagnostic terminol-
ogy or whether it still serves an important function in therapeutic work with
language-impaired children. Because the DSM-5 term “language disorder”
has been considered too wide, Reilly, Tomblin, et al. (2014) argued against a
replacement of this term with “specific language impairment.” The latter term
they considered to be a “convenient label for researchers,” whereas in the cur-
rent classification system it is “unacceptably arbitrary.” These researchers also
argued for a strong relaxation of the use of exclusionary criteria. Without
this relaxation, it seems that we may end up with an "empty" term. Thus,
Bishop (2014) stressed that children with a substantial discrepancy between
language and nonverbal ability and with no other neurodevelopmental
problems constitute “a vanishingly small proportion of language impaired
children.” Nonetheless, Bishop also argued that although “labels can have
negative consequences, the consequences of avoiding them are worse.”

2.2 Language Impairments in Deaf Children Exposed to Sign Language from Birth

I will now present some evidence of modality-independent language
impairment. (Compare this section with the concept of language that is
presented in Chap. 7.) Among the inclusion criteria of SLI, standardized
language tests, late onset of speech and some aspects of phonological
processing show that SLI is supposed to be a deficit of speech acquisi-
tion. Also, some of the most influential theories (Baddeley, Gathercole,
& Papagno, 1998; Bishop, 1997; Tallal, 1976) have tacitly assumed that
SLI is a deficit of spoken language acquisition. Due to an “implicit” defi-
nition of this disorder, there was no question of an occurrence of SLI in
children solely exposed to sign language. Furthermore, because hearing
loss was specifically excluded in diagnoses of SLI, it became impossible to
investigate the occurrence of SLI in deaf children. In short, SLI in deaf
children was a contradiction in terms.
If SLI depends on a deficit in modality-independent language process-
ing, we may expect the same incidence of language impairment among
deaf and hearing children. In other words, about 7 % of deaf children
(according to figures from Leonard, 1998) will have this impairment.
Morgan (2005) also argued that due to neurological insults that often
accompany deafness the incidence of language impairment may be
higher. He, therefore, set out to investigate what language impairments
looked like and what parts of language are affected among deaf children
who have noticeable difficulties in acquiring sign language. The problem
is that late learners are frequent in the signing community, and therefore
language delay caused by language deprivation must be distinguished
from a general language disorder. Morgan (2005) argued that late learn-
ers will show a typical developmental path (same milestones but different
ages), whereas disordered children will show a different developmental
path. Previous research had shown that late learners generally follow a
typical developmental path. Morgan and Herman (see Morgan, 2005)
designed a test of grammar and Herman, Holmes, and Woll (1999)
designed a test of receptive skills in British Sign Language (BSL); both
were used to detect deviant or atypical development, and to pinpoint
what parts of sign language caused some major difficulties of learning.
Morgan has reported two cases of language impairment in children
exposed to sign language from birth. The first one was a hearing male
(JA) of 5;11 years at testing (Morgan, 2005), and the second one was a
deaf male named Paul aged 5;2 at testing (Morgan, Herman, and Woll,
2007). Both had trouble learning BSL. JA was exposed to spoken
English in school while communicating at home in BSL with his deaf
mother and deaf father. His English was assessed by way of the CELF,
and his signing abilities were assessed with the British Sign Language
Receptive Skills Test (BSL-RST) (Herman et  al., 1999). In both tests,
JA scored age-appropriately on vocabulary, but very low on signed sen-
tences, and comprehension of sentences in English; that is, impairments
of a similar kind in the two languages. His erratic profile on the items
in both tests showed that his performance was atypical and not due to a
general language delay.
Paul’s vocabulary was assessed with a nonstandardized BSL version
of British Picture Vocabulary Scale (BPVS), and sentence comprehen-
sion was tested with the BSL-RST. Like JA, Paul showed a normal sign
vocabulary, but had great difficulties in understanding complex signing
(1.3 standard deviations below the mean). Morgan et al. (2007) argued
that Paul’s low “performance could not be characterized as a slow learner
as by failing early items and passing more difficult ones his performance
appeared random rather than like a younger child” (p. 101). Expressive
language was documented by video recordings of Paul’s signing in BSL
with his parents, teachers, and therapist. These recordings revealed that
his expressive language “was restricted to small sentences made up of one
or two signs with very limited grammar” (p. 102).
The two cases were similar in some important respects. Both showed
a normal vocabulary, but subnormal comprehension and production of
signed sentences. Moreover, both showed an erratic and atypical perfor-
mance which differed from that of late learners or second-language learn-
ers. Also, JA’s language difficulties in speech and BSL were similar. His
problems, which showed up in both modalities, although representing
similar linguistic domains, may have been caused by a general deficit of
symbolic reference. Due to the similar pattern of difficulties for Paul and
JA, the author believed that both may have suffered from this general
linguistic deficit.
According to Morgan (Morgan, 2005; Morgan et al., 2007), JA and
Paul represented two cases of SLI in users of sign language. Later, Mason
et al. (2010) reported sign language impairments among 13 signing
deaf children aged 5–14 years. They argued that the significant language
delay found in this group could not be explained by poor exposure to
BSL. Scores on the BSL-RST and the BSL Production Test showed that
most aspects of language were affected. These results have clear impli-
cations for theory and practice in the field of developmental language
impairment, in particular for our interpretation of the SLI term. As
pointed out above, it is not clear what is specific in SLI in hearing chil-
dren exposed to speech. If this diagnosis is extended to include difficulties
in acquiring sign language by deaf children as well, one can no longer
maintain the term "specific" for this deficit. Therefore, Morgan's studies
have given rise to further critique of the SLI term, in particular to the
discrepancy criteria in the definition of this term.
The fact that JA had similar difficulties in spoken and signed language,
and also that the two boys had similar signing difficulties, can be inter-
preted as a dysfunction of a modality-independent capacity of language.
Can we likewise assume that the two boys had the same difficulties as most
hearing children with unexplained language impairments, and that they
all can be classified by one diagnostic term, such as “developmental lan-
guage impairment?” Based on the conception of language as a modality-
independent capacity (see Chap. 7), this term may be a viable one for
both deaf and hearing children who have comparable difficulties in their
own language modalities. However, by thus abandoning important dis-
crepancy criteria, we are left with a large, complex and clinically het-
erogeneous group. These children may differ with respect to the type of
interventions/treatment they will benefit from, and therefore they should
not be subsumed in one clinical and diagnostic term. Briscoe, Bishop,
and Norbury (2001) reported that a group of children with mild-to-
moderate hearing loss had language problems which in many ways were
similar to those of hearing children with SLI. The former group, however, ben-
efited from reading instruction, whereas the SLI children did not, or had
severe difficulties in learning to read.
In agreement with Reilly, Bishop, et al. (2014), I will also argue for
diagnostic terms which make it possible to distinguish between chil-
dren with problems which persist into adulthood and those who
have problems “which are likely to be resolved of their own account.”
Should we therefore distinguish between clinical groups based on pros-
pects of remedial treatments? Reilly et al. suggested building risk mod-
els of early language trajectories. This may require a distinction between
components of language which are differently impaired, and which
have different evolutionary origins. From the perspective of language
evolution, impairments that appear to be similar may have different
evolutionary origins and developmental trajectories, and perhaps would
respond differently to remedial treatment. Other impairments that at
first sight appear to be different (because they involve different articula-
tors) may be evolutionary related and subject to similar developmen-
tal trajectories. This is why Morgan’s research mentioned above is so
important, both from a clinical and a theoretical perspective. Therefore,
further research into the comparability of language impairments in
hearing children exposed to speech and deaf signers exposed to sign
language from birth is required. Also, we need population studies of
deaf communities in order to evaluate the incidence of sign-language
impairments.
Because developmental language impairments can be found both
among deaf children exposed to sign language and among hearing chil-
dren exposed to speech, and because these impairments affect similar
linguistic domains, we should be looking for anomalies in mechanisms
underlying the acquisition of both languages. I assume that these will
be phylogenetically older mechanisms that are involved in all aspects of
language acquisition.

2.3 Criteria of Inclusion: Can We Define "Critical Markers" of SLI?

As pointed out above, the criteria of inclusion have been based on stan-
dardized tests of language fluency and receptive grammar abilities. These
criteria were descriptive and did not indicate any causal mechanisms
underlying a deficit in normal language acquisition. Also, these criteria
defined problems only within a language domain. In addition to subnor-
mal language scores, the researchers have also tried to find “critical mark-
ers,” which extended beyond a language domain and were supposed to
identify SLI children in contrast to typically developing children. These
markers served as diagnostic tools, although they also involved a par-
ticular research approach to SLI.  I shall first present a brief review of
relatively old works, which are classic in the sense that they are often
mentioned in discussions of the etiology of developmental language
impairment, and in the final part of the section I will present a recent
work on “critical markers” in the brain structures of children with lan-
guage impairment and reading disability.
In two of the following theories, these markers did not belong to the
language domain, but were “downstream consequences of perceptual
and memory limitations” (Hsu and Bishop, 2011). For example, Tallal
(1976) argued that SLI depended on a deficit in the brain mechanisms
underlying discrimination of speech sounds. She designed the Auditory
Repetition Test (ART) for diagnostic and interventional purposes, and a
training program based on this test has been applied with some degree of
success to children with SLI (Merzenich et al., 1996). Later evaluation of
this program (Gillam, Frome Loeb, and Friel-Patti, 2001) has shown that
positive effects are limited to vocabulary and sentence length, whereas no
effects have been demonstrated on grammatical skills.
Baddeley et al. (1998) argued that SLI depended on subnormal capac-
ity of the phonological loop, an important component in the Baddeley
and Hitch (1974) model of working memory. The phonological loop
includes three subcomponents: (1) Phonological storage, which has a
limited capacity and contains spoken words and nonwords whose mem-
ory traces fade rapidly unless they are rehearsed in (2) an articulatory
buffer. (3) A Grapheme-Phoneme Converter transfers visual inputs into
articulatory movements; hence these inputs are similarly processed in the
articulatory buffer. In this way both written and spoken words gain access
to the phonological storage.
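
As an illustration of this architecture, the following toy simulation (my own sketch, not part of the Baddeley and Hitch model, with arbitrary decay and threshold parameters) treats the phonological store as a set of decaying traces that survive only if they are refreshed by rehearsal:

    def recall_count(n_items, steps, rehearse, decay=0.25, threshold=0.2):
        activations = [1.0] * n_items        # fresh traces enter the store
        for t in range(steps):
            for i in range(n_items):
                activations[i] -= decay      # traces fade over time
            if rehearse:
                # the articulatory buffer refreshes one trace per step, in order
                activations[t % n_items] = 1.0
        return sum(a > threshold for a in activations)

    # Articulatory suppression (rehearse=False) blocks refreshment and
    # reduces the effective span, as in the "the-the-the" task:
    print(recall_count(5, 10, rehearse=True))    # 4: most traces survive
    print(recall_count(5, 10, rehearse=False))   # 0: all traces have faded
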
In the seminal work of Baddeley et al. (1998), it was argued that the
phonological loop serves as a “language acquisition device.” Its capac-
ity depended crucially on the subvocal rehearsal taking place in the
articulatory buffer. Thus, instruction to repeat particular sounds, for
instance, “the-the-the” while memorizing a series of words blocks rehearsal
and reduces the immediate memory span. Baddeley et al. (1998) stressed
that the function of the phonological loop is not so much the learning
of words that exist in one’s vocabulary, but the learning of new words.
Therefore, their theory rested in part on data from the Children’s Test
of Nonword Repetition (CN REP) (Dollaghan and Campbell, 1998;
Gathercole and Baddeley, 1990), which turned out to be a good predic-
tor of language impairment among 4- to 5-year-old children. However,
Gathercole, Tiffany, Briscoe, Thorn, and The ALSPAC Team (2005) have
shown that the nonword repetition test score is a poor predictor of lan-
guage skills among older children.
Memory data from experiments with rehearsal suppression apparently
show a crucial role of subarticulation in language acquisition. More gen-
erally, these observations may be said to support theories which claim that
language may be understood “within the fold of motor action.” However,
an old experiment by Baddeley and Wilson (1985) complicates this posi-
tion. They studied a group of patients, all but one of whom suffered from
dysarthria, an impairment which interferes with the control of speech.
The remaining patient suffered from anarthria, an even worse condition
which totally prevents speech. Yet in these patients, too, rehearsal
suppression interfered with short-term memory, as it does in unimpaired
participants. Thus, rehearsal suppression in these patients could not have
acted on the peripheral articulators, and, therefore, Baddeley and Wilson
concluded that the rehearsal processes must have operated at a deeper
level.
Brown and Hulme (1996), in contrast to the theories of both Baddeley
and Tallal, argued that the impaired mechanism belonged to language,
not a nonlanguage domain. They admitted that SLI children may have
impaired working memory, or problems in discriminating speech sounds,
but these problems are the effect rather than the cause of SLI. Low scores
on tests of verbal short-term memory are commonly observed in chil-
dren with SLI, because an impaired language naturally is a disadvantage
in relation to any test of verbal memory. Problems in verbal memory, in
the absence of other problems, have no general effect on other aspects of
cognitive development.
Using the CN REP, Gathercole (1995) observed that nonwords which
agree with the phonotactic rules of English (e.g., stirple) are more eas-
ily repeated than nonwords which violate these rules (e.g., kipser). These
observations show that the ability to repeat a sequence of phonemes in a
spoken nonword is dependent on language habits, and are therefore said
to support the theory of Brown and Hulme. Gathercole then divided the
nonwords into two classes, those which agreed with the phonotactic rules
of English and those which violated these rules. Which type of nonwords
served as the best predictor of development of a vocabulary? She found
that responses to nonwords of an unknown structure (those violating the
phonotactic rules) served as a good predictor of vocabulary. Responses to
the “familiar” nonwords did not correlate with later vocabulary. Together,
these observations represented an important challenge to Brown and
Hulme’s theory. Among the theories I have presented so far, only Brown
and Hulme’s theory claims that developmental language impairment is
specific to the language domain. Tallal’s theory and Baddeley et al.’s theory
claim that the core problem for the impaired children can be found within
a nonlanguage domain (yet it has major effects on the acquisition and use
of language). On this account, the SLI term is warranted only in view of
Brown and Hulme's theory. The three theories have gained only limited
support in the literature, and many researchers today are less optimistic
about finding a causal mechanism underlying SLI. Instead, some research-
ers have emphasized the heterogeneity of children with SLI, and suggested
that there may be subgroups of impaired children that differ clinically
and etiologically. Based on standardized language and psychometric tests,
Conti-Ramsden, Crutchley, and Botting (1997) identified subgroups in
a sample of 242 clinically defined seven-year-old children with language
impairments in England. Longitudinal data showed that they could be
classified into three subgroups: expressive SLI, expressive/receptive SLI, and
complex SLI. The latter group consisted of children with lexical, syntac-
tic, semantic and pragmatic difficulties in the absence of any phonologi-
cal difficulties. However, the distinction between expressive and receptive
difficulties has not been commonly acknowledged in the literature. In
any case, it is unlikely that we could define a core problem that is shared
by these groups. Thus, apart from some descriptive characteristics, there
is practically no agreement among contemporary researchers as to what
constitutes the set of inclusion criteria that defines SLI.
What about the neuroanatomical structures which serve language pro-
cessing? Ullman's declarative/procedural model, which will be described
in Chap. 3, postulates different structures underlying declarative and
procedural memory and that the former is linked to the lexical seman-
tic system and the latter to aspects of grammar. Ullman and Pierpont
(2005) raised the idea that critical markers of SLI could be found by
analyzing these structures: "Very early detection or confirmation of SLI
may be possible by examining the neuroanatomical structures posited
to underlie the disorder (e.g., with volumetric analysis of structural MR
data)" (p. 423). Since then, a number of studies have reported on
volumetric characteristics of brain structures in children with language
impairments. However, these children have not been diagnosed with SLI,
but formed heterogeneous groups of language-impaired children. More
recently, Girbau-Massana, Garcia-Marti, Marti-Bonmati, and Schwarz
(2014) reported a voxel-based morphometry (VBM) study of 10 children
with SLI (8.5–10.9 years) and 14 typically language developing (TLD)
children (8.2–11.8 years). Analysis of volumetric changes in grey and white
matter, grey to white matter ratios, and cerebrospinal fluid (CSF) relative
to the typically developing children was undertaken using intelligence,
age, gender and total intracranial volume as covariates. They also analyzed
a subgroup of six children who had both SLI and reading disability (RD).
The results showed that SLI children had a significantly smaller volume
of grey matter in the right postcentral parietal gyrus (BA 4) and the right and
left medial occipital gyri (BA 19), but a greater volume of grey matter in
the right superior occipital gyrus which they interpreted as a “neuroplas-
tic change associated with brain reorganization” (p. 96). Children with
SLI + RD had an overall lower grey matter volume compared to the TLD
children. Moreover, SLI children had a significantly higher CSF volume
compared to the TLD children. No significant overall differences in white
matter were observed, but SLI + RD children had a significantly smaller
volume of white matter in the right inferior longitudinal fasciculus.
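
Schematically, such a group comparison amounts to regressing regional volume on group membership with the covariates partialled out. The following sketch uses random placeholder data and ordinary least squares; real VBM analyses are voxel-wise and corrected for multiple comparisons:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 24
    group = np.r_[np.ones(10), np.zeros(14)]     # 10 SLI, 14 TLD children
    iq = rng.normal(100, 10, n)
    age = rng.uniform(8, 12, n)
    gender = rng.integers(0, 2, n)
    tiv = rng.normal(1400, 100, n)               # total intracranial volume
    # placeholder volumes with a built-in negative group effect
    volume = 5.0 - 0.4 * group + 0.002 * tiv + rng.normal(0, 0.1, n)

    X = np.column_stack([np.ones(n), group, iq, age, gender, tiv])
    beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
    print(f"adjusted group effect: {beta[1]:.2f}")   # negative: smaller in SLI
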
Girbau-Massana et al. concluded that the significant group difference
in grey matter volume in the postcentral parietal gyrus together with the
difference in CSF may be taken as a critical marker of SLI.  However,
post hoc analysis showed no significant associations between volumetric
measurements in the four grey matter areas and the composite z scores for
receptive/expressive language and reading comprehension. More research
is needed to study development of the grey matter areas, especially in
relation to general cognitive development across the life span. Girbau-
Massana et al. have presented some remarkable observations on volumet-
ric brain changes by SLI and RD children, but I do not think that these
observations will “prove to be a unique marker for SLI” (p. 98).
2.4 The Genetic Etiology of Language Impairments

There is ample evidence of a genetic etiology of language disorders (Bishop,
North, and Donlan, 1995; The SLI Consortium, 2002). Heritability
studies of SLI-affected families have been undertaken with measures of
receptive syntactic-language abilities (TROG) and expressive-language
skills (Clinical Evaluation of Language Fundamentals; CELF-R) (both
commonly used as inclusion criteria for SLI) as well as measures of more
specific processes claimed to be involved in language acquisition (non-
word repetition). These studies showed levels of heritability close to 1.0.
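
For readers unfamiliar with how such estimates are obtained, the classical back-of-the-envelope method is Falconer's formula; the twin correlations below are invented for illustration and are not taken from the cited studies:

    def falconer_h2(r_mz, r_dz):
        """h^2 = 2 * (r_MZ - r_DZ): twice the excess similarity of
        identical over fraternal twins is attributed to additive genes."""
        return 2 * (r_mz - r_dz)

    # Hypothetical twin correlations for a language measure:
    print(falconer_h2(r_mz=0.90, r_dz=0.45))   # 0.9, i.e. close to 1.0
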
This means that genes may play a significant role in the etiology of lan-
guage impairments; however, Bishop et  al. concluded that the genetic
basis is likely to be complex. A breakthrough appeared with the work
of Lai, Fisher, Hurst, Vargha-Khadem, and Monaco (2001), which led to
the identification of the first gene (FOXP2) to be involved in speech
and language development: FOXP2 encodes a transcription factor that
regulates the expression of other genes that are involved in development
and patterning of the central nervous system. Also FOXP2 may bind
directly to a large number of gene promoters in the human genome,
which underscores the complexity of the genetic basis of language. The
transcription factor is a protein (Forkhead box P2) whose gene in humans is
located on chromosome 7. Orthologs of the human FOXP2 are found
in other mammals (Foxp2 in mice and FoxP2 in other species), and the
proteins they encode are nearly identical in their amino acid sequences,
which are important for the development of brain structures. Yet, human
FOXP2 differs from the transcription factors of gorillas, chimpanzees and
macaques in two amino acids, and from that of mice in three amino acids.
Lai et  al. (2001) reported a three-generation pedigree of the family
KE where half of the members had severe difficulties in speaking. Their
study implicated a monogenic mutation in the FOXP2 gene, which
means that the gene fails to activate the normal sequence of genes
required for early brain development. The affected members of the KE
family were incapable of producing intelligible speech, had an aberrant
grammar, could not move the mouth, tongue and face appropriately
while speaking, and had a significantly reduced IQ. Later, Jane Hurst at
Oxford Radcliffe Hospital identified a British boy (CS) with a muta-
tion in the FOXP2 gene and an almost identical impairment of speech.
This boy also had a visible defect in chromosome 7.
Contrary to popular interpretations of these observations, many
researchers warned against thinking about FOXP2 as the language gene.
This is not the major gene to be involved in developmental language
impairments in 4–7 % of children in Western countries. These impair-
ments affect most aspects of language, as well as language which depends
on manual articulators (as in sign language), whereas the KE family mem-
bers suffered from an inability to produce intelligible speech, particularly
the control of muscles used in speech production. Therefore, FOXP2 has
been causally related to a specific speech phenotype, called developmental
verbal dyspraxia, alternatively “childhood apraxia of speech” (CAS). On
the other hand, developmental language impairment is a more compre-
hensive impairment than verbal dyspraxia. Therefore, mutation of FOXP2
is now considered to be a rare cause of language impairment.
Language is a polygenic trait and therefore likely to depend on a
cluster of genes with coordinated effects in development. Genetic studies
of developmental language impairment no longer focus on single genes
of large effects, but emphasize a complex and multifactorial etiology.
FOXP2 controls a number of other genes, and some of these are clearly
implicated in language pathology. In the FOXP2-dependent molecular
network, we find CNTNAP2, which is located on chromosome 7q35.
Variations in this gene are associated with a number of developmen-
tal disorders, one of which is SLI. Thus, CNTNAP2 is more generally
involved in brain developmental processes.
KIAA0319 on chromosome 6, which has been associated with dyslexia
(see Chap. 6, Sect. 6.5.2) can also be mentioned in this connection. A
mutation of this gene has a key role in the etiology of developmental lan-
guage impairment. ATP2C2 and CMIP, which are both located on chro-
mosome 16q, may also be involved in language impairment. However,
these genes are primarily involved in memory-related circuitry, whereas
FOXP2 is primarily involved in oro-facial motor skills. Memory and
motor functions are indispensable components of language acquisition;
hence, variations in these genes will affect both language comprehension
and language skills. In Chap. 3, therefore, I will discuss the role of the
motor system and the different ways memory systems are implicated in
language. Ullman's (2004) neurobiological model of language acquisi-
tion claims that the acquisition of grammar is largely dependent
on substrata underlying the procedural memory system (prime among
these are the basal ganglia, including the neostriatum with the putamen
and the caudate nucleus), whereas vocabulary and semantic knowledge
depend on structures underlying the declarative system (the medial tem-
poral lobe structures such as hippocampus, entorhinal and perirhinal
cortex). Brain imaging studies of the KE family members showed that
the affected members had abnormal basal ganglia (in addition to abnor-
malities in other language-related areas). The basal ganglia are strongly
involved in movements; therefore, these abnormalities could explain dif-
ficulties in adequate movements of lips and tongue. In view of Ullman’s
model, it could also be argued that FOXP2 affects the procedural memory
system. Takahashi, Liu, Hirokawa, and Takahashi (2003) found FOXP2
expression in the striatum, particularly in the caudate nucleus, but not in
the hippocampus. This shows that the critical gene expression takes place
in the nervous mechanisms of the procedural not the declarative system.
Furthermore, the expression was higher in developing tissues than in
adult tissues, showing its relevance to language acquisition.
Ackermann, Hage, and Ziegler (2014) also argued that the basal gan-
glia provide a platform for the evolution of articulate speech in humans.
They suggested a two-step evolution of the mechanisms underlying these
skills: a refinement of projections of premotor cortex to the basal ganglia,
followed by vocal-laryngeal elaboration of the basal ganglia circuitry, a process
which depends on human-specific FOXP2 mutations. In general, genetic
variants of the FOXP2 and its associated molecular networks are involved
in the balance between procedural and declarative strategies. Further sup-
port for the expression of FOXP2 in the procedural system was presented
by Chandrasekaran, Yi, Blanco, McGeary, and Maddox (2015), who
showed that a genetic variant (the GG genotype) mediated enhanced
procedural learning of speech sound categories. This is why polymor-
phism of FOXP2 may be involved in early learning of grammar.
Ullman and Pierpont (2005) argued that basal ganglia abnormali-
ties may arise from other reasons than anomalies of the FOXP2 gene.
Early onsets of intrinsic and extrinsic neural insults may lead to atypi-
cal brain development, and therefore, “procedural language disorder”
(PLD) may depend on a diversity of etiological factors: “It is important
to emphasize that the source of the disorder is expected to vary across
individuals. Some may have mutations in the FOXP2 gene, whereas
many others show no evidence of such mutations…and instead suf-
fer from other etiologies" (p. 407). Moreover, Ullman and Pierpont
added that FOXP2 is not the only gene that is involved in PLD. Their
procedural deficit hypothesis (PDH) explained in more details the link
between basal ganglia abnormalities and grammar impairments (see
Chap. 3, Sect. 3.3.2).
Although mutation of FOXP2 is a rare cause of language impairment,
variants of this gene and its dependent molecular network are most likely
involved in the etiology of SLI. However, as reported by Bishop (2015)
mutations in one of these genes will rarely have a Mendelian pattern.
First-degree family members often manifest subthreshold symptoms, for
example, subtle phonological difficulties, and therefore she argued that the
minor impairments in the family members show that they “correspond to
a continuum of impairment, rather than all-or-none diseases” (p. 619).
This continuum means that environmental factors account for a major
source of variance in gene expression, and therefore a more comprehen-
sive treatment of the etiology of developmental language impairments
must include a discussion of epigenetics.

2.5 The Role of Early Interactions Between Child and Caregiver

The etiology of developmental language impairment is formed by a series
of events which take place in the transition from the genotype to the phe-
notype. These include a number of arbitrary constraints, physical and
social, in the environment of the developing individual. However, most
of them are learning processes underlying the acquisition of language,
and in this section I will address the most important arenas of early learn-
ing: interactions between child and caregiver.
As pointed out in the Introduction, the "language instinct" has been
replaced by an “instinct to learn.” The rationale for the latter concept was
clearly expressed by Bickerton (2014). He compared language with web
spinning by spiders and echolocation by bats.

I don’t know if isolation experiments have ever been carried out on bats or
spiders, but my guess is that if a bat or spider was raised without ever seeing
another bat or spider, it would still be able to echolocate or spin a web as
well as other species members. In contrast, children for whom some acci-
dental circumstance has drastically reduced or eliminated linguistic input
may never speak, or if they do may fall far short of a full adult language
capacity (p. 46).

Bickerton found language to be more comparable to birdsong. Though
birdsong in some species is more like echolocation by bats, in most birds
it is at least partially learned. Moreover, he stressed the incremental pro-
cess of learning, "when immature members of the species begin by pro-
ducing what is referred to as 'subsong' and later something that has been
termed 'plastic song'" (p. 46). A similar process is found in humans;
thus, he compared "subsong" to infant babbling. Bickerton, how-
ever, did not believe the “instinct to learn” concept could resolve the
long-standing conflict between empiricists and nativists. He argued that
language consists of two parts, one which makes languages differ and
therefore necessitates learning, and another which makes them similar.
The universal part of language, he said, is comparable to echolocation by
bats and web spinning by spiders. I cannot follow Bickerton on this mat-
ter, because the part in which languages do not differ will be an
abstraction as fallible as that of "universal grammar."
The contrast between learning of language and birdsong by some spe-
cies on the one hand and echolocation by bats on the other has particular
merits; first, because it warrants great variance in language performance,
without which natural selection of language capacity could not work.
Language acquisition depends on stimulation from the local caregiver(s).
Moreover, Bickerton’s description also allows for a pre-linguistic (or, bet-
ter, pre-semantic) stage in language acquisition. Finally, it shows that the
main arena of acquisition is the “dialogue” between infant and caregiver;
this is why Chap. 4 is entirely reserved for this arena of learning. The dia-
logue between child and caregiver is the main arena also for the vertical
transmission of language between generations.
In the Introduction, Sect. 1.4.2, I have argued that pre-semantic sig-
nals have temporal structures defined by transition probabilities that are
easily learned by normally developing children. These structures, when
detected, give rise to the segregation of sound sequences into words or
word-like chunks that form the important signals in child–caregiver
interactions. The statistical learning involved in the detection of these
chunks is also involved in the learning of the phrase structures (see Chap.
3, Sect. 3.2), part of which may be established prior to the acquisition of
semantic knowledge.
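
A minimal sketch may make the mechanism concrete. With a toy syllable stream in the style of Saffran's experiments (the four "words" and their order below are invented for illustration), transitional probabilities are high within recurring words and dip at their boundaries, so cutting the stream at weak transitions recovers word-like chunks:

    from collections import Counter

    words_true = ["golabu", "padoti", "tupiro", "bidaku"]   # invented "words"
    order = [0, 1, 2, 0, 3, 2, 1, 3, 0, 2, 3, 1]            # fixed toy "corpus"
    stream = "".join(words_true[i] for i in order)
    syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])

    def tp(x, y):
        """Transitional probability TP(x -> y) = count(xy) / count(x)."""
        return pair_counts[(x, y)] / first_counts[x]

    tps = [tp(x, y) for x, y in zip(syllables, syllables[1:])]
    mean_tp = sum(tps) / len(tps)

    # Cut the stream wherever the transition is weaker than average
    chunks, current = [], [syllables[0]]
    for syl, t in zip(syllables[1:], tps):
        if t < mean_tp:
            chunks.append("".join(current))
            current = []
        current.append(syl)
    chunks.append("".join(current))
    print(chunks)   # the invented words reappear as segmented units
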
The instinct to learn means that normally developing children have
wired-in sensitivities to temporal structures which are present in natural
languages. These sensitivities are generally also present in their mothers
or caregivers. Therefore, they give rise to “an interactive alignment in
conversation” (Menenti, Pickering, & Garrod, 2012), and accordingly,
infant and caregiver can also change roles. However, this alignment may
fail for a number of reasons: anything from insufficient exposure to lin-
guistic stimuli to full deprivation of language. The damaging effects may
vary depending on the language-related genes in either one of the two
parties. In the population, therefore, early dialogic failure constrains lan-
guage adaptation, and for the child "unsuccessful" epigenetics will ham-
per language acquisition and cause lasting language impairment. The first
two S’s in Fitch’s formula are insufficiently established, and clinically the
therapist has to deal with a case of “unexplained” language impairment.
Language-impaired children in this category (they may constitute the
majority of cases) pose serious challenges for the language therapist; they
lack a basic comprehension of structure at every level of language, from
syllables and words to phrases and sentences. For these children, training tasks
with linguistic materials will not be very helpful, but as argued in Chap.
8, the basic conception of event-structure can be reestablished by training
in domain-general learning tasks.
As mentioned above, early "dialogues" between infant and caregiver will
be extensively treated in Chap. 4. Here I argue that these dialogues form
examples of procedural skills that are controlled by the prefrontal–basal
ganglia circuitry. They also take place, with semantically decoded words,
when conversations are easy (Garrod & Pickering, 2004).

2.6 Problems of Differential Diagnostics


The relationship between developmental language impairment and other
neurodevelopmental disorders may be a subject of interest for the present
work, particularly in view of the vast number of comorbidities between
these disorders. I shall limit this presentation to children with autism
spectrum disorder (ASD). This had been a relatively rare congenital dis-
order; however, one in every 68 children has been diagnosed with ASD
in the US, which, according to Brown and Elder (2014) and the Centers
for Disease Control and Prevention (CDC), indicates "a 78 % increase in
prevalence in six years" (p. 219). The hallmark of the disorder is impair-
ment in social interaction and communication; other characteristics
include repetitive patterns of behavior (echolalia), contact gestures, pro-
noun reversals and neologisms.
There are great overlaps in the clinical manifestations of ASD and
developmental language impairment/SLI.  Bishop (2010) discussed
whether the two disorders have a shared etiology, which may account for
the great number of comorbid cases, or whether this overlap is more
apparent than real “since the causal route for one disorder can lead to an
outcome resembling the other disorder” (phenomimicry, p.  623). She
discusses two models which simulate overlapping etiology, one with addi-
tive genetic risks and one with nonadditive interaction between genes.
Her discussions are technical and will not be presented in any details
here, but a few comments on the diagnostic criteria used in family studies
will be in order: Diagnoses of SLI are based on vocabulary and structural
aspects of language, “they do not assess how effectively language is used
to communicate in everyday situations” (p.  620). On the other hand,
ASD diagnoses are based on the pragmatic aspects of language. This is
an apparently sharp distinction between the two disorders which sets the
premises for a discussion of shared or separate etiologies. In the following
sections, I provide some critical remarks about this distinction, and I will
therefore have more to say about important clinical manifestations.
There are three groups of theories which explain how the ASD brain
functions:

1. The first maintains that ASD children lack a theory of mind (ToM), which means that
these children do not understand that other people have independent
mental states; that is, beliefs, desires and goals.
2. Simulation theory claims that ASD children consult their own mind in
order to find out what the beliefs of another person are. They use their
own mind as a model of intentional states, and some proponents of
this theory (Sato, Uono, and Toichi, 2013) also argue that the process
is mediated by the mirror neuron network.
3. Interaction theory stresses dysfunctions in general sensory–motor
behavior and downplays the role of internal representations in
cognition.

More details about these theories can be found in Brown and Elder
(2014) and Gallagher and Varga (2015). I will now focus on the lack
of ToM, the prevailing symptom in most children with this disorder.
Lack of ToM has also been characterized as a form of mindblindness and is
typically present among a group of "high-functioning" patients with
normal intelligence and language skills; this group would be diagnosed
with Asperger syndrome on the autism spectrum, named after the Austrian pediatrician
Hans Asperger, who, in 1944, described a group of children who lacked
nonverbal communication skills, were physically clumsy and lacked an
interest in others. Should we characterize mindblindness as a language
impairment?
A ToM has been considered the most advanced stage in the evolu-
tion of intentional systems (Dennett, 1983) (i.e., I can apprehend Ted's
belief about X. Furthermore, I believe that Ted believes that I am aware
of his belief about X). The ability to detect beliefs like these has been
tested with false belief tasks such as the Sally–Anne test:

Sally hides a marble in a basket and leaves the room. While she is away
Anne moves the marble into a box. In a short time, Sally re-enters the
room, and the child who has seen an enactment of this event is asked:
“Where will Sally look for the marble?”
Children under the age of three to four years consistently choose the
box; that is, their knowledge of where the marble is cannot be separated
from Sally’s false belief. Normal children above this age and developmen-
tally disabled children with Down syndrome will generally pass this test,
whereas few autistic children have passed it. These observations together
with social and communicative difficulties by ASD children have been
interpreted as a mind-reading deficit, or mindblindness. Some older chil-
dren with ASD pass the Sally–Anne test, but they still have trouble
reading the intentions of others in everyday communicative settings.
Therefore, the validity of the Sally–Anne test is limited.
Language has evolved to make humans able to talk about, among
other things, intentional states. Certainly, mindblindness does consti-
tute a pragmatic language impairment. The problem is whether it may
also be associated with more general semantic difficulties. Thus, Brown
and Elder (2014) said “these children have the vocabulary and even have
memorized the syntax to pass standardized language screenings, but they
struggle in real world communication settings because they lack under-
standing of meaning” (p.  220). Similarly, some of these children have
revealed a precocious form of reading skill, named "hyperlexia" (Treffert,
2011). These children are capable of relatively fast reading,
whereas their interpretation and understanding of text is poor (see Chap.
6, Sect. 6.5.2).
Similarly, ASD children have difficulties in understanding metaphors,
irony and indirect requests, which may indicate that they make use of
language merely as an instrumental device, and pay less attention to the
meaning and function of words. Gold, Faust, and Goldstein (2010) stud-
ied the semantic integration process in 17 participants with Asperger syn-
drome and 16 participants in a control group (age ranged from 17 to 31
years) who were presented with 240 pairs of words that denoted either a
“literal,” “conventional metaphoric,” novel-metaphoric,” or “unrelated”
meaning. The participants were instructed to judge whether the presented
pair conveyed a meaning or not. In an “event-related potentials” (ERP)
task, N400 amplitudes showed that the Asperger patients had greater dif-
ficulties in comprehending the metaphorically related word pairs com-
pared to the control group. These difficulties were related to differences
in “linguistic information processing” by the two groups. Thus, general
language impairments may have contributed to difficulties in metaphor
comprehension.
On this account it may also be argued that communicative difficul-
ties in the ASD group are a mixture of pragmatic and semantic diffi-
culties, although no clear line of distinction can be drawn to separate
the two aspects. When taken together, the two difficulties may be said
to constitute a form of language impairment; however, SLI is commonly
diagnosed without the combined pragmatic/semantic impairment that is
observed in ASD children. The difficulties that are revealed in most ASD
children are pragmatic, whereas conventional language difficulties may be
“latent,” particularly in children with Asperger syndrome. The problem is
whether lack of ToM and mindblindness are also a combination of prag-
matic and semantic difficulties. In that case, diagnoses of ASD should not
be solely based on pragmatic aspects, given the massive overlap in
clinical manifestations of ASD and developmental language impairments.
Studies of the communicative difficulties in ASD children also call
attention to metacognition and metalinguistic skills, which constitute
important aspects of language acquisition. Metalinguistic skills may be
considered the most advanced stage of language evolution and develop-
ment. Both pragmatic and semantic difficulties may also depend on lack
of adequately developed metalinguistic skills. I shall, therefore, pay more
attention to these skills in the upcoming parts of the book. Actually, I
consider metacognition and metalinguistics to be products of the develop-
ment of literacy which has taken place in the most recent part of language
evolution. This is also part of the reason why a full chapter of the book,
Chap. 6, deals with language and literacy. As shown in that chapter,
Sect. 6.5.1, the distinction between “technological” and “interpretative”
aspects of reading is a product of the development of metacognitive and
metalinguistic awareness.

2.7 Perspectives for Research


Problems of terminology in the field of developmental language impairments
suggest that commonalities between language and other nonverbal
cognitive skills will need to be attended to in future research. Focus on these
commonalities also agrees with the evolutionary perspective taken in this
work, and in the following I shall be more specific about its
prospects for theory construction, as well as for clinical work in the applied
fields. First of all, this perspective provides an optimistic view of remedial
work and treatment, and in the long run diagnostic terminology should
be formed in agreement with the effects of new methods of treatment. The
subcomponents of language are all learned and depend on mechanisms
which serve both linguistic and nonlinguistic skills. This insight has led to
a preoccupation with domain-general learning abilities, such as working
memory capacity, and statistical learning, including implicit and sequen-
tial learning. All represent research traditions with a long history, but
their significance to language processing has been clearly demonstrated
by more recent researchers such as Conway and Pisoni (2008) and Gervain
and Mehler (2010). In Chap. 8, I will discuss the involvement of domain-
general learning in language acquisition, and hence its role in diagnos-
tics and treatment of developmental language impairment. In particular,
I will discuss the work of Conway, Gremp, Walk, Bauernschmidt, and
Pisoni (2014), who studied adults (study 1) and hard-of-hearing children
(study 2) in a computerized visual training task with nonrandom sequen-
tial patterns. By demonstrating how training of domain-general learning
abilities can enhance language function, they also showed the prospects of
an evolutionary approach to diagnoses and treatment.
As pointed out above, a theory of language evolution must account
for a mechanism underlying effective transmission of language between
generations (vertical transmission). Constrained learning of linguistic
signals, as mentioned in Sect. 1.4.2, is a prerequisite to vertical transmission
of language. (Further discussion of constrained learning of
basic signals will be presented in Chap. 3, Sect. 3.2.2.) Moreover, we
also need to know how coordinated vocalizations (or signing) evolved
to make possible conversations or dialogues between child and caregiver.
Coordinations of communicative responses take place at different levels
of linguistic skills; the most basic one is generally referred to as “turn-
taking.” Takahashi, Narayanan, and Ghazanfar (2013) recently published
a study of vocal turn-taking in marmoset monkeys. They argue that this
behavior in monkeys depends on a mechanism of coupled oscillators,
which is similar to the behavior observed in conversational turn-taking
by humans. Because marmoset monkeys belong to a different evolution-
ary branch, turn-taking in the two species may be the result of convergent
evolution. Early dialogues of infants, and the way these dialogues
depend on the procedural memory system, will be discussed in Chap. 4.
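
The coupled-oscillator idea can be illustrated with a toy simulation. The sketch below (in Python) is not a reimplementation of Takahashi et al.’s model; it only shows, under arbitrary parameter choices, how two mutually coupled phase oscillators settle into antiphase, so that their “calls” come to alternate in the manner of conversational turn-taking.

import math

# Two coupled phase oscillators; the sin(other - self - pi) term pulls
# the pair toward antiphase, so their cycles interleave. Rates w1, w2
# and coupling strength k are arbitrary illustrative values.
def simulate(steps=20000, dt=0.001, w1=2.0, w2=2.2, k=1.5):
    p1, p2 = 0.0, 1.0          # initial phases in radians
    calls = []                 # (time, caller) events
    for step in range(steps):
        p1 += dt * (w1 + k * math.sin(p2 - p1 - math.pi))
        p2 += dt * (w2 + k * math.sin(p1 - p2 - math.pi))
        if p1 >= 2 * math.pi:  # oscillator A completes a cycle: a "call"
            calls.append((step * dt, "A"))
            p1 -= 2 * math.pi
        if p2 >= 2 * math.pi:  # oscillator B completes a cycle: a "call"
            calls.append((step * dt, "B"))
            p2 -= 2 * math.pi
    return calls

for t, caller in simulate()[:10]:
    print(f"{t:6.2f}s  {caller}")   # after a brief transient: A, B, A, B, ...

After a short transient the two simulated callers alternate, which is the signature of the antiphonal calling that Takahashi et al. report for marmosets.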
By focusing on turn-taking and other aspects of early dialogues, we
also call attention to important etiological factors in developmental
language impairments. For example, van Balkom, Verhoeven, and van
Weerdenburg (2010) showed that children with a language production
delay of 10–20 months had difficulties in turn-taking, and a proneness
to use a nonverbal register. These difficulties affect the conversational
style between child and caregiver, and in the most serious cases they may
lead to language deprivation. (Recall also the study of Merrit and Klein,
which was discussed in Sect. 1.4.2.)

References
Ackermann, H., Hage, S. R., & Ziegler, W. (2014). Brain mechanisms of acous-
tic communication in humans and nonhuman primates: An evolutionary
perspective. Behavioral and Brain Sciences, 37, 529–546.
Baddeley, A. D., Gathercole, S. E., & Papagno, C. (1998). The phonological
loop as a language learning device. Psychological Review, 105, 158–173.
Baddeley, A.  D., & Hitch, G.  J. (1974). Working memory. In G.  H. Bower
(Ed.), The psychology of learning and motivation (Vol. 8). London: Academic
Press.
Baddeley, A.  D., & Wilson, B. (1985). Phonological coding and short-term
memory in patients without speech. Journal of Memory and Language, 24,
490–502.
Bickerton, D. (2014). More than nature needs: Language, mind and evolution.
Cambridge, MA: Harvard University Press.
Bishop, D. V. (1997). Uncommon understanding. Development of disorders of lan-
guage comprehension in children. East Sussex, UK: Psychology Press.
Bishop, D.  V. (2010). Overlaps between autism and language impairment:
Phenomimicry or shared etiology. Behavior Genetics, 40, 618–629.
Bishop, D. V. (2014). Ten questions about terminology for children with unex-
plained language problems. International Journal of Language &
Communication Disorders, 49, 381–415.
Bishop, D. V. (2015). The interface between genetics and psychology: Lessons
from developmental dyslexia. Proceedings of the Royal Society B: Biological
Sciences, 282(1806), 20143139. doi:10.1098/rspb.2014.3139.
Bishop, D. V., North, T., & Donlan, C. (1995). Genetic basis of specific lan-
guage impairment: Evidence from a twin study. Developmental Medicine and
Child Neurology, 37, 56–71.
Botting, N., & Conti-Ramsden, G. (2001). Non-word repetition and language
development in children with specific language impairment (SLI).
International Journal of Language & Communication Disorders, 36, 421–432.
Briscoe, J., Bishop, D. V., & Norbury, C. F. (2001). Phonological processing,
language, and literacy: A comparison of children with mild-to-moderate sen-
sorineural hearing loss with specific language impairment. Journal of Child
Psychology and Psychiatry, 42, 329–340.
Brown, B. B., & Elder, J. H. (2014). Communication in autism spectrum dis-
order: A guide for pediatric nurses. Pediatric Nursing, 40, 219–225.
Brown, B.  B., & Hulme, C. (1996). Nonword repetition, STM, and age-of-
acquisition versus pronunciation-time limits in immediate recall for
forgetting-matched acquisition: A computational model. In S. E. Gathercole
(Ed.), Models of short-term memory. Hove, UK: Psychology Press.
Chandrasekaran, B., Yi, H. G., Blanco, N. J., McGeary, J. E., & Maddox, W. T.
(2015). Enhanced procedural learning of speech sound categories in a genetic
variant of FOXP2. The Journal of Neuroscience, 35, 7808–7812.
Conti-Ramsden, G., Crutchley, A., & Botting, N. (1997). The extent to which
psychometric tests differentiate subgroups of children with SLI. Journal of
Speech, Language, and Hearing Research, 40, 765–777.
Conway, C.  M., Gremp, M.  A., Walk, A.  D., Bauernschmidt, A., & Pisoni,
D. B. (2014). Can we enhance domain-general learning abilities to improve
language function? In P. Rebuschat & J. N. Williams (Eds.), Statistical learn-
ing and language acquisition. Berlin: De Gruyter Mouton.
Conway, C. M., & Pisoni, D. B. (2008). Neurocognitive basis of implicit learn-
ing of sequential structure and its relation to language processing. Annals of
the New York Academy of Sciences, 1145, 113–131.
Treffert, D. A. (2011). Hyperlexia III: Separating ‘Autistic-like’
behaviors from autistic disorder: Assessing children who read early or speak
late. WMJ, 110, 281–286.
Dennett, D.  C. (1983). Intentional systems in cognitive ethology: The ‘Pan-
glossian paradigm’ defended. Behavioral and Brain Sciences, 6, 343–390.
Dollaghan, C., & Campbell, T. F. (1998). Nonword repetition and child lan-
guage impairment. Journal of Speech, Language, and Hearing Research, 41,
1136–1146.
Gallagher, S., & Varga, S. (2015). Conceptual issues in autism spectrum disor-
ders. Current Opinion in Psychiatry, 28, 127–132.
Garrod, S., & Pickering, M. J. (2004). Why is conversation so easy? Trends in
Cognitive Sciences, 8, 8–11.
Gathercole, S. E. (1995). Is nonword repetition a test of phonological memory
or long-term knowledge? It all depends on the nonwords. Memory &
Cognition, 23, 83–94.
Gathercole, S. E., & Baddeley, A. D. (1990). Phonological memory deficits in
language disordered children: Is there a causal connection? Journal of Memory
and Language, 29, 336–360.
Gathercole, S.  E., Tiffany, C., Briscoe, J., Thorn, A., & The ALSPAC Team.
(2005). Developmental consequences of poor phonological short-term mem-
ory function in childhood: A longitudinal study. Journal of Child Psychology
and Psychiatry, 46, 598–611.
Gervain, J., & Mehler, J. (2010). Speech perception and language acquisition in
the first year of life. Annual Review of Psychology, 61, 191–218.
Gillam, R. B., Frome Loeb, D., & Friel-Patti, S. (2001). A summary of five
exploratory studies of Fast ForWord. American Journal of Speech-Language
Pathology, 10, 269–273.
Girbau-Massana, D., Garcia-Marti, G., Marti-Bonmati, L., & Schwartz, R. G.
(2014). Grey-white matter and cerebrospinal fluid volume differences in chil-
dren with specific language impairment and/or reading disability.
Neuropsychologia, 56, 90–100.
Gold, R., Faust, M., & Goldstein, A. (2010). Semantic integration during meta-
phor comprehension in Asperger syndrome. Brain & Language, 113,
124–134.
Herman, R., Holmes, S., & Woll, B. (1999). Assessing British Sign Language
Development: Receptive Skills Test. UK: Forest Bookshop.
Hill, E.  L. (2001). Non-specific nature of specific language impairment: A
review of the literature with regard to concomitant motor impairments.
International Journal of Language & Communication Disorders, 36, 149–171.
Hsu, H. J., & Bishop, D. V. (2011). Grammatical difficulties in children with
specific language impairment: Is learning deficient? Human Development, 55,
264–277.
Lai, C. S. L., Fisher, S. E., Hurst, J. A., Vargha-Khadem, F., & Monaco, A. P.
(2001). A novel forkhead-domain gene is mutated in a severe speech and
language disorder. Nature, 413, 519–523.
Leonard, L. B. (1998). Children with specific language impairment. Cambridge,
MA: MIT Press.
Mason, K., Rowley, K., Marshall, C. R., Atkinson, J. R., Herman, R., Woll, B.,
et  al. (2010). Identifying specific language impairment in deaf children
acquiring British Sign Language: Implications for theory and practice. British
Journal of Developmental Psychology, 28, 33–49.
Menenti, L., Pickering, M. J., & Garrod, S. (2012). Toward a neural basis of
interactive alignment in conversation. Frontiers in Human Neuroscience, 6,
185.
Merzenich, M. M., Jenkins, W. M., Johnston, P., Schreiner, C. E., Miller, S. L.,
& Tallal, P. (1996). Temporal processing deficits of language-learning
impaired children ameliorated by training. Science, 271, 77–80.
Morgan, G. (2005). Biology and behavior: Insights from the acquisition of sign
language. In A. Cutler (Ed.), Twenty-first century psycholinguistics. Four cor-
nerstones. Mahwah, NJ: Lawrence Erlbaum.
Morgan, G., Herman, R., & Woll, B. (2007). Language impairments in sign
language: Breakthroughs and puzzles. International Journal of Language &
Communication Disorders, 42, 97–105.
Reilly, S., Bishop, D.  V., & Tomblin, B. (2014). Terminological debate over
language impairment in children: Forward movement and sticking points.
International Journal of Language & Communication Disorders, 49, 452–462.
Reilly, S., Tomblin, B., Law, J., McKean, C., Mensah, F., Morgan, A., et  al.
(2014). Specific language impairment: A convenient label for whom?
International Journal of Language & Communication Disorders, 49, 416–451.
Sato, W., Uono, S., & Toichi, M. (2013). Atypical recognition of dynamic
changes in facial expressions in autism spectrum disorders. Research in Autism
Spectrum Disorders, 7, 906–912.
Sciberras, E., Mueller, K., Efron, D., Bisset, M., Anderson, V., Schilpzand, E. J.,
et al. (2014). Language problems in children with ADHD: A community-
based study. Pediatrics, 133, 793–800.
Takahashi, D. Y., Narayanan, D. Z., & Ghazanfar, A. A. (2013). Coupled oscil-
lator dynamics of vocal turn-taking in monkeys. Current Biology, 23,
2162–2168.
Takahashi, K., Liu, F.-C., Hirokawa, K., & Takahashi, H. (2003). Expression of
Foxp2, a gene involved in speech and language, in the developing and adult
striatum. Journal of Neuroscience Research, 73, 61–72.
Tallal, P. (1976). Rapid auditory processing in normal and disordered language
development. Journal of Speech, Language, and Hearing Research, 9,
182–198.
The SLI Consortium. (2002). A genomewide scan identifies two novel loci
involved in specific language impairment. The American Journal of Human
Genetics, 70, 384–398.
Ullman, M.  T. (2004). Contributions of memory circuits to language: The
declarative/procedural model. Cognition, 92, 231–270.
Ullman, M. T., & Pierpoint, E. I. (2005). Specific language impairment is not
specific to language: The procedural deficit hypothesis. Cortex, 41,
399–433.
van Balkom, H., Verhoeven, L., & van Weerdenburg, M. (2010). Conversational
behaviour of children with developmental language delay and their caretakers.
International Journal of Language & Communication Disorders, 45,
295–319.
Weismer, S. E., & Kover, S. T. (2015). Preschool language variation, growth,
and predictors in children on the autism spectrum. Journal of Child Psychology
and Psychiatry, 56, 1327–37. doi:10.1111/jcpp.12406.
3 The Problem of Continuity in Time and Across Domains

A theory of language evolution is supposed to give a tentative descrip-
tion of early protolanguages. How did they differ from communicative
interactions between animals, in particular the anthropoid apes? Did
the protolanguages among early Homo sapiens represent a discontinuity
in evolution, or did they realize a continuous development of cognitive
capacities that have also been observed in the great apes?
Can we trace the extant languages back to a common origin? What
may have been the world’s first language? As noted in the Introduction,
language evolution was long considered to be a topic beyond serious
inquiry in the academic and scientific world, and hence the question
of a common origin was considered to be indeterminable. Recent
development in molecular genetics has given rise to a more optimis-
tic approach. Bradshaw (1997) pointed out that a phylogenetic tree
of human populations, constructed from genetic data, “turns out to
resemble very closely a tree based upon linguistic classifications” (p. 77).
Therefore, extant languages may be analyzed with respect to linguistic
commonalities to identify lines of descent from a common ancestor.
The affinities of languages such as Sanskrit, Persian, Greek, Latin and
Gothic have led to the reconstruction of ancestral Indo-European, one
of a number of super-families of languages that existed about 6000
years ago.
Bradshaw (1997) also mentioned Nostratic, from around 15,000 years
ago, which was an antecedent to Indo-European, Altaic (Turkish and
Mongolian), Dravidian (southern Indian), Uralic (Finnish and Samoyed),
Afro-Asiatic (Berber and Arabic) and Kartvelian (South Caucasian). This
is a superfamily of languages which may have been preceded by other
protolanguages, such as proto-Australian, Austro-Asiatic, and Thai; all
of which may have descended from an original language spoken around
35,000  years ago. By this time, archaic Homo sapiens who emerged in
Africa and Europe, and the Neanderthals, who were not yet extinct, may
have both had language. However, the supra-laryngeal vocal tract of the
latter species, especially the high larynx, which reduced phonological
control, shows that the Neanderthals may not have had speech (see the
discussion of the descended larynx in Sect. 1.3.3). On the other hand,
there are other observations which show that a modality-independent
capacity of language may have emerged in this and other species in the
evolution of the genus Homo. Thus, convolutions left impressed in the
inner surface of retrieved crania show that brain structures such as
Broca’s and Wernicke’s areas may have been in place in the species Homo
habilis and Homo erectus. The neural substrates underlying language may
have evolved thousands of years prior to the actual realization of language
as a behavioral capacity. Moreover, language may have evolved stepwise
by giving precedence to some of its substructures, while others emerged
later. Without knowing this sequence of events, the structure and con-
tent of the (putative) first language, spoken around 35,000  years ago,
may be an enigma, though Ruhlen (1995) claimed to have reconstructed
up to 200 of its words. In my view, we cannot tell how modern con-
ceptions of vocabulary words apply to ancient communicative systems.
Irrespective of what we may reconstruct from a putative first language,
some important principles remain; prime among these is the co-evolution
of genotype and language, as discussed by Bradshaw. This principle not
only supports a convergence on a putative first language, but it also
shows that genetic populations and language communities are subject to
similar mechanisms of differentiation and diversification. The associa-
tion between genes and language depends on vertical transmission; that
is, from parent to offspring, which also warrants continuity in language
evolution.
However, horizontal transmission due to interactions with immigrants
and invaders has also taken place, and such transmission has reduced
the gene–language correlation. In consequence, the extent of horizontal
transmission (language replacement) has served to contradict a continu-
ous lineage of languages. The difference between the two forms of lan-
guage transmission should therefore be taken into consideration when we
raise the problem of whether language evolved as a unique human capac-
ity, unlike other systems of communication among animals, or whether it
evolved as the product of a continuous development of cognitive capaci-
ties that overlap language and nonlanguage domains.
In principle, I find two ways of discussing the problem of continuity
in language evolution: continuity in evolutionary time and continuity
across behavioral domains. Because the problem of continuity in time
can also be said to deal with mechanisms of vertical transmission, the first
three sections of this chapter deal with the following issues:

1. Is Homo sapiens sapiens the only species which acquires and makes use
of linguistic symbols? The symbolic species theory (Deacon, 1997) deals
with language as an emergent capacity unparalleled in the animal
kingdom. In discussing aspects of this theory, I will review classical
and some more recent works on symbol learning by human and non-
human subjects. By comparing the communicative skills of bees and
ants with humans’ ability to talk about things that are not physically
present, I will discuss Bickerton’s proposition that displacement is a
road to language. Finally, I will discuss whether we have “living fossils”
which provide “windows” to the protolanguage of man.
2. A continuity position will require an account of vertical transmission
of languages, and in my view, Saffran’s constrained statistical learning
framework is useful in dealing with this problem. I will argue that her
works (Saffran, 2003; Saffran et al., 2008) support what I have termed
“an access code to early dialogues.”
3. Ullman (2004) called attention to “the existence of biological and com-
putational substrates that are shared between language on the one hand
and nonlanguage domains on the other” (p.  232). By focusing on
commonalities between the two domains, we stress continuity rather
than the emergence of a biologically new and uniquely human capacity.
I will therefore discuss Ullman’s procedural-deficit hypothesis (PDH)
and its relevance for a theory of language evolution.
4. The issue of the priority of grammar/syntax raises the question of
whether pre-adaptations of grammar have taken place among subhuman
primates, which will be discussed in a separate section.

In further sections, I will discuss research on the neural substrates
of language, which has dominated much of the research literature in
recent years. First, I provide more detail about an issue presented in the
Introduction: the role of the mirror neurons in the monkey and human
brain. Second, I will present another bipartite distinction, similar to the
one presented by Ullman, of the neural structures which support lan-
guage: ventral and dorsal pathways in language processing. Finally, I will
discuss the problem of whether the motor system has a special role in
language.
My discussion of continuity will also indirectly relate to the question
of whether there are subcomponents of language which have evolution-
ary priority in relation to other subcomponents. Evolutionary differences
between the subcomponents (grammar and semanticity) will most likely
affect the way these components are acquired in infancy and childhood.
Furthermore, insights into the evolutionary priorities also have implications
for theoretical and clinical works on developmental language impairment.

3.1 Communicative and Linguistic Skills Across the Species

I now turn to studies of communicative skills in subhuman primates,
some of which may be language-like skills. Most of these deal with the
learning of human-invented systems, and according to Fitch (2010) they
depend on the cognitive abilities of the animal subjects, rather than the
particular abilities underlying the species-specific communicative sys-
tems. In any case, these studies are important because they may reveal
some “latent capacities” for language.
3.1.1 The Symbolic Threshold

It follows from my introductory discussion in Sect. 1.4.2 that signals in
language are both learned and symbolic. Signals which serve as linguistic
symbols are necessarily learned, but the opposite is not true. The signals
that are acquired in artificial grammar learning (AGL) are not necessar-
ily linguistic symbols. The segregation of signals in a stream of stimuli,
either auditory or visual, should be distinguished from a process whereby
signals become linguistic symbols. One or both of these processes may be
continuous with analogous processes among animals.
Peirce focused on three classes of signs in semiotics: icons which were
defined by similarity, indexes which were defined by contiguity or cor-
relation, and symbols which had all the properties of icons and indexes
but were also like words in language. Nieder (2009) argued that “sign
understanding in any animal – be it in the domain of communication or
number – does not go beyond indexical associations” (p. 100).
So what is it about symbolic associations that makes them inaccessible
to subhuman subjects? Symbols are based on a combinatorial system of
sign–sign relationships. They may point to objects, but they may also be
used in the absence of any determinate referent. They always point to
other words; thus, symbolic reference is crucially based on sign–sign rela-
tions, not individual sign–object relations.
Since Saussure and Peirce, symbols have generally been defined by an
arbitrary and conventional relation between signifier and signified referent.
Deacon, however, argued that arbitrariness is not a necessary aspect of
symbolic reference, because symbols are subject to—and can be manip-
ulated by—compositional rules. Thereby, sequential structure becomes
important; in other words, syntax becomes an indispensable aspect of
symbolic reference. On this account, we may question whether syntax/
grammar has evolutionary priority to symbolism/semantics. Prima facie,
this seems to be a hen and egg problem, but arguments for the primacy
of grammar will be raised in other sections of this chapter.
Deacon’s theory of symbolic reference involved a kind of semiotic
reductionism in the way that complex forms of representation are ana-
lyzable into simpler forms. Each level of representation implicates a lower
level of representation. In this hierarchy, he attributed a special role to
the symbolic level, and argued that there is a logical leap from icons and
indexes on the one side and symbols on the other: “the symbolic thresh-
old.” Is there any evidence that nonhuman primates have crossed this
threshold, and does the acquisition of grammar depend on it?
Considering the great leap from indexical to symbolic representation,
many researchers have addressed the question of whether apes are able
to cross the symbolic threshold. In particular, the now-classic study by
Savage-Rumbaugh and Rumbaugh on chimps’ efforts to learn a rudimentary
form of language (Savage-Rumbaugh, 1986) has been the target
of extensive discussions (e.g., Shanker and King, 2002). Two of these
chimps, Sherman and Austin, showed a special talent for symbolic com-
munication, and the way they progressed towards skillful use of a system
of lexigrams was thoroughly analyzed by Deacon. Initially, the chimps
were trained to associate the lexigrams with a large number of food objects
and activities. Then they were trained to make use of lexigram pairs in a
simple verb–noun relationship; for example, a sequence glossed as “give-
banana” causing a dispenser to deliver the reward. In a simple combinato-
rial system of two “verbs” and four “nouns,” there are 720 possible pair
sequences, most of which are nonsensical or illicit combinations. After
a long training session with selective reinforcements, most of these were
extinguished. As a result, the two chimps were capable of producing the
correct lexigram string on every trial, which may be said to constitute a
grammatical skill (i.e., manipulation of symbols by compositional rules).
Deacon argued that the shift from word-object associations and asso-
ciative predictions to symbolic predictions involves a change in mne-
monic strategy. Lexigrams, which are known in one way, may now be
recoded in another way. They become re-represented in a system of
token-token relationship, and hence they are known “both from bottom
up, indexically, and top down symbolically.” A mental transformation
has taken place. “It is a way of offloading redundant details from working
memory, by recognizing a higher-order regularity in the mess of associa-
tions, a trick that can accomplish the same task without having to hold
all the details in mind” (Deacon, 1997, p. 89). The same strategy also
leads to recoding of symbolic tokens to create new representational pos-
sibilities. A good example is the “syntactic writing” that was found on a
tablet from Ur, 2960 BC: Rather than representing numbers by simple
one-to-one correspondences, the old Sumerians replaced the four tokens for
sheep with two tokens, one for sheep and one for the abstract number of
tallies (Schmandt-Besserat, 1986). However, the case of “syntactic writ-
ing” reveals a conceptual development by the early Sumerians that may
have surpassed the cognitive and communicative abilities underlying the
protolanguages thousands of years earlier.
Deacon’s interpretation of the communicative skills acquired by
Sherman and Austin did not fully agree with Savage-Rumbaugh’s
description of the chimps’ learning process. Rather than showing the
ability to learn word combinations or sentences, she said that the project
was intended to show “what does a word mean to a chimpanzee”
(Savage-Rumbaugh and Lewin, 1994, p. 49). Later, Shanker and King
(2002) commented on this (apparent) disagreement between Deacon
and Savage-Rumbaugh and argued that the two researchers had taken
irreconcilable positions. Deacon, who explained the chimps’ language
acquisition as a “radical transformation in the[ir] mode of representation”
(p. 87), was considered an exponent of an information-processing
paradigm, whereas Savage-Rumbaugh’s position was said to be highly
resonant with a dynamic-systems paradigm. This latter paradigm was
presented as a new one for ape language research by Shanker and King;
that is, a research paradigm they explicated by way of a dance metaphor.
According to this metaphor Sherman and Austin acquired communi-
cative skills due to “interactional synchrony,” “mutual attunement” and
“affective resonance between participants.”
I shall not take issue with Shanker and King’s advocacy of a dance
metaphor for language acquisition, but I will quote two of their peer
commentators, Rendall and Vasey (2002), on the matter. They argued
that the emphasis on “mutual attunement between participants seri-
ously limits the scope of their proposal to situations in which the motives
and interactive goals of communicating parties are largely coincident”
(p. 637). I fully agree with this commentary on Shanker and King’s target
article. Thus, the birth of early languages, as well as the birth of languages
in recent history, may have taken place in social encounters where “affec-
tive resonance” is lacking, and where the interactive parties are involved in
negotiating behavior to avoid serious conflict. Therefore, I think Shanker
and King’s dance metaphor is inadequate for a description of language
acquisition by hominids and early man. What we need is a paradigm that
is applicable across a number of social situations and encounters. In my
view, Deacon’s semiotic and cognitive approach to symbolic reference
and language acquisition provides a fully adequate paradigm. The ques-
tion is whether it also provides a sufficient approach to an understanding
of early human language development and evolution.
The dynamic aspects of ape—as well as human—languages seem to be
generally acknowledged among contemporary researchers. However, this
fact does not preclude a conception of language as a symbolic system that
is transferable between generations. Therefore, any language must have
properties that are independent of the individual taking part in linguis-
tic communications. The semiotic reductionism described by Deacon
represents an interesting attempt to understand these properties. The
lexigram-lexigram rules that Sherman and Austin finally learned may
represent such properties, given that they are easily transferable to new
generations. In a study that followed the one with Sherman and Austin,
Savage-Rumbaugh and Rumbaugh made an attempt to teach Matata,
a pygmy chimpanzee, to communicate via the same lexigram keyboard.
While Matata was struggling to learn lexigram-lexigram rules, she also
fostered a young bonobo, Kanzi, who climbed on Matata during the pro-
cess. Kanzi did not take part in the learning experiment. However, when
the experimenters turned their attention to him, they discovered that he
was fully capable of communicating with the keyboard, and moreover,
that he showed sophisticated understanding of normal spoken English.
The case of Kanzi shows that, despite the learning struggles of adult
chimpanzees, the symbolic system of lexigrams is transferable to new
generations. The transferability of this system may have depended on
the way lexigram tokens were organized to form symbolic systems. For
example, the verb-noun pairs to be learned in these experiments may
have provided some distributional information that is absolutely essential
for its transmittance between generations of language users (i.e., vertical
transmissions). Perhaps this information should be taken as one of the
defining criteria of symbolic systems. Other researchers have reported
some highly promising results showing that free-ranging rhesus monkeys
can extract patterns of calls that were vocalized in entirely new sequences.
Hauser and Glynn (2009) created artificial strings of rhesus calls with two
identical and one odd call, the AAB pattern or, contrarily, the BAA pattern.
Following habituation to the former pattern, rhesus monkeys showed
significantly more orienting responses to the BAA strings. Similarly, more
responses were given to the AAB pattern after habituation to the BAA
pattern. The results indicate a capacity to extract distributional infor-
mation in entirely new sequences of vocalized calls, and this capacity
also provides a basis for development or change of communicative prac-
tice among the animals. More studies of sequential pattern learning are
reported within the research traditions of statistical and artificial learn-
ing. In Sect.  3.2 below, I will show that such patterns can be learned by
monkeys only when they do not exceed a critical level of complexity.
In the first decades of this century we saw a growing conviction that
subhuman subjects were capable of symbolic communication. Hence, it was
assumed that the origins of language may be found in animal communication;
thus, continuity was stressed rather than a late emergence of language
in Homo sapiens. Ribeiro, Loula, de Araújo, Gudwin, and Queiroz
(2007) argued that alarm calls by African vervet monkeys satisfy the
Peircean definition of linguistic symbols. The acquisition of vocal symbols
in vervet monkeys was simulated in a computer program showing
that symbol learning was heavily dependent on tutor reliability, whereas
auditory noise had little effect on the rates of learning. The study was
based on a minimal brain model which “was designed to satisfy very basic
neurobiological constraints, common in principle to any animal with a
nervous system” (p. 265). However, the four representational domains
(one for each of the visual and auditory modalities, one for the secondary
sensory association and one domain for the generation of behavioral out-
put) were also included in the model. These were selected to comply with
the habitat of vervet monkeys and therefore they did not apply to “any
animal with a nervous system.” Against this background, the title of their
work (“Symbols are not uniquely human”) seems to be an overstatement.
Rather, the subject matter of this work seems to have been limited to
some communicative aspects of alarm calls by vervet monkeys. Its rele-
vance to species-specific behavior patterns is clear; its relevance to general
symbolic behavior by animals is less so.
Ribeiro et al. (2007) relied heavily on an analysis of alarm calls in rela-
tion to Peircean semiotics. A main concern was therefore a distinction
between alarm calls as indexes and alarm calls as symbols. As in previous
playback experiments, the model permitted presentation of alarm calls in
the absence of a corresponding predator view. Because these calls none-
theless mediated “the representation of a class of predators,” they could
not be interpreted as indexes in the Peircean classification of signs. I am
not convinced that this is a critical distinction that follows from Peircean
semiotics, and if it does, it may be necessary to specify the conditions
under which the alarm calls continually produce the specific effect. Also,
a conditioned stimulus in a Skinnerian type of conditioning takes on
referential power, and given sufficient resistance to extinction, it will con-
tinue to do so over several trials. However, it does not qualify as a linguistic
symbol in Peircean semiotics. According to Deacon (1997), similarity does
not produce iconicity, and neither “physical connection nor involvement in some
conventional activity dictates that something is indexical or symbolic.”
Granted that “symbols are not uniquely human,” hominids, and maybe
even lower species, may have been capable of communicating sym-
bolically. Language capacity may then be traced back to times before the
appearance of Homo sapiens, and consequently there could not have been
a symbolic threshold to cross for early man. In my view, the arguments
from ape language research are not very strong. Moreover, arguments
from this research are entirely based on Peircean semiotics and other fields
of modern linguistics, whose relevance for a theory of language evolu-
tion may be questioned. The conceptual framework chosen by Ribeiro
and others necessarily favors the notion that symbolism preceded syn-
tax in evolution (Bickerton, 2003), a position that is less in agreement
with the work of Hauser and Glynn (2009) discussed above and previous
works on human infants and cotton-top tamarins (Saffran et al., 2008,
see Sect.  3.2 in this chapter). These works give support to the assump-
tion that the capacity to extract patterns of sequential stimuli is part of
primate competence, even though these patterns are not included in the
natural communicative repertoire of the monkeys.
Contrary to Bickerton’s position, it is therefore possible to argue
that grammar precedes symbolism in evolution. In Sect.  3.3, I will give
further arguments for the priority of grammar. I shall call attention to
another problem that complicates a grammar priority position based on
rule/grammar learning by the hominids. In studies of grammar learning,
sequential patterns are generally constructed from novel or learned
categories. The question is whether the rule/grammar learning by the
hominids is restricted to already acquired categories, or whether such
learning may also take place for novel categories. Finally, the validity of
the grammar–category distinction may be questioned, because the learn-
ing of both may depend on extraction of statistical dependencies in the
linguistic input, the subject matter of Sect. 3.2.

3.1.2 Is Displacement a Road to Language?

The cognitive skills demonstrated by Sherman and Austin, and later by
Kanzi, are impressive, and may be said to form a cognitive prerequisite
for the acquisition of language. As mentioned above, however, the tasks
solved by these animals were human-invented systems and did not dem-
onstrate abilities underlying species-specific communicative systems.
Therefore, despite their impressive achievements, there is an evo-
lutionary gap between the communicative skills of these animals and
human language. They did not have syntax; thus Bickerton (1990,
2014) has maintained that compositional rules are not enough; syntax
does not reduce to word order. Moreover, the acquisition of concepts,
even of the abstract superordinate classes of “food” and “tool,” was context-
dependent. In general, he argued that concepts learned by animals can-
not be arbitrarily retrieved and are not continuously accessible; therefore,
Sherman, Austin and Kanzi did not cross the symbolic threshold.
In what way may the transition from the communicative skills of
animals to human language have taken place? A possible answer has
been proposed by Bickerton (2014) who argued that “displacement is
a road to language.” Displacement is one of the design features (lan-
guage universals) mentioned by Hockett (see Introduction, Sect. 1.1); it
refers to the ability to talk about objects which are not physically pres-
ent. Apart from humans, the only species which possess a mechanism
for displacement are bees and ants. This mechanism, however, differs
noticeably from the one used by humans. According to Bickerton, it
operates by instinct, has a different organization (eusocial vs social), and
depends on a minute brain. The phyletic distance between humans and
hymenoptera may have discouraged linguists from studying the dis-
placement mechanism by bees and ants, yet there are similarities in the
ecologies of these species and early man. Consider first the forage prob-
lems faced by hymenoptera:

Both bees and ants are extractive foragers (omnivorous ones, in the case of
ants). Both exploit food sources that are often large and relatively short-
lived (patches of flowering plants in the case of bees, dead organisms in the
case of ants) and that could not be fully exploited by lone individuals.
These factors make it necessary to recruit nest mates by imparting informa-
tion about the whereabouts and in some cases the nature and quality of the
food sources. The fact that the latter are normally at a distance from where
the information is transmitted forces displaced communication (Bickerton,
2014, p. 83).

Early humans lived in the arid grassland of East Africa, where the quest
for meat was strong among all primate species. The hunting strategies of chim-
panzees could not easily be adopted by early man, who instead became
involved in scavenging behavior. They had to take carcasses of animals
that had died a natural death or had been killed by other animals; in both
cases they met with fierce competition from other predators. “Only if
they were able to recruit numbers large enough to drive away competitors
could they hope to gain first access to most carcasses” (p. 85).
Hymenoptera had found ways of informing their conspecifics about
distant sources of food, and for humans “the first small handful of signals
would have brought tangible and immediate benefits” (Bickerton,
2014, p. 89). Therefore, despite vast phyletic differences, similarities
in their ecologies have led to convergent evolution of displacement in
hymenoptera and man. In the former species the critical signals were
produced by instinct, while they were products of learning in humans.
Therefore displacement signals showed great variance in humans, and in
the time from Homo erectus to Homo sapiens their informational specificity
increased. The different ways of expressing displacement in
humans show that this feature cannot be separated from arbitrariness,
and as argued by Bickerton, both depend on semanticity; all are men-
tioned as separate design features in Hockett’s list.
Implications for Language Acquisition

Displacement is generally
learned by children in their first two years of life; it is a process which
occurs easily, without any form of instruction. Most two-year-old chil-
dren are able to talk about things which are not physically present here
and now. How did this capacity develop in linguistically competent
children? The way Bickerton explains the evolution of displacement by early
humans does not apply to children without considerable modification.
The problems faced in the ecologies of early humans, which evoked the
evolution of displacement in the first place, are no longer present in the
ecologies of modern infants. However, it may be discussed whether social,
and to some extent geographical, mobility is equivalent to the foraging
patterns of early man. Given proper interactions with their caregiver,
these may be factors which contribute to the learning of displacement by
children. Such learning is prevented by social and physical immobility.
In secluded families, isolated pairs of siblings, or in the tragic cases of
abandoned children, the critical conditions for the learning of displace-
ment are absent. The learning of home signs in secluded families with
deaf children are likely examples of arrested learning of displacement (see
more about the development of NSL in Chap. 5, Sect. 5.6 and in Chap.
7, Sect. 7.7.).

The idiosyncrasy of home signs makes semanticity, and therefore
displacement, an elusive aspect of language for some siblings. Social condi-
tions which enforce interactions with others change linguistic expressions
and make them comprehensible by others in a community setting. (This
is what happened to the children in a school for the deaf in Nicaragua.)
In principle, the critical communicative setting which favors the develop-
ment of displacement, and hence semanticity, can be stripped down to
three individuals: A and B, the interlocutors of a dialogue, and C, who
represents the linguistic community. Once communication is extended
to C, semanticity of the transmitted signals is a fact. In practice, however,
this means that communication can be extended to all members of the
language community.
Without adequate learning of displacement/semanticity, the child will
be language-impaired, and the prospects of intervention for treatment
purposes will depend on the child’s age and his/her genetic equipment
for language. In any case, the conditions which either favor or arrest
the learning of displacement form the epigenetics underlying language
acquisition.

3.1.3 Protolanguage

The process of displacement led to a protolanguage supposedly spoken
by the LCA of subhuman primates and Homo sapiens. This language is
said to consist of a vocabulary of meaningful words with no syntax. It
presupposes an ability of vocal imitation and a drive for referential com-
munication, and is therefore also called lexical protolanguage. The alterna-
tive interpretation assumes a gestural protolanguage of signed words. Here
I will only address Bickerton’s model of a lexical protolanguage.
We have no fossil records which can give information about a protolanguage
that may possibly have been spoken a hundred thousand years ago.
However, we have contemporary “time windows” or “living fossils” such
as pidgin languages, the pidgin/creole transition, and the language behav-
ior of abandoned children (the case of Genie), which may be relevant for
speculations/assumptions about protolanguage. Bickerton (1990) also
mentioned child language and utterances of apes in artificial settings.
Similar sources of information, although most of them may now belong
to recent history, are the home signs (mentioned above) which appeared
in the development of the NSL (Senghas, Kita, & Özyürek, 2004) and in
the emergence of a new Bedouin sign language (Senghas, 2005).
Do the living fossils generate more than assumptions, say evidence,
about the nature of a protolanguage? Can we argue that a protolanguage
has existed without syntax? General knowledge about the pidgin/cre-
ole transition is highly relevant to this question. As mentioned in the
Introduction, second generation users of a pidgin language developed a
creole language with syntax, albeit simpler and less sophisticated than the
grammar/syntax of modern languages. Creole languages have been created
independent of each other in different parts of the world, yet all of these
languages have syntactic similarities, among which is the Subject-Verb-
Object word order. There is no widely accepted theory which accounts for
the observed similarities. Yet these similarities show that arguments from
creole languages do not support Bickerton’s claim that a protolanguage
existed without syntax. The question still remains whether other living
fossils support his assumption of a protolanguage without syntax.
What about other language components? Can we argue from the living
fossils that a protolanguage consisted of semantically meaningful words?
Let me take the question of semanticity first, because I assume that this
component is closely linked to the ability of referential communication.
For Bickerton, the protolanguage presupposes displacement and hence
the semanticity of its constituent words. However, he says that “it is
quite unrealistic to suppose that, one to two million years ago ‘words’
could have been anything like the words of modern languages as used by
adults” (2014, p. 104). Even if these words would be incomprehensible
to any present-day user of language, even with the help of complex
decoding machines, they may still have been meaningful words
to a number of individuals among our LCAs. In my view, it makes no
sense to attribute semanticity to signs/words that are comprehended only
by a few individuals; for example, the home signs used by a pair of sib-
lings. These signs may be characterized by idiosyncrasy, not semanticity,
which is a language feature shared by a sufficiently large community
of users. Therefore, the question of semanticity in protolanguage depends
on the size and organization of the group/tribe.
My arguments indicate that home signs represent, not only pre-
syntactic, but also pre-semantic words. On the other hand, pidgin words,
considering they have been used communicatively by a larger group
of first-generation individuals, may be seen as semantically meaning-
ful words. Thus, idiosyncrasy is inversely related to semanticity. Notice,
however, that the use of words which are pre-semantic due to a high level
of idiosyncrasy may still depend on an ability of vocal imitation.
Use of child language as a contemporary window into protolanguage
may be even more problematic. Lyon, Nehaniv, and Saunders (2012)
studied the process of “transition from babbling to word forms” by a
humanoid robot which interacted with a human participant. Details
of this study will be presented in Chap. 4, Sect. 4.4; it is mentioned
here because of the continuum of utterances between babbling and “word
forms” studied in Lyon et al.’s work. Where in this continuum
do we find models of protolanguage? In the Introduction, Sect. 1.4.2, I
described language-like stimuli and responses which are comprehended
prior to the labeling of these stimuli to particular objects. Also signals
used in interactions between infant and caregiver may have “word forms”
which are pre-semantic; that is, they are not necessarily used as labels for
particular objects or events. Rather, these signals may represent emotional
states, belongingness, and so on. However, utterances may form identifi-
able chunks, based on transition probabilities between sounds/gestures.
In short, it is not possible to raise any strong arguments, based on
living fossils, that a protolanguage lacked syntax. Rather, observations
of creole languages support the opposite conclusion. Semanticity, too,
may have been a feature of protolanguage, although this matter is highly
dependent on the size and structure of the language communities among
members of the LCA.

3.2 Constrained Statistical Learning: A Mechanism of Vertical Transmission of Language?

Saffran’s constrained statistical learning framework mediates a new
approach to language acquisition, while her works on statistical learning
also have relevance for a theory of language evolution. Let me first
describe a general statistical learning paradigm where listeners are exposed
to a continuous sequence of sounds (tones, phones, syllables) defined
according to transitional probabilities (TPs); that is, the conditional
probability of Y given X. These probabilities form cues for the detection
of segment or word boundaries; thus, it has been shown that infants
track tone-word boundaries via such cues. Learners may also compute
other statistics such as the frequency of individual elements, the frequency of
co-occurrence, and so on, all of which may be summarized as statistical
learning.
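
As a toy illustration (not Saffran’s own procedure), the computation of TPs over a syllable stream can be sketched in a few lines of Python; the three two-syllable “words” below are invented for the example. TPs come out high inside words and dip at word boundaries, and such dips are the segmentation cues referred to above.

from collections import Counter

# Toy computation of transitional probabilities over a syllable stream:
# TP(Y | X) = freq(XY) / freq(X). The stream concatenates three invented
# two-syllable "words" (bi-da, ku-pa, do-go) in varying order.
stream = ("bi da ku pa do go bi da do go ku pa "
          "bi da ku pa do go bi da").split()

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])   # how often each syllable starts a pair

def tp(x, y):
    """Conditional probability of syllable y given preceding syllable x."""
    return pair_counts[(x, y)] / first_counts[x]

for x, y in sorted(pair_counts):
    # Within-word transitions come out at TP = 1.0; transitions that cross
    # a word boundary come out lower, marking likely boundaries.
    print(f"{x} -> {y}: TP = {tp(x, y):.2f}")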
Natural languages can be characterized as predictive (P) languages
where segments, words and phrases are defined by TPs. This is why word
segmentation can take place as a result of statistical learning, a low-level
aspect of language acquisition which accounts for signals which are not
yet associated with lexical meaning. However, Graf Estes, Evans, Alibali,
and Saffran (2007) also showed that infants can map meaning to newly
segmented words. Infants were able to learn the object labels when the
labels were newly segmented words from a stream of continuous speech
with only TP cues to word boundaries. They did not learn sequences
with labels from novel syllable sequences or sequences with low internal
probabilities. This shows that a computation of TPs, and hence statistical
learning, is also involved in high-level acquisition of language (Romberg
and Saffran, 2010).
The learning constraints studied by Saffran and her co-workers imply
that certain statistical properties of language are easily detected and
learned by human infants, and moreover, these constraints may have
shaped the languages (giving rise to linguistic universals). Saffran argued
that natural languages are characterized as predictive (P) languages, in
which predictive dependencies mark phrase units. In contrast, nonpre-
dictive (NP) languages lack these dependencies; they are uncharacter-
istic of natural languages, but nevertheless form rule-based grammars.
Artificial grammars of P and NP languages may be defined on a vocabu-
lary of nonwords, and the use of the two statistical properties may be
compared in an implicit learning task.
The P languages introduced in one of her works (Saffran et al., 2008)
contain predictive dependencies between form classes according to the
following formula:

S  AP  BP   CP 
AP  A   D 
BP  CP  F
CP  C   G 

S refers to a complete sentence, AP to A-phrase, BP to B-phrase, and so
on, and letters in parentheses refer to optional elements. The predictive
patterns are unidirectional; for example, a D element must be preceded
by an A element, whereas an A element does not predict the presence of
a D element. The same relations hold for C and G elements. The formula
represents not only the within-phrase structure, but also the hierarchi-
cal structure of phrases within a sentence. Sentence exemplars were con-
structed from classes of nonwords in such a way that the within-phrase
conditional probabilities always equaled 1.0.
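
To make the formula concrete, the following sketch generates sentences from such a P grammar; the nonword vocabularies and the 0.5 inclusion probability for optional elements are my own illustrative choices, not Saffran et al.’s materials. Note how an optional D or G element can only follow its obligatory A or C head, which is precisely the unidirectional predictiveness discussed next.

import random

# Sketch of the predictive (P) grammar above, with invented nonwords:
# S -> AP + BP + (CP); AP -> A + (D); BP -> CP + F; CP -> C + (G).
VOCAB = {
    "A": ["biff", "hep"], "C": ["cav", "lum"], "D": ["klor", "neb"],
    "F": ["jux", "pell"], "G": ["sig", "rud"],
}

def phrase(head, optional, p=0.5):
    """An obligatory head, optionally followed by its dependent class."""
    out = [(head, random.choice(VOCAB[head]))]
    if random.random() < p:
        out.append((optional, random.choice(VOCAB[optional])))
    return out

def sentence():
    s = phrase("A", "D")                                        # AP
    s += phrase("C", "G") + [("F", random.choice(VOCAB["F"]))]  # BP = CP + F
    if random.random() < 0.5:                                   # optional CP
        s += phrase("C", "G")
    return s

s = sentence()
print(" ".join(word for _, word in s))
# The unidirectional dependency: every D is preceded by an A (and every G
# by a C), while an A does not guarantee that a D follows.
classes = [c for c, _ in s]
assert all(classes[i - 1] == "A" for i, c in enumerate(classes) if c == "D")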
The NP languages, lacking predictive dependencies, could be described
according to the following formula:

S  AP  BP
AP   A    D   must contain at least one 
BP  CP  F
CP  C   G   must contain at least one 

Although this language lacks predictive dependencies, it does have a sort
of phrase structure; for example, CP is defined as the union of C and
G. When C is lacking, G must be present and vice versa. This structure is
uncharacteristic of natural languages.
The unidirectional dependency relations of the P language facilitate
discovery of the underlying structure; for example, that determiners pre-
cede nouns and not vice versa. In an experiment using the Headturn
Preference Procedure, Saffran et  al. (2008) observed the looking times
towards concealed audio speakers for grammatical and ungrammatical
sentences after being familiarized with P and NP languages. The authors
showed that 12-month-old infants are capable of learning complex gram-
matical patterns (P languages) of nonword items, while failing to learn
nonpredictive structures of the same items. The question is whether the
learning preferences for predictive patterns observed in infants could also
be demonstrated by monkeys, or whether this is a species-specific ability
that allows symbolic systems to proliferate among humans. Adult cotton-
top tamarin monkeys could learn very simple predictive patterns (with a
vocabulary of five nonwords). In an experiment with the same languages
but with multiple tokens from each word class, the tamarins maintained
the same level of responses but failed to discriminate between grammati-
cal and ungrammatical strings in both the P language and NP language
condition.
It was long commonly assumed that only humans can spontaneously
acquire both finite state (AB)^n and phrase structure (A^nB^n) grammars.
The observations made by Saffran et al. (2008) show that this position is
subject to some modification, and the continuity problem raised by this
research has also been addressed within artificial grammar (AG) research; that is,
the strings of sentences presented in Saffran’s experiments can be said
to represent different AG structures. Wilson et al. (2013) developed a
quantitative parameter space in order to compare these structures: One
source of variability is the size of the vocabulary, or the number of tokens
in a class of elements; another is the degree of predictability in a string
of elements; that is, the degree to which an element can be predicted
by previous elements. Wilson et al. (2013) then proposed an index of
linearity (L):

L = (number of stimulus classes or structural elements − 1) / (number of legal transitions)

A linearity index of 1 describes an entirely predictable AG structure. The
various AG structures used in experiments with nonhuman subjects may
now be plotted as a function of two dimensions, number of unique stim-
ulus elements and linearity of the structural elements. Previous research
with nonhuman subjects could now be described with respect to this
parameter space, and Wilson et al. (2013) also reported an experiment
on auditory AG with macaque and marmoset monkeys. These species
can be described with respect to their phylogenetic relationships to man:
Marmosets, which belong to the New World monkeys, shared a common
ancestor with humans 40 million years ago, whereas the rhesus macaques
shared a common ancestor with man 25 million years ago. Video record-
ings after habituation to AG showed the extent to which the monkeys
discriminated between test sequences which conformed to the AG struc-
ture and those that violated this structure. The results showed that the
macaques (Old World monkeys) were capable of more complex AG
learning compared to the marmosets (New World Monkeys).
Both Saffran et al. (2008) and Wilson et al. (2013) have demonstrated
statistical /AG learning by nonhuman subjects with less complex pat-
terns. The question is whether the difference in complexity is a qualitative
shift from monkeys to humans or whether there is a phylogenetic contin-
uum of learning capacities. Perhaps we could describe learning capacities
by subhuman primates as a pre-adaptation to language.
The problem of continuity in statistical/AG learning also relates to
the problem of domain-specificity of this type of learning. In a previous
work, Saffran (2002) demonstrated that learners of a P language outper-
form learners of a NP language both with sequentially presented auditory
stimuli and with simultaneously presented visual stimuli. However, no
preference for predictive patterns was observed for sequentially presented
visual arrays of nonlinguistic shapes. The preferred learning of a P lan-
guage was constrained by the most appropriate manner of presentation
in each modality. Apparently, this finding indicates that the predictive
dependencies cannot serve as a code to language structure independent
of modality.
According to Saffran (2002) the predictive dependencies are easily
learned “when the dependencies lie between elements presented in a
manner appropriate to the perceptual learning capacities in each modal-
ity” (p. 191). In the auditory modality, people tend to link together ele-
ments across time, and in the visual modality people tend to link together
spatially distinguishable and simultaneously available elements. Further,
Saffran argued that these differences may be caused by learning mecha-
nisms in the two modalities that are differently specialized, independent
of experience. Alternatively, these differences may be caused by process-
ing capacities in the two modalities that are “shaped via experience to
specialize in different types of learning” (p. 191). To tease apart the two
causes of modality differences in statistical language learning, Saffran
suggested running the visual experiment with participants who know
sign language well.
In my view, Saffran’s works on statistical language learning are
clearly relevant to discussions on the priority of lexicality/seman-
tics versus grammar in early human languages. As pointed out, the
knowledge of words or other lexical items are generally presupposed in
definitions of grammar. The vocabulary lists in Saffran’s experiments
were composed of just nonwords that may be associated with words,
and thus become parts of a semantic network. However, the statistical
learning constraints do not depend on a pre-established vocabulary.
Rather, the principles of statistical language learning also apply to
the acquisition of words or other lexical items; thus, segmentation of
words or word-like segments in a stream of speech sounds depends on
the statistical properties of phone sequences (Saffran, 2003, see also
Chap. 8, Sect. 8.3.3 AGL and language impairment). In this way, the
learning of grammatical structure and the learning of a vocabulary
depend on the same mechanisms and constraints. In early language
acquisition, words can be segregated independent of lexical meaning,
indicating that grammar has developmental and evolutionary priority
to semantics.
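The statistical cue at issue here can be simulated in a few lines. The sketch below uses made-up trisyllabic "words" (an assumption for illustration, not Saffran's actual stimuli) and estimates transitional probabilities P(next syllable | current syllable) over an unbroken syllable stream; within-word transitions approach 1.0, while transitions across word boundaries are markedly lower, and it is this dip a statistical learner can exploit to segment words.

    import random
    from collections import Counter

    # Made-up trisyllabic words, in the spirit of Saffran's segmentation studies.
    words = ["bidaku", "padoti", "golabu"]

    stream = []
    for _ in range(500):
        w = random.choice(words)
        stream += [w[i:i + 2] for i in range(0, 6, 2)]   # syllables: bi, da, ku, ...

    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])

    def transitional_probability(x, y):
        # P(y | x) = frequency of the pair xy / frequency of x
        return pair_counts[(x, y)] / first_counts[x]

    print(transitional_probability("bi", "da"))   # within-word: 1.0
    print(transitional_probability("ku", "pa"))   # across a boundary: about 1/3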
In my opinion, statistical/AG learning represents an important factor
in vertical transmission of languages. A neural mechanism which links
perception and action is another factor which I will discuss in Sect. 3.5.
Taken together, these factors contribute to, and in most cases warrant, an
interactive alignment characteristic of early dialogues.
I would like to add, however, that Saffran’s works do not explain the acquisi-
tion of symbolic reference. She did not discuss the problem of meaning
in language, and yet her works have great implications for the evolu-
tion of language. Symbolic reference, and hence meaning, grows out of
the praxis of a linguistic community. At the same time, the linguistic
community is nurtured by the constraints that bias infants to “preferen-
tially perform certain kinds of computations over certain kinds of input”
(Saffran, 2002, p. 173). Hence, there is a mutual dependence between the
language-learning mechanisms and constraints on the one side and the
praxis of linguistic communities on the other. Furthermore, I consider
the dialogue as the main behavioral expression of a linguistic community,
and the predictive dependencies that are part of all natural languages
may be considered an access-code to dialogues in a pre-semantic stage
of language development. Once the predictive relationships in linguistic
inputs are learned, the child may not only take part in the dialogue, but
also initiate a dialogue with others (see also Chap. 4, in particular Sects.
4.6 and 4.7). As mentioned above, statistical learning also contributes
to the labeling of newly segmented words; that is, it is also involved in
semantic development.
Children with SLI have developmental language disorders that cannot
be attributed to any social, psychological or neurological cause (see the
discrepancy criteria discussed in Chap. 2, Sect. 2.1). More specifically,
SLI has long been characterized as a grammar learning disorder (Rice and
Oetting, 1993; van der Lely and Stollwerck, 1996). On this account, it
seems likely that many SLI children are unable to make use of statisti-
cal dependencies in the linguistic input to acquire phrase structure. This
theory is consistent with clinical observations which show that SLI chil-
dren rarely initiate, and take little part in, dialogues.
Saffran argued that the learning mechanisms involved in the acquisi-
tion of grammar are also involved in nonlanguage learning domains.
This is a position which agrees with the one taken by Ullman and
Pierpont (2005), who argued that impaired procedural memory due
to a dysfunction of the underlying neural substrates affects both the
acquisition of grammar and nonlanguage skills (see the next section).
Impaired procedural learning by SLI children was recently observed
by Kemény and Lukács (2010). They compared the performances of
16 SLI children (mean age 11;3) with 16 adults and typically develop-
ing (TD) children on the Weather Prediction Task (WPT). This task,
which has been used for examining the dissociation of procedural and
declarative memory, is dissimilar from AG tasks because it does not
involve sequential information. (A specific description of this task will
be given in Chap. 8.) The SLI children performed significantly worse
compared to the adults and the TD children on the WP task. The defi-
cient learning by the SLI children appeared already at the early stages
of the task. These observations, showing impaired learning in nonlan-
guage domains, can be used diagnostically in work with language-
impaired children.
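A minimal simulation may clarify what the WPT demands of a learner. In the task, cards probabilistically predict a weather outcome, and performance improves only gradually with feedback; the cue probabilities and learning rate below are illustrative assumptions, not the published parameters.

    import random

    # Illustrative cue strengths: each card's probability of predicting "rain".
    card_p_rain = [0.8, 0.6, 0.4, 0.2]

    def trial():
        shown = [i for i in range(4) if random.random() < 0.5]
        shown = shown or [random.randrange(4)]          # show at least one card
        p = sum(card_p_rain[i] for i in shown) / len(shown)
        return shown, ("rain" if random.random() < p else "sun")

    # Incremental, feedback-driven updating of each card's predictive value --
    # the slow, procedural kind of learning the WPT is designed to tap.
    estimates = [0.5] * 4
    for _ in range(2000):
        shown, outcome = trial()
        target = 1.0 if outcome == "rain" else 0.0
        for i in shown:
            estimates[i] += 0.02 * (target - estimates[i])

    print([round(e, 2) for e in estimates])   # ordering recovers the cards' values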
Both Ullman and Saffran compared processing in language and non-
language domains. They found important similarities that indicate a
continuity between language and other cognitive domains. However,
the restricted learning capacity of cotton-top tamarin monkeys com-
pared to data from human infants in one of Saffran’s studies, may also be
taken as an indication of discontinuity in language evolution. However,
Saffran’s constrained statistical learning framework has strengthened a
continuity position on language evolution. Observations on impaired
procedural learning by SLI children also support an assumption of
continuity.
3.3 Ullman’s Declarative/Procedural Model

To challenge the position taken by Ribeiro et al. (2007) and Bickerton
(2003) on the evolutionary priority of symbolism, we need to make use
of a conceptual framework which differs from semiotics and modern lin-
guistics. The declarative/procedural (DP) model of Ullman (2004) repre-
sents such a framework, which is the product of research in neurobiology
and cognitive neuropsychology. This model deals with memory systems
and their relationships to brain structures of different evolutionary ori-
gins, and can be traced back to early studies of amnesia (Cohen & Squire,
1980; Masson & Graf, 1993). In this research tradition, it was com-
monly assumed that there are two or more long-term memory systems.
Thus, Cohen and Squire were among the first psychologists who pro-
posed a distinction between declarative and procedural knowledge; that
is, a distinction which was clearly related to that made by Ryle (1949)
between “knowing that” and “knowing how.” Declarative knowledge
corresponds to “knowing that” and includes both semantic and episodic
memory, whereas procedural knowledge relates to “knowing how” and
refers to the ability to perform skilled actions. Squire (1993) proposed an
alternative taxonomy of long-term memory, where the main distinction
is between declarative and nondeclarative memory, and where procedural
memory is considered as only one form of nondeclarative memory (see
below Sect. 3.3.2).
Memory belongs to a nonlanguage domain, so what is its relationship
to language? Ullman (2004) pointed out that language may share impor-
tant biological and computational substrates with the domains of mem-
ory, in particular the domains of declarative and procedural memory.
(It should be added that language also share important substrates with
working memory.) By focusing on the memory systems and their under-
lying neural substrates which serve language, rather than language in a
linguistic and semiotic frame of reference, we are in a better position
to study the evolution of language. This is because the underlying sub-
strates of declarative and procedural memories differ phylogenetically,
and therefore the relative weight of the corresponding linguistic expres-
sions may have changed in hominid evolution. Ullman also argues that
the research tools we have at our disposal to understand language are quite
impoverished, compared to those available to the investigation of other
neurocognitive domains, “a research program limited to language neces-
sarily restricts language theories and their predictions” (p. 232). At the
same time, he stressed that a research program directed solely to lan-
guage should not be replaced with one directed to nonlanguage cognitive
domains, only that “the latter type of research program must crucially
complement the former” (p. 233).
In general, biological structures are assumed to evolve from already
existing structures. Therefore, a research program which links language
to memory systems whose neural substrates are relatively well known also
tends to stress continuity in language evolution rather than discontinuity
or emergence of a new communicative capacity by humans. Moreover,
Ullman proposed that declarative memory and its neural substrates were
linked to the mental lexicon, whereas procedural memory was linked to
aspects of grammar. Granted that the substrates underlying declarative
memory are evolutionarily more recent than the structures underlying
procedural memory, this assumption implies that grammar or syntax
may have an evolutionary priority in relation to the mental lexicon.

3.3.1 The Declarative Memory System

Declarative memory involves the learning, storage and retrieval of memo-
ries that are consciously accessible. It includes semantic knowledge and
knowledge of “facts” (in contrast to skills, which are stored as procedural
memory), but also memory of episodes. The system is fast and specialized
for one-trial learning, but it is also fallible and sensitive to interference.
Much remains to be clarified about the underlying neural structures of declar-
ative memory. However, it is generally assumed that declarative memory
depends, for the most part, on the medial temporal lobe structures such
as the hippocampus, the entorhinal cortex, and the perirhinal cortex. The
classical case of H.M. reported by Scoville and Milner (1957) brought
the hippocampus into focus of neurocognitive research on long-term
memory. (H.M. had the hippocampus removed in an attempt to treat his
epilepsy and was left with an extremely dense amnesic syndrome while
his procedural skills were spared.) Later, Squire and Alvarez (1995) have
argued that the hippocampus plays a prime role in the consolidation of
new memories which are temporarily stored in the hippocampus until
they are transferred to a more stable storage system in the neocortex. The
entorhinal cortex forms an interface between the hippocampus and the
neocortex and may be considered as a “hub” in the widespread network
of memories and information transfer in the human brain. The perirhinal
cortex receives highly processed sensory information, and whereas it plays
a major role in memory, this structure also sends output to the basal gan-
glia, the thalamus, the basal forebrain, and the amygdala.
The functions of the medial temporal lobes are many-faceted because
they are involved in encoding as well as consolidation and retrieval of
new memories; yet it is now commonly believed that memories gradually
become largely independent of the medial temporal lobe structures and
more dependent on neocortical regions. The medial temporal lobe struc-
tures are said to have a “binding” function for all long-term memory.
The declarative memory system is not only involved in the learning,
consolidation and retrieval of new memories, but also in the mainte-
nance and retrieval of all memories that are accessible or potentially
accessible to other cognitive systems. Therefore, other brain systems
play a role in declarative memory; for example, the ventro-lateral pre-
frontal cortex (VL-PFC), which includes the inferior frontal gyrus with
Brodmann’s areas 44 and 45, which control language performance (both
speech and sign production), and area 47. Finally, it should be men-
tioned that the cerebellum also plays an important part in processing
declarative memories.

3.3.2 The Procedural Memory System

The nondeclarative systems include not only procedural memory, but
also conditioning and nonassociative memories (habituation, sensitiza-
tion; see Fig. 3.1). These are also considered “implicit memories” because
they are generally unavailable to conscious control and reflection. The
procedural memory system refers to the learning and control of both new
and established skills and habits. In the DP model, procedural memory
uses “the entire system involved in the learning, representation and use of
relevant knowledge, not just to those parts of the system underlying the
learning of new memories” (Ullman, 2004, p. 237).

Fig. 3.1 Organization of long-term memory.
In contrast to the declarative memory system, the procedural system
has the following characteristics:

• Slow, incremental learning
• Informational encapsulation, inaccessible to conscious control
• Context-dependent learning of stimulus-response rule-like relationships
• Acquired rules apply quickly and are triggered by specific stimuli
• Apply to real-time sequences: sensory, motor or cognitive

Linear and probabilistic sequences of behavior can be learned by mon-
keys, apes and humans, but it is not yet clear to what extent hierarchical
structures can be acquired by other species than humans. The products of
procedural learning (i.e., procedural skills) will be more comprehensively
described in Chap. 4. At present, I will deal with important brain sub-
strates which serve the procedural memory system. Prime among these
are the basal ganglia, including the neostriatum with the putamen and
the caudate nucleus. While ventral parts of these structures are impli-
cated in emotional memory, dorsal parts are involved in sequence learn-
ing and the learning of sensory-motor relationships. Podzebenko, Egan,
and Watson (2002) also showed that the dorsal striatum is involved in
mental rotation, and Meck and Benson (2002) showed its part in timing
and rhythm; that is, apparently disparate functions which are nonetheless
assumed to be intimately related.
The dependence on the neostriatum and the basal ganglia is the rea-
son why the procedural system is considered to be phylogenetically older
than the declarative system. At the same time, the basal ganglia are widely
interconnected with multiple cortical areas, while the basal ganglia them-
selves are highly interconnected. They receive input projections from
frontal cortex as well as the medial temporal lobe. Output connections
via thalamus form segregated circuits/closed loops which are implicated
in the learning and control of motor programs; for example, the sequenc-
ing of motor gestures or speech sounds in language.
Among the cortical regions that are critical for the procedural memory
are the supplementary motor area (SMA) and area F5. In
the macaque monkey, F5 is the well-established ventral premotor region
that includes mirror neurons and that is assumed to be the homologue
of BA 44 in Broca’s area in humans. The linguistic function of this area
in man is well known, but Broca’s area is also clearly implicated in the
learning of abstract and potentially hierarchical structures in nonhuman
primates (Conway & Christiansen, 2001). As part of the procedural sys-
tem, it is also critical for the functional maintenance of these structures.
Finally, it should be mentioned that the cerebellum is strongly impli-
cated in the coordination of skilled movements. Also, imagined hand
movements are highly dependent on the cerebellum, in particular activity
within the dentate nucleus.

Interactions Between the Memory Systems in Language Notice that
although the procedural system depends on phylogenetically older struc-
tures than the declarative system, there are areas of the neocortex which
serve both systems. Thus, superior aspects of the temporal lobe serve
both the procedural and declarative system. The functional distinction
depends on the specific circuitry which interconnects various parts of
the brain. There are also a number of ways the two memory systems may
interact. In working memory tasks, the procedural memory system serves
to select knowledge stored in declarative memory. Furthermore, when
both systems are undamaged they may supplement each other, particu-
larly in the learning of temporal structures. The declarative system may
sometimes start the learning of new knowledge, and at a certain level of
performance, the procedural system may take over the learning process. In
that case, the procedural system learns the same or analogous knowledge,
but the retrieval of this knowledge will be different depending on which
system is activated. The two systems may also interact competitively, and
a dysfunction in one system may enhance learning in the other (see also
Chap. 8, Sect. 8.2, on interactions between the two systems and their
methodological implications for designing learning tasks).

According to the DP model, the brain systems which are underly-
ing declarative and procedural memory serve analogue roles in language
and in nonlanguage domains. The brain system underlying declarative
memory:

Subserves acquisition, representation and use not only of knowledge about
facts and events, but also about words. It stores all arbitrary, idiosyncratic
word specific knowledge, including meanings, word sounds, and abstract
representations such as word category. It includes among other things rep-
resentations of simple (nonderivable) words such as cat, bound morphemes
such as the past-tense suffix ed, irregular morphological forms, word com-
plements and idioms (Ullman, 2004, pp. 244–245).

The procedural system serves the learning and practicing of skills. More
specifically, Ullman explained that this system served “the learning of new,
and the computation of already-learned, rule-based procedures that gov-
ern the regularities of language–particularly those procedures related to
combining items into complex structures that have precedence (sequen-
tial) and hierarchical relations. Thus, the system is hypothesized to have
an important role in rule-governed structure building; the sequential and
hierarchical combination—“merging” … or concatenation—of stored
forms and abstract representations into complex structures” (p. 245).
There are wide-ranging empirical demonstrations showing that the
procedural system is involved in the learning of grammar. These com-
prise the learning of sequential structures of stimuli and classification
of exemplars of artificial grammar, or the acquisition of any rule-based
structures. The question is whether artificial grammar can be learned
independent of the declarative memory system. Petersson, Folia, and
Hagoort (2010) reported the neurobiological correlates in an fMRI study
of artificial grammar learning. They constructed a right-linear unification
grammar of letters presented (letter by letter) on a computer screen, while
the subject was instructed to reconstruct the sequence on a keyboard.
The main fMRI results showed that the left inferior frontal region was
engaged during the processing of the presented letter sequences, and in
view of the neural circuitry between this region and the basal ganglia,
their results may also indicate the involvement of the procedural system.
Surprisingly, however, these researchers also found that the medial tem-
poral lobe was deactivated during learning of the grammatical sequences,
and therefore, they concluded that the implicit learning of grammar was
not dependent on declarative memory mechanisms. The deactivation of
the temporal lobe supports the claim that the two systems may also have
complementary roles.
The basic claim of the DP model that the declarative and procedural
systems play analogous roles in language and nonlanguage domains
implies continuity between language and nonlanguage domains. Ullman
(2004) ends his work by asserting that brain systems underlying language
are homologous to systems in other animals, which consequently means
that the DP model “has implications for the evolution of language”
(p. 257).
Apart from taking a continuity position, Ullman did not describe
any further implications of the DP model for the evolution of language.
So what is the evolutionary status of the two brain systems underlying
declarative and procedural memories? As pointed out, Squire, Knowlton,
and Musen (1993) argued that the limbic/diencephalic structures under-
lying declarative memory are phylogenetically more recent than the
structures underlying nondeclarative memories. Such memories (which
include priming and conditioning in addition to the procedural memo-
ries) depend on the cortical-striatal system; that is, projections from the
neocortex to the basal ganglia. Hence, Squire et al. (1993) claimed that
these memories can be acquired, stored and retrieved without the partici-
pation of the limbic/diencephalic brain system. In their view therefore,
the brain systems underlying declarative and procedural memory differ
phylogenetically. When we link the mental lexicon and the mental gram-
mar to each of these brain systems, the two linguistic domains cannot be
on a par with each other. One must have evolutionary priority in relation
to the other.
The idea that the lexical/semantic and the grammatical systems of lan-
guage depend on different brain systems has recently been re-invoked by
Ardila (2011). He described two brain systems (temporal and frontal)
underlying language, which are to some extent similar to the brain sys-
tems hypothesized in Ullman’s DP model. Thus, Ardila claims that the
lexical/semantic system is supported by the temporal structures, and the
grammatical system is supported by the frontal structures. Furthermore,
he argues that brain pathology shows that the two systems are indepen-
dently impaired (Wernicke aphasia and Broca aphasia).
Ardila also mentioned that the two brain systems are separately involved
in declarative and procedural memory, and he said briefly that “procedural
memory is related with frontal/subcortical circuitries” (p. 29). However,
Ardila did not expand on the role the basal ganglia played in the frontal
system, and in consequence, there is no focus on evolutionary differences
between the two brain systems underlying language. Ardila lent support
to Bickerton (1990), who argued that, from characteristics of the pidgin
languages and from trends in language acquisition by children, a seman-
tic system must have preceded grammar in the protolanguages. These
seem to be compelling arguments for the primacy of symbolism/semantic
lexicality. Thus, while “hominids existing before the contemporary Homo
sapiens sapiens could have developed certain complex lexical/semantic
communication systems” (p. 26), Ardila argued that grammar is “histori-
cally recent and can be observed only in the Homo sapiens likely linked to
some specific genetic mutations” (p. 24). In addition, Chomsky’s early
demonstration that a lexical/semantic system is independent of grammar
was said to support the primacy of a lexical/semantic system, despite
Chomsky’s own claim of an innate grammatical competence.
Ardila (2011), however, did not define grammar independent of
declarative knowledge. He presupposed the existence of words or other
categories known to the early language user, and combinations of these
categories were said to define/describe grammar. In this way the primacy of
the lexical/semantic system becomes a logical necessity, while its empirical
support remains undecided. The supremacy of the lexical/semantic system
can be interpreted in two ways: (1) The acquisition of grammar depends
on a well-established lexical/semantic system. In this case, it becomes
difficult to explain the learning of artificial grammar. (2) Although the
potentiality for learning grammar does not depend on a lexical/semantic
system, grammar is the more recent attainment in the evolution of lan-
guage. The supremacy of a lexical system may be questioned, regardless of
whether the one or the other interpretation is the correct one.

3.3.3 The Procedural-Deficit Hypothesis

Ullman and Pierpont (2005) discussed the etiology of developmental
language impairment in relation to the DP model of language process-
ing. They argued against contemporary theories which explained devel-
opmental language impairment either as a deficit which is specific to
grammar or as a nonlinguistic processing deficit. Their third alternative,
which was based on the DP model, claimed that developmental lan-
guage impairment was due to abnormalities of brain structures underly-
ing the procedural memory system. This alternative has been called the
procedural-deficit hypothesis (PDH), which means that the deficit affects
all aspects of rule-learning; not only grammar, but also both sensory-
motor and cognitive skills. In consequence, the impaired children will
also have lexical retrieval deficits, while declarative (vocabulary) learning
is relatively spared. In short, these children were said to have a procedural
language deficit (PLD).
The main consequence of the PDH is that affected children not only
have impaired grammar and lexical retrieval, but also are impaired in a
number of nonlanguage functions. These include motor functions (for
example, oral and facial praxis), working memory, temporal processing
and mental imagery. However, deficits in nonlinguistic domains may be
subtle and in some cases they have not been found in language-impaired
children, but the review of research works presented in Ullman and
Pierpont (2005) gives substantial evidence in support of the PDH. The
question is how the PDH has been addressed by researchers in the 10
years following Ullman and Pierpont’s article. Has it withstood
the test of time or has its impact on research waned in recent years? A
number of more recent research works show that it still has consider-
able impact on studies of developmental language impairment (see Hsu
and Bishop, 2014, and the review presented by Lum, Conti-Ramsden,
Morgan, and Ullman, 2014). In particular, I will mention Bishop and
Hsu (2015), who showed that the procedural demands of learning were a
disadvantage for language-impaired children in a verbal paired associate
task, while declarative learning by these children was spared in a nonlin-
guistic task. I will present this work and discuss related works in more
detail in Chap. 8.

3.4 Arguments for Pre-adaptation of Grammar

Granted that grammar precedes the lexical/semantic system in the evolu-
tion of language, we should look for any evidence of pre-adaptation of
grammar in the pre-human primates. In the following section, I will start
to describe some constraints on the processing of sentences in modern
languages and consider the possibility that similar constraints operate in
the structure of skilled motor actions.
As mentioned in Sect.  3.1.3, creole languages have syntactic similari-
ties such as the Subject-Verb-Object word order. Other languages that do
not originate in a pidgin/creole transition show great variance of linguis-
tic structures, often due to distinctive geographical patterns that compli-
cate the question of universality of word order preferences. In a recent
study of event-related potentials (ERP), Bickel, Witzlack-Makarevich,
Choudhary, Schlesewsky, and Bornkessel-Schlesewsky (2015) showed
that Hindi participants interpreted the first base-form noun phrase (NP)
in German sentences as an agent, even when the remaining sentence
required the interpretation of a patient role. This is a neurophysiolog-
ical constraint on the processing of sentences which operates in most
languages of the world, and which has influenced the marking of noun
phrases by case. In Hindi, noun phrases are given a special case marker
(“ergative”) that denotes the agent role but is limited to transitive verbs,
which makes the A argument easily distinguishable from the S arguments
of intransitive verbs.
The preferred interpretation of noun phrases as agents makes ergatives
redundant or superfluous. Therefore the ergative marker is often dropped
in spoken language; in English, no such marker exists. According to
Bickel et al. (2015) the processing system will assume that a base-form
noun phrase, like “the old man” refers to the S argument of intransitive
verbs (“the old man slept”) or the A argument of transitive verbs (“the old
man hit the car”). In a sentence like “the old man I sold a car,” the S or
A assumption of the noun phrase is falsified during the processing of the rest
of the sentence. The ERP observed during reading of the sentence
indicates a reanalysis of the NP.
The S/A preference in the interpretation of NPs is a fundamental
principle of simplicity in language processing and may also support the
Subject-Verb-Object word order, where reanalysis of NPs is avoided.
Moreover, this principle is consistent with nonlinguistic processing of
actions, and may therefore indicate an evolutionary origin of syntax.
Thus, Bickel et al. (2015) point out that the “privileged assignment of
agents is consistent with the finding that agents are the point of departure
for cognitive construction of action in general, also outside language—
possibly because this type of event construction became hard-wired in the
evolutionary history of our brains” (p. 3 of online publication).
In accordance with Ullman’s DP model described above, the “cogni-
tive construction of action” depends on the procedural memory system.
Common activities among our early ancestors, like hunting, use of tools,
and so on, have given rise to procedural skills with componential/sequen-
tial structures that may have served as pre-adaptations to phrase structures
and syntax in language (see also Sect. 3.2). This means that the evolution of
syntax, despite the role of syntax in semantic processing of modern lan-
guages, has been largely independent of the declarative memory system.
Thus, phrase structures of agent–patient relationships have become some
of the most hard-wired constructions in language.
A pre-adaptionist view of syntax which emphasizes cognitive process-
ing underlying motor skills contrasts with “Universal Grammar.” This is
an innately specified computational operation which works on a set of
small meaningful units (morphemes) and accounts for an unbounded
generation of hierarchical structures (Humboldt’s phrase “making infi-
nite use of finite means”). As announced by Chomsky (1988) and more
recently by Bolhuis, Tattersall, Chomsky, and Berwick (2015), the pro-
gram has one basic operation called “merge” which puts any two syntactic
elements together and thereby creates the hierarchically structured sen-
tences of any language (The Strong Minimalist Position). They therefore
argue that hierarchical, not serial or linear order, is the critical condition
in syntax. They argue that our interpretation of pronouns and names in
sentences does not depend on a left to right order, “Rather, it is whether a
pronoun bears a particular hierarchical structural relationship to a name”
(p. 2 of online publication).
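As a toy illustration (not Chomsky's formal definition), merge can be sketched as a single pairing operation that applies freely to its own output, which is enough to yield unbounded hierarchical depth from finite means:

    # A toy "merge": combining two syntactic objects into a pair. Applying
    # the operation to its own output builds hierarchy without limit.
    def merge(x, y):
        return (x, y)

    np = merge("the", "man")
    vp = merge("saw", np)
    print(vp)   # ('saw', ('the', 'man')) -- hierarchy, not linear order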
However, hierarchical structuring and Humboldt’s principle of discrete
infinity are not limited to the language domain. Since Lashley’s (1951)
seminal work on the problem of serial order, it has been commonly
acknowledged in the realm of motor action. Hierarchical structures of
motor control are most clearly realized in music and dance performances.
Control of the finger movements of a skilled violinist is not obtained by
sensory motor feedback alone, but rests upon a hierarchically structured
motor program in the brain of the player.
In agreement with classical Darwinian principles, hierarchical motor
programs which originally evolved for the control of motor action in
a number of everyday activities have been converted to control phrase
structures in language. The generativity of hierarchical structuring in
motor action is no less than the generativity in language; however, both
depend on (procedural) learning and cannot be “innately specified,” as
argued by UG supporters. The position I have taken here is also clearly
expressed in a recent article published by Lieberman (2015).

3.5 More About Mirror Neurons in the Monkey and Human Brain

In the following section I will turn to research that applies equally to
speech and sign language, and that treats both language modalities
within a conceptual framework of motor action. This research, which was
briefly mentioned in the Introduction, deals with the so-called mirror
neurons in the monkey and human brain, and is targeted at the
neural mechanisms of the “cognitive construction of action.” The lin-
guistic stimuli—the sounds and signs—are events that can be decoded
as motor actions. This decoding process requires that production and
perception are linked as expressed in the motor theory of speech per-
ception (Liberman, Cooper, Shankweiler, and Studdert-Kennedy, 1967).
Moreover, this linkage between production and perception most proba-
bly applies to all symbolic systems independent of the sensory modalities.
As will be shown below, the discovery of the so-called mirror neurons in
the ventral premotor cortex (area F5) of the macaque monkey has given
rise to claims that a substrate for this linkage does exist in the hominid
brain (Rizzolatti and Arbib, 1998): The F5 neurons discharge during
both active movements of the hand and mouth, and observation of a
similar gesture made by the experimenter. Transcranial magnetic stimula-
tion (TMS) and positron emission tomography (PET) studies also indi-
cate that systems for recognition of voluntary actions exist in man and
involve the left hemisphere. Therefore, the development of a production/
perception system may be associated with a left-hemispheric specializa-
tion for language.
A number of research works I have reviewed deal with neural sub-
strates of cognitive and linguistic functions in adult human participants.
Now, the question is how the brains of our hominid ancestors were pre-
pared for language, and moreover, whether their brains in any way were
comparable to the brains of newborn infants today. As mentioned in the
Introduction, research on the mirror neurons and equivalent systems in
the human brain has called attention to a neural mechanism which seems
to form one of the preconditions to use of language. The mirror neu-
ron system (MNS) may not form the complete mechanism underlying
language, but in some respects this system is shared by monkeys and
humans. Therefore, this research has testified to continuity in time (lan-
guage evolution), but also across domains (perception/action to linguistic
interactions), and in the following I shall extend the presentation started
in the Introduction and review some main findings and spot the main
theoretical issues.
The mirror neurons were first located in the convexity of the arcuate
sulcus within the premotor cortex (area F5) of the macaque monkey brain
(Di Pellegrino, Fadiga, Fogassi, Gallese, and Rizzolatti, 1992; Gallese,
Fadiga, Fogassi, and Rizzolatti, 1996). As mentioned in the Introduction,
these cells do not only discharge when the monkey grasps or manipu-
lates an object, but also when the monkey observes a conspecific or an
experimenter performing a similar action. (In contrast, canonical neurons
are grasp-related and discharge only during execution, but not during
observation.) Due to similarities in the cytoarchitectural properties, these
observations have led to the controversial claim that F5 in the macaque
brain is the monkey homolog of Broca’s area in humans (Rizzolatti &
Arbib, 1998).
More specifically, the coding characteristics of F5 cells in the monkey
brain are shown by the fact that they do not discharge in response to
the presentation of an object, only to the observation and execution of
a specific object-related action. Neurons with the same response charac-
teristics have later been found in the convexity of the inferior parietal
cortex (Fogassi et  al., 2005). Therefore, this area in the human brain
is often mentioned as another potential homolog of the mirror neuron
area in the macaque brain. The question is whether it has been possible
to give more consistent evidence of mirror cells with the same response
characteristics in humans. A number of PET and fMRI studies in the
beginning of the present century have been undertaken to answer this
question. Many of these have been reviewed and critically analyzed by
Turella, Pierno, Tubaldi, and Castiello (2009). They argue that it has not
been possible to give consistent evidence of neurons that are activated
both to the execution of an action and to the observation of an agent
performing the same action (the mirror criteria). Due to methodologi-
cal flaws, it has been difficult to compare neural activity within both an
execution and an observation condition. For example, in the fMRI study
by Hamzei et al. (2003), data analysis was undertaken by merg-
ing files from two different experiments, one which deals with execution
and one which deals with observation and execution. In other experi-
ments reviewed by Turella et al. (2009), the type of action execution may
have differed in the two conditions. According to procedures used in
experiments with monkeys, the entire agent performing the action must
be seen by the subject; merely seeing the hand detached from the body
has not elicited any mirror activity in experiments with animal subjects.
These are methodological prerequisites that, according to Turella et al.
(2009), have been insufficiently met in PET and fMRI studies up to the
publication of their review paper.
Although it has been difficult to give consistent and nonconfound-
ing evidence of neuron systems which show “mirror activity” in the
human brain, the link between perception and action demonstrated in
the macaque brain is certainly also embodied by the human brain, albeit
by a more complex circuitry of nerve cells. In any case, a neurobiologi-
cal approach to language requires an explanation of how perception and
action are linked, and hence we cannot underestimate the importance of
the macaque mirror cells for a theory of language evolution.
The early observations of the mirror neurons in the monkey brain
turned out to have an important impact on discussions of the origin
of language in hominid evolution, and the functional characteristics
of these cells were taken as the defining features of the language-ready
brain by our distant ancestors. First, these observations led to discus-
sions on the anatomical homology of the F5 area of the macaque brain
and the Broca’s area of the human brain. As we shall see, the question of
how they were functionally related remained an issue of more complex
discussions. Granted that the rostral part of the monkey ventral premo-
tor cortex, which includes the F5 neurons, is the homolog of Broca’s
area in the human brain, Rizzolatti and Arbib (1998) argued that the
discovery of these neurons in monkeys is clearly relevant for an under-
standing of language by humans. In fact, these observations have led to
speculations on the gesticulatory origin of human language (the “mirror
hypothesis of language evolution”). However, the assumed connection
was contentious, because F5 was commonly thought of as a substrate for
intentional hand movements, whereas Broca’s area is thought of as an
area of speech. Rizzolatti and Arbib, however, argued that it could not
be a mere coincidence that the area which links action recognition and
action production in the monkey has been proposed as the homolog of
Broca’s area. They knew that Broca’s area does not relate only to speech;
this area also becomes active during execution of hand and arm move-
ments. Could it be that a form of intentional and gesticulatory com-
munication, mediated by the F5 neurons in the monkey, constituted
an evolutionary antecedent to speech by man? As argued by Rizzolatti
and Arbib (1998), a language system could have evolved “atop” a pre-
linguistic grammar of actions.
The arguments of homology have been strongly contradicted by Toni,
de Lange, Noordzij, and Hagoort (2008). When a feature occurs in two
related species, there exists a relation of homology if it can be shown
that the feature has been inherited from the latest common ancestor
of the two species. Homology according to this criterion has not been
confirmed; thus, Toni et al. argued that “given the lack of evidence for
the presence of mirror neurons in a premotor region in any common
ancestor of humans and macaques, it appears at least premature to claim
an evolutionary homology between macaque area F5c (the specific por-
tion of area F5, where mirror neurons are localized in macaques ….) and
human BA 44-45” (p. 74).
Cytoarchitectonically, Broca’s area consists of two regions: Brodmann
areas (BA) 44 and 45. In a PET study of these regions, Horwitz et al.
(2003) showed that area 44 was activated by complex hand movements,
and controlled sensory-motor learning and integration. Area 45, however,
was activated by language output, whether spoken or signed. It may be
that only BA 44 is the true analogue of area F5c in the macaque monkey,
whereas BA 45 is a more recent structure in hominid brain evolution.
Research on the mirror system in monkey brains offered a serious
challenge to theories holding that language had evolved from vocal calls
in nonhuman primates. Instead, several researchers argued for a ges-
tural origin of language (Armstrong and Wilcox, 2007; Corballis, 2010;
Rizzolatti and Arbib, 1998). More specifically, Armstrong and Wilcox
(2007) even argued that signed languages were the original and proto-
typical languages. In line with these assumptions, the mirror system for
matching of gestures observed and gestures executed was considered as
a substrate for imitation (Buccino et al., 2004). It is commonly assumed,
however, that monkeys do not imitate, although some imitation has been
observed in macaques and chimpanzees after repeated exposures to sim-
ple behaviors. As a rule, however, these are behaviors that already are in
the monkey’s repertoire (Ferrari et al., 2006).
In my view, theories of language evolution have tended to overlook a
distinction between the emergence of a general symbolic capacity and the
selection of channels of communication. Thus, according to Armstrong
and Wilcox, the visual-motor channel served as the defining criterion of
protolanguage. Because, however, this language was dependent on the
homolog of Broca’s area in the monkey brain, it was necessary to con-
ceive of a gradual switch of function in order for this area to serve speech.
Gestural language evolved to the stage of “protosigns” while full language
depended on the emergence of vocalization (Arbib, 2009). At this stage,
the MNS also expanded functionally from understanding of transitive
actions to the intransitive use of communicative actions. According to
Corballis (2010) “the assumption that it (language) evolved from manual
and facial gestures allows us to consider a more gradual and evolution-
arily realistic progression, going back perhaps 2 million years to the ori-
gins of the genus Homo” (p. 31).
There are also a number of discrete and qualitative shifts to be
accounted for in a comparison between the functional characteristics of
mirror neurons by monkeys and humans. Thus, the putative homologue
system of mirror neurons in humans differs from mirror neurons in the
monkey brain with respect to some important characteristics. In agree-
ment with the grammar of actions described by Rizzolatti and Arbib
(1998), mirror neurons in the monkey brain respond only to transitive
acts, whereas mirror neurons in man have been shown to respond to both
transitive and intransitive acts (Fadiga, Fogassi, Pavesi, and Rizzolatti,
1995). As pointed out in the Introduction, these neurons in humans
may therefore mediate an understanding of acts that are symbolic rather
than object-related. Moreover, it has been shown that mirror neurons in
man are activated not only when observers watch a limb movement, but
also when they read phrases about this movement (Aziz-Zadeh, Wilson,
Rizzolatti, and Jacoboni, 2006). These are response properties that are
essential for speech, but they are also essential for a general capacity of
symbolic reference.
A gradual switch of function of Broca’s area may be hard to reconcile
with observations on the brain mechanisms underlying sign language.
As shown by Emmorey (2002) these mechanisms are very similar for
signed and spoken languages. Both reveal left hemisphere superiority
for comprehension and production of linguistic utterances, and Broca’s
area controls signing responses by deaf individuals, just as this area is
in control of speech for hearing persons (see Chap. 7 for more details).
Probably, therefore, this area has not evolved with the sole purpose of
serving speech, but for the production and comprehension of symbolic
communication. (See also my discussion of the gestural theory of lan-
guage evolution in the beginning paragraphs of Chap. 7.)
More recently, the controversial inclusion of Broca’s area as homolo-
gous to F5 has been challenged by Cerri et al. (2015). Despite the con-
troversies mentioned above, the MNS in humans was commonly said to
include the inferior frontal gyrus (BA44/45) in addition to the inferior
parietal lobe, the intraparietal sulcus, and the superior temporal sulcus.
They assessed the “mirror” properties of the component parts of MNS
(the premotor [vPM/BA6] and primary motor [M1] cortices in addition
to Broca’s area) in an fMRI study. Participants executed three tasks in
both observation and execution conditions, designed to test the “mir-
ror” criteria. In the execution conditions, instruction was given by object
presentation, which means that no action was imitated and no verbal
instruction was given. Activation of a language production system was
identified in a fluency task, when subjects were told to covertly think
about words beginning with a presented “phoneme.” Moreover, Cerri
et al. undertook an intraoperative neurophysiological investigation with
10 glioma-affected patients who were candidates for awake surgery. This
study gave a unique opportunity to apply direct electrical stimulation to
their exposed brains and to compare the motor output of Broca’s area
with the premotor and primary motor cortices.
The experimental tasks in the fMRI study were designed to test the
“mirror” requirement (activation during both observation and execu-
tion) and the “language” requirement (activation during phonological
fluency). The results showed that vPM/BA6 met these requirements. No
“mirror” activation was reported from BA44/Broca’s area. The intraopera-
tive study showed that vPM/BA6 and Broca’s area behaved differently.
Direct electrical stimulation of Broca’s area had no direct effect on the
phono-articulatory processes, and yet halted the naming process. This
event was interpreted as cognitive rather than motor interference, in contrast
to the speech arrest following upon stimulation of the BA6 area. The
authors concluded the two studies this way:

…the same system involved in speech production overlaps in BA6 with the
neural premotor circuit involved in the control of hand/arm actions and
belonging to the MNS, suggesting that the role of the MNS in language
may concern more the representation of motor than the semantic compo-
nents of language (p. 1025).

The reported experiments of Cerri et al. (2015) may be said to sup-
port continuity in evolutionary time, and also continuity across domains
because the MNS is shared by human language and communicative
behavior in prehuman subjects. We may ask whether the semantic com-
ponent of language represents a discontinuity in evolution, or whether
this component also depends on structures which evolved gradually from
monkeys to man. As mentioned in Sect. 3.3.1 above, semantic knowledge
depends on declarative memory and the neural structures underlying this
system. In Chap. 5, Sect. 5.5, I will have more to say about the evolution
of lexical meaning and its neural substrata.

3.6 Ventral and Dorsal Pathways in Language Processing

A bipartite distinction between neural structures underlying language
has been proposed by several researchers. Ullman is one of them; Ardila
(2011) is another (see Sect.  3.3.2). Here I will briefly present another
bipartite distinction in language. With analogy from vision research, it
has been argued that two processing pathways support different aspects
of language. Visual information which exits from the occipital lobe
follows two main pathways or streams: a ventral and a dorsal pathway.
The former travels to the middle temporal lobe, and has been called the
“what” pathway because it is involved in object recognition. The dorsal
pathway travels to the parietal lobe and is involved in the processing of
spatial location (Goodale, 2000; Milner and Goodale, 2006).
Ullman and Pierpont (2005) argued that the declarative memory sys-
tem is closely related to the ventral or “what” system in vision. This system,
which involves the temporal lobe (in particular the hippocampus), might
support the lexical/semantic aspect of language. On the other hand, it
may be questioned whether the procedural system in Ullman’s DP model
can be associated with the dorsal stream in Milner and Goodale’s model.
Later, the analogy with cortical processing of visual information was either
rejected or downplayed; the dual-route model has been limited to the
processing of sound: The ventral pathway is involved in mapping sound
to meaning, while the dorsal pathway is involved in mapping sound to
articulation (Saur et al., 2008); thus, recent studies of the two pathways
have provided new insight into the neural basis of speech perception. Hickok
and Poeppel (2015) have reviewed a number of studies which relate to
sound processing in the two pathways. Some of these have addressed the
comprehension deficits in patients with Wernicke’s aphasia, and some have
studied subjects whose left hemisphere has been deactivated by the Wada pro-
cedure. Neuroimaging studies have shown that listening to speech acti-
vates the superior temporal gyrus, a target region in the ventral pathway.
Both types of studies have given some support to a bilateral processing
of speech, while other studies have demonstrated computational asym-
metries for the two hemispheres; that is, a left hemisphere selectivity for
temporal and a right hemisphere selectivity for spectral resolution. The
dual-route model also holds that phonological processing depends
on the superior temporal sulcus, and that lexical semantic access depends
on a focal system which relates phonological to conceptual information;
that is, the anterior temporal lobe. Other studies reviewed by Hickok
and Poeppel show that mapping from sound to action (the dorsal stream)
is not bilaterally represented but depends on a left-dominant region in
the Sylvian fissure at the temporal-parietal boundary. This region is not
speech-specific, but appears to be motor-effector-selective, and damage
to this region is associated with conduction aphasia (phonemic errors
despite good comprehension of speech sounds).
The dorsal stream in the dual-route model is clearly associated with the-
ories of MNS, because both claim that motor control is involved in speech
perception, and that a sensory-motor link is critical in comprehension of
language. However, the dual-route model, without being speech-specific,
is nonetheless modality-specific. As indicated above, the very distinction
between ventral and dorsal pathways arose in research on the neural bases
of visual perception, but the model described by Hickok and Poeppel deals
with “mapping from sound to meaning” and “mapping from sound to
action” and is therefore restricted to the auditory modality. Although the
dorsal pathway is not speech-specific, the dual-route model has given rise
to important research works on the neural basis of speech perception.
In an evolutionary context, the “what” and “where” models of visual
perception (Milner and Goodale, 2006) and spatial hearing (Rauschecker,
1998) are more interesting. These models involve neural mechanisms
which largely overlap the mechanisms described in Hickok and Poeppel’s
model (though the visual perception model implicates a stronger involve-
ment of parietal regions). Object perception, as described in these previ-
ous models, represents skills that are possible pre-adaptations to language,
and the neural mechanisms underlying these skills may later have been
exploited to serve language. Most clearly, the mechanisms described in the Milner and Goodale model may be implicated in sign language as well as in speech, and are therefore more relevant for the evolution of a modality-independent capacity for language (Chap. 7).

3.7 Does the Motor System Have a Special Role in Language?

The research on mirror systems in the monkey brain and the analogous system of mirror neurons in the human brain has led to the strong claim that language comprehension requires the involvement of motor systems (Galantucci, Fowler, and Turvey, 2006). The dual-route model of Hickok and Poeppel described above likewise assumes this strong involvement of the motor system. Therefore it can be argued that the motor system has a special role in language. In the following, I will present some counterarguments raised by Toni et al. (2009) in their critical essay on "Language beyond action."
The close link between language and the motor system, which was
stressed in research on the mirror systems, led to a revival of the now
classical motor theory of speech perception (Liberman et  al., 1967;
Galantucci et  al., 2006). The theory was introduced as an attempt to
solve the invariance problem in speech perception:
There is no one-to-one relation between acoustic events and the
repertoire of articulatory gestures in speech. Speech sounds are highly
context-dependent; they overlap temporally, and thus vocal tract gestures
are influenced by the following phoneme in a speech sequence. When
we pronounce a consonant-vowel (CV) syllable, such as ba or di, the
spectrograph shows bursts of energy at different frequency bands (formants). The second formant (F2) specifies the place of articulation, thus
the F2 for /b/ is found in a lower frequency band than F2 for /d/. The
burst of energy released when we pronounce a stop consonant, as shown
in the formant transitions in the spectrographic patterns, will differ
depending on the “steady state” of the ensuing vowel sound. Thus, the
formant transition of F2 in di rises from below to a little above 2400 Hz, while the formant transition of F2 in du falls in frequency from below 1200 Hz to near 600 Hz (Fig. 3.2).
How do the different characteristics of the second formant transitions
give rise to the invariant percept of the consonant /d/?
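To make the acoustic side of the problem concrete, the two divergent trajectories can be sketched in a few lines of code. This is a minimal illustration only: the onset frequencies are rough assumptions anchored to the values cited above, and the linear interpolation simplifies real formant transitions.

```python
import numpy as np

def f2_trajectory(start_hz, end_hz, duration_s=0.05, steps=6):
    """A linearly interpolated F2 transition, sampled at a few time points."""
    t = np.linspace(0.0, duration_s, steps)
    f2 = np.linspace(float(start_hz), float(end_hz), steps)
    return list(zip(t.round(3), f2.round(0)))

# /di/: F2 rises from below 2400 Hz to a little above 2400 Hz
print("di:", f2_trajectory(2200, 2500))
# /du/: F2 falls from below 1200 Hz to near 600 Hz
print("du:", f2_trajectory(1100, 600))
```

The two trajectories have opposite slopes and occupy different frequency regions, yet both are heard as beginning with /d/; this is precisely the invariance problem.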
The various proposals that have been raised to solve this problem suggest alternative units of perception; Liberman et al. (1967), however, suggested a radically new approach: the invariance problem is not solved within an auditory domain, but in a motor domain. Galantucci et al. (2006) formulated Liberman's solution this way: "When acoustic patterns are differ-
ent but the articulatory gestures that would have caused them in natural
speech are the same, or vice versa, perception tracks articulation” (p. 362).
According to my interpretation of Liberman's solution, people can
only “track articulation” of another person when they are capable of
undertaking the same articulatory gestures themselves. In other words,
Liberman’s solution can be read as follows: Vocal gestures are the objects
of speech perception, and therefore, speech perception is impossible, or
severely impaired, when the signal-receiver is incapable of performing (or
issuing central commands to) the same gestures. This is a strong version
of the motor theory; weaker versions of the theory focusing on the role
of articulatory movements in speech perception may also be formulated.

Fig. 3.2 Second formant transitions (F2) of the /d/ phoneme followed by dif-
ferent vowel sounds. Reproduced with permission from J. Acoust. Soc. Am.
27, 769 (1955). Copyright 1955, AIP Publishing LLC
In view of the research on mirror neuron systems, Liberman's approach has gained new plausibility and appeal. However, it has its weaknesses,
in particular when it is given the strong interpretation mentioned above.
Thus, Toni et al. (2009) called attention to speech recognition capabili-
ties in species that lack a speech production system. They referred to
Kluender's study of Japanese quail, which responded to /d/ in different vowel contexts without confusing this consonant with /b/ or /g/ in the same contexts. Furthermore, when acoustic properties of speech are artificially transduced into vibrotactile patterns on the skin, listeners are still able to identify phonemes.
The mirror system hypothesis (Rizzolatti and Arbib, 1998) and the
motor theory (Liberman et al., 1967; Galantucci et al., 2006) mutually
supported each other by claiming a special role of the motor systems in
language. Certainly, the available evidence shows that listeners may identify phonemes in a spoken message by mapping acoustical patterns onto motor commands, but as argued by Toni et al., this mapping takes place
on the form level. The problem is whether this mapping also takes place
on the semantic level. Studies showing the role of motor areas in the com-
prehension of action words (Hauk, Johnsrude, & Pulvermuller, 2004)
indicate that mapping may also take place on the semantic level. Whether
it also takes place for other categories of words is undecided, and hence
the relation between language and the motor system remains a matter of
debate.
However, a special role for the motor system can still be admitted once
we realize that this role is independent of form of expression. Language
behavior always implicates motor responses, but these responses are not
effector-specific, and therefore speech and sign language are both well-
structured and true human languages. Also, manual and vocal babbling
(by hearing and deaf infants) represent early stages of language acquisition,
and as argued in Chap. 7, Sect. 7.2, these are equipotential articulators.

3.8 Concluding Remarks

In this chapter I have discussed three ways of studying the problem of
continuity in language evolution. These discussions can be summarized
as follows:
1. Research on communicative learning by subhuman primates shows
that apparently some animals have been able to cross “the symbolic
threshold.” Because these animals also learned the correct lexigram
strings, and hence learned some compositional rules as in grammar,
language may have evolved from simple communicative behavior
among animals. However, their rudimentary grammar demonstrated
by the acquisition of lexical strings or sequential patterns does not
match the complex phrase structure of human languages.
2. If language evolved continuously from pre-linguistic behavior by ani-
mals, it is likely that both share important neurobiological substrates.
Ullman has convincingly argued for a link between grammar and the procedural memory system, both of which depend on the basal ganglia and important areas of the premotor cortex, and for a link between lexical-semantic functions and declarative memory, both of which depend on temporal lobe structures such as the hippocampus, entorhinal cortex
and perirhinal cortex. Due to the different origins of these substrata, it
may be argued that grammar precedes semantics in evolution. The
problem is how vertical transmission of language could have taken place with a grammar but only a rudimentary form of semantics.
3. Studies of statistical/artificial grammar learning demonstrate that important aspects of language may be acquired independently of lexical meaning. (Labeling of newly segmented words takes place after initial segmentation.) Therefore, I assume that statistical learning has had an important role in the evolution of language as well as in the acquisition of language by human infants (see the sketch after this list). Moreover, this research also gives support to the primacy of grammar. I have also proposed that the learning constraints demonstrated in statistical/artificial grammar learning involve an access code to early dialogues, and that these learning constraints therefore contribute to the vertical transmission of language between generations.
4. Due to structural similarities between grammar and motor skills, it is argued that motor learning by subhuman primates and the LCA may have formed a pre-adaptation to language; that is, to grammar and syntax.
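As a minimal illustration of the statistical learning referred to in point 3 above, the following sketch computes transitional probabilities over a syllable stream. The syllable inventory and the "words" are invented for the purpose of illustration; real experiments use continuous synthesized speech rather than character strings.

```python
from collections import Counter

# Saffran-style statistical learning: the transitional probability
# TP(y | x) = freq(xy) / freq(x), computed over a syllable stream.
stream = ("bidaku" "padoti" "golabu" "bidaku" "golabu" "padoti") * 20
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])
tp = {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

print(tp[("bi", "da")])  # word-internal transition -> 1.0
print(tp[("ku", "pa")])  # transition across a word boundary -> 0.5
```

Word-internal transitions approach 1.0, while transitions across word boundaries are markedly lower; it is this dip in transitional probability that allows words to be segmented before any meaning is acquired.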

However, statistical/artificial grammar learning does not guarantee a vertical transmission of language unless we can also demonstrate a mecha-
nism which links perception and action in language behavior. This mecha-
nism, which has been identified as mirror neurons in the monkey and
human brain, complements the research on statistical learning by infants
and monkeys. Together they show how early vertical transmission of lan-
guage may have taken place. The research on mirror neurons has given new
attention to the role of the motor system in language, and subsequently to
the status of the classical motor theory of speech perception. Because acoustical patterns map onto motor commands at the form level, not the semantic level, and because consonants can be identified even when their acoustic properties are transduced into vibrotactile patterns on the skin, I conclude that the motor system, despite its importance in linguistic expression, has no special or critical role in language.
The statistical learning constraints demonstrated by Saffran and others,
together with a mirror neuron mechanism, may have formed a language
facility relatively independent of socio-cultural evolution, and may have
invited and facilitated dialogues between child and caregiver throughout
the times of human evolution. Dialogues between infant and caregiver
have both served the vertical transmission of language and the strength-
ening of a basic grammatical structure.

References
Arbib, M. A. (2009). Evolving the language-ready brain and the social mecha-
nisms that support language. Journal of Communication Disorders, 42,
263–271.
Ardila, A. (2011). There are two different language systems in the brain. Journal
of Behavioral and Brain Science, 1, 23–36.
Armstrong, D. F., & Wilcox, S. E. (2007). The gestural origin of language. Oxford:
Oxford University Press.
Aziz-Zadeh, L., Wilson, S. M., Rizzolatti, G., & Jacoboni, M. (2006). Congruent
embodied representations for visually presented actions and linguistic phrases
describing actions. Current Biology, 16, 1818–1823.
Bickel, B., Wizlack-Makaravich, A., Choudhary, K.  K., Schlesewsky, M., &
Bornkessel-Schlesewsky, I. (2015). The neurophysiology of language process-
ing shapes the evolution of grammar: Evidence from case marking. PLoS One,
10, e0132819. doi:10.1371/journal.pone.0132819.
Bickerton, D. (1990). Language and species. Chicago: University of Chicago Press.
Bickerton, D. (2003). Symbol and structure: A comprehensive framework for
language evolution. In M. H. Christiansen & S. Kirby (Eds.), Language evo-
lution: The states of the art. Oxford: Oxford University Press.
Bickerton, D. (2014). More than nature needs: Language, mind and evolution.
Cambridge, MA: Harvard University Press.
Bishop, D. V., & Hsu, H. J. (2015). The declarative system in children with
specific language impairment: A comparison of meaningful and meaningless
auditory-visual paired associate learning. BMC Psychology, 3(1), 3.
doi:10.1186/s40359-015-0062-7.
Bolhuis, J. J., Tattersall, I., Chomsky, N., & Berwick, R. C. (2015). Language:
UG or not to be, that is the question. PLoS Biology, 13, e1002063.
doi:10.1371/journal.pbio.1002063.
Bradshaw, J. L. (1997). Human evolution. A neuropsychological perspective. Hove:
Psychology Press.
Buccino, G., Vogt, S., Ritzl, A., Fink, G.  R., Zilles, K., Freund, H.-J., et  al.
(2004). Neural circuits underlying imitation learning of hand actions: An
event-related fMRI study. Journal of Cognitive Neuroscience, 16, 114–126.
Cerri, G., Cabinio, M., Blasi, V., Borroni, P., Iadanza, A., Fava, E., et al. (2015).
The mirror neuron system and the strange case of Broca’s area. Human Brain
Mapping, 36, 1010–1027.
Chomsky, N. (1988). Language and problems of knowledge. The Managua
Lectures. Cambridge, MA: MIT Press.
Cohen, N. J., & Squire, L. R. (1980). Retrograde amnesia and remote memory
impairment. Neuropsychologia, 19, 337–356.
Conway, C., & Christiansen, M. (2001). Sequential learning in non-human
primates. Trends in Cognitive Sciences, 5, 539–546.
Corballis, M. C. (2010). Mirror neurons and the evolution of language. Brain
& Language, 112, 25–35.
Deacon, T. (1997). The symbolic species: The co-evolution of language and the
brain. London: Penguin books.
Di Pellegrino, G., Fadiga, L., Fogassi, L., Galese, V., & Rizzolatti, G. (1992).
Understanding motor events: A neurophysiological study. Experimental Brain
Research, 91, 176–180.
Emmorey, K. (2002). Language, cognition, and the brain: Insights from sign lan-
guage research. Mahwah, NJ: Lawrence Erlbaum Associates.
Fadiga, L., Fogassi, L., Pavesi, G., & Rizzolatti, G. (1995). Motor facilitation
during action observation: A magnetic stimulation study. Journal of Neuro-
physiology, 73, 2608–2611.
Ferrari, P. F., Visalberghi, E., Paukner, A., Fogassi, L., Ruggiero, A., & Suomi,
S.  J. (2006). Neonatal imitation in rhesus macaques. PLoS Biology, 4,
1501–1508.
Fitch, W. T. (2010). The evolution of language. Cambridge: Cambridge University
Press.
Fogassi, L., Ferrari, P. F., Gesierich, B., Rozzi, S., Chersi, F., & Rizzolatti, G.
(2005). Parietal lobe: From action organization to intention understanding.
Science, 308, 662–667.
Galantucci, B., Fowler, C. A., & Turvey, M. T. (2006). The motor theory of
speech perception reviewed. Psychonomic Bulletin and Review, 13,
361–377.
Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). Action recognition
in the premotor cortex. Brain, 119, 593–609.
Goodale, M. A. (2000). Perception and action in the human visual system. In
M.  S. Gazzaniga (Ed.), The new cognitive neurosciences (pp.  365–378).
Cambridge, MA: MIT Press.
Graf Estes, K., Evans, J. L., Alibali, M. W., & Saffran, J. R. (2007). Can infants
map meaning to newly segmented words? Statistical segmentation and word
learning. Psychological Science, 18, 254–260.
Hamzei, F., Rijntjes, M., Dettmers, C., Glauch, V., Weiller, C., & Buchel, C.
(2003). The human action recognition system and its relationship to Broca’s
area: An fMRI study. NeuroImage, 19, 632–637.
Hauk, O., Johnsrude, I., & Pulvermuller, F. (2004). Somatotopic representation
of action words in human motor and premotor cortex. Neuron, 41,
301–307.
Hauser, M. D., & Glynn, D. (2009). Can free ranging rhesus monkeys (Macaca
mulatta) extract artificially created rules comprised of natural vocalizations?
Journal of Comparative Psychology, 123, 161–167.
Hickok, G., & Poeppel, D. (2015). Neural basis of speech perception. Handbook
of Clinical Neurology, 129, 149–159.
Horwitz, B., Amunts, K., Bhattacharyya, R., Patkin, D., Jeffries, K., Zilles, K.,
et al. (2003). Activation of Broca’s area during the production of spoken and
signed language: A combined cytoarchitectonic mapping and PET analysis.
Neuropsychologia, 41, 1868–1876.
Hsu, H. J., & Bishop, D. V. (2014). Sequence-specific procedural learning in children with specific language impairment. Developmental Science, 17, 352–365.
Kemény, F., & Lukács, Á. (2010). Impaired procedural learning in language
impairment: Results from probabilistic categorization. Journal of Clinical and
Experimental Neuropsychology, 32, 249–258.
Lashley, K. S. (1951). The problem of serial order in behavior. In L. A. Jeffress
(Ed.), Cerebral mechanisms in behavior: The Hixon symposium. New  York:
John Wiley.
Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Studdert-Kennedy, M.
(1967). Perception of the speech code. Psychological Review, 74, 431–461.
Lieberman, P. (2015). Language did not spring forth 100,000 years ago. PLoS Biology, 13, e1002064. doi:10.1371/journal.pbio.1002064.
Lum, J.  A., Conti-Ramsden, G., Morgan, A.  T., & Ullman, M.  T. (2014).
Procedural learning deficits in specific language impairment (SLI): A meta-
analysis of serial reaction time task performance. Cortex, 51, 1–10.
Lyon, C., Nehaniv, C. L., & Saunders, J. (2012). Interactive language learning by robots: The transition from babbling to word forms. PLoS One, 7, e38236.
Masson, M. E. J., & Graf, P. (1993). Introduction: Looking back and into the
future. In P. Graf & M. E. J. Masson (Eds.), Implicit memory: New directions
in cognition, development and neuropsychology. Hillsdale, NJ: Lawrence
Erlbaum Associates Inc.
Meck, W. H., & Benson, A. M. (2002). Dissecting the brain’s internal clock:
How frontal-striatal circuitry keeps time and shifts attention. Brain and
Cognition, 48, 195–211.
Milner, A. D., & Goodale, M. A. (2006). The visual brain in action (2nd ed.). Oxford: Oxford University Press.
Nieder, A. (2009). Prefrontal cortex and the evolution of symbolic reference.
Current Opinion in Neurobiology, 19, 99–108.
Petersson, K. M., Folia, V., & Hagoort, P. (2010). What artificial grammar learning reveals about the neurobiology of syntax. Brain & Language. doi:10.1016/j.bandl.2010.08.003.
Podzebenko, K., Egan, G. F., & Watson, J. D. G. (2002). Widespread dorsal
stream activation during a parametric mental rotation task, revealed with
functional magnetic resonance imaging. NeuroImage, 15, 547–558.
Rauschecker, J. P. (1998). Parallel processing in the auditory cortex of primates. Audiology and Neuro-Otology, 3, 86–103.
Rendall, D., & Vasey, P. (2002). Metaphor muddles in communication theory. Commentary on S. G. Shanker & B. J. King, The emergence of a new paradigm in ape language research. Behavioral and Brain Sciences, 25, 637.
Ribeiro, S., Loula, A., de Aroújo, I., Gudwin, R., & Queiroz, J. (2007). Symbols
are not uniquely human. Biosystems, 90, 263–272.
Rice, M. L., & Oetting, J. B. (1993). Morphological deficits in SLI children:
Evaluation of number marking and agreement. Journal of Speech and Hearing
Research, 36, 1249–1256.
Rizzolatti, G., & Arbib, M. A. (1998). Language within our grasp. Trends in Neurosciences, 21, 188–194.
Romberg, A. R., & Saffran, J. R. (2010). Statistical learning and language acqui-
sition. Wiley Interdisciplinary Reviews: Cognitive Science, 1, 906–914.
Ruhlen, M. (1995). Linguistic evidence for human prehistory. Cambridge
Archeological Journal, 5, 268–271.
Ryle, G. (1949). The concept of mind. London: Hutchinson.
Saffran, J.  R. (2002). Constraints on statistical language learning. Journal of
Memory and Language, 47, 172–196.
Saffran, J. R. (2003). Statistical language learning: Mechanisms and constraints.
Current Directions in Psychological Science, 12, 110–114.
Saffran, J., Hauser, M., Seibel, R., Kapfhamer, J., Tsao, F., & Cushman, F.
(2008). Grammatical pattern learning by human infants and cotton-top tam-
arin monkeys. Cognition, 107, 479–500.
Saur, D., Kreher, B. W., Schnell, S., Kümmerer, D., Kellmeyer, P., Vry, M. S.,
et al. (2008). Ventral and dorsal pathways for language. Proceedings of the National Academy of Sciences, 105, 18035–18040.
Savage-Rumbaugh, E. S., & Lewin, R. (1994). Kanzi: The ape at the brink of the
human mind. New York: John Wiley.
Schmandt-Besserat, D. (1986). Tokens: Facts and interpretations. Visible Language, 20, 250–272.
Scoville, W. B., & Milner, B. (1957). Loss of recent memory after bilateral hip-
pocampal lesions. Journal of Neurology, Neurosurgery & Psychiatry, 20, 11–21.
Senghas, A. (2005). Language emergence: Clues from a new Bedouin Sign
Language. Current Biology, 15, 463–465.
Senghas, A., Kita, S., & Özyürek, A. (2004). Children creating core properties
of language: Evidence from an emerging sign language in Nicaragua. Science,
305, 1779–1782.
Shanker, S. G., & King, B. J. (2002). The emergence of a new paradigm in ape
language research. Behavioral and Brain Sciences, 25, 605–656.
Squire, L. R., Knowlton, B., & Musen, G. (1993). The structure and organiza-
tion of memory. Annual Review of Psychology, 44, 453–495.
Squire, L. R. (1993). The organization of declarative and nondeclarative mem-
ory. In T.  Ono, L.  R. Squire, M.  E. Raichle, D.  I. Perrett, & M.  Fukuda
(Eds.), Brain mechanisms of perception and memory. From neuron to behavior
(pp. 219–227). New York: Oxford University Press.
Squire, L. R., & Alvarez, P. (1995). Retrograde amnesia and memory consolida-
tion: A neurobiological perspective. Current Opinion in Neurobiology, 5, 169–177.
Savage-Rumbaugh, E. S. (1986). Ape language: From conditioned response to symbol. New York: Columbia University Press.
Toni, I., de Lange, F.  P., Noordzij, M.  L., & Hagoort, P. (2009). Language
beyond action. Journal of Physiology – Paris, 102, 71–79.
Turella, L., Pierno, A. C., Tubaldi, F., & Castiello, U. (2009). Mirror neurons in
humans: Consisting or confounding evidence? Brain and Language, 108,
10–21.
Ullman, M.  T. (2004). Contributions of memory circuits to language: The
declarative/procedural model. Cognition, 92, 231–270.
Ullman, M. T., & Pierpont, E. I. (2005). Specific language impairment is not
specific to language: The procedural deficit hypothesis. Cortex, 41,
399–433.
van der Lely, H. K. J., & Stollwerck, L. (1996). A grammatical specific language
impairment in children: An autosomal dominant inheritance? Brain and
Language, 52, 484–504.
Wilson, B., Slater, H., Kikuchi, Y., Milne, A. E., Marslen-Wilson, W. D., Smith,
K., et al. (2013). Auditory artificial grammar learning in macaque and mar-
moset monkeys. Journal of Neuroscience, 33, 18825–18835.
4 Dialogues as Procedural Skills

Dialogues are important gateways to a linguistic community. In general, dialogues include all aspects of language and are therefore impor-
tant “arenas” for the practicing and the maintenance of linguistic skills.
Cognitively, dialogues rest on different memory functions; in particular,
procedural and declarative memory, but also on sensory and working
memory. There are, however, different types of dialogues which are dif-
ferently supported by the cognitive resources. First, dialogues differ in
complexity; for example, the dialogue between adult competent users of
language differs in many ways from the dialogue between mother and
child in a nurturing situation. In the former case, great demands are made
on both procedural and declarative knowledge, whereas declarative mem-
ory is not similarly taxed in the mother–child dialogue. In this chapter,
I will address early dialogues which take place in mother–child interac-
tions, prior to the development of declarative memory in the infant.
They are characterized by an exchange of pre-semantic utterances, which
apparently serve to strengthen the bond between infant and caregiver.
However, I will also address the type of dialogues, sometimes character-
ized as “small talk” by cognitively mature people. These are the “easy” dia-
logues with no great demands on intelligence or declarative knowledge.

In contrast, many dialogues between adult users of language medi-
ate exchange of new information, and therefore depend on declarative
memory and comprehension of symbolic reference. Such dialogues may
also serve the establishment of mutual understanding between members
of opposing parties on vital political and social issues. These are diffi-
cult dialogues which make great demands on linguistic and cognitive
resources and which sometimes serve as alternatives to use of violence
and discrimination, and therefore require a conceptual understanding of the debated issues (the emphasis here is on dialogues in political conflicts). In the words of John Searle (1983), there must exist a state of intentionality that is shared by the two parties. Such difficult dialogues fall outside the scope of this chapter. I will also disregard the type of dialogues which are conscious and deliberate attempts by two parties to solve practical problems, and the dialogues where one person
is interrogated by another. In these situations, language behavior of each
participant is heavily dependent on explicit and declarative memory.
My subject matter in this chapter is therefore the apparently easy-
running dialogues by dyads of individuals. These are all dialogues which
take place with some degree of automaticity, and I will therefore discuss
whether, and on what grounds, we may call them procedural dialogues.

4.1 Procedural Skills and Early Dialogues

In the preceding chapter, I gave a short description of the brain substrates
underlying the procedural memory system; now I will focus more on some
of its functional aspects. As pointed out, procedural memory is charac-
terized by slow and incremental learning (e.g., riding a bicycle, solving
a puzzle), and therefore the learning process is resource-demanding. The
result of procedural learning is a sensory-motor or cognitive skill; that
is, a procedural skill which per se is not resource-demanding. In other
words, execution of the procedural skill may run without noticeable
effort. Rather, the practicing of such skills depends on automatic process-
ing, which does not reduce the capacity for simultaneous performance of
other tasks.
We do not know whether and to what extent the learning of con-
tingent vocalizations between mother and infant is resource-demanding.
Therefore, a resource-demand criterion does not justify the term procedural for this type of dialogue. On the other hand, the execution of early interactions/dialogues does have the characteristics of procedural skills.
In general, procedural skills depend on a set of rules governing, for
example, vocal/linguistic production. Since Anderson (1976, 1983),
these have been conceptualized as if-then rules and make up a form of
knowledge representation. They are also said to form production systems,
underlying, for example, bicycling, or the sequencing of sounds in epi-
sodes of linguistic behavior. Most likely, turn-taking in dialogues also
depends on a particular production system, with signals that specify the
end of an utterance by one of the interlocutors and the beginning of an
utterance by the other party. Dialogues between child and caregiver, or
between two children, depend on rules of turn-taking and may run prior
to the learning of words, and apparently without any external goal or
incentive. The rules are generally implicit and not accessible to conscious
reflection. Also, dialogues between language-competent people may run with some degree of automaticity, and therefore exemplify pro-
cedural skills in the domain of language behavior. Actually, small talk
among adult people may also belong to this category (see below Sect. 4.8).
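The flavor of such a production system can be suggested with a toy sketch. The state variables, the threshold, and the two rules below are illustrative assumptions of my own, not a published model of turn-taking; the point is only the if-then format of the knowledge involved.

```python
# A toy if-then production system for turn-taking.
state = {"i_am_speaking": False, "partner_pause_ms": 250, "utterance_done": False}

RULES = [
    # IF I am speaking and my utterance is finished, THEN yield the floor.
    (lambda s: s["i_am_speaking"] and s["utterance_done"],
     lambda s: s.update(i_am_speaking=False)),
    # IF I am silent and the partner has paused long enough, THEN take the turn.
    (lambda s: not s["i_am_speaking"] and s["partner_pause_ms"] >= 200,
     lambda s: s.update(i_am_speaking=True, partner_pause_ms=0)),
]

for condition, action in RULES:
    if condition(state):
        action(state)

print(state)  # the listener has taken the turn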
Procedural knowledge and the capacity of procedural learning are often
spared, whereas declarative knowledge is markedly impaired in amnesic
patients. These are at a severe disadvantage when tested with tasks requir-
ing conscious recollection, but may be able to learn mirror drawing and
tracking a moving target on a pursuit rotor. (Remember the case of HM,
who had the hippocampus surgically removed in an attempt to treat his
epilepsy.) Episodic memories, which are commonly regarded as part of a declarative memory system, also develop later and may depend on more extended neural networks than procedural skills do.
Moreover, declarative memories are impaired prior to procedural or
implicit memories in old-age onset of dementia. On this account, it is
likely that more brain resources are invested in the former memory system,
and thus I assume that maintenance of declarative knowledge requires
more cognitive capacity than maintenance of procedural knowledge.
I will address any possible indications about the cognitive mecha-
nisms underlying the emergence and practicing of early dialogues in
prehistorical times. To what extent were such dialogues shaped and
maintained by brain mechanisms underlying the procedural memory
system? In my view, early dialogues, both in evolution and individual
development, have been based on production systems which are relatively automatized and do not reduce the capacity to perform other tasks; that is, production systems which permit, to some extent, divided attention and executive control.
The two interlocutors of a dialogue may differ in linguistic skills and
knowledge. For example, in the dialogue between a linguistically competent caregiver and a child who is in the process of articulating his/her first words, there exists a state of nonparity between the two parties. In
these cases, dialogues are running as a process of learning, and when suc-
cessful, also as the practicing of a skill. Vertical transmission of language
depends on nonparity between the interlocutors of a dialogue (mother–
child interactions). Among adults, however, there generally exists a state
of parity between the interlocutors; that is, no apparent difference in
linguistic competence exists between the two parties. For example, when
the two parties do not share a language, but become involved in the
development of a new communicative system (i.e., pidgin language), a
process of procedural learning and the establishment of a new skill is
started. Also when the interlocutors share a language, dialogues may have
the character of a procedural skill (small talk). Horizontal transmission
is associated with dialogues where there exists a state of parity between
the participant parties. However, horizontal transmission may also take
place when there exists a state of nonparity between the interlocutors; for
example, in dialogues which involve immigrants who become assimilated
in another language community.

4.2 The Evolutionary Role of Procedural Dialogues

The ability to engage in a dialogue has an important function both for the maintenance of language in society and for the transmission of
language between generations. The role and importance of the dialogue
in both evolution and acquisition of language cannot be overestimated.
Therefore, I assume there is a basis in evolution which makes the learning
of dialogues easy; that is, part of an instinct to learn. Thus, in agreement with Borjon and Ghazanfar (2014), I assume there are systems of behavior in subhuman primates, serving cooperative breeding, socializing and tension-reducing functions, that are pre-adaptations for linguistic dialogues in humans. These pre-adaptations warrant easy learning of
contingent vocalizations, and the development of verbal conversations
by humans.
The importance of dialogues in the maintenance of a language capacity means that initiation and participation must be easy. Hence there are some dialogues which are not resource-demanding and which therefore have many characteristics in common with other skilled behavior. In Sect. 4.7 below, I will present research which suggests an explanation of why dialogues are easy and which therefore justifies my characterization of these dialogues as procedural skills.
Behavioral systems in subhuman primates which serve as pre-adaptations to language may also be characterized as procedural skills. For example, mutual grooming by primates seems to be based on a production system which has a primary adaptive function of hygiene, but it also involves gentle touches and massaging in a harmonious interplay between conspecific animals. Like early dialogues between child and caregiver in humans, such interactions tend to affirm belongingness and
a close and intimate relationship between the interlocutors. Another,
perhaps more important, example is turn-taking by marmoset mon-
keys; interactions in both examples are most likely controlled by a set
of if-then rules.

4.2.1 Vocal Turn-Taking by Marmoset Monkeys

Turn-taking is a characteristic aspect of dialogues between human speakers. While person 1 speaks, person 2 attends; then person 1 stops to admit a vocal response from speaker 2, and when speaker 2 relinquishes his/her turn, speaker 1 takes his/her turn again, and so on. In perfect turn-taking, the two persons do not interrupt each other.
It is well-known that most infants are capable of turn-taking in interac-
tions with their caregiver. In studies of the evolutionary origin of language,
it will be of prime importance to find out whether turn-taking is also a
characteristic of communication by animals. As mentioned in Chap. 2,
Sect. 2.6, Takahashi, Narayanan, and Ghazanfar (2013) reported a study of turn-taking by marmoset monkeys which I will briefly review in this section. These animals do not have the linguistic capacities of humans, but phee calls (long-distance contact calls) serve to keep track of each other, in particular when they have no visual contact, and are sustained by a cooperative breeding strategy (see Fig. 4.1). Takahashi et al. registered
a large number of phee calls from 10 marmoset monkeys. These monkeys
were paired in various combinations and studied in a sound-attenuated

Fig. 4.1 Marmoset monkeys (Callithrix jacchus) are small animals of about 40 cm in length, weigh about 350 grams, and live up to 16 years. They have
relatively small brains, but are closely related to humans in terms of structure,
behavior and physiology. They are endemic to the Atlantic forest of north-
eastern Brazil, live in extended family groups and share with humans a coop-
erative breeding strategy. Their temporal coordination of vocal responses
resembles vocal interactions in human linguistic dialogues. By permission of
Inbound TeleSales. iStockphoto.com.
room where the animals were placed in opposite corners and separated by
an opaque curtain to prevent visual contact.
Phee calls from the two monkeys that were not separated by more than 30 seconds of silence were defined as "contingent exchange calls." There was zero overlap among these exchange calls, which agrees with general observations of interacting humans. By exchanging the time series of one animal in a dyad with the time series of a randomly selected animal in another dyad, they tested the hypothesis that the zero overlap was due to dependent vocal interactions, and not an artifact of very low rates of responding. They found that "marmo-
sets wait for the vocal exchange partner to finish calling before respond-
ing” (p. 2162). The consistent waiting period of 5–6 s was discussed as a
possible effect of a monkey resetting some planned interval when it hears the call of another. However, "the call interval duration of an indi-
vidual is, on average, significantly shorter (median = 5.63 s) during vocal
exchanges than when the same subject produces calls without hearing
an intervening call from another individual (median = 11.53 s, p value
< 0.001)” (p. 2163). They concluded that the marmosets take turns and
that one of them waits until the other marmoset has finished his call, and
then responds following an interval that cannot be explained by a reset-
ting of its natural rhythm.
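The logic of these measurements can be sketched in code. The call records below are synthetic; only the 30-second criterion for a contingent exchange is taken from the study.

```python
import statistics

def exchange_latencies(calls_a, calls_b, max_gap_s=30.0):
    """Latency from the offset of one animal's call to the onset of the
    other's next call, counting only contingent exchanges (gap <= 30 s).
    calls_a and calls_b are lists of (onset, offset) times in seconds."""
    latencies = []
    for _, offset in calls_a:
        replies = [on for on, _ in calls_b if 0 < on - offset <= max_gap_s]
        if replies:
            latencies.append(min(replies) - offset)
    return latencies

# Synthetic call records for one dyad (illustrative values only):
marmoset_1 = [(0.0, 2.0), (12.0, 14.0), (24.0, 26.0)]
marmoset_2 = [(7.5, 9.5), (19.6, 21.6), (31.4, 33.4)]

lat = exchange_latencies(marmoset_1, marmoset_2)
print(lat, "median:", statistics.median(lat))  # waiting periods of ~5-6 s
```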
To explain the dynamics of turn-taking, Takahashi et al. tested a model
of an oscillator-like mechanism by measuring the interval between mar-
moset 1’s first call and the marmoset 2’s first call, second call, third call,
and so on. Then, this procedure was repeated for marmoset 1’s second
call, and by calculating the cross-correlation between the two call time
series a degree of coupling was assessed. It turned out that this correla-
tion peaked at regular intervals, showing both that marmoset 1 produced his calls with consistent intercall intervals, and that marmoset 2's calls occurred between marmoset 1's calls and had a con-
sistent intercall interval. These results supported the coupled oscilla-
tor model and showed that calls were produced in between the other
marmoset’s calls (antiphase) with intervals of ≈12 s. Hence it is likely
that the periodicity of the one marmoset’s calls can be modulated by the
other marmoset’s calls.
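The cross-correlation analysis itself can be sketched as follows. The series are synthetic, with marmoset 2 calling in perfect antiphase to marmoset 1; the ~12-s call period is taken from the study, while the binning and the rest of the setup are simplifying assumptions.

```python
import numpy as np

# Bin each animal's call onsets into a 1-s grid and cross-correlate the
# two binary series; a peak near half the call period indicates
# antiphase (alternating) calling.
duration_s = 300
onsets_1 = np.arange(0, duration_s - 6, 12.0)  # marmoset 1: a call every 12 s
onsets_2 = onsets_1 + 6.0                      # marmoset 2: in antiphase

def to_binary_series(onsets, length=duration_s):
    series = np.zeros(length)
    series[onsets.astype(int)] = 1.0
    return series

s1, s2 = to_binary_series(onsets_1), to_binary_series(onsets_2)
xcorr = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
lags = np.arange(-len(s1) + 1, len(s1))
print("peak lag:", lags[np.argmax(xcorr)], "s")  # ~6 s: half the 12-s period
```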
The turn-taking mechanism by marmoset monkeys agrees in many
ways with cooperative turn-taking in human conversations. Both spe-
cies demonstrate that cooperating individuals act like coupled oscilla-
tors. However, vocal exchanges in human turn-taking are much faster,
within hundreds of milliseconds, whereas call exchanges by the mar-
mosets take place in a time scale of 3–5 s. The difference in the speed of
turn-taking may be interpreted as a difference in the amount of infor-
mation transmitted per units of time or in an interactional episode.
There are also similarities, because turn-taking may have a comforting
effect and serve as a means of stress reduction by both species. Takahashi
et al. also point out that embedded in the exchange of calls is a confirmation of gender and group identity (which may apply equally well to
contingent vocalizations by humans). Finally, it should be mentioned
that dialogues, like those described as call exchanges by marmosets
and “small talk” by humans, may have the function of reinforcing and
maintaining the availability of important channels of communications
(see Sect. 4.8 below).
Borjon and Ghazanfar (2014) pointed out that turn-taking is the
product of a cooperative breeding strategy by both marmosets and
humans, and that it demonstrates “convergent evolution of vocal coop-
eration without convergent evolution of brain size.” Old World primates
with considerably larger brains have not demonstrated instances of vocal
turn-taking. However, these primates have demonstrated other forms
of cooperative behavior which may also serve as pre-adaptation to lan-
guage. Thus, Wilson et  al. (2013) demonstrated that rhesus macaques
which shared a common ancestor with humans ~25 million years ago, were capable of more complex AG learning than marmoset monkeys, which shared a common ancestor with humans ~35 million years ago (see Chap. 3, Sect. 3.2).
marmoset monkeys may be pre-adaptations to language, but in view of
the observation reported by Wilson et al., it may be asked whether AG
learning and turn-taking abilities represent pre-adaptations to different
subcomponents of language. On the one hand, the ability underlying AG learning may have represented a pre-adaptation for grammar; on the other, turn-taking may have served as a pre-adaptation for linguistic interactions and social control functions.
4.2.2 Turn-Taking in Infant–Caregiver Interactions

Turn-taking, in the form of contingent vocalization between infants and
their caregivers, has been commonly acknowledged in developmental
psychology at least since Bowlby, and is still part of introductory texts.
Turn-taking is also a general characteristic of conversations by adults.
Because we will specifically address turn-taking by infants and mothers,
we shall take notice of a major difference in this behavior between marmosets
and humans. Regarding the former, there exists a state of parity between
the two individuals in the exchange of phee calls, whereas a state of non-
parity exists in contingent vocalizations between the infant and his/her
caregiver. Thus, a mother is generally expected to lead and initiate a dialogue with her child because she is the mature party, and therefore she is the one who makes explicit attempts to initiate a vocal interaction.
However, it is generally assumed that the infant exerts a considerable
influence on the running interaction with the mother.
Turn-taking is considered to be automatic and resource-free (Pickering
& Garrod, 2004), but depends on highly coordinated timing of responses.
Bornstein, Putnick, Cote, Haynes, and Suwalsky (2015) raised the ques-
tion of whether the key features of turn-taking (the "minimal gap, minimal overlap" norm) are universally practiced or community-specific. They examined the rates of mother–infant interactions and their
covariance with community and gender, and the relation between mater-
nal and infant rates of vocalizations. Moreover, they examined the degree
to which infant vocalization was contingent on maternal vocalization, and
vice versa. They observed naturalistic interactions between mothers and
infants at home in 11 countries (Argentina, Belgium, Brazil, Cameroon,
France, Israel, Italy, Japan, Kenya, South Korea, United States). The results
showed that rates of mother and infant vocalizations were uncorrelated
and highly community-dependent, and that the mothers were overall
more responsive to their infant’s vocalizations than vice versa. However,
these results also showed that “mothers nearly universally spoke to their
infants in response to their infant’s nondistress vocalizing” (p. 7).
In view of the great cultural differences in beliefs about early social
interactions and human development, community effects on rates of
vocalizations and on maternal and infant vocal contingencies were not
surprising. However, the Bornstein et al. study supported the view that
key features of turn-taking are universally present in maternal–infant
interactions, and because these key features are also observed in vocal turn-taking by monkeys, they may be interpreted as vestiges of the evolu-
tionary origins of language.
Turn-taking has also been observed in deaf children who are exposed
to sign language from birth (Emmorey, 2002). The “speaker” signs a
few words and the addressee similarly signs his/her answer, and, like
turn-taking by hearing babies, they follow a “minimal gap minimal over-
lap” norm. However, signed turn-taking differs from vocal turn-taking in
the way that the “speaker” cannot start the conversation unless he makes
sure that the addressee can visually attend to his behavior. The hearing
baby can initiate a conversation independently of visual contact, and therefore starting a dialogue seems easy for typically developing children. (As shown in the next section, this problem is more complex than it first seemed.)
Leclère et al. (2014) examined a number of mother–child interaction
studies by focusing on the concept of synchrony. Turn-taking is only one
of the terms which are used to refer to synchrony in mother—child inter-
actions. Other terms are mutuality, reciprocity, rhythmicity, harmonious
interaction, and shared affect. As in studies of turn-taking, they also
focused on the interactive partnership between child and caregiver with
the dyad as the unit of analysis. They examined 61 selected works in the
years between 1977 and 2013 and showed that synchrony has been assessed
by 1) global interaction scales for dyads, 2) specific synchrony scales, and
3) microcoded time-series analysis. For clinicians working with language-
impaired children, it may be worthwhile to take a look into these assess-
ment tools. They are mentioned here because the focus on synchrony as
defined by Leclère et al. does add something to my discussion of turn-
taking. Thus, verbal behavior, either spoken or signed, has a particular
rhythmicity. In Chap. 7, Sect. 7.2, you will see that hand movements
which conform to sign language have a frequency close to 1 Hz, whereas
random and nonlinguistic motor activity by infants has a much higher fre-
quency, around 2.5 Hz. In speech, humans generally produce syllables at a
frequency of 3 to 8 Hz. Rates above 8 Hz are generally incomprehensible
(Fujii & Wan, 2014). Due to differences in units (hand movements vs syl-
lables), the natural speed of production differs for the two modalities; however,
both have a selected rhythm. Therefore, mutual adjustment of spoken or
signed frequencies may also be considered as an aspect of synchrony in
linguistic dialogues. However, the content of this term is not new; thus,
in the preceding chapter I mentioned Shanker and King, who interpreted
communicative learning by chimpanzees as the result of "interactional
synchrony,” and in Sect. 4.7 you will see that Garrod and Pickering (2004)
use “interactive alignment” to explain why some dialogues are easy.

4.3 Signaling the Intention to Communicate

The fact that turn-taking is a universal aspect of mother–child interac-
tions may be said to support an instinct to learn language. The exact
mechanisms whereby this learning takes place are still a matter of specu-
lation, but in the following two sections I will present two attempts to
address this issue.
Scott-Phillips, Kirby, and Ritchie (2009) pointed out that linguistic
signals are both learned and symbolic. Therefore, these twin features
show that there is no a priori relationship between form and meaning,
and hence they asked “if meanings are not innately specified, then how
can individuals agree on what forms should refer to what meanings in
the first place” (p. 226). Giving credit to previous works on the prob-
lem, they argued that all of them implicitly assumed that “individuals
are able to detect that a given behavior is intended to be communicative”
(p. 226). To study the way signalhood can be signaled in situations where
the forms of a signal are not pre-specified by the researcher, they introduced an experimental game which I will describe in some detail below.
Their study shows how some dialogues may be initiated by a bootstrap-
ping process, but does not apply to the general problem of how linguistic
behavior can be distinguished, independent of intent, from nonlinguistic
behavior by infants as well as adults.
The evolution of language depended on face-to-face contact among
members of relatively small communities or tribes; hence, dialogues
were highly needed. To initiate a dialogue, early humans must have been
able to signal an intention to communicate. Scott-Phillips et al. (2009)
argued that previous research has generally avoided the problem of how
humans achieved a capacity to signal signalhood. First, previous research-
ers have had a tendency to predefine the communication channel, a solu-
tion which begs the question because “participants know that any inputs
that come to them via the communication channel are (almost certainly)
communicative in nature” (p.  226). Second, the roles of signaler and
receiver may be predefined, and thereby the receiver will easily be primed
to interpret any behavior from the signaler as communicative. Finally,
complete avoidance of the problem takes place when the possible forms of
a communicative signal are pre-specified by the researcher. Alternatively,
Scott-Phillips et al. argued that there are two logically acceptable ways
of explaining the capacity to signal “signalhood”: either it emerged from
noncommunicative behavior or it was created de novo.
To study the way people may signal signalhood in advance of a suc-
cessful dialogue, they presented “the embedded communication game”
on networked computers. In this game, there are two players, each of whom is presented with a "stick man" in a box containing 2 × 2 quad-
rants which were colored red, blue, green or yellow, and each of the
two players can move the “stick man” around from one quadrant to the
center of any of the other quadrants. The players have no interactions
with each other, and they lack shared information, except that they see
both boxes as well as the movements made by the other player, but each
player can only see the colors of his/her own box. The players press the
space bar to finish, whereupon the colors of both boxes are revealed to
both players. If they have finished on identical colors, they earn a score
of one point.
When both players press the space bar again, a new round begins. The
colors are now differently assigned to the four quadrants, but at least one
of the four colors appears in both boxes to make possible a score of one
point in the next round. The highest number of points scored in succes-
sion defines the pair’s final score. In this situation, the participants need
not only to agree on what behavior corresponds to what meaning, but
also to find a way to signal that a certain movement is a signal. Many
pairs failed to communicate; thus, the low incidence of success showed
that it was extremely difficult to co-opt their movements for the purpose
of communication.
The fact that some pairs were eventually able to score a point in every
round shows that signaling of signalhood was possible. Thus, some pairs
converged upon a system of movements that made possible the selection
of a default color whenever available. Scott-Phillips et  al. (2009) said
that “this strategy is not communicative, but it does allow pairs, once
they have converged on the same default color, to score at above chance
levels” (p. 239). In those cases when one of the players did not have the
default color, he/she performed some unexpected movements like oscil-
lations sideways, or looping around in the box. These movements did not have a specific meaning, but the recipient easily interpreted them as "no default color," whereupon the target changed to one of the other colors. Hence, these movements may be said to have served to change the
direction of attention in order to initiate a dialogue.
Scott-Phillips et al. concluded that the players, when successful, solved
the problem of signaling signalhood by “a bootstrapping process, and
that this process influences the final form of the communication system”
(p. 226). Similarly, it may be assumed that early humans found different
ways of initiating communication by trial and error in a bootstrapping
fashion.
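The scoring rule of the game, and the advantage of the "default color" strategy that successful pairs converged on, can be simulated in a few lines. This is a sketch under simplifying assumptions: quadrant colors are drawn independently at random, and the players' movements and signals are not modeled.

```python
import random

COLORS = ["red", "blue", "green", "yellow"]

def new_boxes():
    """Color the 2 x 2 quadrants of both boxes at random, redrawing
    until at least one color appears in both boxes (as in the game)."""
    draw = lambda: [random.choice(COLORS) for _ in range(4)]
    box_a, box_b = draw(), draw()
    while not set(box_a) & set(box_b):
        box_b = draw()
    return box_a, box_b

def play_round(strategy):
    box_a, box_b = new_boxes()
    return int(strategy(box_a) == strategy(box_b))  # one point if colors match

def prefer_red(box):
    """The 'default color' strategy: finish on red whenever it is available."""
    return "red" if "red" in box else random.choice(box)

rounds = 10_000
print(sum(play_round(prefer_red) for _ in range(rounds)) / rounds)
# clearly above what finishing on a random quadrant would yield
```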
In natural languages, there are other means of signaling signalhood,
both among early hominids and among humans today. The way we address another person in order to initiate a dialogue, or just to ask a question or make a short statement, is a way of signaling signalhood in an everyday setting. For some children, this may be an overly demanding task
that prevents important communication. In most linguistic societies,
there seems to be a social “address code” that must be learned in order to
participate in a dialogue, and the dialogue itself may include a number of
skills that are the products of enduring community practice. I believe that
dialogues in prehistorical times, and in particular settings also in modern
times, may have been ritualistic and served religious practices. Also, I will
add that “small talk” may include a number of implicit rules that govern
interactions among humans today.
In an asymmetric relationship, such as the one between mother and
child, it may seem that one party, the mother, initiates the dialogue. In
other words, in dialogues where there exists a state of nonparity between
the interlocutors, initiation may be the effect of a conscious decision on
the adult’s part. This means that the vertical transmission of language
is entirely the responsibility of the adult members of the community.
However, this is also an oversimplification, because the mechanisms
underlying communicative interactions between child and caregiver
mean that signaling signalhood may take place both ways, from caregiver
to child and vice versa. The gestural and vocalizing behavior of the child/
infant may “invite” the caregiver to join the dialogue, but this process is
subject to certain constraints, mentioned in the preceding chapter, Sect.
3.2. Both parties must possess what I have called an access code to early
dialogues.

4.4 Models of Language Acquisition in Dyads

I shall now present two models of language acquisition. The first
addresses the problem of how children are able to take part in dialogues
in the first place. It presupposes babbling, and asks how a stream of
phonemes is transformed into conceivable word forms. The second
one addresses the development of a new language which differs from
the participants’ own language. I present these models in this chapter
because: 1) they both describe a scenario of turn-taking, and 2) they
both describe the process of learning as incremental and rule-based.
I argue that both models pertain to the development of procedural dialogues.

4.4.1 From Babbling to Conceivable Word Forms

Although early dialogues do not require a large lexicon, lexical knowl-
edge will generally grow out of dialogic practices. However, the mature user of language may still engage in dialogues that run relatively independently of declarative/semantic knowledge. Can dialogues
run without an apprehension of lexical meaning? Turn-taking by infant
and caregiver, prior to the child’s learning of lexical meaning, involves an
affirmative answer. In other words, early procedural dialogues can run
without knowledge of lexical meaning. When infants start to take part in
a dialogue with their caregivers, they do so without this type of knowl-
edge. Some may think that early vocal (or gestural) interactions between
infant and caregiver do not constitute “dialogues.” I disagree, because
such interactions form the very beginning of language development, and
moreover they demonstrate important functional characteristics of the
procedural skills involved in mature dialogues. Finally, they indicate the
mechanisms whereby vertical transmission of language takes place.
Lyon, Nehaniv, and Saunders (2012) have investigated how the "transition from babbling to word forms" takes place, and they focused
on the acquisition of “rudimentary linguistic skills—characteristic of a
human child of about 6–14 months.” By this age, a child is generally
capable of articulating particular words often heard in interactions with
a caregiver without comprehending their lexical meaning. What starts as
meaningless babble is transformed into word forms which become rein-
forced by a caregiver or some other person. The researchers studied inter-
active language learning between a humanoid robot, named DeeChee,
and a human participant, whose speech is initially perceived by the robot
as a stream of phonemes. A random generator defined the stream of pho-
nemes uttered by the robot, which is initially heard by a human partici-
pant as just babble. The participant was instructed to listen to this babble
and to take notice of sound sequences that resembled words; his task was
to teach the robot the shapes and colors of particular objects. Turn-taking
was made possible by having the robot babble for four seconds, then
listen for four seconds before babbling again. A more realistic method
was also adopted; that is, the participant took his turn when the robot
blinked or smiled.
According to Lyon et al. (2012), the main assumptions underlying this
study were:

• DeeChee practices turn-taking in a proto-conversation.


• It can perceive phonemes, analogous to human infants.
• It is sensitive to the statistical distribution of phonemes, analogous to
human infants.
• It can produce syllabic babble, but without the articulatory constraints
of human infants, so unlike a human of this age it can produce conso-
nant clusters.
• It has the intention to communicate, so it reacts positively to reinforcement, such as approving comments (p. 7, online publication).

DeeChee's babble is incrementally affected by the participant's speech. Although it is still quasi-random, it is clearly biased towards the most frequently perceived syllables. The word forms that emerge from DeeChee's babble clearly show a sensitivity to the statistical distribution of syllables in the participant's speech. Salient content words were more likely to be acquired than function words.
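The biasing mechanism can be suggested with a minimal sketch. The syllable inventory, the initial unit weights, and the update rule below are illustrative assumptions, not the published architecture of DeeChee.

```python
import random
from collections import Counter

class Babbler:
    """Quasi-random babble whose syllable choice drifts toward the
    syllables most frequently heard from the human partner."""
    def __init__(self, inventory):
        self.weights = Counter({syllable: 1 for syllable in inventory})

    def hear(self, syllables):
        self.weights.update(syllables)  # slow, incremental learning

    def babble(self, n=6):
        units, w = zip(*self.weights.items())
        return "-".join(random.choices(units, weights=w, k=n))

robot = Babbler(["ba", "di", "ku", "re", "d", "go"])
print("before:", robot.babble())
for _ in range(50):                   # the participant often says "red"
    robot.hear(["re", "d"])
print("after: ", robot.babble())      # now dominated by "re" and "d"
```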
DeeChee served as a computer model of a human infant with the
language-learning capacities commonly observed in typically developing
children. DeeChee’s sensitivity to the statistical distribution of linguistic
elements in the participant speech gives evidence to these capacities.
The model shows how the acquisition of word forms takes place; it
does not show how the robot learns a particular code for labeling objects
and events. Vocal learning from babbling to more speech-like utter-
ances is subject to particular constraints, and apart from the statistical
distribution of syllables, the design included no variables of statistical
constraints. The model did not include a specific sensitivity statistical
structures below syllable level; that is, structural differences in phoneme
sequences, such as P languages vs NP languages (see Chap. 3, Sect. 3.2).
Moreover, the model is restricted in the way that other factors which
may affect the transition from babble to word-like forms such as prosody,
utterance length etc are not included in the model. Smiles and eye blinks
are included.

4.4.2 Learning an Artificial Language

Several researchers have argued that some sort of economizing principle is at work when new languages evolve. Procedural learning is likely to play a major role in the establishment of new communicative systems. The example presented here includes a series of dialogues between two or more people who develop a linguistic community. A common code that permits communicative interactions between individuals who initially have no common language will require a production system which economizes available cognitive resources. Selten and Warglien (2007) argue for a sort of economizing principle in communication, illustrated in a study of the emergence of a new simple language. Pairs of subjects participated in an experimental game where the means of communication were new, and in the beginning the subjects had no common language available. Instead of focusing on acquisition, the researchers focused on the emergence of an artificial language; that is, a language of new codes for the labeling of objects.
In the aforementioned study, a list of geometrical figures was pre-
sented on a computer screen, varying in shape, inserts and sometimes
color. The subject was required to assign a message to each figure by
selecting letters from a string of permissible letters. The two participants
interacted anonymously in pairs, and both faced the same set of fig-
ures and the same list of permissible letters. Thus, the situation was a
dialogue between two participants who did not know each other. In
each period, a figure and a subject were randomly chosen. The message
specified by the letter code was then transmitted to the other player.
“The transmission is successful if and only if the messages specified by
the codes are the same. A payoff is obtained for a communication suc-
cess, but the letters have costs that must be borne by the sender. After
each period both players receive feedback on the chosen figure and the
messages specified by the code of the receiver. After receiving feedback
they can change their code” (p. 7362). Thus, in contrast to declarative
learning, which is generally fast and specialized for one-trial learning, the
acquisition taking place in this situation is generally slow and depends
on incremental learning.
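The logic of a single period, as described in the quoted passage, can be sketched as follows. The payoff value, the letter cost, and the simple imitation rule for code revision are illustrative placeholders rather than Selten and Warglien's actual parameters:

```python
import random

FIGURES = ["circle", "square", "triangle"]
LETTERS = "abc"
PAYOFF = 10       # reward for a successful transmission (placeholder value)
LETTER_COST = 1   # per-letter cost borne by the sender (placeholder value)

def new_code():
    # Each player starts with a private, arbitrary two-letter code per figure.
    return {f: "".join(random.choices(LETTERS, k=2)) for f in FIGURES}

def play_round(code_a, code_b):
    """One period: a figure and a sender are chosen at random; transmission
    succeeds iff both players' codes assign the same message to the figure."""
    figure = random.choice(FIGURES)
    sender, receiver = random.sample([code_a, code_b], 2)
    message = sender[figure]
    success = message == receiver[figure]
    sender_payoff = (PAYOFF if success else 0) - LETTER_COST * len(message)
    if not success:
        # Feedback-based revision: the receiver imitates the sender's
        # message (cf. the leader-imitator pattern noted below).
        receiver[figure] = message
    return success, sender_payoff

code_a, code_b = new_code(), new_code()
for _ in range(30):
    play_round(code_a, code_b)
print(code_a == code_b)  # the two codes typically converge after repeated feedback
```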
Four versions of the experiment were undertaken, varying the number of figures, the repertoire of permissible letters, and their costs. The problem
presented is: to what extent is a common code acquired? An insufficient
repertoire of letters seemed to be a serious obstacle for the attainment
of a common code. Also, the degree of role symmetry between the two parties was important. Analysis of the pooled data seemed to show that some communication took place between a leader and an imitator, whereby mismatches arising from simultaneous adjustments to each other’s codes were avoided.
A common code could have a compositional grammar or a non-compositional grammar, or it could simply be ungrammatical. A compositional grammar required a mapping of features to letters or strings of letters, arranged in a fixed order of features. The authors show that grammars do not matter much in stable environments (with a small set of figures and letters), but “compositional grammars offer considerable advantages in novel environments” (p. 7363).
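The difference between the two kinds of code can be illustrated with a toy mapping; the feature letters below are invented for the example:

```python
# A compositional code maps each feature value to a letter and concatenates
# the letters in a fixed order (shape, then color), so figures receive
# systematic names; a non-compositional code would instead memorize an
# arbitrary string for each whole figure.
SHAPE = {"circle": "c", "square": "s", "triangle": "t"}
COLOR = {"red": "r", "blue": "b"}

def compositional_name(shape, color):
    return SHAPE[shape] + COLOR[color]

print(compositional_name("square", "red"))     # "sr"
# A never-before-seen combination is nameable without new learning,
# which is the advantage in novel environments noted above:
print(compositional_name("triangle", "blue"))  # "tb"
```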
Selten and Warglien’s coordination task offered a simultaneous challenge to two subjects, and a successful solution of this task required the acquisition of a common code. In other words, the coordination task invited a dialogue between pairs of subjects, and this dialogue fully depended on the acquisition of a new and common code. Once this code was established, the two subjects were practicing the same cognitive skill, which has some characteristics in common with other examples of procedural knowledge; for example, the Weather Prediction Task (WPT), which has been used as an instrument for studying procedural learning (Kemény and Lukács, 2010) and, like Selten and Warglien’s coordination task, includes feedback-based incremental learning (see Chap. 8, Sect. 8.3.2). Also, both tasks involve context-dependent, stimulus-response, rule-like relationships. Therefore, I believe that learning in Selten and Warglien’s coordination game has most likely been mediated by the procedural memory system (Ullman and Pierpont, 2005).
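Since the WPT will recur in Chap. 8, a minimal sketch of its feedback-based, incremental character may be useful here. The cue probabilities and the delta-rule update are illustrative assumptions, not the parameters of the published task:

```python
import random

CUE_PROBS = {"c1": 0.8, "c2": 0.6, "c3": 0.4, "c4": 0.2}  # P(rain | cue card)
weights = {cue: 0.0 for cue in CUE_PROBS}  # learner's association strengths

def run_trial(learning_rate=0.05):
    # Present one to three cue cards; the outcome is probabilistic.
    cues = random.sample(sorted(CUE_PROBS), k=random.randint(1, 3))
    p_rain = sum(CUE_PROBS[c] for c in cues) / len(cues)
    outcome = 1.0 if random.random() < p_rain else 0.0
    # Feedback-based incremental update: no single trial is decisive,
    # but the weights gradually come to track the probabilistic structure.
    error = outcome - sum(weights[c] for c in cues)
    for c in cues:
        weights[c] += learning_rate * error

for _ in range(2000):
    run_trial()
print(weights)  # association strengths roughly ordered c1 > c2 > c3 > c4
```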
In natural languages, differences in age, social status, and power are
pervasive; hence, dialogues between parent and child, between master
and novice, and between native and immigrant tend to be asymmetric. In
these cases, asymmetry means a state of nonparity; that is, the interlocu-
tors differ with respect to linguistic competence. However, imitation may
also take place in dialogues where there exists a state of parity between the
interlocutors; the two parties may take different roles that permit one to imitate the other. I shall therefore add a few comments on
the role of imitation in dialogues.
As pointed out in Chap. 3, the procedural memory system underlies
not only aspects of rule-learning, but also the acquisition of sensory-motor
skills that are essential in imitative behavior. The role of asymmetry in Selten and Warglien’s coordination game shows the incidence of imitation when one player adjusts his or her code to that of the other player.
This adjustment, which may be an element of any linguistic dialogue, is most likely controlled by brain mechanisms underlying the procedural system (Buccino et al., 2004). In dialogue theory, the role of imitation has been commonly acknowledged. Thus, Pickering and Garrod (2004) stressed that imitation is conducive to the conversational alignment of
interlocutors. This alignment may serve to confirm a mutual relationship,
as in turn-taking, and thereby increase the benefits of communication
with a minimum of memory and articulation costs. I assume that a mini-
mization of memory costs downplays the role of declarative memory,
whereas procedural memory is more heavily taxed.
For the pre-linguistic child, the ability to imitate sounds and gestures
is probably a major precondition for the initiation of a dialogue with his/
her caregiver. Also, for adults who do not share a common language, any attempt to start a dialogue will generally include an element of imitation. The emergence of pidgin languages may be compared to the acquisition of a common code in Selten and Warglien’s study. Thus, although pidgin languages give rise to entirely new vocabularies, their early development may have depended on some degree of imitation, and new lexical items may have come about by mutual adjustments of vocal responses. I believe
that communication in some ancient societies may initially have been
based on relatively small vocabularies, and yet have served an instrumen-
tal function for the group or tribe.

4.5 Language Games and Pidgin Languages


As mentioned in the Introduction, the concept of language games was
introduced by Ludwig Wittgenstein in his Philosophical Investigations
(1958). His description of a language game has many characteristics
in common with the present conception of a dialogue as a procedural
skill. Does this mean that Wittgenstein’s contribution to the philosophy
of language also has relevance to theories of language evolution? He
was greatly influenced by Augustine’s Confessions, wherein language is
understood as words joined by action. This concept of meaning that is involved in a language game belongs to an “idea of language more primitive than ours.” In my reading of Wittgenstein, the words “block,”
“pillar,” “slab” and “beam” were not parts of declarative memory. The
learning of words-joined-by-action was the learning of a procedural
skill. The game, which included a builder and a helper, may be consid-
ered a simple dialogue.
Some dialogues are similar to language games in the way that they are
independent of large vocabularies. A procedural dialogue like a language
game can be based on a very small vocabulary, and in this sense I will
equate the present conception of a simple dialogue with Wittgenstein’s
conception of a language game. Such dialogues, instead of depending on
an extended semantic development, will depend on a closed set of easily
acquired action rules. Therefore, this type of dialogue appears early in development in many children, and may have had a special role in the
evolution of language.
As pointed out, I consider dialogues that are expressions of pro-
cedural skills to constitute a subclass of dialogues. In a similar way,
Wittgenstein claimed that a language game is only a small segment
of the whole of language. However, it could be complete in itself and
constitute the entire language of a tribe, an assertion that shows the
relevance of language games to the evolution of language. If modern
research on systems of memory had been available to Wittgenstein,
he might have seen a certain similarity between a language game and a
procedural skill in language. A language game, like the one described
as communication between a builder and a helper, is obviously con-
text-dependent, and as pointed out in Chap. 3, procedural skills are
also context-dependent; that is, they are typically learned as stimulus-
response, rule-like relationships.
Dialogues that I consider to display procedural skills often have a
particular instrumental function; for example, dialogues in a bartering
situation. Therefore, such dialogues will be context-dependent as well
as specific to a behavioral domain. Following Wittgenstein, people may
have separate language games for separate behaviors, like walking, run-
ning or fighting. This does not mean that they also have the capacity to
imagine these behaviors. What these people lack is not the words, but the
behaviors and reactions that are part of a game of imagination. Thus, in Wittgenstein’s Investigations, behavioral domain-specificity seemed to be the most critical aspect of any language game.
The evolution of language beyond language games has permitted the
use of words or signs across behavioral domains. This involves a transition
from language behavior that is mostly dependent on the procedural sys-
tem into a language behavior that is equally dependent on the procedural
and the declarative system.

4.6 Dialogues and the Language-Impaired Child
van Balkom, Verhoeven, and van Weerdenburg (2010) showed that children with delayed language development had difficulties in turn-taking with their caregivers. The conversations with these children were characterized by a less “facilitative style” and by few contingencies serving the initiation of vocal responses over time. Similarly, Hudson, Levickis, Down, Nicholls, and Wake (2015) have shown that maternal responsiveness to children at age 1;6 predicts language outcomes at three and four years of age. It is likely, therefore, that language impairments, in some of these children at least, have their origin in inadequate dialogues with their caregivers in infancy and early childhood. In Chap. 8, I will discuss the clinical manifestations of these impairments.
The initiation of dialogues between a caregiver and a child is a critical
factor for the vertical transmission of language. Moreover, the learning
constraints discussed in the preceding chapter form a safeguard to this
transmission. At the same time, the genetic variability underlying lan-
guage development also involves instances of failure; that is, individuals
with great difficulties in learning language. Children who, for different
reasons, have had an inadequate (or entirely missing) vocal/manual inter-
action with their caregiver will have a delayed acquisition of language
(however, the opposite is not necessarily true). These children have rarely
been able to initiate and to take part in dialogues that are the main learn-
ing arenas for language acquisition. They must be able to make use of
language in communicative interactions with peers, caregivers and teach-
ers, but some children cannot meet these challenges properly. In general,
children with delayed language acquisition due to inadequate interaction with a caregiver rarely initiate a dialogue themselves, and in many cases
they do not know how to address another person unless he/she is well-
known to the child. With some effort, a language-delayed child may take
part in a short dialogue, often characterized by short sentences with a
hesitant manner of expression. For an outside observer, the child’s lan-
guage seems highly deficient and immature.
It is difficult to distinguish between children with delayed language
acquisition (some of them catch up with their peer group) and those with
permanent language impairment. Also it may be difficult to distinguish
shyness and social passiveness from delayed or impaired language. Tests
of procedural learning ability may provide possible diagnostic indicators
of developmental language impairments (see Chap. 8). Here I will only
stress the importance of attending to the child’s ability to initiate and
take part in dialogues, not only with an adult but also with members of
the peer group. Finally, it should also be stressed that early interaction between an infant and a caregiver is an important factor in language acquisition, and that inadequate or missing early interaction is a risk factor for developmental language impairment.

4.7 Why Some Dialogues Are Easy


As important gateways to a linguistic community, dialogues should be
easily learned and practiced. Linguistic communities also depend on
dialogues, and therefore this gateway to language competence must be
safeguarded in evolution. Most children, despite sensory and motor dis-
orders, take part in early forms of dialogues and do acquire a language.
Thus, children “invent” dialogues under very unfavorable conditions.
For example, deaf children often invent simple gestural systems called
home signs to communicate with the hearing members of the family. Deaf siblings or twins who invent home signs have dialogues with each
other that are practically incomprehensible to others. Their dialogues
represent procedural skills that evolved in the interaction between two
people (or between a few family members). I mention home signs in this
chapter because of the way they have developed from poor interactive
conditions in isolated deaf families where the children lack normal exposure to speech or sign language. They formed a kind of pre-linguistic dialogue which was developed through the family members’ own efforts, and certainly without formal instruction. Hence there may have been wired-in abilities that worked to their advantage in learning to communicate with their hands. It is the seemingly easy way in which early dialogues develop that makes these observations, such as those from Nicaragua, important. Of
course, there are other easily learned dialogues, which I have described
above; for example, early vocal interactions between mother and child (in
a hearing family). Other dialogues within specific behavioral domains,
like simple types of bartering, may require greater effort to learn, but
once acquired they are easily practiced by the interlocutors. In general,
there are dialogues which run with a certain degree of automaticity, and which therefore draw little support from declarative memory. Instead, they exemplify procedural skills.
Some years ago, Garrod and Pickering (2004) set out to explain
why “conversation is so easy.” In view of what has been said about SLI
children, I think Garrod and Pickering’s statement may be changed to
assert that “conversation is so easy for typically developing children.”
In their view, dialogues are so easy because of the “processing mecha-
nism that leads to alignments of linguistic representations between
partners” (p. 8). They say that conversational partners generate their
utterances on the basis of what they have just heard, and by asking a
question, the speaker has already specified “the high level goal for his
addressee’s next utterance” (p. 9). Garrod and Pickering’s description
of interactive alignment is comparable to Selten and Warglien’s coordination game, wherein one player adjusts the code to that of the other.
The latter work, however, is more specific on the learning of a common
code, which is described as incremental and rule-governed. Therefore,
I will argue that it is the building of dialogues as procedural skills that
makes them so easy for typically developing children and adults. In
dialogues, therefore, partners build an implicit common ground for communicative interactions, ensuring a parity of input and output messages. In a more recent work, Menenti, Pickering, and Garrod
(2012) argue that interlocutors prime each other at different levels of
representations.
By emphasizing the role of priming, Menenti et al. seem to have focused on a type of dialogue with a neural substrate that extends
beyond the procedural memory system. Both priming and proce-
dural memory have been classified as nondeclarative memory systems.
However, the two are controlled by different brain structures; that is,
the procedural system is rooted in the frontal/basal ganglia circuits,
whereas lexical semantic priming is controlled by the middle/superior
temporal gyrus. In agreement with several other researchers, Garrod
and Pickering, as well as Menenti et al., pointed out that many social
behaviors are automatically triggered by the perception of action in
others and hence, there must be “parity of representations used in
speaking and listening.” In agreement with the motor theory of speech
perception, they said that there must be a mechanism which links per-
ception and action in such a way as to mediate an alignment of rep-
resentations. As mentioned in the Introduction and Chap. 3, such a
mechanism is documented in the literature as mirror neurons in the
premotor cortex of macaque monkeys; that is, the F5 area, which has been considered the homolog of Broca’s area in humans. I also mentioned that this area can be subdivided into Brodmann areas 44 and
45. Brain imaging studies have shown that area 45 is activated during
language output, and area 44 is activated during nonlinguistic actions
(recall my discussion in Chap. 3, Sect. 3.5).
Mirror neurons have not been directly recorded in human brains,
but Rizzolatti and Craighero (2004) have reported an equivalent system
which is activated when people imitate actions, and Corballis (2010)
has argued that hominid precursors to human language have been based
on mime. Again, I will stress that the role of imitation in language
acquisition by human infants is undisputable. No wonder that in the
human brain there is a remarkable overlap between imitation and lan-
guage, extending beyond Broca’s area to the superior temporal sulcus
and Wernicke’s area in the temporal cortex. Thus, neural mechanisms
underlying imitation in humans, both linguistic and nonlinguistic imi-
tation, are well-documented in the research literature. Therefore, the
prospects of finding a neural basis for the alignment of representations
in dialogues are good.
Garrod and Pickering argued that the alignment of representations in dialogues is a wired-in ability of human subjects. I will add, however,
that there must be special learning constraints which mediate the acqui-
sition of this ability. Moreover, the automaticity of such alignment, as
shown in the rapid and often effortless turn-taking by two partners in a
dialogue, bears witness to a procedural skill. Therefore, the full substra-
tum underlying the “easy dialogues” must include the basal ganglia in
addition to the premotor and temporal cortices mentioned above, and
the interactional alignment that takes place in these dialogues is the result
of incremental and context-dependent learning. In typically developing
children, this learning process is generally successful, and hence there
must exist a learning readiness which invites early participation in dia-
logic communication.

4.8 Small Talk: Maintenance of Communicative Channels
Small talk is an informal type of conversation which, according to
Garrod and Pickering (2004), has “the effect of aligning social repre-
sentations between pairs of interacting individuals” (p. 10). I will add
that small talk is easy and runs with some degree of automaticity, and
that “interactive alignment” is an important characteristic of this type
of dialogue. Also, there may be implicit rules for the ways small talk
is performed, and, therefore, I have compared them to early dialogues
between a child and his/her caregiver. For both types, the maintenance
of contact/social relationship means more than selection of a topic for
conversation, and I have, therefore, included small talk in the category
of procedural dialogues.
Does small talk have any cognitive and biological function? Adults and
generally competent users of language take part in small talk; therefore,
it generally does not serve language acquisition. According to Bickerton
(2014) small talk serves to maintain important communicative channels.
Another adult serves not only as the other party in the dialogue, but also as a channel of communication whose accessibility is verified in small talk. By engaging in small talk, we confirm the availability of important communicative channels, while we also reinforce and improve these channels.

4.9 Concluding Remarks


In the preceding three sections, I have talked about early dialogues in
a developmental context. In ontogeny, at least one of the parties will
generally have some declarative knowledge which he/she shares with the
community, and when accessed this knowledge may change the course of
linguistic interactions. In phylogeny, access to such knowledge cannot
be taken for granted, and may have had less impact on dialogues in early
linguistic societies. However, dialogues must have been present during
the very birth of a new language; for example, the use of home signs in
the evolution of a new sign language, and I assume they have occurred in
the protolanguages. Actually, the first human languages may have been
instantiated in the form of dialogues.
Turn-taking in communicative expressions among animals is well-
known, but without a grammar that permits a relatively complex phrase
structure, we cannot conclude that such behavior represents linguistic dia-
logues. However, the brain structures in monkeys, which are homolog to
structures underlying dialogic communication in humans, are well-known
and show that neurobiological apparatus for linguistic communication
is in place in subhuman hominids. At the same time, both animals and
humans acquire procedural skills; thus, neurobiological as well as cognitive
research indicates a continuous transition from communicative interac-
tions between animals to linguistic dialogues among humans.
As pointed out above, dialogues which, in this chapter, have been char-
acterized as procedural skills, run with a high degree of automaticity. Turn-
taking, mutual priming of responses, alignment of responses without a
deliberate search in semantic memory, dependence on implicit rules which
apply quickly: all are critical characteristics of these dialogues. Moreover,
these dialogues form the “bricks and mortar” of new linguistic societies.
Dialogues that developed and evolved as procedural skills rest on
wired-in abilities of human subjects. Therefore, such dialogues warrant an effective transmission of a language from one generation to another. Also, dialogic experience by one individual gives support to a diversity of
linguistic interactions with other individuals in the community. Finally,
interactions between people in different linguistic communities give sup-
port to language change and the rise of new languages.

References
Anderson, J. R. (1976). Language, memory and thought. Hillsdale, NJ: Lawrence
Erlbaum Associates Inc.
Anderson, J. R. (1983). The architecture of cognition. Harvard: Harvard University
Press.
Bickerton, D. (2014). More than nature needs: Language, mind and evolution.
Cambridge, MA: Harvard University Press.
Borjon, J. I., & Ghazanfar, A. A. (2014). Convergent evolution of vocal coop-
eration without convergent evolution of brain size. Brain, Behavior and
Evolution, 84, 93–102.
Bornstein, M. H., Putnick, D. L., Cote, L. R., Haynes, O. M., & Suwalsky, J. T.
D. (2015). Mother-infant contingent vocalizations in 11 countries.
Psychological Science, 26(8), 1272–1284. doi:10.1177/0956797615586796.
Buccino, G., Vogt, S., Ritzl, A., Fink, G. R., Zilles, K., Freund, H.-J., et al.
(2004). Neural circuits underlying imitation learning of hand actions:
An event-related fMRI study. Journal of Cognitive Neuroscience, 16,
114–126.
Corballis, M. C. (2010). Mirror neurons and the evolution of language. Brain
& Language, 112, 25–35.
Emmorey, K. (2002). Language, cognition, and the brain: Insights from sign lan-
guage research. Mahwah, NJ: Lawrence Erlbaum Associates.
Fujii, S., & Wan, C. Y. (2014). The role of rhythm in speech and language reha-
bilitation: The SEP hypothesis. Frontiers in Integrative Neuroscience, 8, 777.
Garrod, S., & Pickering, M. J. (2004). Why is conversation so easy? Trends in
Cognitive Sciences, 8, 8–11.
Hudson, S., Levickis, P., Down, K., Nicholls, R., & Wake, M. (2015). Maternal
responsiveness predicts child language at ages 3 and 4 in a community-based
sample of slow-to-talk toddlers. International Journal of Language &
Communication Disorders, 50, 136–42.
Kemény, F., & Lukács, Á. (2010). Impaired procedural learning in language impairment: Results from probabilistic categorization. Journal of Clinical and
Experimental Neuropsychology, 32, 249–258.
Leclère, C., Viaux, S., Avril, M., Achard, C., Chetouani, M., Missonier, S., et al.
(2014). Why synchrony matters during Mother-Child interactions: A sys-
tematic review. PLoS One, 9(12), e113571.
Lyon, C., Nehaniv, C. L., & Saunders, J. (2012). Interactive language learning by robots: The transition from babbling to word forms. PLoS One, 7, e38236.
Menenti, L., Pickering, M. J., & Garrod, S. (2012). Toward a neural basis of
interactive alignment in conversation. Frontiers in Human Neuroscience, 6,
185.
Pickering, M.  J., & Garrod, S. (2004). Toward a mechanistic psychology of
dialogue. Behavioral and Brain Sciences, 27, 169–190.
Rizzolatti, G., & Craighero, L. (2004). The mirror neuron system. Annual
Review of Neuroscience, 27, 169–192.
Scott-Phillips, T. C., Kirby, S., & Ritchie, G. R. (2009). Signalling signalhood
and the emergence of communication. Cognition, 113, 226–233.
Searle, J. R. (1983). Intentionality: An essay in the philosophy of mind. Cambridge:
Cambridge University Press.
Selten, R., & Warglien, M. (2007). The emergence of simple languages in an
experimental coordination game. Proceedings of the National Academy of
Sciences of the United States of America, 104, 7361–7366.
Takahashi, D. Y., Narayanan, D. Z., & Ghazanfar, A. A. (2013). Coupled oscil-
lator dynamics of vocal turn-taking in monkeys. Current Biology, 23,
2162–2168.
Ullman, M. T., & Pierpont, E. I. (2005). Specific language impairment is not
specific to language: The procedural deficit hypothesis. Cortex, 41,
399–433.
van Balkom, H., Verhoeven, L., & van Weerdenburg, M. (2010). Conversational
behaviour of children with developmental language delay and their caretak-
ers. International Journal of Language & Communication Disorders, 37,
295–319.
Wilson, B., Slater, H., Kikuchi, Y., Milne, A. E., Marslen-Wilson, W. D., Smith,
K., et al. (2013). Auditory artificial grammar learning in macaque and mar-
moset monkeys. Journal of Neuroscience, 33, 18825–18835.
Wittgenstein, L. (1958). Philosophical investigations. (The English text of the
third edition). Englewood Cliffs, NJ: Prentice Hall.
5 Evolving Meaning in Language

This chapter takes up an array of problems that are among the most difficult in all fields of research related to language. Meaning in language belongs
to the subcomponent of semantics and has been discussed within differ-
ent conceptual frameworks. Within formal semantics, it is argued that
meaning in language is propositional; for example, the truth value of “the glaciers in Greenland are melting” determines the meaning of the
proposition. A proposition links the “world” to the truth value in the
mind of the speaker; thus, formal semantics has provided a system for
analyzing propositions to deal with problems of meaning in language.
It may be expected that a chapter about meaning in language should deal in more detail with formal semantics and propositional meaning.
Thus, Fitch (2010) stressed that “propositional meaning is another dis-
tinct design feature of language: a central component of semantics that
had to evolve for language in its modern sense to exist” (pp. 121–122).
He argued that music possesses both “phonology” and “syntax,” but can-
not express propositional meaning. This is a feature which belongs to
human language only.
The problem is whether analysis of propositions as described in formal
semantics presupposes a metalinguistic ability which is associated with evolutionary and developmental literacy. Therefore, formal semantics is
constrained to modern languages, whereas its relevance to a theory of
evolution is highly debatable. Also, further discussion of formal semantics
will require an exposition of rules of logic, which will hardly serve the
objectives of the present work. Instead I shall describe, in greater detail,
two conceptions of meaning in language: Meaning as intention—which
is attributed to acts or beliefs, both of which are characterized more or
less by temporary states of affairs—versus meaning as knowledge—a con-
tinuing state of affairs. Knowledge can be transmitted between people,
whereas intentions are not “object-like” and cannot easily be transferred
between people. Knowledge involves concepts and categories, which will
form part of the subject matter for the present chapter. Based on this
interpretation of “meaning” I shall also address the way meaning has been
communicated in pre-literate languages.
Following the interpretation of meaning as knowledge, I shall turn to
cognitive studies of concepts and categories, and I will focus on differences
between animal and human learning of concepts. I will also present a few
works on the neurobiology of lexical meaning that address the neural bases
of conceptual knowledge by humans. Finally, I will turn to the dynamics
of conceptual/semantic learning by humans, and explain why the organi-
zation of communicative networks (collaborative structures) is important.

5.1 Meaning as Intention


This interpretation of meaning has been advanced by Grice (1957) and
discussed by several scholars, most recently by Scott-Phillips (2015).
Grice distinguished between natural and non-natural meaning. In the
former, a signal A consistently predicts an effect B, whereas the latter
is used in communication when the signaler intends to communicate
a certain message to an audience/receiver; that is, non-natural mean-
ing is linked to an “intention” in the speaker. Natural meaning is clearly
associated with indexes in Peirce’s classification of communicative signs
(see Chap. 3, Sect. 3.1.1); thus, signals which have natural meaning are
sustainable and enduring, whereas non-natural meaning is a temporary
state of affairs.
The criteria of non-natural meaning are as follows: 1) the signaler intends to make the audience believe X, or to produce a particular response in the audience; 2) the audience must recognize that the signaler has this intention; and 3) the signal has the intended effect on the audience’s
beliefs or behavior. Here the critical term is “intention,” a wish to create
a belief or change of behavior by the audience in a particular situation
Y. Grice’s cooperative maxim requires that the signaler and the receiver
share a knowledge about the world and a particular channel of commu-
nication, but the intention may be a temporary state of affairs and is not
commonly shared by other people. The question is whether non-natural
meaning can be associated with the meaning of symbols (symbolic refer-
ence) in Peirce’s classification of signs. I think not, because signals that
have non-natural meanings are linked to intentions and are therefore
transitory and context-dependent. In short, Grice’s cooperative maxim is
mainly relevant for pragmatics, whereas its relation to the more general
notion of meaning in language is debatable.

5.2 Meaning as Knowledge


To reinforce the position taken above, I make a distinction between the
meaning of lexical items (or propositions) which have continued exis-
tence and can be repeatedly expressed versus the meaning of events which
are mostly temporary, but may also reoccur in particular circumstances.
Written words are objects in their own right. A vocally expressed word
is also an object in the sense that it may be repeatedly expressed, and we
may discuss the way it is used—whether it is correct or false in relation
to its linguistic and communicative context. A vocal or manual expres-
sion, which cannot be repeatedly expressed, and which cannot be judged
with respect to grammatical fitness, is a passing event. The critical reader
will argue that an alarm call by marmoset monkeys can be repeatedly
expressed, and my answer is that the alarm call will be repeated only
under specific stimulus conditions (the threat of a predator). An alarm call
cannot be grammatically evaluated.
We can now discuss the meaning of “meaning” in language on the
assumption that language consists of “word-objects.”

5.2.1 Lyons’ Discussion of the “Meaning of Meaning”

“Meaning” is a vocabulary word which is used in different ways in colloquial English. So what is the “meaning of meaning” in everyday language? Lyons (1977), a pioneer researcher in the field, raised this question
in the first volume of his book Semantics. He called attention to the vari-
ous senses of “meaning” in everyday use of language, and he asked the
reader to consider a number of sentences which include the word “mean-
ing;” for example:

a. What is the meaning of “sesquipedalian”?
b. I did not mean to hurt you.
c. He never says what he means.

These sentences exemplify distinguishable meanings of “meaning.” The way we may interpret “meaning” in sentence (a) comes close to the
preferred meaning of the term in Lyons’ Semantics, as the meaning of a
lexeme or a vocabulary word. Thus, meaning is attributed to an item in
a written list rather than an action in a communicative setting; in other
words, the lexeme can be decontextualized from this setting. Although he
focused on lexemes, I think his interpretation of meaning in sentence (a)
implies a reference to linguistic structure, and therefore, his interpreta-
tion of “meaning” may be cast more generally as the meaning of any lin-
guistic phrase, sentence or text. (Thereby, this concept of “meaning” may
be said to include propositional meaning.) In sentence (b), “mean” does
not describe a particular wording, but involves a statement of “inten-
tion,” and therefore “mean” does not imply a specific linguistic structure
(“word-objects”). In sentence (c) “means” is used to describe the reli-
ability of someone’s utterances (which is not an object of grammatical
judgement).
Lyons admitted, however, that the meaning of meaning in sentence
(a) is merely expressed in an intuitive or pre-theoretical sense. Further
elaborations of this concept were needed. Let us consider the preferred
sense of meaning in Lyons’ Semantics. What more could we say about
the meaning of a lexeme? In semiotics, a lexeme constitutes a linguistic
symbol whose referent, real or fictitious, is the meaning of the symbol.
However, the meaning of a lexeme may vary depending on context. The
question is whether a lexeme can have a specific/invariant meaning; that
is, a literal meaning. In everyday use of language, the apprehension of
meaning requires that we take into consideration both the context of
other linguistic units and the prosodic and paralinguistic features of the
utterance. Prosodic features include, for example, the rising intonation in
a question, and the paralinguistic features may be exemplified by the vol-
ume and voice of an utterance. Together, these features determine what
has been called the illocutionary force of an utterance. When someone
says to me, “You are a real friend,” I need to capture the illocutionary
force of the utterance in order to decide whether it is said sincerely or
ironically. Without discovering its illocutionary force, I cannot under-
stand the “meaning” of the utterance; judgment of grammatical fitness is
not sufficient. This shows that we are unable to capture the meaning of
“meaning” in language with a single definitive statement, and that sen-
tences as well as lexemes are equipped with meaning in social contexts.
(When we add the “illocutionary force” to an utterance, it becomes like
an “intention” in Grice’s cooperative maxim.)
Lyons also acknowledged another characteristic of all natural lan-
guages; namely, the capacity for self-description; languages may be used
to describe themselves. (Actually, Lyons’ definition of meaning in lan-
guage is an expression of the capacity for self-description.) Obviously,
this capacity sets human languages apart from signaling-systems by other
species, and also from nonverbal communication in human beings.
The term he introduced to refer to this feature of natural languages was
reflexivity. Words are both objects and tools used in action towards other
“word-objects.”
The feature of reflexivity is an important aspect of all natural lan-
guages, and therefore we cannot deal with the evolution of lexical mean-
ing without taking into consideration the way this feature emerged in
language. My question is: when, in evolution (and development), did
reflexivity become a characteristic feature of natural languages? Did
nonreflexive use of language precede the reflexive use in evolution
and development? Lyons, however, did not discuss reflexivity and the
meaning of “meaning” in an evolutionary perspective. Instead, he was
much occupied with a technical vocabulary and notational conven-
tions. He seemed to mean that reflexivity is an optional characteristic
of natural languages, and that words could be used reflexively as well as nonreflexively, a distinction he compared to the one between use and
mention of words. The word “sesquipedalian” in “What is the meaning
of sesquipedalian” is mentioned (and reflexive), whereas in a sentence
like “He is inordinately fond of the sesquipedalian turn of phrase,”
the same word is used, and therefore in a nonreflexive way. I think
the technical vocabulary introduced by Lyons is important, in particu-
lar, the distinction between reflexive and nonreflexive use of words.
However, he did not inquire into the origin of reflexivity, which will
be one of my objectives in the present chapter. Reflexivity is a product
of evolution of language and a product of learning and development of
language skills by children. Reflexivity may not have been a character-
istic of early languages. How did it emerge as an essential characteristic
of all natural languages? Was reflexivity a natural characteristic of pre-
literate languages?

5.2.2 Meaning as Symbolic Reference

Recall my short description of the Peircian classification of signs in Chap. 3, Sect. 3.1.1. There are three main classes of signs: icons, indexes and
symbols. Icons refer to objects by similarity, whereas indexes refer to
objects or events by contiguity or correlation. Symbols, however, do not
only refer to objects; thus, in addition to having external referents, they
also refer to other symbols. Thus, words are symbols because they are
parts of a lexical/semantic network. The development of this network
creates the “meaning” of a word; that is, the symbolic reference of a word.
Comprehension of the symbol–symbol relationship is a prerequi-
site for the reflexive use of words. The “meaning” of lexemes or phrases
depends on a network of signs and therefore Lyons’ conception of “mean-
ing in language” is fully compatible with the Peircian classification of signs
and Deacon’s symbolic reductionism.

5.3 Meaning in Pre-literate Languages


Was reflexivity a feature of preliterate languages? In an oral culture, prior
to the invention of writing, people have been able to reflect upon their
own use of language; that is, use of language may have been evaluated
like any other form of behavior. However, this may first of all have been
an evaluation of oral performances: of recitations, formulaic expressions,
rhetoric, and so on. In this way, language becomes embedded in artistic
performances and cannot be judged or contemplated independent of the
artistic event. To complicate the matter, oral performances have most
likely taken place in community scenarios as described in the Homeric
poems, and therefore these poems do not tell us what use of language
was like among commoners, slaves and others in their everyday life. To
me, oral culture as revealed in the Homeric poems does not show an
awareness of language. Hence, we do not know whether the distinc-
tion between “use” and “mention” of words has been apprehended in
preliterate languages. In short, we have no evidence that reflexivity (as
described by Lyons) was a feature of these languages.
So what more do we know about pre-literate languages? The studies
of oral tradition in the Homeric period may indicate some important
aspects of these languages. The “data base” is a collection of literary stud-
ies of the Homeric poems, the Iliad and the Odyssey, whose interpre-
tation was influenced by the “Homeric question.” Several writers have
argued that Homer was not a literate person, and that the poems could
have been the products of an unorganized succession of redactors. It was
Milman Parry (1971) who was credited with the discovery of the unitary
structure of the Iliad and the Odyssey, and who therefore argued that
they must have been the creation of one man. The metrical structure of
these poems, in particular their hexameter line, indicated an oral culture
that set them apart from later epics in a literate culture. There were some
important differences between languages in the two cultures, and in the
early post-war years, a number of scholars within linguistics and social
anthropology turned their interest to what has been known as “the
great divide” between orality and literacy.
Orally transmitted culture rests on cognitive constraints in human-to-human interactions. In the Introduction, I mentioned Ong (1982), who
argued that oral culture and language in the preliterate societies were
severely constrained by “mnemonics and formulas” favoring rhythmic
patterns, repetitions and alliterations. In agreement with Malinowski, he
stressed that language (in oral culture) is “a mode of action and not sim-
ply a countersign of thought” (p. 32). The Hebrew term dabar, which
means both word and event, indicates the close link between language
and action. Also, the link between language and memory meant that
knowledge is constrained by what you can recall; for people in an oral culture to “keep something in mind,” they had to “think memorable thoughts.”
On this account, a distinction between lexical meaning and form of
expression may have become very difficult in primary oral culture. At
least this distinction was not stimulated or encouraged in early oral tradi-
tions. It would have required a freedom of expression that did not exist.
Instead, “your thought must come into being in heavily rhythmic, bal-
anced patterns, in epithetic and other formulary expressions, in standard
settings…in proverbs which are constantly heard by everyone so that
they come to mind readily, and which themselves are patterned for reten-
tion and ready recall, or in other mnemonic form” (Ong, 1982, p. 34).
The emphasis on formulary expressions means that phrases, consisting
of several vocabulary words, may have served as units of language. This is
often called recursion and considered to be a feature which distinguishes
human language from nonhuman communication systems. However, if
no units below the phrase level can be used, the generative potentialities
of language will be severely constrained. This will also constrain the type
of verbal memory that can be demonstrated in an oral culture (see Ong’s
discussion of “mnemonic constraints”).
May verbal memory in oral culture, which is subject to these con-
straints, have represented declarative memory (see Chap. 3, Sect. 3.3.1)?
On my reading of Ong’s work, the answer is no. Words, or any com-
municative expressions, were considered as motor actions, and therefore
I understand Ong’s term “recall” as reconstruction of events, in particu-
lar the reconstruction/repetition of orally presented events. Considering
speech as a domain of motor action, we might reasonably describe such
reconstruction as a motor skill. In line with this conception, spoken
words were power-driven; that is, “explosion of sound,” and hence to
speak meant to exert power. This explains why oral people considered
names as conveying power over things.
As a consequence of the emphasis on motor action, the meaning of
words could have been intimately linked to the physical expression of
the words. On this account, any distinction between form and mean-
ing would be very difficult or counterintuitive in primary oral cultures.
Meaning becomes dependent on the form of linguistic expression, and
in consequence the motor aspects of language severely constrain the type
of knowledge that can be communicated in a preliterate oral culture.
Apparently, Whorf’s (1956) general interpretation of the thought-language relationship may find some support in studies of preliterate language and culture. We generally distinguish between the strong and the weak form of the Whorfian hypothesis. According to the first, language
determines thinking, whereas the second form of the hypothesis says that
language influences thought.
Today, most researchers agree that the strong form of the hypothe-
sis lacks any support in modern research, whereas the weak form is still
debated in the literature. The Whorfian hypothesis has been mainly
discussed in relation to natural languages in modern times. It is just as
relevant in relation to ancient and preliterate languages, and the question
is whether the strong version of the hypothesis may find some support
in studies of oral culture. In that case, its validity may have been histori-
cally limited to pre-literate languages: The way motor action in language
constrained knowledge and thought in preliterate oral culture may well give some support to the strong version of the hypothesis.
However, the strong version of the Whorfian hypothesis is contradicted by the translatability of languages; that is, meaning is no longer constrained by a form of articulation, but may be conveyed across
linguistic forms. Translatability is an evolutionary product which may
have depended on complex interactions across groups, but first of all,
the distinction between meaning and form of expression was encour-
aged by the technologies of writing. Written languages may have testi-
fied to the medium transferability of language. When verbal utterances are
transferred into a written statement, an equivalence of expressive forms,
auditory and written, is acknowledged. Therefore, writing, and hence
the medium transferability of language, presupposes a conception of lan-
guage as a constellation of objects rather than a series of actions (see more
discussions in Chap. 6).
When words are treated as objects, they may also serve as tools which
can be applied to other objects. Language has become de-contextualized
from events of vocal behavior. Thus, words can be used to describe other
words; that is, the feature of reflexivity becomes an important attribute of
language, and most probably the awareness of this attribute has depended
on the invention of writing. Prior to this invention, or prior to the com-
mon use of written language, there is no evidence of languages which
contained the feature of reflexivity.

5.4 The Meaning of Words as Concepts: A Cognitive Approach
As pointed out above, the study of meaning in the semantic/linguistic tradition generally implies a reference to linguistic units; that is, words, phrases, sentences, and so on. In the following, I will argue that semantic meaning within a cognitive research tradition always implies a reference to concepts and categories. This research tradition focuses on concepts and the learning of concepts within both language and nonlanguage domains. Also in this tradition, the learning of concepts or categories by animals is compared to the learning of concepts by humans. That
is why the cognitive approach agrees with an evolutionary perspective
on semantic meaning. However, the linguistic/semantic and the cogni-
tive approach to meaning are reconcilable approaches. I consider them
to be supplementary and therefore both provide important frameworks
for addressing problems of semantic development and evolution. Studies
of concepts and concept learning within nonlinguistic domains have a
long history in cognitive psychology. Remember that Ullman argued that
language shares important biological and computational substrates with
memory, commonly considered to be a nonlanguage domain. His pro-
cedural/declarative (PD) model has gained much support, particularly
in relation to grammar (see Chap. 3). I will similarly argue that meaning
in language shares important characteristics with the acquisition of con-
cepts/categories across linguistic/nonlinguistic domains.
In all natural languages, there is a complex relationship between form
and content. The early emphasis on the form of linguistic expression,
mentioned in the Introduction, waned with the growth of vocabular-
ies when the particular wording of messages became more optional.
Although meaning could still be captured to some extent by phonologi-
cal and morphological form, meaning became less dependent on form
of expression. Thus, vocabulary words could be differently interpreted
depending on context; that is, the polysemy of words which, among oth-
ers, has been studied by Hoffman, Lambon Ralph, and Rogers (2012).
They proposed a computationally based measure of semantic ambiguity and contextual usage of words. Homonymy, same-sounding words with different meanings, also contributes to the complexity of the form–content relationship. Finally, synonymy shows that particular symbols/words which differ in articulatory or orthographic expression convey the same semantic meaning. This characteristic of modern languages represents a challenge to cognitive theories: obviously, the similarity of meaning between synonymous words must reside in a high-level entity or abstract concept. More generally, therefore, instances of meaning in language can be characterized as concepts or categories.
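Hoffman et al. computed their measure with latent semantic analysis over a large corpus; the following toy analogue, using invented example contexts, conveys the underlying idea that a semantically ambiguous word occurs in comparatively dissimilar contexts:

```python
import math
from collections import Counter

def cosine(v1, v2):
    # Cosine similarity between two bag-of-words context vectors.
    dot = sum(v1[w] * v2[w] for w in v1 if w in v2)
    norm1 = math.sqrt(sum(x * x for x in v1.values()))
    norm2 = math.sqrt(sum(x * x for x in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

def semantic_diversity(contexts):
    # Higher values mean the word occurs in less similar contexts.
    vectors = [Counter(context.split()) for context in contexts]
    sims = [cosine(a, b)
            for i, a in enumerate(vectors)
            for b in vectors[i + 1:]]
    return 1.0 - sum(sims) / len(sims)

# "bank" across two unrelated contexts vs. two related ones:
print(semantic_diversity(["deposit money at the bank",
                          "the river bank was muddy"]))      # higher diversity
print(semantic_diversity(["deposit money at the bank",
                          "withdraw money from the bank"]))  # lower diversity
```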

5.4.1 Categorical Perception

It has been commonly assumed that only verbal stimuli are subject to cat-
egorical perception, which is therefore an example of categorization that
takes place in a language domain only. As I will show later, this assumption lacks support in contemporary research. First, however, I will
briefly present a reminder of what categorical perception is.
The expressive form of a linguistic symbol—that is, the exact form of
manual or vocal articulation—will differ between people, and for the
same individual it may also differ from time to time. The articulatory and
acoustical expression of the English word pen will differ between indi-
viduals, and so will the exact manual expression of the sign for pen
in sign language. These differences are within-category variations that do
not signal any change of meaning; other differences of manual or vocal articulation do represent changes of meaning. For the child, therefore,
it is necessary to distinguish form of expression from meaning, a distinction that sometimes may be difficult to learn. For adult users of
language, this distinction offers no problem in everyday life.
In cases of great linguistic isolation, however, the distinction between
expressive form and meaning may become a hardship for some indi-
viduals. Such isolation, when a child is exposed to insufficient stimula-
tion from only one or a few individuals over years, implies a poverty of
expressive form. The expressive variations that are needed to establish a
new concept or cognitive category are lacking, and hence it may also be
difficult to detect stimulus cues that signal a distinction between cat-
egories. In English, the distinction between bit and pit is signaled by a voice onset time of about 25 milliseconds, whereas each of the two categories is associated with large variations in voice onset time below and above this limit. These within-category variations—that is, allophones of a phoneme—are apparently neglected by the skilled user of language. My point is that the phonemic transition between /b/ and /p/ will only be detected on the condition that the child is exposed to a sufficient diversity of expression in the linguistic community.
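The sigmoid identification function that characterizes categorical perception can be illustrated with a toy model of the /b/–/p/ continuum. The boundary follows the 25-millisecond figure above; the slope is an arbitrary assumption:

```python
import math

def p_heard_as_p(vot_ms, boundary=25.0, slope=1.5):
    # Probability of a /p/ response as a function of voice onset time.
    return 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary)))

for vot in (5, 15, 22, 25, 28, 35, 45):
    print(f"VOT {vot:2d} ms -> P('p' response) = {p_heard_as_p(vot):.2f}")
# Equal 10 ms steps within a category (5 -> 15 ms, 35 -> 45 ms) barely
# change the response, while a comparable step across the boundary
# (e.g., 22 -> 28 ms) flips the percept: the signature of categorical
# perception.
```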
The phoneme is a category of speech sounds (the phones), and tran-
sitions between phonemes signal differences of meaning. Therefore,
categorical perception of speech sounds constitutes a basis for the acqui-
sition of meaning in spoken language. This phenomenon is not limited
to the perception of speech sounds; it has also been documented for the
perception of manual signs in sign language. Emmorey (2002) describes
an experiment by McCullough, Brentari, and Emmorey who presented
stimulus continua of still images by varying in small steps one of the
phonological categories, whereas the other remained constant. Thus,
by varying hand configuration, the sign for PLEASE was incrementally
transformed into the sign for SORRY. In another series of trials, place of
articulation was varied to transform the sign for ONION into the sign
for APPLE.
Two groups of subjects, deaf signers and hearing nonsigners, partici-
pated in the experiment. The subject was asked to decide whether the
presented stimulus was more like #1 or #2 (which were the still images of
the two endpoints). A sigmoid distribution of responses in both groups
indicated categorical perception of the presented signs, a result that was
highly expected for the deaf signers. Emmorey remarked that it was more
surprising to find the same distribution of responses by the hearing non-
signers. She argued that this result shows that categorical perception of
signs seems to have a perceptual rather than a linguistic basis. The ques-
tion is whether this form of categorization also occurs in a nonlanguage
domain.
Actually, more recent research has demonstrated categorical percep-
tion of both linguistic and nonlinguistic stimuli. Franklin, Pilling, and
Davies (2005) and Clifford, Franklin, Davies, and Holmes (2009) have
convincingly shown that colors are categorically perceived by pre-lingual
infants. These studies support the claim that categorical perception has a
perceptual rather than a linguistic basis; that is, the phenomenon occurs
in language as well as nonlanguage domains.
Just as procedural and statistical learning of predictive dependencies are preconditions to language acquisition, it may be argued that categorical perception, and all aspects of categorization in early infancy (e.g., Mareschal and Quinn, 2001), constitute preconditions to language acquisition. The categorization of stimuli that differ with respect to a number of physical dimensions is a prerequisite not only to object recognition and object constancy, but also to the learning of names. It may be that categorization along some dimensions is a “wired-in” ability that takes effect once the child is exposed to adequate stimulation. In other words, this ability will not serve the development of language unless the community provides sufficient diversity of linguistic exposure. On this condition, it may be argued that categorical perception, within language domains and nonlanguage domains, may have facilitated distinctions of meaning in prehistoric communication among humans.

5.4.2 Concepts and Categories

Phonemes and colors, which are the products of categorical perception, are specific examples of the concepts studied in cognitive psychology. I will now turn to the general study of concepts in cognitive psychology. In this field, the study of concepts may be considered as the study of knowledge; in other words, concepts are specific instances of everything we know. Eysenck and Keane (2000) proposed a distinction between two types of concepts: “objects” (dog, chair, pen) and “relations” (above, between, empathy). We may also add “actions.” All types of concepts are important in language development and evolution, but for the moment I will mostly deal with object concepts, which will interchangeably be referred to as concepts or categories. The general approach has been to ask people to make category judgments of specific instances of objects, to show that concepts can be defined by attributes, and that such judgments are rule-governed. This research has revealed gradients of typicality and prototypes, which have given rise to different theoretical views on the nature of concepts.
On one hand, concepts and categories are generally considered to be human achievements. Together they form semantic networks which, I assume, are not present in animals, and these networks may be compared to the interrelationship between symbols in Deacon’s theory of symbolic reference. However, the genuine expression of semantic networks by humans, and the characterization of concepts as human achievements, does not mean that animals do not develop concepts. Since Harlow published his observations of “learning set” in monkeys, it has become clear that concepts are easily learned by subhuman subjects; the way monkeys solve oddity problems is a case in point. Therefore, the study of concepts has been undertaken with different species and has demonstrated continuity between animal and human cognition.
In human cognition, a distinction between explicit and implicit learn-
ing has long been commonly acknowledged (Seger, 1994). Explicit
learning is consciously accessible, whereas implicit learning is not. The
distinction may also be compared to the one between intentional and
incidental learning. Both are descriptive terms without any reference to
underlying mechanisms of learning (Eysenck and Keane, 2000). Ashby,
Alfonso-Reese, Turken, and Waldron (1998) have argued for a distinction
between explicit and implicit categorization systems based on particu-
lar operational characteristics. The explicit system is rule-based; it derives
responses according to a uni-dimensional analysis and depends on working
memory and executive attention. Humans learn rule-based (RB) category tasks quickly through explicit reasoning, and are generally capable of explaining the task solution verbally. The implicit system is nonanalytical and depends on multidimensional processing. Information–integration (II) tasks, which are used to test the implicit system, are poorly learned by humans, who also acquire them more slowly.
In a study of implicit and explicit category learning by capuchin monkeys, Smith, Crossley, et al. (2012) designed a set of circular sine-wave gratings that varied in spatial frequency and orientation of bars. In the RB tasks, only bar frequency carried information, so the task could be solved according to a uni-dimensional rule. In the II task, both bar frequency and orientation carried information, but neither alone carried sufficient information. Here task solution required multi-dimensional integration, which could hardly be explained verbally. The results showed that capuchin monkeys were capable of dimensional analysis and learned the RB tasks more easily than the II tasks. Smith, Crossley, et al. (2012) therefore concluded that nonhuman primates have “some structural components of humans’ capacity for explicit categorization” (p. 295).
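To make the RB–II contrast concrete, the sketch below gives a minimal illustration of the two kinds of category rule. It is my own construction, not Smith, Crossley, et al.’s actual stimuli or category structures; the stimulus values, the 0.5 criterion and the diagonal weights are all assumed.

```python
# Toy gratings are (spatial frequency, bar orientation) pairs, both scaled
# to the 0-1 range. An RB task can be solved by a verbalizable rule on a
# single dimension; an II task has a diagonal boundary combining both
# dimensions, which resists simple verbal description.
def rb_category(freq: float, orient: float) -> str:
    # Rule-based: only frequency matters ("A if the bars are coarse").
    return "A" if freq < 0.5 else "B"

def ii_category(freq: float, orient: float) -> str:
    # Information-integration: neither dimension alone suffices; the
    # boundary is a weighted combination of the two dimensions.
    return "A" if 0.6 * freq + 0.4 * orient < 0.5 else "B"

for freq, orient in [(0.2, 0.9), (0.7, 0.1), (0.4, 0.4), (0.45, 0.9)]:
    print(f"freq={freq:.2f}, orient={orient:.2f}: "
          f"RB -> {rb_category(freq, orient)}, II -> {ii_category(freq, orient)}")
```

The first rule can be stated in words; the second can be mastered only by integrating both dimensions before the decision, which is the signature of the implicit system.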
The question is whether other vertebrate species also respond differently to RB and II tasks, or whether they lack the explicit categorization system. A commitment to this system, and hence a capacity for dimensional analysis, is demonstrated by superior RB performance relative to II performance. Smith et al. (2011) showed that pigeons learned matched RB and II tasks equally quickly and to the same accuracy level, and concluded that pigeons showed no “commitment to dimensional analysis.” Their results gained further support in a more recent work by Smith, Berg, et al. (2012), who studied categorization capacities in four species: humans, macaques, capuchin monkeys and pigeons. Using the same matched RB and II tasks, they found again that pigeons solved both task categories equally quickly and showed no commitment to dimensional analysis. The authors suggested that pigeons may host an “ancestral vertebrate categorization system from which that of primates emerged.” Their primate data showed continuity with human cognition; that is, humans and nonhuman primates share important aspects of explicit categorization.
Smith, Crossley, et al. (2012) also argued that the implicit–explicit dis-
tinction is grounded in separate neural structures. The implicit system
relies on the striatum, whereas the explicit system relies on the anterior
cingulate gyrus, prefrontal cortex and the medial temporal lobe structures.
Thus, neural substrates underlying the implicit system are very similar to
the neural basis of the procedural system. Also, the neural substrates of the
explicit system largely coincide with the substrates underlying the declara-
tive system (see Chap. 2, Sect. 2.2). However, there are important differ-
ences, in particular since the declarative system mediates verbal expression
by humans; that is, linguistic behavior which has not been demonstrated
in explicit categorization by macaques and capuchin monkeys. Moreover,
the procedural system is generally involved in serial and skill learning, yet
category learning of the type studied by Smith, Crossley, et al. (2012) may
share similar mechanisms with procedural learning.
The implications for the evolution of concept or category learning are important. The studies by Smith et al. mentioned above demonstrate a possible line in evolution from the nonanalytic vertebrate categorization in pigeons, to the explicit dimensional analysis in nonhuman primates, and on to the declarative categorization by human subjects. However, nonhuman primates, in spite of their commitment to dimensional analysis, do not show declarative categorization. Their capacity for explicit categorization can be interpreted as a pre-adaptation for the declarative learning of concepts or categories by humans. The evolutionary basis of categorization or concept learning in humans shows itself in the way that we are capable of both explicit and implicit categorization. The question is how it has been possible for humans to capitalize on the neural mechanisms that emerged in primate evolution. The studies of Smith et al. reported above demonstrated some important mechanisms for explicit categorization, but did not account for the declarative aspects of concept learning, which may be the final attainment in human cognitive evolution. The nonanalytic vertebrate categorization and the explicit dimensional analysis in nonhuman primates are all nondeclarative capacities, which is why I consider them to be pre-semantic forms of “meaning”; that is, meaning is implicit in the act of categorization. Lexical meaning, as studied in the tradition of Lyons, is associated with declarative memory. Thus, the transition from nondeclarative to declarative memory also involves a major leap in the evolution of meaning in language, and in my view this transition has been facilitated by the invention of writing. Therefore, language in preliterate/oral cultures may have represented a transitional stage between pre-semantic and semantic forms of linguistic communication.
Pre-semantic processing of meaning, such as explicit dimensional analysis, is important in the general process of language development and may form a precondition to later declarative processing of meaning. Can we study how these forms of early categorization have been acquired by children? I think we can, because the degree to which individuals capitalize on the evolutionary cognitive basis shows itself not only in the general accuracy of categorization (tests of concept learning), but more specifically in the RB–II difference; that is, the degree of commitment to dimensional analysis (see Chap. 8, Sects. 8.2 and 8.6). This difference may be assessed once tests based on Smith, Crossley, et al.’s (2012) works are designed, and I think such tests may be important additions to the diagnostic tools used in studies of developmental language impairments.
The cognitive approach to the study of meaning in language, from nonanalytic vertebrate categorization to declarative categorization by humans, has given rise to an evolutionary perspective on the psychology of language. Language impairments with inadequate communication of meaning should be studied within the same perspective and conceptual framework. There are aspects of language impairments which may originate from failures of implicit learning and nonanalytic categorization, but other aspects of language impairments may involve learning processes which belong to a later stage in evolution. In the two opening sections, in particular the one on the semantic tradition of Lyons, I addressed ways of studying declarative meaning in language. Now it is time to review more studies in the cognitive research framework which deal with the declarative semantic aspects of meaning. The general problem is whether we can find a neural substrate of semantic meaning.

5.5 Towards a Neurobiology of Lexical Meaning
In the literature discussing the role of neurobiology for the study of lexical meaning, two positions have been advanced. The traditional approach takes semantic knowledge to be essentially different from modality-specific systems for perception, action and emotion (Fodor, 1983; Pylyshyn, 1984). Kemmerer and Gonzales-Castilla (2010) labeled the classic position the “Disembodied Cognition Hypothesis,” in contrast to the “Embodied Cognition Framework.” The latter position means that “semantic knowledge is not purely amodal, but is instead anchored in modality-specific input/output systems, such that many forms of conceptual processing involve the transient recapitulation of diverse aspects of sensorimotor and affective experiences”. Obviously, the statement “not purely amodal” shows that the two positions are not considered to be mutually exclusive.
Although the discovery of the mirror neurons (see Chap. 3) did not lead to a new discussion of lexical meaning, the general impetus of this research tradition inevitably also strengthened the “Embodied Cognition Framework.” Obviously, this approach invites a promising integration of semantics with cognitive neuroscience, and may eventually also provide new insights into the neural mechanisms of lexical meaning. (These prospects are not equally shared by the “Disembodied Cognition Hypothesis.”) Different systems of mirror neurons are implicated in different classes of actions (Arbib, 2009), and hence these structures have been said to form a substratum for the semantics of action. I think, however, it would be a serious mistake simply to equate the meaning of symbols with the semantics of action. Thus, Toni, de Lange, Noordzij, and Hagoort (2008), who discussed implications of research on the mirror neurons in macaques and human subjects, argued that “there is no decisive evidence that motor systems play an exclusive role in semantics” (p. 72; see also my discussion in Chap. 3, Sect. 3.4).
Although the Embodied Cognition Framework has its strength in relation to the semantics of action, and although the Disembodied Framework does not equally emphasize the role of motor action, both positions still permit a neurobiology of lexical meaning. This field of research has provided promising new insights into the neural mechanisms underlying the processing of lexical meaning. However, I think an important condition for further success is that researchers incorporate a link to cognitive theories of memory in their research approach. Thus, the DP model of language invites studies of the brain substrates of declarative memory, a research approach which has brought the medial temporal complex into the focus of attention, whereas theories of semantic processing and control have also renewed interest in the cognitive and linguistic role of the prefrontal cortex. I think both have substantial relevance to the brain substrates underlying lexical meaning in language.
In this work, I place less emphasis on the distinction between a “disembodied” and an “embodied” framework. What matters is a cognitive neurobiological approach which deals with semantic meaning in terms of category learning and conceptual knowledge. In the following, I will discuss the role of particular neural substrata for the acquisition and use of such knowledge.

Hippocampus and the Para-Hippocampal Region According to the DP model of language, and in agreement with the position taken here, the neurobiological substrate of declarative memory is also the substrate of lexical meaning. As described in Chap. 2, this means that lexical meaning will depend, first of all, on medial temporal lobe structures such as the hippocampus and the entorhinal, perirhinal and para-hippocampal cortex (the medial-temporal complex). Manns and Eichenbaum (2006) discussed the evolution of declarative memory by reviewing recent research on these structures from humans and experimental animals. They focused on electrophysiological studies of monkeys and rats that performed well on simple recognition tasks of odors and visual stimuli. They concluded that the hippocampus and the para-hippocampal region are anatomically well-conserved across the mammalian species and that the anatomical conservation is “matched by a similarity in fundamental functional role across species” (p. 804). It may be argued that the animal subjects demonstrated episodic memory with some of the characteristics of the declarative system; that is, fast formation of new associations. (Episodic memory is considered to be a form of declarative memory; see Fig. 3.1, Chap. 3.) At the same time, the performances of these animals depended more generally on item-in-context memory; that is, a procedural rather than a declarative characteristic. First of all, the associations learned in these tasks were expressed differently from verbal and declarative memory in human subjects: they were not consciously accessible.

Manns and Eichenbaum fully acknowledged the differences in manners of expression, and that “in humans, the resulting capability for declarative memory is reflected in the conscious recollection of facts and events” (p. 795). In this way “declarative memory” was used as a generic term across vast differences in expression. Still, I think their paper lacks a more thorough discussion of what constitutes declarative memory.
Obviously, the hippocampus and the para-hippocampal region play a major role in declarative learning and memory, but they do not form the entire substratum for the declarative system, nor the full substratum for the extraction of lexical meaning in communication among humans. Incoming information to the hippocampus and para-hippocampal region must be translated in neocortical regions before it can be expressed in behavior. Therefore, we cannot understand the substratum of declarative memory without taking into consideration the neocortical organization of input to this region. In the neocortex, the prefrontal cortex is a region of prime interest among researchers who have studied the neurobiological bases of symbolic behavior in humans.

The Prefrontal Cortex Given that symbolic communication is a genuine human ability that is not matched by any communicative skills in the primate species, this ability should have a correlate in the structural changes that took place in the transition from hominid to human brain. The fact that our brain is proportionately bigger, and therefore capable of processing more information than the brains of our near hominid ancestors, is commonly mentioned as a possible explanation. However, brain size has little to do with this matter. Deacon (1997) expressed that “human brains are not just large ape brains; they are ape brains with some rather significant alterations of proportions and relationships between the parts” (p. 255).

These alterations are found in the disproportionately large size of the prefrontal cortex and the shift in connectivity favoring prefrontal connections in all other systems. In phylogeny, the prefrontal cortex is a late-developing region that covers almost one-third of the neocortex. The late myelination of axonal connections from this region also seems to be associated with the development of cognitive functions (Fuster, 2002). The question is whether we can link the enlargement of the prefrontal cortex to the emergence of language, and therefore of symbolic reference in humans.
At first glance, however, this hypothesis is not supported by data on linguistic dysfunctions after localized brain damage. Speech and speech comprehension are only minimally affected by localized damage to prefrontal tissues outside of Broca’s area.
Speech and speech comprehension, the expressive and receptive aspects of language, may be considered as particular implements of a general capacity of language. Without these implements, use of language is obliterated, but the patient may still hold a vestigial capacity of symbolic reference. On this account, it may be speculated whether some aphasias involve a distortion of particular sensory and motor implements of language, but not necessarily a disordered capacity of symbolic reference. As long as this general capacity is spared, a different channel of communication may be adopted. In other words, efforts of rehabilitation may bring about a form of language that permits some vital social interactions for the patient. Thus, practice in augmentative and alternative communication is a viable option for some aphasic patients, and perhaps also for some children with delayed or disordered speech (Wilkinson and Hennig, 2007).
Broca’s and Wernicke’s areas are mainly associated with motor control and auditory processing in linguistic communication, whereas the prefrontal areas are recruited during the planning of complex behavior. A wide range of cognitive functions are affected after damage to the prefrontal cortices, and an attempt to find a common underlying pattern for these functions may lead us into speculations about a general substrate for symbolic behavior. The question is: What type of tasks may serve as tests of a general symbolic capacity?
A number of brain imaging studies have addressed the question of whether the prefrontal cortices host a substrate for symbolic activity. In particular, the role of the left inferior prefrontal cortex (LIPC) in the processing of word meaning by healthy and neurologically intact individuals has been extensively studied. Thus, Wagner, Pare-Blagoev, Clark, and Poldrack (2001) asked participants which of two words (e.g., “flame” and “bald”) was closest in meaning to a cue (e.g., “candle”). This task invites participants to make judgments of category membership, and therefore relates to a number of other mainstream cognitive studies of concepts. The new finding reported in this study is that LIPC activity increased with the number of words presented in the choice set. Bunge, Wendelken, Badre, and Wagner (2005) showed that LIPC activity is likely to increase when participants are making hard semantic judgments; for example, category membership judgments of nonprototypical exemplars (does “earl” belong to the “royalty” category?).
Also, LIPC activity increases when a target word is preceded by a semantically incongruent as compared with a congruent sentence (Cardillo, Aydelott, Matthews, & Devlin, 2004). Therefore, these studies gave support to a prototype view of concepts, and convincingly demonstrated the involvement of the LIPC in semantic processing. Also, interference with semantic processing may be produced by transcranial magnetic stimulation (TMS) of the LIPC (Thiel et al., 2005), and acquired lesions of the LIPC affect the accuracy of semantic categorization (Devlin, Matthews, & Rushworth, 2003).
The problem is how well the behavioral tasks that were designed to test semantic processing also reflect a symbolic ability independent of the auditory-vocal conditions of the test situation. I believe, however, that semantic categorization and integration represent general processes in both speech and sign languages, and that these tasks are likely to trigger a symbolic activity in the prefrontal cortex of the participants. If this assumption is correct, the LIPC may be involved in a neural substrate of symbolic reference by humans, and thus of the declarative semantic aspect of language. However, this substrate may not be limited to the LIPC, but may involve other cortical areas as well. Samson, Connolly, and Humphreys (2007) showed that a patient (PW) with damage to the right prefrontal and temporal cortices had major problems in executive control of semantic processing. PW was presented with a cue word followed by three words in the choice set. One of the choice words was a synonym and one was an antonym; both were either weakly or strongly associated with the cue word. The third was an unrelated word. PW was instructed to choose the word that was closest in meaning to the cue (synonym condition), and in another condition, he was asked to choose the word that was opposite in meaning to the cue (antonym condition). The task was also presented to two control participants matched for educational background.
PW had some major problems in responding to these tasks. He tended to make errors on trials where one of the distracters was strongly associated with the cue word. He could not easily override the automatic processing of these distracters. Thus, Samson et al. (2007) argued that PW’s stroke had impaired his executive control of semantic processing, and in my opinion, this impairment may in turn have affected his comprehension of lexical meaning. The authors discussed the possibility that the impaired executive control may have resulted from his right temporal lobe lesion rather than his right prefrontal lesion. The right part of the anterior temporal lobe has been shown to be involved in coarser semantic integration, e.g., in comprehension of discourse and metaphors (Jung-Beeman, 2005). Since Samson et al. (2007) tested the comprehension of single words, they considered PW’s problem to be one of semantic selection rather than semantic integration. They admitted, however, that the anterior temporal lobe could also be involved in semantic selection.
Samson et al.’s (2007) study indicates that a possible substratum for symbolic reference, and hence for lexical meaning, is not limited to the left prefrontal lobe, but may also involve parts of the right prefrontal and temporal lobes. This argument rests on the assumption that “semantic selection” is a manifestation of an underlying ability of symbolic reference. In summary, we find arguments for a distributed network of cells that may be involved in symbolic processing in humans. Some of these structures, such as the ventral premotor cortex, may have served an important role in the evolution of a modality-independent language capacity by humans. However, the development of premotor connections with other regions means that other parts of the brain may also have been involved in symbolic processing. At present, we do not fully know the complex circuitry of neural activity underlying lexical meaning, but the works reviewed here do identify some of the critical substrata.
In modern research on the neurophysiological principles underlying language processing, it is generally assumed that the neocortex applies the same algorithms independent of task. Semantic coding and other specific functions are mediated by input and output connections; therefore, concepts and categories depend on circuitries which connect different cell assemblies. However, it has been shown that a small number of highly specialized neurons in the medial temporal lobe (MTL) are activated by the presentation of the picture of a famous actress, Halle Berry (Quian Quiroga, Reddy, Kreiman, & Fried, 2005). These cells were also activated by her written name, which shows that the cells responded to a highly abstract concept. Friederici and Singer (2015) point out that neurons respond to categories, not to specific members of a category. However, recognition of the person in a picture is information-specific and therefore refers to a specific member. Friederici and Singer called this process sparse coding, and the probability of encountering such units by chance is small. It is made possible by “iterative recombination of feature-specific responses” along different pathways which originate in the MTL. For the brain it is like answering “20 questions” (actually many more) in a few milliseconds. Combination and recombination of feature-specific responses run according to the same principles underlying concept formation in both language and nonlanguage domains, and may therefore be said to serve as a pre-adaptation to language.
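To give the “20 questions” image some arithmetic substance (a back-of-envelope calculation of my own, not a figure from Friederici and Singer): $n$ independent binary, feature-specific responses can in principle distinguish $2^n$ patterns,

$$2^{20} = 1\,048\,576,$$

so twenty binary decisions already separate over a million candidate concepts, and each further question doubles that number. This is why a sparse population of highly selective units is combinatorially feasible.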
Arbib (2009) has argued that “the first creatures who had a language-ready brain did not yet have language” (p. 264). Thus, the critical substrata for comprehension of lexical meaning may have been in place in the hominids and early Homo sapiens, but due to insufficient epigenesis these substrata may have remained inoperative. Sociocultural evolution may have provided a type of environmental exposure which, at some point in the history of mankind, made these substrata operative. The semantic selection and integration studied by Samson and others may not have been in place before this point in history. As long as the study of the underlying processes required verbal expressions, they could not be demonstrated in animals. However, methodological constraints may not have been the only reason why executive control of semantic processing cannot be demonstrated in subhuman subjects. Maybe semantic declarative meaning in language has more aspects which are not explicitly addressed in the neurobiological studies mentioned above. Here, we may easily run into speculation. However, I assume that competent users of language today have a metalinguistic capacity which enables them to treat linguistic signals as “objects” in their own right, and due to this capacity they can also apprehend the reciprocity aspect of language.
In the following, I will deal with some aspects of cultural evolution that may cast some light on the growth of new communicative systems and the emergence of lexical semantic meaning in language. I cannot say for sure to what extent these aspects are also involved in the rise of metalinguistic knowledge, but I assume that they form part of the critical preconditions for such knowledge. First of all, I will call attention to some important community factors; prime among them are the size of linguistic communities and the frequency of interactions within and between such communities. In short, these factors contributed to the diversity of expression that is a prerequisite to symbolic/lexical meaning.

5.6 The Importance of Diversity in Communicative Interactions
When only two individuals communicate, the variability in the expression of messages is low. This variability increases with the number of people involved, and it is particularly high when different people are involved in different communicative episodes. The emergence of new communicative systems/languages and the abstraction of meaning are both dependent on expressive variability. Therefore I shall deal with some examples from the emergence of new communicative systems, and I claim that the dynamics of these systems also relate to the way linguistic meaning evolved in the human species.
In the Introduction, I mentioned some historical data on the evolution of new sign languages (the NSL that evolved among deaf children in a primary school in Managua in the late 1970s, and the new BSL that Senghas (2005) reported from a Bedouin community in present-day Israel). Initially, the communicative networks in these communities were very small; children communicated with only a few people over time. In Managua, the network of communicative partners increased when the deaf children attended school, where new interactive and collaborative structures were established. I shall now present an example from research on the evolution of new communicative systems which shows the importance of changing partners in communicative interactions.

5.6.1 The Role of Collaborative Structures

In the previous chapter, I described an experimental model for the development of a new communicative system. Here the dialogue was the setting for communicative development. The next study to be presented also makes use of dialogues between the participants, but the participants now change partners throughout the experiment, thereby ensuring some variability of communicative interactions.
The creolization of the NSL also depended on novel interactions with children from other families. This is a “community effect” which can be observed in the learning of new communicative systems and studied experimentally. Fay, Garrod, Roberts, and Swoboda (2010) therefore designed a study in which they distinguished between individualistic and collaborative models of the evolution of such systems. The first assumes that language is transmitted incrementally from generation to generation, and is highly influenced by the learning biases of the children. On this account, interaction between same-generation members becomes superfluous. The collaborative model, on the other hand, assumes that interaction and feedback are critical to language evolution, and that interaction between same-generation members is important. In support of this model, they reported the results of a graphical communication task in which participants communicated a set of predetermined concepts by drawing on a standard whiteboard. Some easily confusable items such as Drama and Soap Opera were included in the set. The participants were tested in pairs and alternated in the director and matcher roles; for the director, the to-be-represented concept was presented as text. Drawing and erasing took place on the two players’ shared whiteboard, and when the matcher believed he could identify the intended referent, he pushed the “Got It” button.
The participants were randomly allocated to one of two conditions: the community condition or the isolated pair condition. The former consisted of four separate eight-person communities, and within each community participants switched partners until each of them had interacted with each of the remaining seven members. Each community was designed such that “a global communication system could be established by the time participants encountered their fourth partner. For instance, assume person 2 adopts person 1’s sign system (Round 1), and that person 3 is subsequently influenced by person 2 (Round 2). If person 8 aligns with person 3 (Round 3), person 1 and person 8 will share a comparable sign system upon meeting (Round 4), despite having never directly interacted” (Fay et al., 2010, p. 361). In the isolated pair condition each participant interacted with the same partner throughout the game.
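The logic of this design can be made concrete with a small simulation. The sketch below is my own toy model, not Fay et al.’s procedure or data: sign systems are reduced to numeric vectors, “interaction” simply nudges two partners toward their joint midpoint, and all parameters are assumed.

```python
# Toy model of partner rotation (community) versus fixed pairs (isolated):
# each of 8 agents starts with a random "sign system" (a numeric vector);
# interacting partners each move partway toward the pair's midpoint.
# After 7 rounds we compare how similar the systems are across the group.
import numpy as np

rng = np.random.default_rng(0)
N, SIGNS, ALIGN, ROUNDS = 8, 10, 0.5, 7  # assumed toy parameters

def round_robin_pairs(n, r):
    """Circle method: agent 0 stays fixed, the others rotate each round,
    so every agent meets every other agent exactly once in n-1 rounds."""
    agents = [0] + [(i + r) % (n - 1) + 1 for i in range(n - 1)]
    return [(agents[i], agents[n - 1 - i]) for i in range(n // 2)]

def mean_distance(systems):
    n = len(systems)
    return np.mean([np.linalg.norm(systems[i] - systems[j])
                    for i in range(n) for j in range(i + 1, n)])

def run(community: bool) -> float:
    systems = rng.normal(size=(N, SIGNS))
    for r in range(ROUNDS):
        pairs = (round_robin_pairs(N, r) if community
                 else [(0, 1), (2, 3), (4, 5), (6, 7)])  # fixed pairs
        for i, j in pairs:
            mid = (systems[i] + systems[j]) / 2.0
            systems[i] += ALIGN * (mid - systems[i])
            systems[j] += ALIGN * (mid - systems[j])
    return mean_distance(systems)

print("community:", run(True))   # expected small: a global system emerges
print("isolated :", run(False))  # expected large: pairs form local systems
```

With rotation, alignment propagates through intermediaries, so two agents already hold comparable systems before their own meeting takes place; this is the “global” system of the community condition.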
By studying drawing similarities in the two conditions, Fay et al. (2010) could test the different predictions made by the individualistic and collaborative models. A certain alignment of drawings was expected across games; thus, in the isolated pair condition the drawings in Round 7 should be more similar than the drawings in Round 1. In the community condition, however, the “target” of alignment was the community rather than an individual partner; therefore, interaction with different community members would be crucial for the establishment of a shared communicative system. Fay et al. then compared the degree of alignment across noninteracting community members (persons from the same community who did not interact in Round 1 and Round 7) with the degree of alignment across noninteracting isolated pairs. At Round 7, drawings among noninteracting community members had become increasingly similar, whereas drawings among noninteracting isolated pairs had become increasingly dissimilar. Members of the isolated pair condition had established a local sign system, whereas members of the community condition had established a global sign system.
The diversity of interactions among participants in the community condition was critical for the establishment of a new communicative system. On this account, it seems that the number of interacting community members is a critical factor. The community of deaf children in the Managua primary school for special education grew rapidly from 50 to 200 and more in the early 1980s. During this decade a new sign language emerged with a highly developed vocabulary and grammatical structure. Skills in signing varied with year of entry into the community (more complex signing was observed in children who entered after 1983). Taking this into consideration, Senghas et al. (2004) found that the younger group signed more rapidly and produced a richer and grammatically more complex language. The community of deaf people among the Al-Sayyid Bedouins was smaller and grew from 10 to 150 over three generations. Therefore, BSL emerged more slowly, and has been around about twice as long as NSL (cf. Senghas, 2005).
5.7 More Questions


As announced in the preceding section, I have reviewed Fay et al.’s work because I believe that the causal factors underlying the emergence of lexical meaning and the evolution of communicative systems may overlap. Both depend on interaction and feedback between members of a linguistic community; however, studies of artificial communicative systems leave a number of questions about lexical meaning unanswered. In Fay et al. (2010), as well as in previous studies with a similar approach (Fay, Garrod, and Roberts, 2008; Galantucci, 2005), participants were given a set of concepts which directors communicated to the matchers. I assume that the adult students serving as participants were all acquainted with these concepts; they did not engage in any form of conceptual learning. Recoding known words into a graphical system of signs means that the participants learned new forms of expression, not new concepts.
In contrast, the deaf children in the Managua primary school, as participants in a natural process of language evolution, learned new words and produced new verb arguments. There is no reason to believe that
the new sign language evolved merely as a way of communicating a set
of preformed concepts. Without taking a Whorfian position on the rela-
tionship between thought and language, I assume that the establishment
of the new language also contributed to conceptual development by the
community members. In short, sign and concept may have co-evolved
in the Nicaraguan and Bedouin languages; the question is to what extent
this process may have resulted in a conceptual understanding of the sign
language itself.
To what extent do deaf signers distinguish the lexical content of an utterance from its prosodic and paralinguistic features? In sign language, these features are generally conveyed by mouthing and mouth gestures produced simultaneously with the manual signs. Other nonmanual components, such as facial expression and eye gaze, which have important functions within the morphology and syntax of ASL, may also convey prosodic features and signal turn taking among the participant signers. However, the control of prosodic and paralinguistic features is less well known in sign languages than in spoken languages, and yet we have reason to believe that these features have become immanent properties of the new as well as the older sign languages. As pointed out above, the prosodic and paralinguistic features determine the illocutionary force of an utterance, and this also holds in sign language. Children are generally very sensitive to these aspects of language, and therefore I assume that nonmanual components have effectively influenced communication among the early Nicaraguan and Bedouin signers, as well as among language users in ancient history. But this is not to say that illocutionary force has been conceptually distinguished from the lexical meaning of the utterance.
As far as I know, there is nothing in the reports about the two sign languages which indicates an ability among the community members to decontextualize the signed lexemes; that is, a metalinguistic ability to deal with the new language as an object of reflection. In the early days of the new sign languages, illocutionary force, despite its effect on behavior, may not have been properly understood apart from the “literal meaning” of the signs. Metalinguistics, and hence acknowledgment of reflexivity, in both spoken and sign languages, belong to an advanced stage of evolution that emerged with the development of writing.

5.8 Concluding Remarks


I have discussed communicative interactions by animals and humans that convey meaning in the sense that the interactions are instrumental for the parties, but are pre-semantic in the sense that they do not permit conscious recollection of lexical meaning. The early forms of pre-semantic interactions are associated with statistical learning and implicit categorization, whose neural structures are not yet fully understood, although the premotor cortex, the basal ganglia and the striatum are strongly involved. For a later stage of language evolution, clinical and neurobiological research has identified some critical brain regions involved in the semantic processing of words. However, we do not at present know the exact mechanisms and all the neural substrata underlying the evolution of lexical meaning. In particular, we do not know the neural mechanisms underlying metalinguistic knowledge and the reflexivity of language. It seems that we have no well-founded approach for the study of the neural mechanisms underlying these characteristics of language.
However, we have gained some knowledge on the evolution of meaning in language. This is an aspect which gradually evolved from early forms of communicative (pre-semantic) interactions to the well-structured languages of modern societies, and we do know some of the critical factors which were responsible for this development. For example, the collaborative model described by Fay et al. (2010) shows that the meaning of signs may be shared by community members whether or not they have directly communicated with each other. The critical factor is the community condition, which permits interactions with new partners. Extending this finding, we can conclude that lexical meaning is a social, rather than an individualistic or idiosyncratic property. On this account, a third person perspective (see Introduction) presents itself: the lexical meaning of communication between two persons, A and B, can only be assessed by a third person C who attends and comprehends the communicative episode. C represents the linguistic community, whose size may vary from a small group to millions of people. This is why dialogues that are incomprehensible to others, for instance, home signs by deaf children, do not warrant an exchange of lexical meaning. The social nature of lexical meaning also underwrites the abstractness of the concept: it shows that the meaning of a word or sign is abstracted from its form of expression and permits a vocabulary which includes both homonyms and synonyms. The abstraction of meaning from form may not have been similarly apprehended in early languages, making lexical meaning an evolutionarily late achievement. Also, this abstraction is not fully apprehended in all communities or social groups, and may be inhibited or impaired in Asperger patients (see my discussion of literal meaning in Chap. 6, Sect. 6.8.2).
At this point, I will return to the association between the declarative system and the lexical semantic system. By way of definition, declarative knowledge requires a linguistic or verbal expression, and yet it is independent of the use of particular tokens; for example, the domestic animal commonly used in hunting or as a pet may be expressed as dog, hund or chien, or by a particular configuration of hand movements in sign language. This is the essence of Ryle’s category of “knowing that,” and hence declarative knowledge and lexical meaning are similar cognitive phenomena that most likely depend on the same neurobiological substrata.
The present conception of linguistic meaning as a social rather than an individualistic property agrees with a position taken by Deacon (1997) when he states that “Languages are social and cultural entities that have evolved with respect to the forces of selection imposed by human users” (p. 110). Thus, languages belong to a universe of phenomena “outside the brain” which is the product of a sociocultural evolution: “the other evolution.” At the same time this evolution is directed by the learning capacities of children; thus, language structures outside the brain “embody the predispositions of children’s minds” (p. 109). Although he described languages as the product of “the other evolution,” he also stressed that languages are entirely dependent on human users. By analogy with a parasitic organism or a virus that is dependent on an organic host, he described the relationship between languages and people as a symbiotic relationship (see also Chap. 6, Sect. 6.8).
I find Deacon’s parasitic model, which implicates a reference to “the other evolution,” highly instructive for any inquiry into the evolution of language. More specifically, it also supports the claim I have made that a reference to a third user of language, representing a language community, is necessary to fully describe the linguistic communication between two interacting persons. This reference I consider to be a consequence of the conviction that language behavior and language acquisition do not merely originate from within the individual brain. As long as we consider language behavior merely as motor action, we might easily overlook this requirement, but in dealing with more abstract characteristics of language, a reference to the language structures of a community becomes absolutely essential. This is why lexical meaning, in particular, requires a reference to “outside” language structures.

References
Arbib, M. A. (2009). Evolving the language ready brain and the social mechanisms
that support language. Journal of Communication Disorders, 42, 263–271.
Ashby, F. G., Alfonso-Reese, L. A., Turken, A. U., & Waldron, E. N. (1998). A
neuropsychological theory of multiple systems in category learning.
Psychological Review, 105, 442–481.
Bunge, S. A., Wendelken, C., Badre, D., & Wagner, A. D. (2005). Analogical
reasoning and prefrontal cortex: Evidence for separable retrieval and integra-
tion mechanisms. Cerebral Cortex, 15, 239–249.
Cardillo, E. R., Aydelott, J., Matthews, P. M., & Devlin, J. T. (2004). Left infe-
rior prefrontal cortex activity reflects inhibitory rather than facilitatory prim-
ing. Journal of Cognitive Neuroscience, 16, 1552–1561.
Clifford, A., Franklin, A., Davies, I.  R. L., & Holmes, A. (2009).
Electrophysiological markers of categorical perception of color in 7 month
old infants. Brain and Cognition, 71, 165–172.
Deacon, T. (1997). The symbolic species: The co-evolution of language and the brain. London: Penguin Books.
Devlin, J. T., Matthews, P. M., & Rushworth, M. F. (2003). Semantic process-
ing in the left inferior prefrontal cortex: A combined functional magnetic
resonance imaging and transcranial magnetic stimulation study. Journal of
Cognitive Neuroscience, 15, 71–84.
Emmorey, K. (2002). Language, cognition, and the brain: Insights from sign lan-
guage research. Mahwah, NJ: Lawrence Erlbaum Associates.
Eysenck, M. W., & Keane, M. T. (2000). Cognitive psychology: A student’s handbook. Hove: Psychology Press.
Fay, N., Garrod, S., & Roberts, L. (2008). The fitness and functionality of cul-
turally evolved communication systems. Philosophical Transactions of the
Royal Society B-Biological Sciences, 363, 3553–3561.
Fay, N., Garrod, S., Roberts, L., & Swoboda, N. (2010). The interactive evolu-
tion of human communication systems. Cognitive Science, 34, 351–386.
Fitch, W. T. (2010). The evolution of language. Cambridge: Cambridge University
Press.
Fodor, J. A. (1983). The modularity of mind. Cambridge, MA: MIT Press.
Franklin, A., Pilling, M., & Davies, I. R. L. (2005). The nature of infant colour
categorization: Evidence from eye-movements on a target detection task.
Journal of Experimental Child Psychology, 91, 227–248.
Friederici, A. D., & Singer, W. (2015). Grounding language processing on basic
neurophysiological principles. Trends in Cognitive Sciences, 19, 329–338.
Fuster, J. M. (2002). Frontal lobe and cognitive development. Journal of Neurocytology, 31, 373–385.
Galantucci, B. (2005). An experimental study of the emergence of human com-
munication systems. Cognitive Science, 29, 737–767.
Grice, H. P. (1957). Meaning. Philosophical Review, 66, 377–388.
Hoffman, P., Lambon Ralph, M. A., & Rogers, T. T. (2012). Semantic diversity:
A measure of semantic ambiguity based on variability in the contextual usage
of words. Behavior Research Methods, 45, 718–730.
Jung-Beeman, M. (2005). Bilateral brain processes for comprehending natural
language. Trends in Cognitive Sciences, 9, 512–518.
Kemmerer, D., & Gonzales-Castilla, J. (2010). The two-level theory of word
meaning: An approach to integrating the semantics of action with the mirror
neuron theory. Brain and Language, 112, 54–76.
Lyons, J. (1977). Semantics (Vol. 1). Cambridge: Cambridge University Press.
Manns, J.  R., & Eichenbaum, H. (2006). Evolution of declarative memory.
Hippocampus, 16, 795–808.
Mareschal, D., & Quinn, P.  C. (2001). Categorization in infancy. Trends in
Cognitive Sciences, 5, 443–450.
Ong, W. (1982). Orality and literacy: The technologizing of the word. London:
Methuen.
Parry, A. (1971). Introduction. In A. Parry (Ed.), The making of Homeric verse: The collected papers of Milman Parry. Oxford: Clarendon Press.
Pylyshyn, Z. (1984). Computation and cognition. Cambridge, MA: MIT
Press.
Quian Quiroga, R., Reddy, L., Kreiman, G., & Fried, I. (2005). Invariant visual
representation by single neurons in the human brain. Nature, 435,
1102–1107.
Samson, D., Connolly, C., & Humphreys, G.  W. (2007). When “happy”
means “sad”: Neurophysiological evidence for the right prefrontal cortex
contribution to executive semantic processing. Neuropsychologia, 45,
896–904.
Scott-Phillips, T. C. (2015). Meaning in animal and human communication.
Animal Cognition, 18, 801–805.
Seger, C. A. (1994). Implicit learning. Psychological Bulletin, 115, 163–196.
Senghas, A. (2005). Language emergence: Clues from a new Bedouin Sign
Language. Current Biology, 15, 463–465.
Smith, J. D., Ashby, F. G., Berg, M. E., Murphy, M. S., Spiering, B., Cook,
R. G., et al. (2011). Pigeons’ categorization may be exclusively nonanalytic.
Psychonomic Bulletin and Review, 18, 414–421.
Smith, J. D., Berg, M. E., Cook, R. G., Murphy, M. S., Boomer, J., Spiering, B.,
et  al. (2012). Implicit and explicit categorization: A tale of four species.
Neuroscience and Biobehavioral Reviews, 36, 2355–2369.
Smith, J. D., Crossley, M. J., Boomer, J., Church, B. A., Beran, M. J., & Ashby,
F. G. (2012). Implicit and explicit category learning by capuchin monkeys
(Cebus apella). Journal of Comparative Psychology, 126, 294–304.
Thiel, A., Haupt, W. F., Habedank, B., Winhuisen, L., Herholtz, K., Kessler, J.,
et al. (2005). Neuroimaging-guided rTMS of the left inferior frontal gyrus
interferes with repetition priming. NeuroImage, 25, 815–823.
Toni, I., de Lange, F.  P., Noordzij, M.  L., & Hagoort, P. (2008). Language
beyond action. Journal of Physiology – Paris, 102, 71–79.
Wagner, A.  D., Pare-Blagoev, E.  J., Clark, J., & Poldrack, R.  A. (2001).
Recovering meaning: Left prefrontal cortex guides controlled semantic
retrieval. Neuron, 31, 329–338.
Whorf, B. L. (1956). Language, thought and reality: Selected writings of Benjamin Lee Whorf. New York: John Wiley.
Wilkinson, K. M., & Hennig, S. (2007). The state of research and practice in
augmentative and alternative communication for children with developmen-
tal/intellectual disabilities. Mental Retardation and Developmental Disabilities
Research Reviews, 13, 58–69.
6 Literacy and Language

Literacy is a late achievement in the evolution of language, both when considered as an individual skill and when considered as a community characteristic (a literate society). The first graphic forms from which Western writing systems may have evolved date back to the ancient Sumerians, living some 6000 years ago (which is also as far back as historical linguists have been able to trace the known protolanguages). Language as a biological capacity is much older. Thus, archeologists and comparative biologists believe that humans developed a language capacity at least 100,000 years ago. So why is a full chapter allocated to a discussion of literacy, when it covers only a small portion of the evolutionary history of language?
First, the answer is that language, as a biological capacity, may have changed more rapidly in the later epochs of human history, and therefore the present chapter will deal with the “literate part” of language evolution. Important characteristics of modern languages may have evolved together with the development of writing. In particular, I assume that the reflexivity of language, and metalinguistic knowledge in general, is associated with the invention of writing. The Swedish linguist Per Linell (2005) also argued that modern linguistics is severely biased by written language. Thus, we may ask whether linguistic structures are to some extent the products of a literate mind, or whether they existed prior to and independently of the invention of writing.
In retrospect, meta-linguistic abilities might seem to be important pre-
requisites to the invention of writing, but these abilities may also have
evolved as the products of writing. Certainly, technologies of writing,
and hence literacy, may have had a comprehensive impact on the human
mind. Thus, the great civilizations, in both the old and the new world,
made their appearances when their languages became enriched with a
technology of writing. Today, literacy is an important precondition to the
development of social and cultural institutions in any society.
Second, the transition from pre-literate to literate societies involved new demands of learning, which were successfully met only to the extent that individuals were given appropriate schooling or other educational opportunities. Some people have not been capable of meeting these demands to the standards set by their communities. Their problems have commonly been described as communication disorders, which comprise both language impairments and reading difficulties or dyslexia. In the DSM-5 (Diagnostic and Statistical Manual) language impairments (for example, SLI) are clearly distinguished from dyslexia, which is a specific learning disorder. In view of the great overlap between symptoms of the two disorders, I find it difficult to make a sharp terminological distinction between the two. In language impairments the clinical concerns are general oral language disorders, whereas in dyslexia the concerns are about reading and writing difficulties. However, both may be associated with the new demands of learning that came with the rise of literacy in modern societies. Literacy has changed language, both oral and written; hence, the present chapter is warranted also in view of the clinical objectives of my work.
In this chapter, I will show that the effects of literacy are not limited to language per se, but extend more generally to cognition (Sect. 6.4 below). First I will briefly describe the major events which led to the invention of writing by the Sumerians, including a primitive form of grammar. Against this background I will review some early discussions of what writing represents: Do written characters represent words? Did written language change our conceptions of words? Next I will briefly describe the major writing systems, in which the written characters are said to represent different levels of language. Here the question to be discussed is whether there exists an optimal writing system/orthography.

6.1 At the Threshold of Writing


The “book-keeping system” invented to keep track of animals in the herd
by the Sumerians about the fourth millennium bc has often been men-
tioned as a precursor to writing. Pre-writing may also have evolved as the
gradual standardization of paintings from the Paleolithic era to histori-
cal times, but it was this book-keeping system that most likely gave rise
to a “symbolic awareness” by humans. According to Schmandt-Besserat
(1987), tokens of different shapes and markings (“count stones”) were
used to record the number of animals. New tokens were added for ani-
mals born in the spring, and later the appropriate number of tokens was
removed for animals slaughtered. In a further advancement of the system,
the count stones were stored in so-called “bullae”; that is, earthen vessels.
These were properly sealed to prevent their contents from being tampered
with. To determine the contents of the vessel from the outside, while still
damp the vessels were stamped by impressions representing the stones
deposited inside.
The surface of the vessels had become a kind of writing surface, and
the impressions marked on them were symbols that stood for symbols,
namely the count stones inside the vessels. This is why many researchers
have argued that the development of the new book-keeping system may
have triggered a symbolic awareness among the Sumerians.
There are other factors that may have motivated this practice. Ehlich
(1983) described the book-keeping system as an example of social problem
solving. In a pre-literate culture, socially important texts were preserved
by repetitive speech acts. However, there were speech acts associated with
economic transactions that could not so easily be repeated across situa-
tions. The promulgation of such acts was not possible without a tech-
nology that made speech permanent. The purpose of new book-keeping
practices was to overcome the evanescence of speech, but the bonus effect
of these practices was the invaluable ability of symbolic awareness.
As mentioned in the Introduction, and later in Chap. 3, Sect. 3.1.1,
Schmandt-Besserat (1987) reported that the invention of number writing
also coincided with the invention of “syntactic writing.” This is an his-
torical event which may have provided a cognitive basis for the ensuing
development of language. In the beginning, tokens represented objects
by a simple one-to-one correspondence. For example, four sheep were
represented by four marks on a stick, and distinct tokens for different
types of objects were applied. Later, the four tokens were replaced by two
tokens, one representing sheep and one the number of tallies, a proce-
dure that according to Olson, may well have allowed the development of
an abstract number concept. In my opinion, the more important effect
of this invention is the awareness that a string of tokens can be given a
syntactic structure. It may be argued that syntax of speech was as old as
language itself, but the meta-linguistic awareness of this aspect may have
arisen with the invention of writing, in particular with the use of syn-
tactic writing. Once the awareness of syntax developed, the generativity
of syntax may have gained momentum, and hence we may ask whether
Humboldt’s principle of discrete infinity applies to literate languages only.
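The logic of this compression can be made concrete in a short sketch. The example below is my own illustration of the principle described by Schmandt-Besserat and Olson, not part of their accounts; the animal names and counts are invented.

```python
# Illustrative sketch: from one-to-one token records to "syntactic"
# type-plus-count records. The data are invented examples.

from collections import Counter

# One-to-one correspondence: one token per animal in the herd.
tokens = ["sheep", "sheep", "sheep", "sheep", "goat", "goat"]

# Syntactic writing: each record pairs a type sign with a number sign.
records = Counter(tokens)

for kind, count in records.items():
    print(count, kind)  # "4 sheep", "2 goat": two signs replace many tokens
```

The pairing of a number sign with a type sign is what gives the string of tokens a rudimentary syntax; the quantity has become an abstract element that can combine with any type.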
The syntactic writing, which according to Olson (1998) was found
on a tablet from Ur in 2960 bc, describes the contents of a storehouse
and can be read in any language, like the Arabic numerals 1, 2, 3, and
so on. The signs do not necessarily represent words in the spoken language.
According to Olson (1998) the sign for bee does not necessarily represent
the word “bee,” only the object bee. “But if the sign is now appropriated
to represent the verb “be,” the sign has become a word sign, a logograph.
The principle involved in this case is that of the rebus, the use of a sign
which normally represents one thing to represent a linguistic entity that
sounds the same; this entity is a word” (p. 75). Olson concluded that a
script consisting of such word signs combined by syntax may in principle
represent everything that can be said.
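Stated schematically, the rebus principle borrows the sign of an object for any linguistic entity that sounds the same. The toy sketch below is my own illustration of the mechanism Olson describes; the signs and the simplified sound codes are invented.

```python
# A minimal sketch of the rebus principle; signs, words, and "sound
# codes" are invented for illustration.

object_signs = {"bee": "SIGN-BEE", "eye": "SIGN-EYE"}  # signs for things
sound_of = {"bee": "bi", "be": "bi", "eye": "ai", "I": "ai"}

def rebus(word):
    """Borrow the sign of an object that sounds like the target word."""
    for obj, sign in object_signs.items():
        if sound_of[obj] == sound_of[word]:
            return sign  # the object sign now functions as a word sign
    raise KeyError(word)

print(rebus("be"))  # SIGN-BEE: the sign for an object now writes a verb
print(rebus("I"))   # SIGN-EYE
```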
Granted that this description of the invention of writing in the Near
East is correct, words became abstract entities. Hence, Olson believed
that the Sumerian script would “spell the death of ‘word’ magic or more
precisely ‘name’ magic” (p. 75). This may be correct for the learned people
who took part in the invention and use of the new script. Words were no
longer emblems; they were distinguished from objects and existed only as
entities in the human mind. This is an elitist interpretation of the inven-
tion of writing and may not apply to many others that gradually became
literate. The extent to which written language has served as a basis for the
development of metacognition has probably differed among members of
linguistic communities in historical times. For some people, word magic
may in a way have been “transferred” to written language (as if magic
were attributed to written texts). I shall return to this claim in Sect. 6.7; in
the following, however, I will give a description of writing systems, which
will serve as a further elaboration of the concept of literacy.

6.2 Writing Systems


In linguistics, writing has commonly been classified into logographies, sylla-
baries and alphabets. These are writing systems that are said to represent dif-
ferent levels of spoken language; that is, logographies represent language
at the morphemic level, and thus a logography such as written Chinese
has also been called a morphography. Syllabaries have also been called pho-
nographies, which represent the sounds of syllables. Alphabets
represent language at the phonemic level.
The standard compound characters of the Chinese written language
represent the best-known example of a logography. This script has been
in use for more than 3000 years and represents the longest uninterrupted
writing tradition in the world. The Chinese languages form a number of
mutually unintelligible dialects; however, characters in the written lan-
guage represent meaning directly and may therefore be read by most peo-
ple regardless of their spoken dialect. Thus, Chinese script has commonly
been called ideographic, but in fact only a small proportion of characters
which survived from ancient texts can be termed ideographs. For 80 %
of the logographs, the relation between the character on the one side and
both pronunciation and meaning on the other needs some qualifying
comments: Most of these characters consist of two parts, one referring
to the meaning, the semantic radical, and another referring to pronun-
ciation, the phonetic compound. (Sometimes these characters are also
termed phonograms.) The phonetic compound does not map on to sound
in a one-to-one relationship; it can only give an approximate indication
of pronunciation. Ancient Chinese also included pictographs that repre-
sented objects by visual similarity. Today only a small percent-
age of characters constitute pictographs; most of them have become
stylized to an extent that makes their similarity to objects imperceptible.
The reading of logographs can be compared to the reading of an
alphabetic writing system in two ways: in terms of the statistical properties of
orthography-to-phonology mapping (O – P) and orthography-to-
semantics mapping (O – S). In alphabetic systems the O – P mapping
between character and phoneme varies between the transparent lan-
guages such as Italian and Serbo-Croatian on one side and the opaque
orthography of English on the other. However, in general the O  – P
mapping in alphabetic systems is more systematic than in Chinese script;
furthermore O – P mapping is between character and syllable in Chinese
(not between character and phoneme). On the other hand, the semantic
radical in Chinese represents semantic categories, and therefore the O – S
mapping is more systematic in Chinese than in languages with an alpha-
betic writing system (see Zhao et al., 2014).
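The notion of a more or less systematic mapping can be illustrated with a deliberately artificial example. In the sketch below, the grapheme-to-phoneme sets are simplified inventions, not real linguistic data; a transparent orthography assigns one pronunciation per grapheme, while an opaque one assigns several.

```python
# Hedged illustration of orthographic transparency; the mappings are
# simplified examples, not real linguistic data.

transparent = {"c": {"k"}, "a": {"a"}, "s": {"s"}}              # Italian-like
opaque = {"ough": {"off", "uff", "ow", "oh"}, "c": {"k", "s"}}  # English-like

def mean_ambiguity(mapping):
    """Average number of pronunciations per grapheme (1.0 = fully regular)."""
    return sum(len(phones) for phones in mapping.values()) / len(mapping)

print(mean_ambiguity(transparent))  # 1.0
print(mean_ambiguity(opaque))       # 3.0
```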
Fluent readers of Chinese automatically scan the configuration of the
logograph to access its semantic meaning. Does this mean that Chinese
readers are more subject to semantic interference in a Stroop task? Tzeng
and Wang (1983) asked their subjects who were fluent Chinese read-
ers, and other subjects who were fluent readers of either a syllabic or
alphabetic script, to name the colors of ink in which color names were
printed, once with a congruent ink color and once with an incongruent
ink color. As a control they also asked their participants to name the col-
ors on a series of different color patches where no color name appeared.
The Stroop effect means that it takes longer to name a series of colors
in incongruently colored ink than a series of unmarked color patches.
They found that Chinese logographs produced greater interference in the
Stroop task than any other type of script. In a control experiment, they
also found that the stronger Stroop effect with the Chinese script did not
depend on whether the color names were read aloud.
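For readers unfamiliar with the paradigm, the interference score is simply the extra time needed to name the ink colors of incongruent color words relative to neutral patches. The sketch below shows the computation with invented numbers; these are not Tzeng and Wang's data.

```python
# Illustrative computation of a Stroop interference score; the naming
# times are hypothetical, not Tzeng and Wang's data.

def stroop_interference(rt_incongruent_ms, rt_patch_ms):
    """Extra time to name incongruently colored words, relative to patches."""
    return rt_incongruent_ms - rt_patch_ms

# Hypothetical mean naming times (ms) per item:
print(stroop_interference(820, 640))  # e.g. logographic readers: 180 ms
print(stroop_interference(760, 640))  # e.g. alphabetic readers: 120 ms
```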
Logographic script is also part of other written languages, for example,
the kanji of Japanese, which were “borrowed” from Chinese and hence
were comprehensible by both Chinese and Japanese readers. Logographs
were also mixed in with Hittite hieroglyphs and formed part of the Maya
writing system. Because the latter combined logographs and phonetic
symbols, it has more properly been called a logosyllabic writing system.
The Maya script was preceded by a number of other writing systems
developed by the Zapotecs and the Olmecs (see Coe, 1992, 2002).
Examples of syllabaries in use today are the kana systems of Japanese
and the Korean hangul, the latter invented in the fifteenth century by
King Sejong. In Japan the katakana and the hiragana form the two syl-
labic kana systems. The former has a more print-like style and
is used for writing foreign words, for example, “television,” while other
words may be written in the more cursive hiragana. Alphabets
developed from ancient scripts in Mesopotamia, Egypt and Crete,
“invaded” modern societies, and because they represent speech at the
phonemic level, they have been considered the most perfect system of
writing.
Gelb (1963) viewed alphabetic writing as a culmination of a refine-
ment process towards an optimal representation of language. Similarly,
the evolutionary sequence of writing systems was said to be:

1. Drawings
2. Ideographs
3. Logographs
4. Syllabic scripts
5. Alphabets

In support of this view, it has been held that representation of the
smallest segments of speech is inherently desirable. The phonemes rep-
resented by alphabetic writing constitute smaller segments than those
represented in any other orthography.
Logographic and syllabic scripts constitute viable orthographies today;
they are not outdated systems of communication that hamper cultural
and technological innovations. General MacArthur, after the U.S. vic-
tory over Japan in 1945, argued that development of an alphabetic script
for the Japanese, followed by an extensive new educational program,
would be a prerequisite for the introduction of democracy and for tech-
nological and economic development. Post-war history shows that he
was completely mistaken. In the same period, China had a hard time
fighting illiteracy, and it may still be questioned whether their problems
are grounded in part in the continued use of a logographic script. A
standardization program for this script, and the introduction of pinyin,
an alphabetic transcription system for the Chinese language, have been
mentioned as attempts to remedy the social and educational problems of
illiteracy. In any case, China’s economic and technological success in mod-
ern times does not give support to a claim that logographic writing has
impeded development.
Henderson (1984) mentioned two principal objections to the con-
cept of optimality in orthography: Criteria of optimality do not take
into account the purpose for which a writing system has been developed.
Arabic numerals are logographic symbols in that they represent
meaning directly, independently of phonemic processing. They serve arith-
metic calculation better than Roman numerals. Different scripts repre-
senting spoken languages may also be evaluated in relation to the purpose
of representing a particular language. Thus, the efficiency of orthography
is constrained by the nature of the spoken language for which it is being
used. As an example, Henderson mentioned that “the large number of
consonant clusters in Korean is better fitted by syllographs that are con-
structed out of alphabetic elements” (p. 13).
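Henderson's point about purpose can be illustrated computationally. Arabic numerals map directly onto quantities that support positional arithmetic, whereas Roman numerals must first be converted; the conversion sketch below is my own illustration, not Henderson's.

```python
# Illustration: Roman numerals must be converted into quantities before
# calculation, whereas Arabic numerals support arithmetic directly.

ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral):
    """Convert a Roman numeral, handling subtractive pairs like IX = 9."""
    values = [ROMAN[ch] for ch in numeral]
    total = 0
    for value, nxt in zip(values, values[1:] + [0]):
        total += -value if value < nxt else value
    return total

print(roman_to_int("XIV") + roman_to_int("IX"))  # 14 + 9 = 23
```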
According to Olson (1998) the conception of written language as a
representation of spoken language is fundamentally wrong. Instead he
argued that written language serves as a model of language. Olson con-
sidered writing systems to be communicational systems in their own
right, and although inadequate, they serve as models of spoken language:
“The view I shall elaborate…is that writing systems provide the concepts
and categories for thinking about the structure of spoken language rather
than the reverse. Awareness of linguistic structure is a product of a writing
system not a precondition for its development” (p. 68). I fully agree with
Olson on this point; thus, I will argue that meta-linguistics is a product of
literacy, not vice versa. Similarly, Humboldt’s principle of discrete infinity
leading to the conception of universal grammar (UG) is also a product of
literacy. The relationship between writing and oral language also means
that a gradual transformation of written language to conform to com-
mon speech is not a desirable goal. This process may impede the efficacy of
written language as a communicational system, and have long-term social
and cultural consequences. On the other hand, written language should
not differ too much from the structure of spoken language, because that
would interfere with educational programs and objectives.
Finally, it should be stressed that the writing systems, from logography
to alphabetic writing, make different demands on the educational sys-
tems, and these demands have various socio-cultural effects which are the
subject of a different discussion.

6.3 Are Brain Regions Differently Allocated to Reading of Chinese and English?
The Tzeng and Wang study cited above shows that logographic and
alphabetic reading are associated with different cognitive processing.
Also, Sasanuma (1974) reported that kanji and kana reading are differ-
ently impaired in Japanese patients suffering from aphasia. Does this
mean that brain regions are differently allocated in the reading of dif-
ferent writing systems? Siok, Perfetti, Jin, and Tan (2004) have shown
that, whereas dyslexia depends on a reduction of grey matter volume in
the left temporal-parietal and occipital regions in readers of English, the
disorder depends on a similar reduction of grey matter in the left mid-
dle frontal gyrus in dyslexic readers of Chinese (see also Siok, Niu, Jin,
Perfetti, and Tan, 2008). Tan, Laird, Li, and Fox (2005) have also shown
this region to be important for reading and writing Chinese characters.
The stronger involvement of premotor cortex in Chinese readers can be
seen in relation to the rote learning practices adopted in Chinese schools.
The left middle frontal gyrus is thought to be involved in the allocation of
resources for working memory, a mechanism which sustains the associa-
tion between reading performance and handwriting skills. Therefore, the
functional link between reading and writing is stronger for logographic
writing systems than for alphabetic scripts. Does this mean that the left
hemisphere systems involved in reading differ for alphabetic and logo-
graphic writing systems?
Zhao et al. (2014) found great overlaps of the left hemisphere systems
responsible for reading in Chinese and alphabetic languages, with the
exception of the middle frontal gyrus, which is uniquely recruited for the
reading of Chinese characters. This supports the observations reported by
Tan et al. In English, different regions are involved in O – P and O – S
mapping, whereas an fMRI study of Chinese readers showed a great over-
lap in the regions activated for the two types of mapping. In contrast to
the stronger activation of regions involved in O – S mapping in alphabetic
reading, their training study of Chinese readers showed a balanced neural
division of labor for the processing of phonological and semantic aspects.
Notice that participants in the Zhao et al. study were native Mandarin
speakers who ranged in age from 19 to 27 years. Thus, the balanced division
of labor for phonology and semantics was demonstrated by mature and
competent readers of Chinese; we do not know whether the results gener-
alize to children in beginning classes of reading instruction. However, the
balanced division of labor by the Chinese readers shows that representa-
tion of semantic categories in the visual form of the logograph makes
semantic processing easier and consequently that the writing system can
be comprehended by readers of different oral languages/Chinese dialects.

6.4 Trends in Cognitive Research on Illiteracy


In the following I will address two problems which can be raised across
the writing systems. 1) In what ways does the human mind change dur-
ing the process of learning to read? 2) How did language as a human-spe-
cific capacity change with the invention of writing and the ensuing growth
of literacy? The latter problem is an extremely difficult one which can be
approached only indirectly. Part of this problem may be addressed by
studying the cognitive and communicative effects of illiteracy.
With regard to the first problem, I shall turn to Olson (1998), who reported
a number of studies about the ways children of different ages interpret
verbal messages. Are pre-school and school children able to distinguish
what is said from what is meant by a verbal statement presented in narra-
tives? Children below the age of 6 years are generally incapable of making
this distinction. Torrence, Lee, and Olson (1985) asked children varying
in age from three to ten years whether “Teddy Bear” should be awarded
a sticker based on Teddy’s answer to a particular request. In the
verbatim trials Teddy’s task was to repeat exactly what the other character,
the Big Bird, had just said. In the paraphrase trials, Teddy’s task was to
say what the other character wanted, and in these trials it did not matter
whether he used the same words or not. Practice trials were given, and the
order of verbatim and paraphrase trials was counterbalanced. Children
below the age of four were unable to answer correctly on both the verbatim
and paraphrase trials. Three-quarters of the four- and five-year-
olds correctly judged the paraphrase trials but failed on the verbatim tri-
als; only children of six years or older were capable of judging both types
of trials correctly. Olson (1998) concluded that the youngest children
showed a “conflation of what is said with what is meant” (p. 127).
In the Torrence et al. study, age and formal schooling co-varied. Thus,
we cannot say what caused the ability to distinguish verbatim from
intended meaning, but this distinction is nonetheless a prominent char-
acteristic of most literate ways of thinking. Is it a universal characteristic
of literacy and thus independent of signaling modality? I do not know
of any studies of sign users that address the same problem. A distinction
between what is a “verbatim” message of signs and what is meant by the
message will be equally important among users of a sign language. In sign
language, the “verbatim” message will correspond to the specific sign-
expression, while the intended meaning may be a different one. We may,
therefore, talk about a general distinction between the verbatim meaning
associated with the form of expression on the one side and the intended
meaning as an abstraction from the form of expression on the other. The
ability to make this distinction is a cognitive achievement that is observed
once the child is old enough and has been adequately exposed to lan-
guage, and this exposure may require reading instruction. The
distinction is also implicit in Lyons’s description of the reciprocity of language,
and furthermore I find it to be a functional prerequisite to
the acquisition of a ToM.
Literacy may have affected the ability to comprehend metaphoric
and figurative language, but the mechanisms for this effect are largely
unknown. Historical changes in language capacity are mainly a matter
of speculation. Thus, we have no direct evidence for the way language
changed as an effect of the introduction of writing in antique Greece,
but classical literacy studies of the transition from an oral to a written
culture (Goody and Watt, 1968; Havelock, 1976, 1982; Ong, 1982)
have initiated an interesting debate on the issue (see also Olson, 1998),
parts of which were presented in the preceding chapter, Sect. 5.3. Here
I argued that the translatability of languages depended on the capacity
to read and write. This capacity meant that language can be treated as a
constellation of “objects,” rather than vocal (or manual) performances.
In more recent years, the quest for other empirical evidence has trig-
gered new research within psychological, educational and biological sci-
ences. In line with the issue raised by Ong and others, modern research
has addressed the problem of the cognitive consequences of illiteracy.
Performance on a number of cognitive tests and brain scanning data
from illiterate persons have been compared to similar data from liter-
ate persons. The problem with these studies is the definition of illiter-
acy. Commonly, illiteracy has been defined by lack of formal schooling
(Kosmidis, Tsapkini, Folia, Vlahou, and Kiosseoglou, 2004; Scribner &
Cole, 1981). In Kosmidis et al.’s work, the illiterate group consisted of
elderly women (M = 71.95 years) who never attended school due to liv-
ing in a poverty-stricken, agrarian society. They were able to name a few
letters, and according to their self-reports, illiteracy did not prevent sat-
isfactory integration into the local community. Healthy individuals who
are illiterate, but have lived in a literate society over years, and who have
been able to take care of themselves in nondemanding manual work, may
yet have been exposed to a literate world directly or indirectly in interac-
tions with other community members. The question is whether illiteracy
should be defined merely by educational criteria, or by educational crite-
ria in combination with socio-cultural characteristics.
In principle, there are two reasons for illiteracy in contemporary soci-
eties. 1) Social reasons: Poverty (as by the illiterate group in Kosmidis
et al.), absence of schools, sociocultural factors that cause disapproval of
education, child labor, and so on. 2) Personal reasons: Intellectual disabil-
ity, motor and sensory disorders, various central nervous system patholo-
gies that interfere with learning and language acquisition. Ardila et al.
(2010), in a recent literature review, argued that the “two main classes of
reasons for illiteracy present potential confounders for research” (p. 690):
People that are illiterate due to social reasons generally belong to a lower
socioeconomic class, have more health problems and are less exposed to
media of communication. Those who are illiterate due to personal reasons
are more likely to be cognitively or neurologically impaired. Ardila et al.
suggested that a way of overcoming these difficulties is to study the effect
of literacy by comparing individuals with themselves rather than with a
control group of other individuals; for example, by studying “adults before and
after they acquire literacy.”
In the following I will focus on the neuropsychological differences
between literate and illiterate persons. The studies reviewed by Ardila
et al. (2010) show that these groups have been subdivided into the liter-
ates, the functional illiterates, and in one study also a literate nonschooled
group. Therefore, these studies address the cognitive and linguistic effects
of schooling in contemporary societies, but do not generally target the
evolutionary effects of literacy. (Here literacy studies may form an impor-
tant source of knowledge.) However, knowledge about “cognition with-
out reading” may provide some clues to studies of language evolution,
and therefore without claiming a full coverage, I want to present some
important trends from this field of research.
Do measures of brain functions differ between literate and illiterate
persons? fMRI has demonstrated differences of brain activation dur-
ing language-based tests for the two groups; most clearly during rep-
etition of pseudo-words. Thus, Castro-Caldas, Peterson, Reis, Askelof,
and Ingvar (1998) concluded that an activation of brain regions (i.e.,
the left hemisphere perisylvian area) in the illiterate group was insuf-
ficient for processing phonological segmentation. Carreiras et al. (2009)
compared structural brain scans of people who learned to read as adults
with matched illiterates. They found that the splenium of the callosum
in the former group contained more white matter, whereas other areas
such as the “bilateral angular, dorsal occipital, middle temporal, and left
supra-marginal and superior temporal gyri” contained more grey matter.
At present we can only speculate about the complete functional conse-
quences of these differences. In any case, recent research has demonstrated
both differences in functional activation and functional architecture in
the brains of literate and illiterate people.
Literacy does not change the left hemisphere dominance for lan-
guage. However, left-damaged illiterates do not present the same num-
ber of errors in aphasia tests as left-damaged literates, and conversely
right-damaged illiterates perform more poorly on aphasia tests than
right-damaged literates (Lecours et al., 1988). Therefore, the right
hemisphere is likely to have a stronger involvement in language process-
ing in illiterates compared to literate individuals.
It has been demonstrated that general cognitive functioning differs
between illiterate and literate people. This finding does not come as a
surprise because the Mini-Mental State Examination, commonly used
for screening against dementia, is clearly biased against those who are
illiterate. However, illiterates have scored lower than literates, not only on
items related to writing but also on items that show orientation to time.
Furthermore, it has been shown that illiterates perform more poorly than liter-
ates on a diversity of motor tests. It remains to be shown whether this differ-
ence in motor performance is correlated with practice in writing.
How does illiteracy affect memory and language? In both areas it may
be difficult to design tasks that do not bias against illiterate persons.
Thus, conventional neuropsychological memory tests are generally tests
of explicit memory, such as wordlist learning, free recall, forward and
backward digit span. Illiterate persons generally perform more poorly on
all of these tests compared to schooled literates. Interestingly, Eslinger
and Grattan (1993) observed a remarkable discrepancy between poor
free recall and good recognition in illiterates in an object learning task.
They argued that the illiterate participants failed to organize the mate-
rial to be learned and lacked retrieval strategies that are critical for free
recall. Analytic strategies are most likely enhanced when the child learns
to read, or more generally they are the products of successful language
development. Lacking those strategies will necessarily hamper explicit
and declarative memory.
Phonological segmentation, which is important in learning to read
and subsequently in the perception and use of pseudo-words and low-
frequency words, requires analytic strategies. Hence this aspect of language
is most likely impaired in illiterates compared to literates. Kosmidis et al.
(2004) have shown that people who have not learned the correspondences
between graphemes and phonemes (O – P mapping) have great difficul-
ties in repeating pseudo-words. They perform on a par with schooled lit-
erates when high frequency words are presented, not with low-frequency
words or pseudo-words. Also, the vocabulary size of schooled literates is
most probably larger than that of illiterates, though hard data on the issue
is missing. Reis, Peterson, Castro-Caldas, and Ingvar (2001) did not find
any difference in the ability to name real objects, but illiterates performed
more poorly in a task of naming photographs and showed an even stronger dis-
advantage in naming drawings. Similarly, illiterates have great difficul-
ties in copying Bender drawings. In general, visually guided hand motor
behavior seems to depend on the acquisition of literacy.
Do differences in vocabulary size rest on a language-learning disor-
der in illiterate persons? In Chap. 2, I mentioned Baddeley, Gathercole,
and Papagno (1998) who argued that the phonological loop, which is a
component of the Baddeley and Hitch working memory model, serves
as a language learning device. The loop has three subcomponents: one
of them is the phonological store; spoken words or pseudo-words have
direct access to this store which holds the memory traces for a few sec-
onds. Words are therefore soon forgotten unless they are refreshed in a
subvocal rehearsal system, another subcomponent of the loop which also
receives input from a grapheme-phoneme transformation unit. Thus,
memory of visually presented words and pseudo-words also depends on
the subvocal rehearsal system, but it depends as well on O – P mapping
which is not learned by illiterate people. The capacity of the phonological
loop is commonly assessed by the short-term memory span (for words
and digits), but more directly by the nonword repetition test. Auditory
presentation of stimuli may not require O – P mapping, and therefore
illiterates should not be disadvantaged. However, learning of O – P map-
ping may affect verbal short-term memory regardless of the modality
of the presented stimuli. Both Castro-Caldas et al. and Kosmidis et al.
have demonstrated that literacy influences the capacity of the phono-
logical loop, as measured by nonword repetition tasks. It does not matter
whether we emphasize literacy or schooling because both imply the learn-
ing of O – P mapping.
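As a rough illustration of this architecture, the toy model below separates auditory input, which reaches the store directly, from visual input, which requires the grapheme–phoneme unit. It is my own sketch of the mechanism just described; the two-second decay constant and the all-or-none rehearsal refresh are simplifying assumptions chosen only for exposition.

```python
# A toy sketch of the phonological loop (Baddeley & Hitch); decay and
# rehearsal parameters are simplifying assumptions, not model estimates.

class PhonologicalLoop:
    DECAY_SECONDS = 2.0  # traces in the store fade within a few seconds

    def __init__(self, knows_o_p_mapping):
        self.store = {}                             # word -> remaining life
        self.knows_o_p_mapping = knows_o_p_mapping  # literacy-dependent unit

    def hear(self, word):
        self.store[word] = self.DECAY_SECONDS       # direct access to store

    def see(self, word):
        if self.knows_o_p_mapping:                  # written input needs the
            self.store[word] = self.DECAY_SECONDS   # grapheme-phoneme unit

    def rehearse(self, word):
        if word in self.store:                      # subvocal refresh
            self.store[word] = self.DECAY_SECONDS

    def tick(self, seconds):
        self.store = {w: t - seconds for w, t in self.store.items()
                      if t - seconds > 0}           # unrehearsed traces decay

illiterate = PhonologicalLoop(knows_o_p_mapping=False)
illiterate.hear("plok")  # an auditory pseudo-word enters the store
illiterate.see("plok")   # a written pseudo-word does not
```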
Kosmidis, Zafiri, and Politimou (2011) administered five tests of work-
ing memory and attention span to four groups of participants: illiterate,
functionally illiterate, self-educated literate, and school-educated literate.
The literate groups outperformed the illiterate groups on the digit span for-
ward and backward, sentence span, and spatial span backward tests,
whereas the literate and illiterate participants did not significantly differ
on the spatial span forward and the “Remembering a New Route” tasks.
In the literate groups, schooling gave an advantage only in the digit span
backward test, whereas “illiterate and functionally illiterate groups were
indistinguishable from each other.” The authors therefore concluded that
differences in working memory performance can be attributed to literacy
per se and not the effects of schooling.
The studies reviewed in this section give important information about
functional differences on a number of cognitive measures, and on some
brain scanning and neurobiological measures between literates and illit-
erates. [For a more detailed discussion of this research, see Ardila et al.
(2010)]. The studies reviewed here do not show any historical effects
of literacy on the evolution of language; however, they cast light on the
cognitive changes which are most likely the results of learning to read.
The effects of literacy in the community also depend on strategies in
reading education; that is, on cultural conceptions of what reading is
(Sects. 6.5.2 and 6.6 below).

6.5 The Difficult Transition to Literacy


The transition to literacy can be interpreted in two ways: 1) The histori-
cal change; that is, the sociocultural transition to literacy, from antiquity
to modern societies, when writing was introduced in the community
as a means of communication. 2) Development; that is, the acquisition
of literacy when the child learns to read and write. The sociocultural
transition has been difficult because conceptions of “reading,” and the
roles of written language and reading in the community, have varied con-
siderably in the course of time. The developmental transition has been
difficult because reading is not a “natural” human ability. In general,
it depends on schooling and formal instruction to acquire reading as a
new skill. Moreover, the success of schooling depends on strategies and
“philosophies” of education. Therefore, the developmental transition also
depends on the way sociocultural transition to literacy has taken place.
In the following I shall first address the process of learning to read, and
the way sociocultural factors have complicated and sometimes arrested
this process.
6.5.1 Reading Without Interpretation

Olson (1998) stressed that written languages are models of spoken lan-
guages, but they are also communicational systems in their own right.
Hence the purpose of writing will be to communicate semantic meaning
to the reader of written texts. However, writing is also a technology by
way of which spoken language is transformed into visual characters and
vice versa (O – P mapping). For the schooled literate, it generally makes
no difference whether we talk about written language as a communica-
tional system or a technology; he/she is making use of both aspects of
reading.
To understand the development of literacy, historically and as learn-
ing achievements by children, it may be wise to keep the two aspects of
written languages apart. The technological aspect (grapheme–phoneme
conversion) is commonly the first skill taught in schools. In some cultural
and religious contexts, this skill is considered to be the main target for
reading instruction. This is the reason why reading aloud may have been
encouraged, and may sometimes have become a necessity. Islamic fami-
lies in the West (and also in the Middle East) often send their children
to Quran schools where they are instructed to read the Holy Scripture
in Arabic. In many cases, the families and hence their children speak a
different language themselves, and may be ignorant of Arabic. The child
may still be taught to read verses from the Quran aloud. He/she may not
understand what they say (reading without interpretation) but the Imam
teacher tells him/her about the meaning of the text. This shows that it
is quite possible to teach a child to read a text aloud in a different and
incomprehensible language. When the verses are spoken with the right
voice (that is, prosodic features), and perhaps also with the right rhythmic
movements of the body, the reading is valued as a sacred act. (Of course
reading without understanding may also take place when the text is based
on the child’s own language.) This performance shows that the child has
acquired the technological aspects of reading without interpretation; that
is, the O – P mapping runs practically errorless.
Also, historically the mastery of reading as a technological skill has
been important. The Christian Bible tells the story of the Ethiopian
eunuch who was busy reading the prophet Isaiah. When asked by Philip,
the evangelist, whether he could understand the text, he replied, “How
can I unless someone guides me?” This example shows that reading in
a technological sense may have preceded interpretation. The history of
Christianity, Judaism, and Islam is full of examples of reading practices
wherein technological proficiency has been a target of learning in its own
right, or a skill that has been appreciated on a par with understanding of
the text.
Illiterate people today may, despite their lack of reading competence,
understand the general communicative function of writing, and they may
positively evaluate the importance of reading. Illiteracy today is mostly
due to poverty and lack of educational opportunities. In the early days of
writing, however, written texts may also have been looked upon as magic,
and few people may have understood their communicative function. For
centuries thereafter, some people regardless of educational opportunities
may still have failed to understand the idea of writing. When adopted,
after maybe years of apprenticeship, writing was seen by many as an exten-
sion of speech. The fact that written texts were generally read aloud, for
example, by monks reading the holy texts in the medieval monasteries,
shows that writing was taken as a representation of speech. Classical lit-
erature includes some counterexamples though. Thus, in St. Augustine’s
Confessions, the famous bishop Ambrose of Milan was said to read by
scanning the page rapidly with his eyes while his tongue remained silent.
This observation apparently surprised and impressed Augustine, because
scholars at that time generally read aloud. The problem of whether texts
were read silently or aloud in antiquity is thoroughly discussed by Knox
(1968). In any case, the misconception of writing as a representation of
speech lived among linguists until the modern era (Bloomfield, 1933).
However, representation of speech at the level of phonemes or syllables is
only partly attained in most systems of writing. In this way, logographic
writing as in Chinese is different from alphabetic writing. The level and
form of representation defines the technology of writing, not its function
as a system of communication. As long as writing was seen as an exten-
sion of speech, it also came with the same authority as speech, and when
presented as the words of God in the great religions, writing created a
feeling of awe and total submission. Writing was considered as conserved
speech, and therefore, the messages it contained did not wane, but had
“eternal validity.”
The invention of writing as a technology, and subsequently reading
as a technological skill, did not change language. This took place when
written languages became communicational systems in their own right,
fully capable of conveying semantic meaning. Writing made languages
translatable, and thereby literacy also affected the evolution of language.
This is why I consider the distinction between writing as a technology
and writing as a linguistic/communicational system so important. Does
this distinction apply to written languages only, or is it equally applicable
to a discussion of spoken languages?
The examples I have mentioned above—one from reading of the
Quran and one from reading of the Christian Bible—show that histori-
cally “technical” reading may have preceded interpretation. The concept
of reading technology may not be applicable to speech, yet spoken lan-
guages include procedural skills that form the preconditions for commu-
nication about semantic meaning. In preliterate societies, however, we
may speak about “oral literature” in the sense discussed by Ong (1982).
When recited in public this literature is being “read,” and the act of reci-
tation is itself an art, highly valued in the oral cultures.
In oral cultures the meaning of a recited poem may not have been
apprehended apart from the expressive form of recitation. The recitation
did not necessarily involve interpretation; rather, interpretation tended to
be a matter for others, say the chief of the tribe, the priest, the Imam, the
elders of the group, or even the extended community. The indigenous
people of Rapa Nui tell how important agreements, transactions, etc. were
conserved by public announcement in the extended group, and when
they were recited by a member of the group, consensus was required for assigning
an interpretation.

6.5.2 Reading Difficulties: Dyslexia and Hyperlexia

The transition to literacy has been difficult due to both cultural precon-
ceptions of reading and to constraints and cognitive deficits in the indi-
vidual. The clinical term generally used about the latter type of difficulties
is dyslexia; that is, reading difficulties in typically developing children
without neurological impairments or brain diseases. Alexia is a term
used to describe lack of reading comprehension in aphasic patients after
strokes or other brain pathology. According to DSM-5, alexia is classi-
fied as a symptom of a language disorder, aphasia, whereas dyslexia is
classified as a specific learning disorder. The reason is that alexia is associ-
ated with a brain disease, whereas dyslexia is not. Thus, it is commonly
acknowledged that Broca’s and Wernicke’s areas are involved in normal
use of language, whereas no such structures have specifically evolved for
the purpose of reading. Since writing was invented 6000 years ago, there
has not been time enough for the evolution of such structures.
The discovery of mirror neurons in the macaque brain, and putative ana-
logue structures in the human brain, has led to assumptions that humans were
pre-adapted for language before the use of well-structured languages took
place. Could a similar pre-adaptation for reading have taken place in human
evolution? As mentioned in the Introduction, Arbib (2009), who took a pre-
adaptationist view, also argued that neural structures for reading were in
place before the advent of literacy. As pointed out by Varney (2002), these
structures may previously have developed for the use of another function. By
analogy, he referred to the frequently used example of jaws, which are said to
have evolved from small bony gill supports in fish. “It took millennia for the
working jaw to appear,” which happened without any “premeditation, plan-
ning or intent,” while its adaptive value may have had an immediate effect
on its ability to produce surviving progeny (p. 4). Similarly, for the case of
reading, the central nervous system (CNS) must have included structures
that were developed for other functions, whereas their adaptive value in later
cultural settings shows that they were pre-adapted for reading. The question
is: Which were these functions or abilities? Varney argued that because ges-
tural communication has been part of the human repertoire for the last 5
million years, it may have formed a precursor to language, and because this
is a visual capacity, it may also have served as a pre-adaptation for reading. In
several studies he used a test of “pantomime recognition” to assess this ability,
and he showed that all aphasics with impaired reading comprehension were
also impaired in pantomime recognition. The relationship between the two
abilities was unilateral in the sense that “pantomime recognition” predicted
reading comprehension, but not the other way around.
Sasanuma’s reports on difficulties in reading kanji by Japanese apha-
sics have indicated a form of alexia for ideographs and logographs.
Early forms of writing involved ideograms (as in ancient Egypt), and
Varney suggested that the ability to decode ideograms may be mea-
sured by a Footprint Reading Test (animal tracking). He showed that
all patients who were unable to “read” footprints were also alexic.
Moreover, all patients who were impaired on letter recognition were
also impaired on footprint reading. In conclusion he suggested that
“ancient skills of gestural comprehension and animal tracking were
the underpinnings of brain organization that permitted reading to
occur” (p. 3).
Dyslexia. Notice that the studies reported by Varney were undertaken
with alexic patients, and therefore they cannot be generalized to typically
developing or dyslexic children. However, the pre-adaptation to reading,
as measured by tests of pantomime recognition and “footprint reading,”
will show great variability also among the latter population of children.
Pantomime recognition requires some time to mature, and is consistently
at adult level only by age 5; other abilities, such as letter recognition, will
equally depend on a period of maturation. Therefore the transition to lit-
eracy, on the level of individuals, will depend on intentional instruction
and be restricted by factors which affect cognitive maturation. In con-
trast, the acquisition of language is independent of formal instruction,
and depends on abilities which in general mature faster than the abilities
which are pre-adaptive to reading.
According to Varney the structures which were pre-adapted for reading
must have been “something to do with vision”; both pantomime recog-
nition and animal tracking were visual processing capacities. There is a
possibility that writing systems with only ideographs and/or logographs
could permit reading based on visual capacities only, but syllabic and
alphabetic systems are more based on the sounds of speech. Logographic
systems also require O – P mapping, at the level of syllables rather than phonemes.
Therefore all writing systems make demands on both hearing and vision;
the reader must learn both the O – P and O – S correspondences. For
the child (or illiterate adult), it takes time to learn these correspondences
(“reading codes”), and it remains unknown whether any structures of the
CNS are pre-adapted to serve both aspects of reading.
The learning of sound–letter correspondences requires an analytic
mode in the processing of spoken words and utterances. In the research
literature this mode of processing speech is called phonological awareness
which has commonly been assessed by a sound deletion task. The child is
asked to repeat a spoken word or pseudo-word, and then he/she is told to
repeat the word again with one sound deleted from the word. Sound–let-
ter correspondences can be learned when a critical level of phonological
awareness is obtained. However, the development of phonological aware-
ness is a major challenge for many children. It was soon discovered that
most children with dyslexia had phonological problems (Bishop, 1997),
and it is still a major issue in research on reading difficulties (Farquharson,
Centanni, Franzluebbers, and Hogan, 2014).
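The structure of a sound deletion item can be shown in a few lines. The sketch below is schematic; real test items are spoken, and the phoneme strings here are simplified illustrations.

```python
# A schematic sound (phoneme) deletion item; the phoneme strings are
# simplified for illustration.

def delete_sound(phonemes, target):
    """Repeat the word with one sound deleted, e.g. "blink" without /b/."""
    remaining = list(phonemes)
    remaining.remove(target)  # raises ValueError if the sound is absent
    return remaining

print(delete_sound(["b", "l", "i", "n", "k"], "b"))  # ['l','i','n','k'] ("link")
```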
Phonological awareness is an aspect of language which evolved with the
invention of writing, and because it is a “recent” attainment in the evolu-
tion of language, its variance in the general population is considerable.
Therefore, phonological difficulties are found not only in children classi-
fied as dyslexic, but also in children with developmental language impair-
ment. Consequently, some researchers have asked whether dyslexia and
specific language impairment are the same or distinct disorders (Bishop
and Snowling, 2004; Catts, Adlof, Hogan, and Ellis Weismer, 2005).
The question is whether the genetic etiology of developmental lan-
guage impairment differs from that of dyslexia. In the human genome,
9 regions (DYX1–DYX9) have been associated with dyslexia. Among
these regions, DYX2 has often been considered the most promising one
with a linkage to dyslexia. The candidate genes of these regions (KIAA0319, DYX1C1,
DCDC2, and ROBO1) are all implicated in the disorder (Lim, Ho,
Chou, & Waye, 2011). With the exception of KIAA0319, most genetic
factors of dyslexia differ from those identified for developmental language
impairments (see Chap. 2, Sect. 2.4). Other genes of the 9 loci mentioned
above have also been associated with dyslexia. Thus, the genetic etiol-
ogy of dyslexia is the product of a complex interaction of many genes.
Newbury, Monaco, and Paracchini (2014) argued that studies of “com-
plex genetic disorders indicate that there may be hundreds of genetic
variants contributing to any one phenotypic status” (p. 287).
It should be stressed that the genes associated with dyslexia
are implicated in the disorder; they are not reading-specific
genes. The candidate genes mentioned above are involved in fetal brain
development, in particular neuronal migration processes; DYX1C1 also
affects cognitive skills like “one minute reading,” “digit rapid naming” and
“nonword repetition.” This means that the candidate genes are related to
the control of behavioral domains which extend beyond reading and co-
develop with reading ability.
Hyperlexia. The most severe forms of reading difficulties are gener-
ally associated with dyslexia. Could there be nondyslexic forms of read-
ing difficulties? In reading there is always a trade-off between speed and
accuracy. In other words, there is a trade-off between the speed of O – P
matching (technologically correct reading) and O – S mapping (reading
with interpretation). Thus, we have fast readers who do not grasp much
of the meaning of the text, and we have slow readers who understand
meaning very well. We will also find a range of intermediate cases between the
two extremes.
The examples I have described above show that reading without
interpretation is a phenomenon which in some communities has been
socially and culturally accepted (and maybe even encouraged). However,
reading without interpretation may also take place among children in a
clinical setting; that is, children who may suffer developmental delays or
belong to the spectrum of a particular disorder (autism). Silberberg
and Silberberg (1967) were the first researchers to describe these cases as
hyperlexia; that is, decoding ability that is out of proportion with compre-
hension ability. Also, hyperlexia often exemplify cases of precocious read-
ing by children who have been obsessed by letters and numbers from an
early age. Because precocious reading without lexical comprehension has
been associated with autism, the researchers have disagreed on whether
to consider hyperlexia a disability or a superability. Without going into
this discussion I shall briefly mention the work of Grigorenko, Klin, and
Volkmar (2003) who reviewed the literature available at that time, and
who concluded that “hyperlexia is a superability demonstrated by a very
specific group of individuals with developmental disorders, rather than a
disability exhibited by a portion of the general population” (p. 1079). As
far as I have seen, the clinical status of hyperlexia still remains undecided.
The observation that decoding ability by hyperlexic persons is out of
proportion with comprehension ability is a matter which deserves careful
consideration. In reading, these persons are capable of correctly pronounc-
ing the words of a written text without comprehending the meaning of
these words. Are the same words which are correctly read without inter-
pretation part of the child’s oral vocabulary? I do not know any observa-
tions or research data which show the extent to which the two types of
words overlap in hyperlexic readers. In any case, the example mentioned
above of Islamic children who are capable of reading the Quran in Arabic
shows that words which are not in the child’s oral vocabulary can be
(almost) correctly pronounced in reading.
The words correctly read in hyperlexia consist of a sequence of pho-
nemes which are present in the child’s phonological repertoire. Although
the words are not present in the child’s oral vocabulary, he/she must be
capable of producing the constituent phonemes. The hyperlexic child
solves a mapping problem: written characters will be mapped on to vocal
responses (O  – P); teachers often speak about this performance as the
discovery of the “reading code.” This is not a minor problem because a
written character is not a cipher for a particular vocal response; written
characters can only be mapped on to a sequence of phonemes (a word)
in accordance with a set of if-then rules. The fact that such problems
are solved independently of semantic comprehension of the text shows
that the skill may be functionally dissociated from semantic learning
(a direct lexical route from orthography to phonology). Also, the if-then
rules are generally inaccessible to conscious reflection. On this account,
I think it is likely that decoding and comprehension abilities are served
by different neural structures, and that these structures have different ori-
gins in the evolution of language. Precocious reading may be character-
ized as a procedural skill, and is likely to depend on the brain structures
underlying the procedural memory system (see Chap. 3, Sect. 3.3.2).
Functional magnetic resonance imaging has revealed that such reading is associated
with increased activity in the left inferior frontal and superior temporal
cortices, whereas O – S mapping is related to activities in inferior tempo-
ral gyrus and parietal regions (Turkeltaub et al., 2004; see also the dorsal
pathway, Chap. 3, Sect. 3.6).
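The rule-based character of such decoding can be made explicit in a small sketch. The rules below are toy examples of my own, not a real phonics rule set; the point is only that a written word can be mapped onto a pronounceable phoneme sequence without any consultation of meaning.

```python
# Hedged sketch of decoding as if-then grapheme-to-phoneme rules,
# with no semantic lookup. The rules are invented toy examples.

RULES = [  # longer graphemes are listed first so that they win
    ("sh", "S"), ("th", "T"),
    ("a", "ae"), ("i", "I"),
    ("p", "p"), ("t", "t"), ("n", "n"), ("s", "s"), ("h", "h"),
]

def decode(word):
    """Map a written word onto phonemes without consulting meaning."""
    phonemes, i = [], 0
    while i < len(word):
        for grapheme, phoneme in RULES:
            if word.startswith(grapheme, i):  # if the characters match...
                phonemes.append(phoneme)      # ...then output the phoneme
                i += len(grapheme)
                break
        else:
            raise ValueError(f"no rule for {word[i]!r}")
    return phonemes

print(decode("ship"))  # ['S', 'I', 'p']: pronounceable without comprehension
```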
It should be stressed that hyperlexia depends on a division of labor
between phonology and semantics. In English and other opaque
orthographies, the brain circuits underlying O – S mapping are more
heavily taxed than the brain regions which control the O – P mapping.
In Chinese, however, the division of labor between the two systems is
more equitable (Zhao et al., 2014). Does this mean that hyperlexia is
more likely found among readers of alphabetic writing systems com-
pared to readers of logographic writing systems?
Although decoding ability is sometimes out of proportion with com-
prehension ability, the two will develop in parallel in typical educational
settings. The two abilities will also be mutually dependent in the estab-
lishment of literate competence; however, these abilities may have differ-
ent origins in the evolution of language. The ability of O – P mapping
(the technological management of reading) is a procedural skill which
therefore may be dissociated from the comprehension ability (O  – S
mapping) and declarative memory. Historically, and in some educational
settings today, the two abilities may have been confused and therefore
caused disagreement about what reading is.

6.6 Summary of Reasons Why Writing May Have Changed Language
If writing, and the ensuing skill of reading, changed the human brain,
they may also have changed language. Let me recapitulate briefly: There
are structural differences (relative distribution of grey and white mat-
ter) between late readers and matched illiterates. Literates and illiterates
with left-hemisphere damage perform differently in tests of aphasia. Also,
illiterates have more difficulties in repeating pseudo-words than literate
people. Regardless of whether they are due to schooling or
informal learning to read, these differences, which favored the literate
brain, may have been selectively adaptive. The question is how structural
differences also changed language.
I have argued above that reflexivity of language is a product of writ-
ing. O  – P mapping is an instrument which presupposes phonologi-
cal awareness; that is, an awareness which can be consciously expressed
in linguistic terms. Therefore phonological awareness is an aspect of
reflexivity. Language as a capacity for referring to, or describing, itself
evolved in the era following the invention of writing. (Compare the term
“reflexive pronoun” in classical grammar, which involves an action/event
turned back upon the subject.) Also, the associated distinction between
use and mention of words becomes a necessary instrument of reading
instruction, and with the growth of literacy it becomes part of oral lan-
guage as well. Moreover, O – S mapping presupposed a semantic aware-
ness (called symbolic awareness above) which is similarly linked to the
invention of writing, and which finally gave rise to the meta-linguistic
capacities of language users in literate languages. These capacities involve
an analytic attitude to language which may not have existed in the con-
text of poetic and oral traditions in pre-literate languages (see Sects. 1.6
and 5.3). However, the changes of language due to writing did not come
all at once, not even in the modern world. Thus, examples of reading
without interpretation show that despite the success of O – P mapping,
O – S mapping failed. The conversion of written characters to sounds of
speech was emphasized at the cost of reading as a way of communica-
tion. Once reading instruction took into account both types of mapping,
a new “standard of linguistic communication” came into existence. This
“standard” also changed oral language.

6.7 Cultural Preconceptions of Reading


The transition to literacy meant that, in addition to reading difficulties in
individuals, cultural preconceptions about the functions of reading also had
to be overcome. Written language was supposed to represent speech, and
in line with this assumption, written texts were generally read aloud; thus,
Olson (1998) pointed out that “the restoration of voice was critical to cap-
turing the intended meaning” (p. 184). This shows that writing, from the
very beginning, had obvious limitations when studied from the perspective
of modern linguistics. As pointed out, in the preceding chapter one of these
limitations had to do with the failure of conveying the illocutionary force
of a statement, writing represented “what is said” and not “how it should
be taken.” Later various attempts have been made to cope with these
limitations, and when successful these attempts have contributed to what
Olson framed “the conceptual revolutions associated with literate culture.”
Some people were never involved in these revolutions; those who became
involved represented the most advanced stage in the evolution of language.
Literate persons from the classical era and the medieval ages continued to put into texts sayings from oral cultures, or material from other writings that depended on narratives or legends transmitted between generations. They did not bear witness to Olson’s conceptual revolutions associated with literate culture; rather, their texts were generally formed on the premises of an oral culture. Thus, writing supported a conservative mindset; it reinforced sayings which may have been repeated over and over again, and consequently the orally based literature did not serve as a report on novel events. However, as argued by Ong, texts based on oral culture did not lack “originality of their own kind.” The canonical Gospels seem to show that new elements were added to old stories. An analogous version of the birth and life of Jesus Christ, as depicted in the Gospels, can be found in the ancient Hindu text the Bhagavad Gita, composed sometime between the fifth and second centuries BC. Here Krishna, like Christ, was said to be the son of God, and both acted as healers and miracle workers. The similarities between the Christian and Hindu texts show that neither of them bears reliable evidence of novel events. This does not mean that they lacked aspects of novelty; their narrative framework still permitted innovations of the story.
According to the Gospel of Matthew, Jesus was born of the Virgin Mary. This means that Mary was worshiped as a goddess, or a virgin “creatrix.” However, the worship of the virgin and her child was common in the East and the Middle East for centuries before the birth of Christ. Thus, mythological texts indicate that the Egyptian Madonna Isis was a virgin while giving birth to Horus, and it is still debated whether Krishna was born of a virgin. It is commonly assumed that Krishna was the eighth son of Devaki, yet she has been given the status of Virgin Goddess. Greek mythology also presents a threefold description of Aphrodite: Aphrodite the virgin, Aphrodite the wife, and Aphrodite the whore.
The reason for writing classical texts of the kind mentioned above was not to report historical events, but to establish and reinforce a conservative mindset. Intellectual experimentation was not a characteristic of early literate texts. The discussion of philosophical, social and political problems was a literate innovation which contrasted with the orally based literature of the classical era. The “media” by which new information is distributed are a recent conception in human history.
With the preconceptions of writing and reading which existed from antiquity, through the middle ages, to the 20th century, educational principles which stressed the technology of reading, together with rote learning practices, were favored in schools and other institutions. Educational principles may therefore themselves have slowed, and sometimes prevented, the transition to literacy. Consequently, it has been difficult to distinguish the reading difficulties of children and adults which rest on immature or impaired cognition from those which depend on flawed education.

6.8 Literal Meaning and Asperger Syndrome


The conception of written language as a representation of speech is linked to the conception of “literal meaning.” Only by the “restoration of voice” could the reader capture the intended meaning; that is, the literal meaning of the text. Therefore, O – P mapping was also considered a means of interpretation, and “literal meaning” is the “meaning” which requires only O – P mapping, whereas O – S mapping is dispensable. Is literal meaning a characteristic of linguistic communication by some developmentally impaired children?
Asperger patients are characterized by social and communicative impairments and often a rigid adherence to routines. They have semantic and pragmatic difficulties which are generally manifested as severe disabilities in understanding nonliteral language. Figurative language and metaphors are often incomprehensible. These difficulties are revealed in tasks of semantic integration. Gold, Faust, and Goldstein (2010) examined the semantic integration process in 16 ASD patients and 16 matched controls using ERP. The N400 amplitude was used as an index of the effort invested in the semantic integration of word pairs presented on a computer screen. The pairs expressed literal, conventional metaphoric, or novel metaphoric meanings, or were simply unrelated. As shown by the N400 amplitudes, ASD patients invested greater effort than controls in integrating metaphoric word pairs. The two groups did not differ in the integration of the literal and unrelated word pairs.
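To make the design concrete, the comparison can be sketched as follows. This is a toy illustration in Python, not the authors’ analysis pipeline, and all amplitude values are invented solely to show the pattern of results just described.

# Illustrative sketch only: invented mean N400 amplitudes (in microvolts)
# for the four word-pair conditions; more negative values index greater
# semantic integration effort. The numbers do not come from Gold et al.
conditions = ["literal", "conventional metaphor", "novel metaphor", "unrelated"]

mean_n400 = {
    "controls": {"literal": -2.0, "conventional metaphor": -3.0,
                 "novel metaphor": -4.5, "unrelated": -5.5},
    "ASD":      {"literal": -2.1, "conventional metaphor": -4.4,
                 "novel metaphor": -5.9, "unrelated": -5.6},
}

# The pattern of interest: a group difference for the metaphoric conditions,
# but none for the literal and unrelated conditions.
for c in conditions:
    diff = mean_n400["ASD"][c] - mean_n400["controls"][c]
    print(f"{c:22s} ASD - controls: {diff:+.1f} microvolts")
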
Based on these and similar observations, it is commonly assumed that ASD patients have no problem comprehending the literal meaning of words, but are generally incapable of processing metaphors, both conventional and novel. Therefore, we may raise further questions about the literal meaning of words. What is involved in the contrast between literal and metaphoric meaning? The concept of literal meaning has been strongly debated in theories of literacy and in theological works on the original and intended meaning of the Holy Scriptures. I think it is unlikely that scholarly works in these fields will ever contribute to an understanding of this form of reading (and of course they do not contribute to an understanding of ASD either). Therefore, we should ask for an alternative approach to an explication of literal meaning. Does literal meaning imply a form of decoupling of the mechanisms underlying the interpretation of texts (reading without O – S mapping)? Does it involve a regression to an earlier form of reading?

6.9 Invention of Writing as Niche Construction
The growth of literacy has taken place over many centuries, from early Sumerian writing to present-day school projects in developing countries. In this era, an immense niche construction has taken place which has changed the evolution of the human mind and language. The concept of a “niche” is commonly used in other fields of biological evolution. An example often cited as niche construction is the introduction of dairy farming in Europe, which affected the frequency of the allele for lactose persistence. Consequently, more individuals benefited from drinking milk into adulthood. Thus, human-constructed practices affected the transmission of genes (Creanza, Fogarty, and Feldman, 2012). In my opinion, the concept applies equally well to studies of language evolution. In fact, it may be subsumed under the general term “cultural niche construction.” A human-constructed cultural niche may affect the transmission of genes, but a culturally transmitted trait, for instance a mode of communication, may also affect the transmission of other cultural traits.
Creanza et al. (2012) present a model which involves both gene–culture and culture–culture interactions. The latter applies specifically to literacy, which is a cultural invention that has affected the evolutionary dynamics of other cognitive and linguistic traits. It has enforced rules of transmission, for instance (formal) instruction and schooling, and it has involved forms of social control and power. In consequence, literacy has become a major force of selection, in particular because vertical transmission of this trait has involved assortative mating.
However, I find it difficult to apply the Creanza et al. model directly to the case of literacy and language. This model presupposes two definitions, one of a recipient trait T (which determines a cultural phenotype) and one of a niche-constructing trait N (which determines selection and assortative mating). Each has two possible states (T: T, t and N: N, n), and in combination these give rise to four possible phenotypes. It may be possible to conceive of a cultural phenotype of literacy, and of an interacting constraint in the literate world as the niche-constructing trait, but further application of the model will run the risk of an unavoidable oversimplification. Creanza et al. themselves applied the model to religion and fertility, not to literacy and language. Maybe some major adjustments of the model could be made to deal with the role of writing/literacy in the evolution of language.
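The combinatorial skeleton of the model can nonetheless be spelled out. The following minimal sketch is my own illustration; the fitness values are hypothetical placeholders, not parameters from the published model. It simply enumerates the four phenotypes and attaches a toy selection pressure to each:

# Minimal sketch of the combinatorial core of the Creanza et al. (2012) setup:
# a recipient trait T (states T/t) and a niche-constructing trait N (states
# N/n) combine into four cultural phenotypes.
from itertools import product

phenotypes = ["".join(p) for p in product("Tt", "Nn")]
print(phenotypes)  # ['TN', 'Tn', 'tN', 'tn']

# Hypothetical placeholder fitnesses: the niche-constructing state N is taken
# to modify the selection pressure on the recipient trait (values invented).
fitness = {"TN": 1.10, "Tn": 1.00, "tN": 0.95, "tn": 1.00}
for p in phenotypes:
    print(p, fitness[p])
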
As pointed out above, the arguments of assortative mating and selection pressures apply to literate cultures. An important task will therefore be to develop a formal/explicit model of niche construction for the case of literacy and language evolution. Literacy, including classical as well as computer-based technologies of writing, has more than any other historical event formed the ecology of the human mind. It can be compared to Deacon’s concept of “the other evolution” (see Chap. 5, Sect. 5.8). Human beings today, from young children to elderly persons, are exposed to an ambient environment of letters, characters, acronyms, texts and other literate symbols, to which adaptation becomes important. This literate ecology of mind shapes our use and conception of language, and determines the survival or death of linguistic communities. In fact, adjustment to the literate world is a major condition for the development and survival of cultures, and finally for the reproductive capacity of individuals.

6.10 Questions About the Future of Language and Literacy
The development of language and literacy in the age of information technology is not a topic of the present work. However, in view of the vast impact of the literate ecology of mind, up to and including the computer-mediated communication of our times, a few reflections on the future of linguistic communication are a matter of some concern.
Prior to the age of information technology, texts were located in scrolls, books or other entities, and their availability depended on the invention of printing, the distribution of papers, magazines, and so on. Consequently, it still made sense to talk about specific and identifiable sources of information, even after a huge number of revisions and translations had taken place. People were taught how to “look up something” in books or libraries to find the information they needed. This state of affairs has changed to some extent in the age of the computer; information has become accessible from anywhere, and texts are no longer as localized as they used to be. Do computers re-introduce some of the characteristics of “orality” in pre-literate societies?
As pointed out by many researchers of pre-literate cultures, oral discourse was based on sound, which is evanescent and has meaning only as long as it does not (acoustically) go out of existence, or can be reproduced in verbal memory. Therefore, oral discourse depended strongly on memory capacities, leading to an emphasis on formulas and memory structures. Ferris (2002) pointed out that “computer-mediated communication reintroduces the qualities of temporal immediacy, phatic communion, the use of formulaic devices, presence of extra textual content, and development of community that had been characteristics of oral communication” (online publication). Previously, Ong (1982) had reasoned along similar lines and predicted the advent of “secondary orality.” The extent to which the later development of computer technology has given evidence in support of this prediction is a matter of discussion beyond the objectives of the present work.
We should also compare computer-based communication with communication by way of written texts or books. Because the reader of computer-mediated messages is allowed, or invited, to manipulate their content, the traditional distinction between a reader and a writer becomes unclear. This interactional participation means that computer-based communication departs from traditional communication in a “literate” world.
In short, computer-based literacy will differ from traditional literacy. Knowing that traditional literacy changed language, we may ask whether the new literacy will eventually change language as well. In computer writing, traditional style is often abandoned in favor of conventional forms, and acronyms are frequently used. The hierarchical phrase structure of natural languages, with long sentences, may sometimes be compromised in favor of fast and effective communication.
Language in a traditional literate culture differs in many ways from what is often characterized as computer language. Ong (1982), who had not yet seen all aspects of the new technology, nonetheless made an important observation. The ways in which grammar is introduced differ between the two languages: in computer languages, grammar is stated first and thereafter used, whereas in natural languages it is used first and thereafter “abstracted from usage and stated explicitly in words only with difficulty and never completely” (p. 7). In my view, this observation remains essentially correct in relation to modern computer language. Most important is the way in which grammar is acquired in the two cases: in computer language, grammar is acquired by slow and incremental learning, which eventually gives rise to highly automatized computer skills. (Compare the form of dialogues which I have described in Chap. 4 as procedural skills.) The grammar of natural languages is, on the one hand, acquired due to wired-in learning constraints, and the use of grammatically correct statements also has, in many cases, the characteristics of procedural skills. However, grammar that is the result of abstraction from usage has a very different history of learning. The declarative knowledge of grammar is acquired by a form of learning which is fast, but also fallible and sensitive to interference (see Chap. 3, Sect. 3.3.1). These aspects of declarative learning may not fit into the educational regimes of high-tech societies, and eventually the neglect of declarative learning may affect the language of future generations.
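Ong’s contrast can be made concrete with a toy example. In the sketch below (my illustration, not Ong’s), the grammar of a miniature expression language is stated explicitly before any use, and a small parser then enforces it; nothing comparable is available for a natural language, whose grammar must be abstracted from usage after the fact.

import re

# The grammar is stated first, as Ong observes of computer languages:
#   expr -> NUMBER (('+' | '-') NUMBER)*
TOKENS = re.compile(r"\d+|[+-]")

def evaluate(source: str) -> int:
    """Parse and evaluate `source` strictly according to the stated grammar."""
    tokens = TOKENS.findall(source)
    if not tokens or not tokens[0].isdigit():
        raise SyntaxError("expression must start with a number")
    value, i = int(tokens[0]), 1
    while i < len(tokens):
        if i + 1 >= len(tokens) or not tokens[i + 1].isdigit():
            raise SyntaxError("operator must be followed by a number")
        value += int(tokens[i + 1]) if tokens[i] == "+" else -int(tokens[i + 1])
        i += 2
    return value

print(evaluate("12+30-7"))  # 35: usable only because the grammar came first
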
The development of computer skills, but also the use of mobile phones, iPhones, iPads, and so on, will affect people’s vocabularies, creating similar content words in otherwise different languages. Furthermore, the use of computer-mediated communication has improved the quality and efficiency of second language (L2) learning. Like communication in face-to-face settings, it encourages multidirectional interaction. Many teachers have observed higher rates of peer-to-peer talk (but also higher rates of human–machine interactions) and less dependence on student–teacher interactions in classrooms with high-tech solutions for language learning.
In short, computer-mediated communication affects the social organization and mobility of people, and this mobility has always been an important factor in language evolution and change. This mobility, and the associated interactions with different ethnic and linguistic groups, will increase with the growth of computer-mediated communication.

References
Arbib, M. A. (2009). Evolving the language-ready brain and the social mechanisms that support language. Journal of Communication Disorders, 42, 263–271.
Ardila, A., Bertolucci, P. H., Braga, L. W., Castro-Caldas, A., Judd, T., Kosmidis,
M. H., et al. (2010). Illiteracy: The neuropsychology of cognition without
reading. Archives of Clinical Neuropsychology, 25, 689–712.
Baddeley, A. D., Gathercole, S. E., & Papagno, C. (1998). The phonological
loop as a language learning device. Psychological Review, 105, 158–173.
Bishop, D. V. (1997). Uncommon understanding. Development of disorders of lan-
guage comprehension in children. East Sussex, UK: Psychology Press.
Bishop, D. V., & Snowling, M. J. (2004). Developmental dyslexia and specific
language impairment: Same or different? Psychological Bulletin, 130, 858.
doi:10.1037/0033-2909.130.6.858.
Bloomfield, L. (1933). Language. New York: Holt, Rinehart & Winston.
Carreiras, M., Seghier, M. L., Baquero, S., Estevez, A., Lozano, A., Devlin, J. T.,
et al. (2009). An anatomical signature for literacy. Nature, 461, 983–986.
Castro-Caldas, A., Petersson, K. M., Reis, A., Askelof, S., & Ingvar, M. (1998). Differences in inter-hemispheric interactions related to literacy, assessed by PET. Neurology, 50, A43.
Catts, H. W., Adlof, S. M., Hogan, T. P., & Ellis Weismer, S. (2005). Are specific language impairment and dyslexia distinct disorders? Journal of Speech, Language, and Hearing Research, 48, 1378–1396.
Coe, M. D. (1992). Breaking the Maya Code. London: Thames & Hudson. ISBN 0-500-05061-9.
Coe, M. D. (2002). The Maya (6th ed.). London: Thames & Hudson. ISBN 0500050619.
Creanza, N., Fogarty, L., & Feldman, M. W. (2012). Models of cultural niche construction with selection and assortative mating. PLoS ONE, 7, e42744.
Ehlich, K. (1983). Development of writing as social problem solving. In
K. Ehlich & F. Coulmas (Eds.), Trends in linguistics. Studies and monographs.
Writing in focus. Berlin: Mouton Publishers.
Eslinger, P. J., & Grattan, L. M. (1993). Frontal lobe and frontal-striatal sub-
strates for different forms of human cognitive flexibility. Neuropsychologia,
31, 17–28.
Farquharson, K., Centanni, T. M., Franzluebbers, C. E., & Hogan, T. P. (2014). Phonological and lexical influences on phonological awareness in children with specific language impairment and dyslexia. Frontiers in Psychology, 5, 838.
Ferris, S. P. (2002). Writing electronically: The effects of computers on tradi-
tional writing. Journal of Electronic Publishing, 8(1).
Gelb, I. J. (1963). A study of writing (2nd ed.). Chicago: University of Chicago
Press.
Gold, R., Faust, M., & Goldstein, A. (2010). Semantic integration during meta-
phor comprehension in Asperger syndrome. Brain & Language, 113,
124–134.
Goody, J., & Watt, I. (1968). The consequences of literacy. In J. Goody (Ed.),
Literacy in traditional societies. Cambridge: Cambridge University Press.
Grigorenko, E. L., Klin, A., & Volkmar, F. (2003). Annotation: Hyperlexia: Disability or superability? Journal of Child Psychology and Psychiatry, 44, 1079–1091.
Havelock, E. (1976). Origins of Western literacy. Toronto: OISE Press.
Havelock, E. (1982). The literate revolution of Greece and its cultural consequences.
Princeton, NJ: Princeton University Press.
Henderson, L. (1984). Writing systems and reading processes. In L. Henderson
(Ed.), Orthographies and reading. Perspectives from cognitive psychology, neuro-
psychology and linguistics. Hillsdale: Lawrence Erlbaum Associates.
Knox, B. M. W. (1968). Silent reading in Antiquity. Greek, Roman, and Byzantine Studies, 9(4), Winter.
Kosmidis, M. H., Tsapkini, K., Folia, V., Vlahou, C. H., & Kiosseoglou, G. (2004). Semantic and phonological processing in illiteracy. Journal of the International Neuropsychological Society, 10, 818–827.
Kosmidis, M. H., Zafiri, M., & Politimou, N. (2011). Literacy versus formal
schooling: Influence on working memory. Archives of Clinical Neuropsychology,
26, 575–582.
Lecours, A. R., Mehler, J., Parente, M. A., Beltrami, M. C., Canossa de Tolipan,
L., Cary, L., et al. (1988). Illiteracy and brain damage. 3: A contribution to
the study of speech and language disorders in illiterates with unilateral brain
damage (initial testing). Neuropsychologia, 26, 575–589.
Lim, C. K., Ho, C. S., Chou, C. H., & Waye, M. M. (2011). Association of the
rs3743205 variant of DYX1C1 with dyslexia in Chinese children. Behavioral
and Brain Functions, 7, 16. doi:10.1186/1744-9081-7-16.
Linell, P. (2005). The written language bias in linguistics. London: Routledge.
Newbury, D. F., Monaco, A. P., & Paracchini, S. (2014). Reading and language
disorders: The importance of both quantity and quality. Genes (Basel), 5,
285–309.
Olson, D. R. (1998). The world on paper. The conceptual and cognitive implica-
tions of writing and reading. Cambridge: Cambridge University Press.
Ong, W. (1982). Orality and literacy: The technologizing of the word. London:
Methuen.
Reis, A., Petersson, K. M., Castro-Caldas, A., & Ingvar, M. (2001). Formal schooling influences two- but not three-dimensional naming skills. Brain and Cognition, 47, 397–411.
Sasanuma, S. (1974). Kanji versus kana processing in alexia with transient
agraphia: A case report. Cortex, 10, 84–97.
Schmandt-Besserat, D. (1987). Oneness, twoness, threeness: How ancient accoun-
tants invented numbers. New York: New York Academy of Sciences.
Scribner, S., & Cole, M. (1981). The psychology of literacy. Cambridge, MA:
Harvard University Press.
Silberberg, N., & Silberberg, M. (1967). Hyperlexia: Specific word recognition
skills in young children. Exceptional Children, 34, 41–42.
Siok, W. T., Niu, Z., Jin, Z., Perfetti, C. A., & Tan, L. H. (2008). A structural-
functional basis for dyslexia in the cortex of Chinese readers. Proceedings of
the National Academy of Sciences of the United States of America, 105,
5561–5566.
Siok, W. T., Perfetti, C. A., Jin, Z., & Tan, L. H. (2004). Biological abnormality of impaired reading is constrained by culture. Nature, 431, 71–76.
Tan, L. H., Laird, A. R., Li, K., & Fox, P. T. (2005). Neuroanatomical correlates
of phonological processing of Chinese characters and alphabetic words.
Human Brain Mapping, 25, 83–91.
Torrance, N., Lee, E., & Olson, D. R. (1985). Oral and literate competencies in the early school years. In D. R. Olson, N. Torrance, & A. Hildyard (Eds.), Literacy, language, and learning: The nature and consequences of reading and writing (pp. 256–284). Cambridge: Cambridge University Press.
Turkeltaub, P.  E., Flowers, D.  L., Verbalis, A., Miranda, M., Gareau, L., &
Eden, G. F. (2004). The neural basis of hyperlexic reading: An FMRI case
study. Neuron, 41, 11–25.
Tzeng, O. J. L., & Wang, W. S.-Y. (1983). The first two R’s. American Scientist, 71, 238–243.
Varney, N. R. (2002). How reading works: Considerations from prehistory to
the present. Applied Neuropsychology, 9, 3–12.
Zhao, J., Wang, X., Frost, S. J., Sun, W., Fang, S.-Y., Mencl, W. E., et al. (2014). Neural division of labor in reading is constrained by culture: A training study of reading Chinese characters. Cortex, 53, 90–106.
7 The Modality-Independent Capacity of Language: A Milestone of Evolution

As stated in the Introduction, the term “language” refers to the ability to acquire and make use of language. In this chapter I will argue that this ability can be expressed across different articulators, and that language therefore is a modality-independent capacity of communication. It follows that I will distinguish the general capacity of language from the articulatory (vocal and manual) expressions of language. Also, in accordance with this distinction, I consider language impairments to be different from production errors in spoken and sign languages. Thus, speech disorders—for example apraxia, dysarthria, speech sound disorders, and voice disorders—are nosologically different from, but still related to, language impairments. Similarly, there are production errors in sign language, such as “slips of the hand” and impediments to sensory motor skills, which reduce the communicative efficacy of a signed message. The language ability which is underlying, yet conceptually distinguished from, the articulatory expressions of language is an ability which cuts across the sensory and response modalities; that is, a modality-independent capacity of language.
Notice that the term “modality” refers primarily to differences between the senses, such as vision and hearing, but it will also be used about articulators. Manual gestures and vocal expressions are different articulatory modalities.

Finally, I also use the term about signed and spoken languages, which represent different language modalities. I hope context will reveal the intended meaning of modality.
How does this conception of language agree with theories of language evolution? Apparently, Hockett (1960) may have used a different conception of language, arguing as if speech were the ultimate goal of language evolution. In that case, does sign language represent a more primitive form of linguistic communication, or did language evolve as a modality-independent and abstract capacity of symbolic representation? Are signed and spoken languages equal expressions of a modality-independent capacity of symbolic representation?
According to the gestural theory of language evolution, intentional communication by our hominid ancestors was based on manual and other bodily gestures. Vocal communication belongs to an evolutionarily recent period in the history of mankind. Corballis (2010) mentioned two arguments for this theory: 1) Only a few species, such as elephants, seals, killer whales and some birds, are capable of vocal learning, a prerequisite to spoken language. Among the primates only humans are vocal learners. These observations are contrasted with the extensive use of bodily gestures for communicative purposes among chimpanzees and bonobos. 2) It has not been possible to teach vocal language to the great apes. The most successful attempts to teach intentional communication (not vocal) were made by Savage-Rumbaugh and Rumbaugh, who trained two chimps to communicate with lexigrams (see Chap. 3). However, Kanzi (described in Chap. 2) learned to follow spoken instructions in sentences of up to seven or eight words. This example may be interpreted as evidence of “fast mapping,” generally considered to be a capacity of human infants. Corballis seemed to discount the Kanzi case as evidence of speech comprehension. He assumed that words served as discriminative stimuli which triggered behavior, and in any case, Kanzi never learned to speak by taking part in dialogues with a human partner.
According to the gestural theory of language evolution, there must have been a switch from primarily gestural to primarily vocal communication. Corballis (2010) also discussed whether this switch took place gradually or whether it occurred suddenly, in one saltational shift. In agreement with several other researchers, he believed that this shift was gradual and that it depended on a process of “grammaticalization”: the first words were assumed to be content words, nouns and verbs often found in pidgin languages, whereas function words occurred as abbreviations or “mutilations” of such words. It is not clear, however, why function words should be more easily expressed vocally than manually, given that the development of sign languages shows a similar process of “grammaticalization.”
What is the status of sign languages within a gestural theory of language evolution? Do they form a reminiscence of early communicative skills in human evolution? Klima and Bellugi (1979) had already attacked a number of misconceptions about sign languages: these are neither primitive nor “universal” forms of communication, and they are not made-up codes for the representation of words in spoken languages. Despite the difference in signaling modality, sign languages are true human languages, and show a number of similarities with spoken language regarding the acquisition and use of symbolic systems of signs. These similarities, I assume, depend on a general capacity of language/symbolic reference which can be expressed in different media with different articulators. Also, the extent to which congenitally deaf-blind children acquire a system of symbolic reference based on tactual stimuli may give further support to the notion of a modality-independent capacity of language. In the present work, however, I think strong enough arguments for this capacity can be found in a systematic comparison of speech and sign languages.
The modality-independent capacity of language is a cognitive endowment of most human beings today. Apparently, language may have evolved towards a language-in-general capacity. Thus, I recommend that we distinguish the general evolution of language—for instance, the ability of symbolic reference—from the evolution of particular sensory motor channels of communication—for instance, the auditory-vocal channel involved in speech.
The idea of a general capacity of language across different modalities and channels of communication is not a new one. Rather, it is implicit in Emmorey’s (2002) work on sign language, was made a cardinal point in Deacon’s (1997) book on the “symbolic species,” and has more recently gained empirical support from Krentz and Corina (2008), who showed that “the human language bias is not speech specific” (p. 1).
In the Introduction, Sect. 1.4.2, I presented a few notes on a “language bias” as discussed by Vouloumanos and Werker (2004) and later by Krentz and Corina. (To prepare for the ensuing discussion I shall briefly repeat them here.) The former researchers argued for a privileged status of speech, because they observed that two-month-old infants listened longer to speech (monosyllabic nonsense words) than to nonspeech analogues. Krentz and Corina showed that their six-month-old hearing babies preferred to look at unfamiliar visual signs (from ASL) over nonlinguistic pantomime. Therefore, these researchers claimed that infants, instead of being tuned to speech, have a “language-general bias” (p. 1).
The position taken by Krentz and Corina, which I will follow here, is now commonly accepted in the research literature. Still, this position needs a few comments: within each of the two modalities, hearing and vision, stimuli differ with respect to their language relatedness. Therefore, typically developing infants are most likely tuned to speech sounds as well as to signs when these have features which “signal” their relevance for language. The critical features depend on frequency of modulations, rhythmicity, and statistical characteristics, for example transition probabilities. In Chap. 3, I discussed learning constraints related to statistical characteristics of the stimulus materials, and below, in Sect. 7.3, I will also deal with frequency of modulations as a factor in linguistic pre-semantic interaction.
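To indicate what transition probabilities amount to in this context, consider the following minimal sketch in the spirit of statistical-learning experiments. The three two-syllable “words” are invented for illustration; the point is only that transitional probabilities are high within words and low across word boundaries, a cue an infant learner could exploit.

import random
from collections import Counter

# Hypothetical two-syllable words and an idealized continuous syllable stream.
words = ["bida", "kupa", "tigo"]
random.seed(0)
stream = []
for _ in range(300):
    w = random.choice(words)
    stream += [w[:2], w[2:]]  # split each word into its two syllables

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

# Transitional probability P(next syllable | current syllable): close to 1.0
# within a word, and much lower across word boundaries.
for (a, b), n in sorted(pair_counts.items()):
    print(f"{a}->{b}: {n / first_counts[a]:.2f}")
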
The capacity of symbolic reference is a prerequisite to speech, yet the two may have co-evolved in ancient history. Speech may also be the result of selection pressures that did not apply similarly to all forms of symbolic communication. Symbolic reference depends on a general-purpose mechanism which serves social and communicative interactions in a variety of sensory-motor conditions. Speech, however, represents a specific adaptation to communicative needs (for example, communication in darkness). Both speech and other forms of symbolic communication involve a complex use of signs that is commonly referred to as symbolic reference. As a general-purpose mechanism, symbolic reference is not dependent on the use of particular articulators, for example vocal-auditory signs or manual signs. It may also evolve with other types of communicative signs (tactual-kinesthetic signs). I therefore consider symbolic reference to be a universal feature of language that has triggered the development of more specific communicative skills. Deacon (1997) made the same point by arguing that “the evolution of vocal abilities might more accurately be seen as a consequence rather than the cause of the evolution of language” (p. 255).

7.1 Cross-Modal Nature of Symbolic Reference
Both speech and sign language make use of linguistic symbols which can be materialized in any modality; that is, they do not depend on specific sensory or motor processing, but are the products of interpretive processing triggered by, in principle, any external stimulus.
All kinds of signs in semiotics are in principle modality-independent. In Chap. 3, Sect. 3.1.1, I said that symbols in Peirce’s classification of signs are always part of a referential system and therefore independent of sensory and motor modalities. Thus, sign–sign relationships are as important as sign–object relationships. Also, signs at different levels of reference—icons, indexes and symbols—are related to each other in a hierarchical structure which makes syntax/grammar an immanent aspect of linguistic symbols.
Although symbols point to objects, they may also be used in the
absence of the referent. Moreover, symbols may be used to refer to a
class of related referents; that is, symbolic reference is independent of a
particular context. What matters are the interpretive processes that bind
symbolic tokens together in an integrated system of tokens. Symbolic
reference involves a sort of dual reference in the sense that an indexical
association with objects is implicitly maintained in the referential rela-
tionship between words or signs.
The same linguistic symbols exist across different sense modalities and
articulatory expressions; therefore the use of such symbols (symbolic ref-
erence) depends on a level of processing above the neural circuitry of per-
ception and action. It is this abstractness of the language concept which I
intend to communicate by describing language as a modality-independent
capacity. However, this abstractness does not mean a disembodiment of
language, which is still grounded by the neurophysiological principles
presented in Chap. 5, Sect. 5.5.
In a recent online debate about UG, Bolhuis, Tattersall, Chomsky, and Berwick (2015) argued against Lieberman (2015), whom they said took language to be a means of communication, “with human speech as a ‘key attribute’.” They stressed that “speech is one possible externalization of language (among others such as sign) and is not an essential part of it” (p. 1, online publication). Although Lieberman considered language to be essentially spoken communication, he does not seem to have confused speech with language; yet this debate shows that the distinction between the two concepts may not have been acknowledged in full, even by contemporary researchers of the neurobiology of language. I think that Lieberman, by stressing speech as the “key attribute” of language, first of all gives reasons for the dominance of spoken language (see Sect. 7.8 below), not for speech being a primary attribute of language.
I have presented the modality-independent capacity of language as a generic term which includes both speech and sign language (and possibly also a tactually based language). It is now time to review the main arguments from contemporary and some classical research works that can be said to converge on this idea. I shall do this by focusing on the similarities between spoken and sign language, both in the ways these languages are learned and in the ways they are represented in the human brain. (I shall also describe some functional differences between the two types of languages.) Some of these similarities may be common knowledge in contemporary research literature (and could have been taken for granted?), and yet I need to address them in order to bring the concept of a modality-independent capacity of language into the focus of theoretical discussions on language and language acquisition. First of all, this chapter intends to show that the modality-independent concept of language has implications for theories of language evolution.

7.2 Cross-Modal Trends of Language Acquisition
Here I will address some developmental processes which show basic simi-
larities between speech and sign languages. Later I will also address some
neurophysiological and functional differences between the two language
modalities.

7.2.1 The Language Acquisition Task

Typically developing children acquire their first language seemingly without any major effort. Both deaf children who are exposed to sign language from birth and hearing children who are exposed to spoken language acquire their local language easily. Because the two groups depend on different signaling modalities, there will be some differences in the way they acquire language, but there will also be important similarities. In the following I will describe some features of the language acquisition task which are shared by the two groups.
At the outset, the infant faces a language acquisition task that can be framed in terms of a mapping or translation problem which is essentially the same for both hearing and deaf children: how can linguistically structured utterances, made by other people, be mapped into self-performed actions? These utterances are generally vocalized words or articulated signs, and the self-performed actions are vocal gestures or manual gestures. This is a complex mapping problem whose solution requires a distributed network of nerve cells; in other words, this problem engages the entire cognitive apparatus. It will be difficult to explain in full the way this task is solved by infants. In principle, it requires a neural substrate that links perception and action in linguistic communication. Therefore, it is generally held that a major step towards an understanding of the mapping function was made by the discovery of the F5 neurons of the macaque monkey (Rizzolatti and Arbib, 1998), and of a similar system located in the convexity of the inferior parietal cortex in man (Fogassi et al., 2005). As mentioned in Chap. 3, Nyström (2008) also provided evidence for the existence of a mirror neuron system in six-month-old children. These systems provide the required link between the perception and production of language, on the assumption that the system is not modality-specific. Also as mentioned in Chap. 3, the putative system of mirror cells in humans has response properties which are lacking in animals. Thus, in humans these cells also respond to intransitive acts, not only to transitive object-related acts. (In competent readers, they also respond to written information about such acts.)
The language-acquisition task, despite differences in the ambient stimulus environment, is initially very similar for deaf and hearing infants. Later these children become more dependent on modality-specific stimulation.

7.2.2 Babbling in Deaf and Hearing Babies

To optimize the mapping of linguistic stimuli into self-performed actions, the practicing of movements that depend on the same neural network is important. Consequently, a high frequency of babbling is expected in both speech- and sign-exposed children, but the actual probability of babbling depends, among other factors, on the degree of language exposure and on the child’s age at the time of exposure. Another factor that influences the degree of babbling is the sensory feedback from the vocal or manual gestures produced in babbling. As we shall see, both deaf and hearing babies babble, albeit with different articulators. Vocal babbling among hearing babies exposed to speech occurs around seven months of age (de Boysson-Bardies, 1999). Also, some deaf babies are reported to produce vocal babbles, but due to lack of auditory feedback their babbling starts late and has a low rate of cyclicity (Oller and Eilers, 1988). Of major interest is the observation that deaf babies exposed to sign language from birth babble with their hands prior to producing their first sign (Emmorey, 2002). Petitto and Marentette (1991) observed a class of hand activities by deaf babies that differed from other gestures or anything else they did with their hands. This class of activities was called manual babbles, because they conformed to the traditional criteria commonly used to identify vocal babbles by hearing babies: they constituted a subset of possible “sign-phonetic” units in natural sign languages, had a syllabic (consonant-vowel) organization, and were produced without meaning or reference. Deaf babies (and hearing babies exposed to sign language from birth) who babble with their hands reduplicate the manual movement, much like the reduplication of vocal syllables (e.g., bababa) by hearing babies exposed to speech. Like the phonetic–syllabic pattern of vocal babbles, which seems to be continuous with the phonetic form of the first words, manual babbling also seems to be predictive of the phonetic form of the first signs. Petitto and Marentette reported that manual babbles were produced from 10–14 months, but Emmorey (2002) stressed that babbling starts at approximately the same age and follows the same stages in hearing and deaf babies. The similarities between vocal and manual babbling mean that both may be interpreted as a key mechanism that permits babies to discover and produce the patterned structure of natural language (de Boysson-Bardies, 1999).
Other researchers (MacNeilage & Davis, 2000; Thelen, 1991) have argued that babbling is the result of general motor development and is therefore akin to other motor activities, like movements of the hands and arms, sitting, standing and walking. Vocal babbling depends on the maturation of the neuroanatomical and neurophysiological mechanisms underlying control of the vocal tract. The content of vocal babbling has been considered a direct consequence of lip and tongue placement, and the reduplications of the consonant-vowel syllabic form are determined by rhythmic mandibular oscillations. According to this interpretation (i.e., the motoric hypothesis), babbling is a nonlinguistic pre-speech activity. In other words, this interpretation may also support a general claim about the origin of language: that mechanisms of production came first, and were then followed by language (Lieberman, 2000; Pinker & Bloom, 1990).
The problem with the motoric interpretation of babbling is that it does not account for the role of linguistic input. To deal with this problem, and to test the motoric versus the linguistic hypothesis of babbling, Petitto, Holowka, Sergio, Levy, and Ostry (2004) studied the spontaneous hand movements of two groups of hearing babies at 6, 10, and 12 months of age. The three babies in group 1 were exposed to speech from birth and had received no sign language input. The three babies in group 2 were all reared by profoundly deaf parents, and were therefore exposed to sign language from birth. Petitto et al. (2004) argued that despite the latter group’s exposure to sign, “the motoric hypothesis would predict similar hand activity to that seen in speech exposed babies because language acquisition in sign exposed babies does not involve the mouth” (p. 43). They placed infrared-emitting diodes on the babies’ hands, and by way of Optotrak sensors they monitored the trajectory of hand movements over time. In addition, video recordings were used for qualitative assessments of the hand movements that constituted babbles.
Petitto et al. (2004) discovered that the sign-exposed babies produced a class of hand movements that conformed to the rhythm of sign language. These movements had a low frequency of 1 Hz, in contrast to the nonlinguistic activities at a higher frequency of 2.5 Hz and above. Movements of the former class were performed within the permissible sign-space, whereas the high-frequency movements were not so restricted. Both groups of babies produced the high-frequency movement segments and were thus similar with respect to the production of nonlinguistic hand activities. The low-frequency movement segments were almost solely produced by the sign-exposed babies. Nonetheless, hearing babies acquiring speech produced a few occasional and highly reduced manual babbles. Yet the few low-frequency manual movements by speech-exposed babies were not performed within the permissible sign-space, and even though they fell within a low-frequency mode, they were still higher in frequency than the manual babbles observed in the sign-exposed babies. The researchers therefore concluded that true manual babbling was produced by the sign-exposed babies alone.
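The decision logic implied by these findings can be roughly sketched as follows. Only the 1 Hz and 2.5 Hz figures come from the study as reported above; the intermediate cut-off and the sample segments are my own illustrative assumptions.

# Rough decision rule abstracted from the reported findings: slow movements
# (around 1 Hz) produced inside the permissible sign-space count as candidate
# manual babbles; fast movements (2.5 Hz and above) count as nonlinguistic.
# The 1.5 Hz cut-off and the example segments are illustrative assumptions.

def classify_segment(freq_hz: float, in_sign_space: bool) -> str:
    if freq_hz <= 1.5 and in_sign_space:
        return "candidate manual babble"
    if freq_hz >= 2.5:
        return "nonlinguistic hand activity"
    return "ambiguous"

for freq, inside in [(1.0, True), (1.3, False), (3.1, True), (2.7, False)]:
    print(f"{freq} Hz, in sign-space={inside}: {classify_segment(freq, inside)}")
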
In view of the robust similarities of syllable structure between vocal and manual babbling, Petitto et al. argued that the infant brain may host a specialized mechanism for detecting input patterns that are associated with structural aspects of natural language. Moreover, they argued that these mechanisms are linked to rudimentary motor programs to produce them, but that these programs are not necessarily linked to a particular response modality. On these premises, Petitto et al. (2004) concluded that “speech and manual movements in young babies are equipotential articulators, either of which can be recruited ‘online’ in very early development, depending upon the language and modality to which the baby is exposed” (p. 69).
In my view, there must be a development of a modality-independent capacity of language that guarantees the equipotentiality of manual and vocal articulators. Although these are differently recruited depending on language exposure, the further development of either speech or sign language shows the realization of a modality-independent language capacity.

7.2.3 Developmental Milestones

The following descriptions can be found in most textbooks on development; they are presented here to emphasize the similarities in developmental trajectories for deaf and hearing babies. Deaf parents often report that their signing children produce their first signs at about 8.5 months, whereas hearing children produce their first words between 10 and 13 months. Thus, it has been commonly assumed that deaf children produce their first signs earlier than hearing children produce their first words. However, Emmorey (2002) pointed out that the first signs by deaf children are not actually symbolic signs but “prelinguistic communicative gestures” that are produced by both deaf and hearing children. When we take symbolic and referential criteria into account, we find that the first signs and the first words appear around the first birthday.
We also find similar and analogous trends in the acquisition of phonology by hearing and deaf children. “Baby signs” are produced by altering and simplifying the adult form, for example by substituting for one handshape used by the adult signer another that does not require the same degree of motor control. Similarly, in hearing children acquiring speech, fricatives and liquids are often replaced by stop consonants.
Motherese means that adults who speak to a child modify their speech by using a higher-pitched voice, a wider range of prosodic contours, longer pauses, emphatic stress, and so on. An equivalent of the motherese of parents of hearing children also occurs among signing parents of deaf children. Signs produced for children are generally longer in duration, contain more repetitions, and are made with larger and more distinct movements. Thus, comparable milestones have been observed in the acquisition of language for both deaf and hearing children, and these milestones have been reached at the same developmental ages by the two groups of children.

7.2.4 The Critical Period Hypothesis

Lenneberg (1967) argued that language acquisition is linked to brain maturation, and is therefore likely to occur in childhood before a major loss of neural plasticity takes place. In line with this argument, he hypothesized a critical period, or a time window of opportunity, during which the child’s brain is particularly sensitive to linguistic input and prepared for the learning of linguistic expressions. To test this hypothesis, we have been dependent on generally anecdotal reports of children with a late exposure to language. (Note the famous case of Genie, who was isolated in her home until the age of 13 [Curtiss, 1977] and received her first linguistic training when the window of opportunity may already have been shut.) The devastating effects of isolation from a linguistic community will vary depending on the duration of deprivation in early childhood. Do deaf children similarly depend on a critical period for the learning of sign language? Programs for the detection of deafness among infants have only recently been provided in developed countries, which means that some deaf children have suffered a period of deprivation before they were systematically exposed to sign language. The length of this period may vary from child to child. Therefore, the study of language acquisition by deaf children raised within loving families offers a special opportunity to test Lenneberg’s hypothesis.
Newport (1991) compared the ASL skills of deaf people who fell into one of three groups: 1) native learners, who were exposed to ASL from birth; 2) early learners, who were first exposed to ASL when they entered school at the age of 4–6 years; and 3) late learners, who were not exposed to ASL before the age of 12. Participants in all three groups had practiced ASL for at least 30 years. Newport found that age of acquisition had no effect on basic word order in ASL. This finding supports a common assumption that word order is a robust property of language that can be learned after puberty. On the other hand, scores on tests of ASL morphology and age of acquisition correlated −.60 to −.70. Thus, participants who acquired ASL early in childhood outperformed those who learned this language at later ages. Other researchers (Mayberry, 1995; Mayberry & Eichen, 1991) have found that phonological processing is particularly vulnerable to a late start.
To my knowledge, there are no analogous studies of the effect of a delayed start of speech acquisition by hearing children. Therefore, we cannot tell whether the “window of opportunity” is the same for deaf and hearing children. Yet the studies of late deaf starters mentioned above do support a general formulation of Lenneberg’s hypothesis: there is a critical period for the development of a modality-independent capacity of language. On this account, we should expect the sign language and speech acquisition processes to be affected by the maturation of the same neuroanatomical and neurophysiological substrata.

7.3 Is There an A-Modal “Language Rhythm”?
As mentioned above, Petitto et al. discovered that deaf babies performed manual babbles which conformed to the rhythm of sign language at a frequency of 1 Hz. Low-frequency movements were also performed by speech-exposed babies; therefore, we may ask whether there exists a low-frequency range that forms an a-modal “language rhythm.” Are there low-frequency modulations shared by ambient speech and sign language that are generally preferred by deaf and hearing babies? The frequency range may exceed 1 Hz and yet be considerably lower than the frequencies commonly observed for random movements of the hands and lips. Dolata, Davis, and MacNeilage (2008) presented evidence that both vocal and manual babbling had higher frequencies, close to 3.0 Hz. Yet these frequencies are still lower than those generally found for spontaneous hand movements and rhythmic mandibular oscillations. Perhaps, therefore, we may still talk about a “language rhythm” which may have a special role in language acquisition and reading. Thus, it has been shown that in beginning readers, children’s sensitivity to slow rhythmic modulations (of ≈ 1.5 Hz) correlates with their reading ability (Kovelman et al., 2012). These researchers also demonstrated an overall greater activation for slow rhythmic stimuli in both hemispheres, but the left hemisphere was selectively “tuned” to rhythmic stimuli around 1.5 Hz. The selective sensitivity for this range may form a cross-modal mechanism for the acquisition of a reading code (see Chap. 5).
However, the exact range of frequencies to which our brain is tuned is a matter of discussion. Fujii and Wan (2014) pointed out that the rate of syllable production is 3–8 Hz. Thus, a select “language rhythm” may be located in this range of frequencies. If the rate is higher than 8 Hz, speech is unintelligible. A selective sensitivity for lip-smacking frequencies of 3, 6, and 10 Hz, presented in video clips of monkey avatars (Ghazanfar & Takahashi, 2014), may indicate a primate precursor to the “language rhythm” observed in humans.
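To make these frequency figures concrete: the dominant modulation rate of a signal’s amplitude envelope can be read off its spectrum and checked against the 3–8 Hz syllable-rate band. The sketch below uses a synthetic 5 Hz envelope, purely for illustration.

import numpy as np

fs = 100.0                                  # envelope sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                # ten seconds of signal
envelope = 1 + 0.8 * np.sin(2 * np.pi * 5 * t)   # synthetic 5 Hz modulation

# The peak of the envelope spectrum gives the dominant modulation rate.
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
peak_hz = freqs[spectrum.argmax()]

print(f"dominant modulation: {peak_hz:.1f} Hz")
print("within the 3-8 Hz syllable-rate band:", 3 <= peak_hz <= 8)
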

7.4 Neural Representations of Signed and Spoken Languages
The brain substrates underlying the comprehension and production of linguistic utterances are mainly the same for speech and sign languages. Signing individuals with damage to left-hemisphere cortical regions show language disturbances, whereas signers with right-hemisphere damage are generally spared. The left hemisphere specialization is now commonly acknowledged for both signed and spoken languages, and hence it cannot be argued that hemispheric specialization has a unique role in the processing of one of the modalities, vision or hearing. After reviewing case studies of signing individuals with language disturbances after brain damage and a number of other lesion and neural imaging studies, Emmorey (2002) rejected Tallal’s hypothesis that the left hemisphere is specialized for rapidly changing sensory events and Kimura’s hypothesis that it supports the control of complex motor actions. She therefore concluded that “the left hemisphere specialization for language does not appear to arise from the particular demands of auditory speech perception” (p. 282). Instead, the left hemisphere specialization has most likely arisen from the needs of a modality-independent use of language, and a production/perception matching system underlies both speech and sign language.
Apart from the left hemisphere specialization for language, there are also cross-linguistic differences in the brain representation of speech and sign language. Corina (1998) found that only damage to critical left hemisphere structures, such as Broca’s and Wernicke’s areas and the supramarginal gyrus, caused any sign language impairments. However, lesions in Wernicke’s area proper have not been observed in patients with sign language aphasia, and lesions in the supramarginal gyrus proper are not typically associated with speech-comprehension deficits. In addition to the critical brain structures shared by signed and spoken languages, it has therefore been speculated that sign language also depends on inferior parietal areas. The critical role of the shared areas for the processing of the two linguistic forms has also been supported by cortical stimulation mapping of a deaf signer who needed surgical treatment for seizures (Corina et al., 1999).
By the end of the last century, the neural structures underlying both
linguistic forms were considered isomorphic; thus, commonalities were
stressed by most research workers. In the beginning of the present cen-
tury, more evidence showing cross-linguistic differences were reported
(see Corina, Lawyer, & Cates, 2013, for a critical review). Also, a growing
awareness that human language may have bi-hemispheric representations
gave rise to more research on the role of the right hemisphere in linguistic
processing. Perhaps sign languages depend on right hemisphere resources
to an extent that is not observed for spoken languages. Thus, neuroimag-
ing studies have shown that comprehension of particular grammatical
constructions in ASL and BSL depend on activation of right posterior-
parietal regions in a way that has not been reported for spoken languages.
Several researchers have therefore speculated that sign languages involve
more processing of spatial relationships, which permits a coordinated
control of both hands. In this language modality, relations such as “on,”
“above,” and “under” need no specific lexical item, but may be depicted by
the configured movement of one hand in relation to the shape of the
other hand.
In short, contemporary research may indicate a conflict between com-
monalities and cross-linguistic differences between the two modalities of
language. The specific coupling of sensory inputs and linguistic articu-
lators in both forms of language has necessarily affected the outcomes
of neuroimaging studies as well as case studies of the aphasias. While
acknowledging the possibility that linguistic competence requires spe-
cialized and language-specific neural mechanisms, Corina et al. (2013)
concluded with the following dilemma: “The broader point is whether
aphasic deficits should be solely defined as those that have clear homolo-
gies to the left hemisphere maladies that are evidenced in spoken lan-
guages, or whether the existence of signed languages will force us to
consider the conception of linguistic deficits such as aphasia and open
the possibility that there may be multiple ways in which the human brain
may manifest linguistic abilities” (last para of e-pub issue).
I am fully cognizant of the existence of modality-specific neural mech-
anisms, and yet the observed homologies may be interpreted as the more
abstract representations of a modality-independent capacity of language.
These homologies do not mean that signed and spoken languages are
unconstrained developmental options. Thus, despite the functional and
structural similarities between speech and sign language, they may also
compete for limited resources to an extent that is not found between
same-modality languages. This will be the problem addressed in the fol-
lowing section.

7.5 Cross-Modal Reorganization by the Deaf After Long-Term Exposure to Sign Language
The equipotentiality of speech and manual articulators, claimed by Petitto
et al. (2004), exists in very young babies only, and is soon replaced by a
preference for one type of articulators. Thus, the type of language expo-
sure, speech or sign language, reinforces one class of articulators, possibly
at the expense of the other class. Hence, switching from sign language to
speech or vice versa becomes more difficult with age; lasting exposure to
one modality of language destroys the equipotentiality of articulators that
existed at birth.
Teoh, Pisoni, and Miyamoto (2004) showed why cochlear implantation
in adults, particularly in late-implanted pre-lingually deafened adults,
causes major problems in the acquisition of fluent speech. They discussed the
anatomical and physiological changes that take place in peripheral and
central auditory pathways upon prolonged deafness. The degeneration
found in the spiral ganglion cells of the peripheral structures of the audi-
tory system was not similarly found in the auditory cortex; the supra-tem-
poral gyrus (associative auditory cortex) does not atrophy or degenerate.
Instead, a cross-modal reorganization, subsequent to long-term auditory
deprivation, takes place. The auditory cortex will be “colonized” by visual
stimuli, and rewired to process visual information. Thus, Teoh et al. con-
cluded that: “the colonization of the auditory cortex by other sensory
modalities is the main limiting factor in post-implantation performance,
not the pathological degenerative changes of the auditory nerve, cochlear
nucleus, or auditory midbrain” (p.  1714). Their observations may be
related to the general plasticity of the human brain, which means that
visual cortex may similarly respond to spoken language in blind children
(see Bedny, Richardson, and Saxe, 2015).
Because the “colonization of the auditory cortex” after prolonged expo-
sure to sign language complicates the acquisition of speech, Teoh et al.
also argued that educational programs for cochlear implant (CI) users
that stress oral communication may potentially reduce the “cortical colo-
nization” phenomenon, and are therefore preferable to programs that
stress “total communication.” By contrast, educational programs that
include use of signs, in combination with oral exercise, may support the
processing of visually evoked signals in the auditory cortex. The question
is whether the two modalities of communication, in the long run, may
mutually interfere, and consequently make full proficiency in sign–speech
bilinguality more or less impossible. Teoh et al.’s discussion
of the consequences of the “cortical colonization” phenomenon is highly
relevant for the post-operative support for children with CI. The options
regarding language planning for these children in the twenty-first century
were discussed by Knoors and Marschark (2012). These writers did not
discuss the educational and remedial consequences of the colonization
phenomenon, yet they wisely concluded that “language planning and lan-
guage policy should be revisited in an effort to ensure that they are appro-
priate for the increasingly diverse population of deaf children” (p. 291).
Experimental works which relate to the effects of sign–speech (bimodal)
bilingualism are needed. The frequency-lag hypothesis (Gollan et al.,
2011) claims that lexical retrieval is disadvantaged in bilinguals due to
a “frequency lag” in use of the two languages, in particular in the use
of the nondominant language. Emmorey, Petrich, and Gollan (2013)
reported the results of a picture-naming task with three groups of par-
ticipants: 1) hearing ASL–English bimodal bilinguals, 2) monolingual
deaf signers, and 3) English-speaking monolinguals. The bimodal bilin-
guals showed a higher frequency effect; that is, they were slower and
less accurate when naming pictures in ASL (their nondominant language),
both when compared with naming in English and when compared with
monolingual deaf signers. Picture naming in English showed no difference
in naming latencies, error rates or frequency effects when bimodal bilinguals
were compared with monolinguals.
Emmorey et al.’s work may be interpreted as showing a linguistic drawback
of bimodal bilingualism, compared to both deaf and hearing monolinguals.
Although brain evolution may have equally paved the way for signed
and spoken languages, bimodal bilingualism may not have been selected
on a par with unimodal bilingualism. Increased mobility and interaction
between language societies stimulates the development of bi- and
multilingualism in the hearing populations. The question is whether this
mobility also stimulates (unimodal) bi- and multilingualism among deaf
signers. I do not know the extent of growth of unimodal bilingualism
in the deaf population. Any differences here between signed and spoken
languages may reveal differences between the two modal forms of
language, which will be discussed in Sect. 7.8.

7.6 Is the Equipotentiality of Articulators in Communication Specific to Humans?
Though humans are the only species having a modality-independent
capacity of language, we may ask whether animals are capable of devel-
oping communicative skills in optional modalities and articulators. Can
we find communicative skills in animals which serve similar functions
but are expressed in different sensory-motor modalities, dependent or
independent of somatic anomalies or other interactional constraints?
The various communication systems in animals are all dependent on
stimulus events belonging to a particular sensory modality and motor
apparatus. The honeybee recruitment dance is conveyed in a visual
medium, and the Vervet monkey alarm calls are sound stimuli. The
humpback whale song may be detected both as sound and pressure waves.
However, I do not know any species where groups of conspecifics have
developed alternative modes of communication depending on differences
in sensory or motor abilities/disorders. In humans, speech and sign lan-
guages are based on different modalities of reception and expression and
yet reveal robust similarities of structure and processes of acquisition. In
my opinion, the acquisition of the two types of languages must depend
on linguistic input patterns that are shared by the two modalities, and
as argued by Petitto et al. (2004) the mechanisms for detection of these
patterns are linked to equipotential articulators. Therefore, the two types
of communication testify to a general, modality-independent capacity of
language by man. No similar evidence of a general language capacity has
been reported for any nonhuman species.
The newborn infant has a capacity to learn language in almost any
modality. This capacity has also been characterized as an instinct to learn,
albeit along a number of different routes or channels. These are devel-
opmental potentialities that do not exist in animals. The infant poten-
tialities for the acquisition of speech and sign language exist with equal
strength at birth, whereas the selection of language modality depends on
the quality and extent of linguistic exposure.

7.7 The Dominance of Spoken Languages


For the typically developing child, the selection of articulators follows the
ambient exposure of linguistic signals. However, in the context of lan-
guage evolution this problem may turn out to be far more complex. From
the time when language arose, approximately 100,000 years ago, vocal
responses may not have had a dominant position among other, equally
possible articulators in human linguistic communities. Given that the
evolution of a language-ready brain set manual and vocal articulators
on a par, why did spoken languages proliferate globally throughout
the history of Homo sapiens sapiens?
How do we explain the different viabilities of spoken and signed lan-
guages? Given Petitto et al.’s equipotentiality of articulators at birth, we
may ask why speech and sign languages have not been equally repre-
sented in ancient and modern societies among typically developing indi-
viduals. After all, deafness is not a prerequisite for the acquisition of sign
language, so why have hearing people, apart from relatives and teach-
ers of deaf persons, not acquired sign language? Instead, spoken languages have
dominated linguistic communication throughout most of human history.
A discussion of this problem requires that we focus on the sociocognitive
conditions that support language, but also on the memory mechanisms
supporting the use of signs on the one side and vocal responses on the
other. Perhaps the viabilities of the two types of languages, speech and
sign languages, differ in some important respects.
Let me revert to the development of a new sign language in Nicaragua;
that is, the NSL (see Introduction, Sect. 1.3). In 1981, after the Sandinistas
had taken power in Nicaragua, a new vocational school for the deaf was
opened in Managua. Deaf children had previously been raised in isolated
families with mainly nonsigning parents, and in this context deaf children
learned a rudimentary form of communication with manual gestures.
They developed a small “vocabulary” of gestures, and to some extent a
strategy for communicating longer sentences (also characterized as a pid-
gin sign language). Arbib (2009) stressed that these skills resulted from
the collective efforts of the family to communicate. However, the gestures
were not standardized, and therefore they were commonly labeled “home
signs,” because they were completely unintelligible to people outside the
family. (As mentioned in the Introduction, these were gradually aban-
doned and exchanged, via a pidgin sign language, with a new and well-
structured creole sign language.)
With the establishment of the vocational school in Managua, a new
situation for deaf adolescents and young adults emerged. They were
encouraged to look upon themselves as social actors who collectively cre-
ated their own identity. In other words, they became a new linguistically
defined peer group whose cohesiveness depended on the standardization
and adjustment of signs. After having met with other deaf children and
adults, their home signs were transformed into a pidgin and later into a
rather arbitrary articulation of signs agreed upon in the new community
of deaf people; the birth of a new language had taken place. This process,
however, depended strongly on teachers or administrators who provided
the community with the idea of a language. Yet, just as home signs had been
created by the collective efforts of the family, the development of NSL
was made possible by the collective efforts of the community of students
(see the role of collaborative structures in Chap. 5, Sect. 5.6.1).
The social mechanisms that operated during the development of NSL
have most probably affected the emergence of any language from prehis-
toric times to the present. It should be stressed, however, that the lan-
guage communities that are created by these mechanisms are defined by
a particular modality and form of expression. Therefore, there are great
barriers between sign and spoken language communities that impede
communicative interactions between them. Are the odds for further
development and expansion equal for the two types of language com-
munities? Apparently, the principle of equipotentiality of articulators
at birth means that neither of them is biologically favored. Yet, there
may be functional or processing differences which set the one modal-
ity of expression at a disadvantage relative to the other. Communication
between signers is not possible in darkness, and sign production is greatly
impeded when the person is occupied with other manual tasks. Both
modalities of linguistic communication depend on sensory and working
memory. Because the duration of “echoic” memory traces favors a longer
span of attention, speech generally runs 1.5 times as fast as signing.
“Iconic” memory traces fade faster, and therefore, the mental “replay”
of signs will cover shorter sequences of communicative elements. Thus,
Lieberman (2015) stated that human speech is a key attribute of lan-
guage “since it allows information to be transmitted at a rate that exceeds
the fusion frequency of the auditory system. It otherwise would not be
able to retain more than a few words to working memory— precluding
comprehending distinctions in meaning conveyed by even moderately
complex syntax” (p. 2, online publication).
It may be difficult to prove that any of these differences really set sign
language at a disadvantage in society relative to spoken languages. There
may, however, be social and cultural factors that affect the viability of the
two communities differently. The viability of a language community has
to do with the growth of a language, and in the long run on the societal
and global importance of a language or a group of languages. The learn-
ing of a language is supposed to provide the person with interfaces, or
means of communication with as many other people as possible. Home
signs do not afford communicative interactions with other people out-
side the family, whereas acquisition of a structured and standardized sign
language does. In the long run, however, the viability of a sign language
(like a spoken language) depends on the possibilities that it can be passed
on to the next generation. Also, it is important that a language commu-
nity does not isolate itself, or become isolated from other languages, that
translations are encouraged, and that new members are included in the
language community by marriage.
These requirements may jeopardize sign languages relative to spoken
languages; in particular, because use of a language is a criterion of cul-
tural belongingness. I believe that deaf signers are more easily segregated
from other linguistic communities than are any group of spoken lan-
guage users. The sign language community becomes a deaf culture that
Emmorey characterized in this way:

Deaf people form a community by virtue of shared values, interests, cus-
toms, and social goals, and deaf culture is unique in its world view, artistic
expression, and humor. Deaf people seek each other out and join together
in many social, political, and athletic organizations both locally and nation-
ally (as well as internationally) (2002, p. 7).

However, the extent to which deaf signers are capable of interacting
globally with signers from other countries is not well known. Deaf people
who use ASL and deaf people who use BSL do not understand each
other. Although both groups belong to the same Western culture, interac-
tions between the two depend on the degree of integration in the English
language society, and hence on the degree of bilingualism among the deaf
people. The two sign languages are mutually unintelligible. Given this
background, I believe that globalization is more difficult for deaf signers
than for users of a spoken language.
This does not mean that sign languages are indigenous languages
without any evidence of historical relationships to other sign languages.
Because deaf teachers from France helped establish the first school for the deaf
in the United States, ASL still retains some resemblance to French Sign
Language (FSL). Today, however, both are as distinct as English and
French. Interactions between deaf people across countries, and across
cultures, do not necessarily indicate the growth of a particular sign lan-
guage community. Such interactions may be the result of bilingual skills
(sign–speech bilinguality) among the deaf. Therefore, we should keep
two questions separate: What is the development and expansion of par-
ticular sign languages, and what is the integration and social welfare of
deaf individuals in modern societies?
In most Western societies, there has been a political movement against
segregation and toward integration of deaf children in mainstream
schools. Some reports have shown that these efforts have been successful
(see, for example, Antia, Jones, Reed, and Kreimeyer, 2009). However,
Rydberg, Gellerstedt, and Danemark (2010) presented a less optimistic pic-
ture of the level of educational attainment. They studied 2144 people
born between 1941 and 1980 who attended a special education program
for the deaf in Sweden. These were compared to randomly chosen hear-
ing people who were born in the same period. They concluded that “the
educational reforms have not been sufficient to reduce the unequal level
of educational attainment between deaf and hearing people” (p. 313).
It may be argued that the observed differences in educational attain-
ment are due to the fact that deaf students work on the premises of the
spoken language culture. Therefore, it seems to be an impossible task to
raise the literacy rate among the deaf on an equal level with the hearing
population. Skills that are based on the comprehension and production
of speech will of course set the deaf at a disadvantage. All barriers and
inequalities that disfavor the deaf in educational settings testify to the
dominance of spoken language in society.
Could it be otherwise? The social mechanisms underlying the creation
of any language have served the sign languages as well as the spoken
languages. However, written languages have been invented and devel-
oped for the spoken languages. Sign languages, despite various attempts
to build an alphabet of signs, have not similarly been endowed with a
written language. This explains why sign languages have been less viable,
compared to spoken languages, in the development of modern societies.

7.8 The “Language Mode” Revisited


The last section dealt with differences between spoken and signed lan-
guages, whereas preceding sections have addressed the similarities
between the two modes of communication. In total, I have argued for a
cross-modal capacity of language which determines the efficiency of com-
munication in both modalities. Therefore, future research should address
the question of what may be the cross-modal source of language impair-
ments. The low-frequency modulations of both manual and vocal behav-
ior mentioned in Sect. 7.3 seem to have formed a “language rhythm”
which is critical for early acquisition of language, and when impaired
may cause enduring difficulties in developing language skill. In addi-
tion to a “language rhythm” the statistical learning constraints studied
by Saffran et al. (see Chap. 3, Sect. 3.2) may also depend on cross-modal
mechanisms, which serve the establishment of early and basic language
functions.
A cross-modal source of language impairments, whether it depends
on a rhythm disorder or constrained statistical learning, will most likely
affect all aspects of language. Language impairments which have a cross-
modal source in development will cause difficulties regardless of whether
the child/adult is exposed to speech or sign language. (Compare the inci-
dence of language impairments in congenitally deaf children, which is
comparable to that in hearing children, as described in Chap. 2, Sect. 2.2.)
Use of language is also affected by a number of factors which are specific
to the media of communication and the modes of articulation. Also, both
formal and informal modes of education will affect the linguistic skills of
people, and therefore the clinical manifestations of language impairments
in children and adults will vary greatly. The final chapter of this book will
deal with the various attempts to define critical markers of developmental
language impairments, and the prospects which arise in an evolutionary
perspective, both with respect to diagnostics and remedial treatment.

References
Antia, S. D., Jones, P. B., Reed, S., & Kreimeyer, K. H. (2009). Academic status
and progress in communication in deaf and hard-of-hearing students in gen-
eral education classrooms. Journal of Deaf Studies and Deaf Education, 14,
293–311.
Arbib, M. A. (2009). Evolving the language ready brain and the social mecha-
nisms that support language. Journal of Communication Disorders, 42,
263–271.
Bedny, M., Richardson, H., & Saxe, R. (2015). “Visual” cortex responses to
spoken language in blind children. The Journal of Neuroscience, 35,
11674–81.
Bolhuis, J. J., Tattersall, I., Chomsky, N., & Berwick, R. C. (2015). Language:
UG or not to be, that is the question. PLoS Biology, 13, e1002063.
doi:10.1371/journal.pbio.1002063.
Corballis, M. C. (2010). Mirror neurons and the evolution of language. Brain
& Language, 112, 25–35.
Corina, D.  P. (1998). Studies of neural processing in deaf signers: Toward a
neurocognitive model of language processing in the deaf. Journal of Deaf
Studies and Deaf Education, 3, 35–48.
Corina, D. P., Lawyer, L. A., & Cates, D. (2013). Cross-linguistic differences in
the neural representation of human language: Evidence from users of signed
languages. Frontiers in Psychology, 3, 587. doi:10.3389/fpsyg.2012.00587.
Corina, D.  P., McBurney, S.  L., Dodrill, C., Hinshaw, K., Brinkley, J., &
Ojemann, G. (1999). Functional roles of Broca’s area and supramarginal
gyrus: Evidence from cortical stimulation mapping in a deaf signer.
NeuroImage, 10, 570–581.
Curtiss, S. (1977). Genie: A psycholinguistic study of a modern day “wild child”.
New York: Academic Press.
de Boysson-Bardies, B. (1999). How language comes to children: From birth to two
years (M. DeBevoise, Trans.). Cambridge, MA: MIT Press.
Deacon, T. (1997). The symbolic species. The co-evolution of language and the
human brain. London: Penguin books.
Dolata, J. K., Davis, B. L., & Macneilage, P. F. (2008). Characteristics of the
rhythmic organization of vocal babbling: Implications for an amodal linguis-
tic rhythm. Infant Behavior & Development, 31, 422–431.
Emmorey, K. (2002). Language, cognition, and the brain: Insights from sign lan-
guage research. Mahwah, NJ: Lawrence Erlbaum Associates.
Emmorey, K., Petrich, J. A., & Gollan, T. H. (2013). Bimodal bilingualism and
the Frequency-Lag Hypothesis. Journal of Deaf Studies and Deaf Education,
18, 1–11.
Fogassi, L., Ferrari, P. F., Gesierich, B., Rozzi, S., Chersi, F., & Rizzolatti, G.
(2005). Parietal lobe: From action organization to intention understanding.
Science, 308, 662–667.
Fujii, S., & Wan, C. Y. (2014). The role of rhythm in speech and language reha-
bilitation: The SEP hypothesis. Frontiers in Integrative Neuroscience, 8, 777.
Ghazanfar, A. A., & Takahashi, D. Y. (2014). Facial expressions and the evolu-
tion of the speech rhythm. Journal of Cognitive Neuroscience, 26,
1196–1207.
Gollan, T. H., Slattery, T. J., Goldenberg, D., Van Assche, E., Duyck, W., &
Rayner, K. (2011). Frequency drives lexical access in reading but not in
speaking: The frequency-lag hypothesis. Journal of Experimental Psychology.
General, 140, 186–209.
Hockett, C. D. (1960). The origin of speech. Reprint from Scientific American,
603.
Klima, E.  S., & Bellugi, U. (1979). The signs of language. Cambridge, MA:
Harvard University Press.
Knoors, H., & Marschark, M. (2012). Language planning for the 21st century:
Revisiting bilingual language policy for deaf children. Journal of Deaf Studies
and Deaf Education, 17, 291–305.
Kovelman, I., Mashco, K., Millott, L., Mastic, A., Moiseff, B., & Shalinsky,
M. H. (2012). At the rhythm of language: Brain bases of language-related
frequency perception in children. Neuroimage, 60, 673–682.
Krentz, U. C., & Corina, D. P. (2008). Preference for language in early infancy:
The human language bias is not speech specific. Developmental Science, 11(1),
1–9.
Lenneberg, E. (1967). Biological foundations of language. New York: Wiley.
Lieberman, P. (2000). Human language and our reptilian brain: The subcortical
bases of speech, syntax and thought. Cambridge, MA: Harvard University Press.
Lieberman, P. (2015). Language did not spring forth 100 000 years ago. PLoS
Biology, 13, E1002064. doi:10.1371/journal.pbio.1002064.
MacNeilage, P. F., & Davies, B. L. (2000). On the origin of internal structure of
word forms. Science, 288, 527–531.
Mayberry, R. (1995). Mental phonology and language comprehension or What
does that sign mistake mean? In K. Emmorey & J. Reilly (Eds.), Language,
gesture, and space (pp. 355–370). Mahwah, NJ: Lawrence Erlbaum.
Mayberry, R., & Eichen, E. (1991). The long-lasting advantage of learning sign
language in childhood. Another look at the critical period for language acqui-
sition. Journal of Memory and Language, 30, 486–512.
Newport, E. L. (1991). Contrasting conceptions of the critical period for lan-
guage. In S.  Carey & R.  Gelman (Eds.), The epigenesist of mind: Essays in
biology and cognition (pp.  111–130). Cambridge, UK: Lawrence Erlbaum
Associates.
Nyström, P. (2008). The infant mirror neuron system studied with high density
EEG. Social Neuroscience, 3(3-4), 334–347.
Oller, D.  K., & Eilers, R.  E. (1988). The role of audition in baby babbling.
Child Development, 59, 441–449.
Petitto, L. A., Holowka, S., Sergio, L. E., Levy, B., & Ostry, D. J. (2004). Baby
hands that move to the rhythm of language: Hearing babies acquiring sign
languages babble silently on the hands. Cognition, 93, 43–73.
Petitto, L. A., & Marentette, P. F. (1991). Babbling in the manual mode:
Evidence for the ontogeny of language. Science, 251, 1483–1496.
Pinker, S., & Bloom, P. (1990). Natural language and natural selection.
Behavioral and Brain Sciences, 13, 707–784.
Rizzolatti, G., & Arbib, M. A. (1998). Language within our grasp. Trends in
Neurosciences, 21, 188–194.
Rydberg, E., Gellerstedt, L. C., & Danemark, B. (2010). The position of the
deaf in the Swedish labor market. American Annals of the Deaf, 155, 68–77.
Teoh, S. W., Pisoni, D. B., & Miyamoto, R. T. (2004). Cochlear implantation
in adults with prelingual deafness. Part 1. Clinical results. Laryngoscope, 114,
1536–1540.
Thelen, E. (1991). Motor aspects of emergent speech: A dynamic approach. In
N.  A. Krasnegor, D.  M. Rumbaugh, R.  L. Schiefelbush, & M.  Studdert-
Kennedy (Eds.), Biological and behavioral determinants of language develop-
ment (pp. 329–362). Hillsdale, NJ: Lawrence Erlbaum.
Vouloumanos, A., & Werker, J. F. (2004). Tuned to the signal: The privileged
status of speech for young infants. Developmental Science, 7(3), 270–276.
Vouloumanos, A., & Werker, J.  F. (2007). Listening to language at birth:
Evidence for a bias for speech in neonates. Developmental Science, 10(2),
159–171.
8
Developmental Language Impairment:
Perspectives of Etiology and Treatment

In Chap. 2, I discussed some conceptual issues about developmental lan-
guage impairment. These were related to the exclusion criteria of SLI,
and the assumption of impairments which are specific to language. Other
issues were related to the criteria of inclusion and the possibility of defin-
ing “critical markers” for SLI, the genetic etiology of this impairment,
and the problems of differential diagnostics.
The ensuing five chapters focused on aspects and issues of evolution.
Now it is time to summarize the main arguments which are raised in
these chapters and which have direct relevance to the study of develop-
mental language impairment. What are the benefits of the evolutionary
approach taken here, and what are the implications for diagnoses and
treatment?

8.1 The Evolutionary Perspective


The implications of this approach can most clearly be seen when we focus
on aspects of continuity in the evolution of language: 1) Pre-adaptations
for language have taken place in the learning and praxis of particular
behavioral patterns by subhuman primates, patterns which occur in refined
forms in humans. 2) These are behavioral patterns which depend on neural
substrates which are relatively well-known in neurobiological research
and which are shared between language and nonlanguage domains. 3)
Language aspects which have an evolutionary history in pre-adaptations
by animals and early man are subject to particular learning constraints;
that is, wired-in abilities which safeguard the process of acquisition and
which also serve as means of the vertical transmission of language.
The three points involve components of language with an evolution-
ary origin which precedes other components, and which guarantees the
first two S’s (signal and structure) in Fitch’s componential analysis (Signal-
Structure-Semantics). Linguistic signals also involve structure, albeit on
a different level than phonology and syntax, and therefore the two S’s
make up the complete structure of language. The main assumption to
be discussed in this chapter is this: developmental language impairments
depend on early abnormalities in the neural mechanisms underlying
comprehension and use of language structure. Therefore the two S’s in
Fitch’s componential analysis will be focused in discussions of diagnoses
as well as (remedial) treatment.
What about the third S, semantics? The acquisition of a “mental lexi-
con” may also be impaired in affected children, and hence semantic abili-
ties are not necessarily spared in developmental language impairment.
However, deficient semantic abilities are generally indirect consequences
of difficulties in structural analysis. Although I will deal with develop-
mental language impairment mainly as “structural impairment,” I will
also make brief excursions into other aspects of language; for exam-
ple, emotional and prosodic aspects, which may as well be affected in
language-impaired children. Although I acknowledge the heterogeneous
symptomatology of these children, the evolutionary perspective taken
here brings the “structural analyses” into focus of discussions. There are
two theoretical paradigms, both presented in Chap. 3, which will serve
as a frame of reference for the following reviews: 1) Ullman’s declarative/
procedural model, and 2) Saffran’s constrained statistical learning paradigm.
Both are used in research which addresses the “core” problems for
language-impaired children, and which therefore have high relevance for
the development of a diagnostics for developmental language impairment.
Ullman’s DP model deals with the mechanisms of both vocabulary and
grammar/syntax learning; the latter involves learning of the main linguis-
tic structure (the second S in Fitch’s component analysis). This model also
includes the procedural deficit hypothesis (PDH) which has been tested
in a number of research works to be reviewed below. Saffran’s constrained
statistical learning paradigm will be included in order to address the
acquisition of linguistic signals (the first S) and artificial grammar learn-
ing (AGL) which has gained considerable attention in recent research
on etiology and treatment. Both paradigms address functions which are
likely products of early pre-adaptations of language.
Let me recapitulate some major propositions about the DP model
(see Chap. 3, Sect. 3.3): The procedural memory system is important in
learning syntax and phonology, whereas the declarative memory system
is involved in the acquisition of vocabulary and general semantic knowl-
edge. The former system mediates rule-learning, and is therefore involved
in the learning and performance of sequences, both serial and abstract.
Therefore, a procedural deficit is associated with grammar impairments that most
likely form the major part of developmental language impairments. More
generally, the impaired learning of sequences also implies difficulties
in detecting and remembering the statistical structures of language, both
within words and sentences, and will therefore interfere with the acqui-
sition of the two S’s (signal and structure) of language. The learning of
linguistic signals depends on the transition probabilities in the sequences
of sounds/gestures, which in turn enable the segregation of words in a stream of
sounds. I therefore consider the constrained statistical learning paradigm
of Saffran to be compatible with Ullman’s DP model.
Why should developmental language impairments be linked to dys-
functions of phylogenetically older structures underlying the procedural
memory system? According to Squire, Knowlton, and Musen (1993),
skills which are controlled by the frontal/basal ganglia circuitry “are reli-
able and consistent, and they provide for myriad, nonconscious ways of
responding to the world” (p. 486). Such skills, for the most part learned
implicitly in early childhood, constitute a firm basis for subsequent devel-
opment. The acquisition of declarative knowledge, which may continue
into adulthood, is generally dependent on the medial temporal lobe struc-
tures (see Chap. 3). This system forms a basis for conscious recollection of
words and phrases, but is “fallible in the sense that it is sensitive to inter-
ference and prone to retrieval failure” (Squire et al., 1993, p. 486). The
procedural system is less flexible, which means that dysfunctions of the
frontal/basal ganglia circuitry may have lasting consequences, whereas
failure of the phylogenetically more recent system may be more corrigible.

8.2 Interactions Between the Declarative and Procedural Systems: Methodological Implications
The PDH states that children with developmental language impairment
“are afflicted with procedural system brain abnormalities that result in
grammatical impairments and/or lexical retrieval deficits” (Ullman &
Pierpoint, 2005, p. 405). To test this hypothesis, we have to define the
behavioral correlates of the procedural system, or, more specifically, we
need to define the learning tasks which depend on the operation of the
procedural system. This turns out to be very difficult because most behav-
ioral tasks/patterns will depend on a complex interaction between the
declarative and procedural system (see Chap. 3, Sect. 3.3.2). To learn
rule-governed patterns, the procedural system depends on selection of
lexical items from declarative memory, and the acquisition of new knowl-
edge often involves the operation of both systems.
In associative learning, the novelty of stimuli is important; associat-
ing novel stimuli with rare or novel responses is assumed to activate
the procedural system. Associating meaningful stimuli with meaningful
new words is supposed to tap the declarative memory store. However,
the dissociation between the two long-term memory systems depends
on speed of presentation. Slow presentation of items in rich semantic
context facilitates declarative memory. Vocabulary learning, which is gen-
erally said to depend on the declarative system, will be impaired when
demands are made on phonological segmentation and phonological
short-term memory (see Bishop & Hsu, 2015).
As will be shown below, serial reaction tasks are often used to study
procedural learning. However, performance on these tasks may also be
influenced by the declarative memory. The dissociation between the two
systems depends on the complexity of the presented series. Also, proba-
bilistic category learning tasks (see The Weather Prediction Task [WPT]
below) which may be solved by explicit strategies do not provide a good
measure of procedural skills. In conclusion: We can only approximate an
experimental dissociation of procedural and declarative learning tasks.
However, many research works have been done to test the PDH for devel-
opmental language impairments. Some of them use research paradigms
which will be reviewed and discussed below; these may also be developed
as diagnostic tests. Notice that their validity rests on a successful dissocia-
tion of the declarative and procedural memory systems.

8.3 Tests of the PDH


In Chap. 3, Sect. 3.3.2, I also reviewed Petersson, Folia, and Hagoort
(2010) who showed that implicit learning of AG depends on the activa-
tion of the left inferior frontal region, whereas the medial temporal lobe
is deactivated during the process. This study, which is based on fMRI
data, shows the association of AG and the neural structures underlying
the procedural memory system. It supports the DP model and is compat-
ible with the PDH. To test this hypothesis, studies of language-impaired
participants are needed. Thus, what type of skills/behavioral patterns
can be used for this purpose? Ullman and Pierpont (2005) pointed out
that grammatical impairments tend to be accompanied by impairments
in a number of nonlinguistic domains, such as motor control of oral
fine movements, mental rotation, hypothesis testing and probabilistic
categorization, sequencing, statistical learning and executive functions.
In this way, observations that show co-morbidity of language impair-
ment with any of these functions have been interpreted as support for
the PDH. Some researchers have therefore compared sequence learning
by language-impaired and typically developed children by using a Serial
Reaction Time (SRT) task. Others have compared the two groups on
a task of probabilistic categorization such as the WPT.  In the follow-
ing, I shall review a few of these studies which show somewhat disparate
results as to the association between language impairments and proce-
dural difficulties. Their relevance for the PDH, however, depends on the
selection of children in the experimental group who suffer primarily from
grammar impairments.
According to a third approach, the PDH may also be tested within a
linguistic domain as long as the task does not require declarative memory
of the presented materials. Thus according to this hypothesis, implicit
learning and generalization of novel language structures will be deficient
in grammar impaired children. Along this line of research, I shall crit-
ically review a few studies of acquisition and generalization of AG by
language-impaired and typically developing (TD) children. First, I shall
deal with studies that focus on a nonlinguistic ability.

8.3.1 SRT

In SRT tasks, participants are shown four boxes or circles arranged hori-
zontally across a computer screen or ordered in a diamond configuration.
Whenever a stimulus appears in one of the four boxes, the participant is
told to press a button on the response pad that matches the location of the
visual stimulus. Participants are not told that the stimuli are presented in
a fixed sequence, usually 10 items long, for example, 4,2,3,1,3,2,4,3,2,1,
where each stimulus presentation corresponds to a particular location on
the screen. Sequence learning is measured as improvements in accuracy
and/or reaction time (RT) compared to a randomly ordered sequence.
Typical performance by participants with normal language (NL) devel-
opment is an initially rapid decrease in RT followed by an asymptote. In
Tomblin, Mainela-Arnold, and Zhang (2007) adolescents with SLI were
able to learn the sequences, but only after significantly more trials com-
pared to TD adolescents. Also, the SLI participants did not approach an
asymptote at the end of training.
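To make the paradigm concrete, a minimal simulation is sketched below (Python; the block structure and the fixed sequence follow the description above, while the function names and trial counts are illustrative assumptions):

```python
import random
from statistics import mean

SEQUENCE = [4, 2, 3, 1, 3, 2, 4, 3, 2, 1]  # the fixed 10-item pattern above

def make_block(patterned, n_trials=100):
    """Return one block of stimulus locations (1-4)."""
    if patterned:
        # cycle the fixed sequence; participants are not told that it repeats
        return [SEQUENCE[i % len(SEQUENCE)] for i in range(n_trials)]
    return [random.randint(1, 4) for _ in range(n_trials)]

def sequence_learning_score(rts_patterned, rts_random):
    """Learning index: the mean RT cost when the pattern is withdrawn."""
    return mean(rts_random) - mean(rts_patterned)
```

A positive score indicates that responding had become tuned to the sequence; the group comparisons reviewed below essentially compare such scores between language-impaired and TD participants.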
Later, Lum, Gelgic, and Conti-Ramsden (2010) compared 15 chil-
dren with SLI with nonimpaired children in a different version of the
SRT task. They measured procedural learning by subtracting RT in a
fourth block from RT in a pseudo-random ordered fifth block. The SLI
children were not able to learn the sequences at levels comparable to
the nonimpaired children. Lum et al. (2010) also tested the participants’
explicit knowledge of the presented sequences. They verified that “none
of the children participating in the study was able to recall the ten-item
sequence pattern” (p. 101). Thus, the poorer sequence learning of the SLI
children could not be attributed to explicit, declarative knowledge of the pattern.
Gabriel, Maillart, Guillaume, Stefaniak, and Meulemans (2011) ran a
probabilistic version of the SRT task with 15 SLI children and 15 TD
controls. The RT difference between the final block and a subsequent con-
trol block did not differ significantly between the two groups. Children
with SLI were as fast as the controls, and hence, the authors concluded
that children with SLI “do not display global procedural system deficits.”
Explicit knowledge of the presented pattern was not examined.
The disparate results from the two last-mentioned studies may have
to do with the relative number of grammar-impaired children com-
pared to the number of children without grammar impairment in the
broader language-impaired SLI group. I believe a further analysis of the
data, based on a re-categorization of the impaired children into grammar-
impaired (GI) and normal grammar (NG) groups, is needed. Finally, the
presentation rates of stimuli and manner of responding (touching the
screen rather than a keyboard) in the two studies may have caused a dif-
ferent involvement of working memory, and because explicit memory
was not examined in the Gabriel et al. study, we do not know whether
declarative knowledge may have contributed to the disparate results in
the two studies.
Hedenius et  al. (2011) presented some important contributions to
the understanding of procedural learning by language-impaired chil-
dren. Their approach is innovative in at least two ways: First, the group
introduced the Alternating Serial Reaction Time (ASRT) task. A ran-
dom block that follows the fixed sequence of items is replaced by ran-
dom items that are interspersed with the pattern throughout the task;
for example, 1-r-2-r-4-r-3 (numbers correspond to specific locations and
r corresponds to random locations). This procedure elicits no declarative
knowledge, and makes possible continuous examination of procedural
learning. Secondly, the Hedenius group extended the ASRT task to study
consolidation and retention of sequence knowledge (long-term learning),
an extension that is warranted by previous observations of dyslexic chil-
dren who perform well in initial training of mirror drawing but suffer a
setback on the same task one day later compared with the performance
of TD children. In the Hedenius et al. (2011) study, both SLI children
and TD children showed evidence of initial-sequence learning. The two
groups did not differ with respect to long-term learning, but only the TD
children showed clear evidence of consolidation. To show whether defi-
cits of sequence learning are associated specifically with grammar impair-
ment rather than broadly defined language impairments, all children
participating in the study were re-categorized into GI and NG groups.
Based on the Clinical Evaluation of Language Fundamentals-3 (CELF-3)
Word Structure, Recalling Sentences and Sentence Structure subtests for
children 7–8 years, and CELF-3 Formulated Sentences and Recalling
Sentences subtests for children 9–14 years, they constructed a composite
grammar test. Z-scores at or below −1.14 were defined as GI, and those
above −1.14 were defined as NG. Both GI and NG children showed evi-
dence of initial-sequence learning, but only NG children demonstrated
clear evidence of consolidation and long-term learning.
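The alternating structure can be sketched as follows (an illustrative Python fragment; the four pattern elements are taken from the 1-r-2-r-4-r-3 example above):

```python
import random

PATTERN = [1, 2, 4, 3]  # fixed elements of the 1-r-2-r-4-r-3 example

def asrt_stream(n_trials):
    """Interleave fixed pattern elements with random locations (1-4)."""
    stream = []
    for i in range(n_trials):
        if i % 2 == 0:
            stream.append(PATTERN[(i // 2) % len(PATTERN)])  # pattern trial
        else:
            stream.append(random.randint(1, 4))               # random trial
    return stream
```

Because the pattern never occurs as an uninterrupted run, it resists explicit recall, and learning can be tracked continuously as a growing RT advantage on pattern trials relative to the interspersed random trials.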
Recently, Lum, Conti-Ramsden, Morgan, and Ullman (2014) pre-
sented a meta-analysis of eight studies where SRT tasks have been used to
test the PDH in children with SLI. The results of 186 participants with
SLI and 203 TD children were examined using a meta-regression analy-
sis. The increase in RT in the random block, which is taken as a measure
of sequence learning, was compared between SLI and TD children in the
sample of eight studies. They found an average effect size of .328, which
is significant, showing that the PDH is supported in the meta-analysis. They
also found that effect sizes varied as a function of the age of participants
and characteristics of the SRT task.

8.3.2 The WPT

The WPT, which involves probabilistic category learning, was originally
introduced by Knowlton, Squire, and Gluck (1994), and has been used
to dissociate procedural and declarative memory. The participant is pre-
sented with an image of one, two, three or four objects, for instance,
tarot cards or geometrical shapes, randomly combined, and the task is to
decide whether the pattern predicts sunshine or rain. Feedback is given,
permitting the study of participants’ incremental learning. The intro-
ductory parts of the task are generally considered to depend on single-
cue strategies associated with procedural activity, whereas the later phases
are said to build on multi-cue strategies generally associated with activity
in the declarative system. Shohamy, Myers, Onlaor, and Gluck (2004),
using the WPT, compared patients with mild symptoms of Parkinson’s disease and
age-matched control participants. They found no group differences in
the initial phase of 50 trials. In the ensuing trials, control participants
gradually switched from a single-cue to a multi-cue strategy, whereas the
Parkinson’s participants did not change. Kemény and Lukács (2010),
reasoning from the PDH, expected language-impaired children to show
the same performance pattern on the WPT as the Parkinson’s
patients. They studied the performances of 16 children who were diag-
nosed as language-impaired according to Hungarian versions of classi-
cal language tests (PPVT and TROG), and who were compared to 16
TD children. Both groups had a mean age of 11;3 years. The language-
impaired children showed deficient learning on the WPT; that is, a defi-
ciency which appeared already in the early stages of the task.
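The logic of the task can be illustrated with a small trial generator (a sketch only; the cue weights and the averaging rule are illustrative assumptions, not the probabilities used by Knowlton et al., 1994):

```python
import random

# Each cue card predicts "sun" with its own strength (illustrative values).
CUE_WEIGHTS = {1: 0.8, 2: 0.6, 3: 0.4, 4: 0.2}

def wpt_trial():
    """Draw one to four cards and a probabilistically linked outcome."""
    cues = random.sample(sorted(CUE_WEIGHTS), k=random.randint(1, 4))
    p_sun = sum(CUE_WEIGHTS[c] for c in cues) / len(cues)  # combine the cues
    outcome = "sun" if random.random() < p_sun else "rain"
    return cues, outcome
```

A single-cue strategy tracks only the most predictive card, whereas a multi-cue strategy integrates the whole combination; the latter is the shift that the control participants in Shohamy et al. (2004) gradually made.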
Children in the experimental group of Kemény and Lukács’s study
were broadly defined as language-impaired, and although grammar defi-
cits may have been a core problem, the study does not explicitly relate
PDH to grammar impairments. The authors argued that the deficient
learning of the experimental group is an abnormality that tends to
accompany language impairment, a proposition that fully agrees with the
PDH. The question is whether the experimental group was characterized
by grammatical impairments only, or whether lexical/semantic problems
were also involved. Furthermore, it may be argued that the WPT may be
solved using explicit strategies and that this task therefore is not a good
test for the PDH. In any case, the authors admitted that we cannot know
whether the observed deficit, “is selective to the procedural system or is
complemented by deficits in the declarative system.”
Many researchers will argue that the PDH cannot be tested with non-
linguistic tasks. Instead, the critical tasks should involve the learning of
language structures, either natural or artificial language structures. In the
following I shall critically review some studies that relate to the PDH by
using AGL tasks.
8.3.3 AGL and Language Impairment

The great challenge for the young child who is about to learn a first lan-
guage is to comprehend and make use of a hierarchical phrase structure.
This structure involves nonadjacent dependencies, as can be illustrated
in the sentence: The man on the sofa has aching legs (i.e., the man, not
the sofa, has aching legs). In Chap. 3, I reviewed some experiments by
Saffran et al. (2008), who showed that 12-month-old children are able
to learn predictive dependencies simulating the complex phrase struc-
ture of natural languages. It may be that the learning of such dependen-
cies is very difficult for some children who are language-impaired. I have
therefore argued that detection of the statistical dependencies in natu-
ral language utterances may provide an access-code to early dialogues, a
code that may be insufficiently “wired-in” by some children that turn out
to have language-learning difficulties. The learning of such dependen-
cies can be studied by use of AGL tasks comprised of series of nonsense
syllables/words.
Both adjacent and nonadjacent dependencies are learned by the TD
individual. Plante, Gomez, and Gerken (2002) presented sentence strings
that showed adjacent dependencies like the word order constraints of
a finite-state grammar. Participants made grammaticality judgments of
novel strings, and after only 5-minute exposure to the language, TD
adults performed above chance, whereas adults with language impair-
ments did not exceed chance level performance. Nonadjacent depen-
dencies are generally considered more difficult, because the learning of
such dependencies requires subjects to ignore considerable variation in
intervening elements. In fact, however, the likelihood of detecting non-
adjacent dependencies increases with the variability of intervening ele-
ments. Thus, Gomez (2002) presented children with strings of three
nonsense words, A-X-B, where A and B were always the same, and X represented
a set of 3, 12, or 24 words. It turned out that children in a listening time
test could only discriminate between grammatical and ungrammatical
strings in the high-variability condition (24 words).
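The variability manipulation is easy to state in code (an illustrative sketch; the frame words and middle items are invented, not Gomez’s actual nonwords):

```python
import random

def axb_training(n_strings, set_size):
    """Generate A-X-B strings in which only the middle element X varies."""
    A, B = "pel", "jic"                               # fixed, invented frame
    x_pool = [f"x{i:02d}" for i in range(set_size)]   # 3, 12, or 24 middle items
    return [(A, random.choice(x_pool), B) for _ in range(n_strings)]

# With set_size=24 the A_B frame is the only stable regularity, which is
# precisely what favors detection of the nonadjacent dependency.
high_variability = axb_training(n_strings=100, set_size=24)
```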
Grunow, Spaulding, Gómez, and Plante (2006) adopted the
Gómez task in a study of AGL, in which college students with and with-
out language-learning difficulties served as participants. They listened to
sentences composed of three nonsense words, in which the X element
represented a set of either 12 or 24 words. Participants with NL skills
were able to learn and generalize the nonadjacent dependencies in both
variability conditions, whereas those with language-learning difficulties
did not perform above chance in any of the two conditions. This work
has been criticized due to a small sample size and a lack of significant
group differences, but another study by Hsu, Tomblin, and Christiansen
(2008), with a similar procedure, also showed that the high-variability
condition only facilitated nonadjacent dependency learning by TD ado-
lescents, not by adolescents with language impairments.
In a more recent study, von Koss Torkildsen, Dailey, Aguilar, Gómez,
and Plante (2013) showed that the variability principle generalized
beyond the A-X-B grammatical form. They presented strings of non-
words which took the forms of aX and Yb, where a and b were single and
specific nonwords, while X and Y were represented by 3 or 24 different
nonwords. Sixteen students with NL development, and 16 students with
language-based learning disability (LLD) participated in the study. Half
of each group was assigned the low variability condition (3 nonwords);
the other half was assigned the high variability condition (24 nonwords).
After a familiarization phase participants were tested for recognition of
strings heard and for generalization of the grammar with nonword strings
containing a new X or Y element. Learning strategies contained in incor-
rect responses were identified by recording the number of times items
with co-occurrence violations (aY, Xb) and items with linear order viola-
tions (Xa, bY) were accepted. Learning was defined as high acceptance
of the correct strings combined with low acceptance of either of the two
violation types.
Participants in the LLD group, who were assigned the low variabil-
ity condition (3 nonwords), were unable to distinguish items that had
been heard from items that deviated from previously presented items (co-
occurrence and linear order violations). Also they did not show evidence
of generalization to new grammatical strings. The other half of the LLD
group, who were assigned the high variability condition, showed evidence
of both learning and generalization of the grammar. Participants in the
NL group learned and showed evidence of generalization in both low
and high variability conditions, but relative effect sizes suggested that
members of this group also benefitted more from the high variability
condition. The authors concluded that “these findings demonstrate that
rapid learning of grammatical forms can be achieved for individuals with
language-learning disabilities, if the language input is structured in ways
that facilitates rapid, unguided learning” (p. 625).
Hsu and Bishop (2011) examined evidence that language-impaired
persons have particular problems in extracting statistical dependencies,
and argued that due to these problems the language-impaired child or
adult becomes more dependent on rote learning (exemplar-based learn-
ing). In a previous AGL experiment by Hsu et al. (2008), token frequency
was varied independent of variability in an A-X-B paradigm. Because the
test strings were all heard during training, token frequency was as high
as 72 in the set size = 2 condition with only 6 different sentence strings.
In set size = 12 there were 36 different sentences with a token frequency
of 12, and in set size = 24, there were 72 different sentences each with
a token frequency of 6. Thus variability was negatively correlated with
token frequency. Among the TD participants the number of participants
who reached 100 % accuracy in at least 1 nonadjacent pair was highest in
the high variability condition, as expected. Fifteen percent of the language-impaired
participants reached this level of performance in the same condition, and
25 % in the other variability conditions. Thus more language-impaired
participants reached the 100 % level of performance when variability was
low and token frequency high. These results agree with clinical observa-
tions showing that overlap of utterances produced by SLI children with
those produced by their caregiver is greater than with those produced by
their siblings. Thus language acquisition in this group is dominated by
exemplar-based, rather than rule-based, learning, and therefore becomes
more dependent on rote learning. This observation is clinically relevant,
but is not informative about the etiology of grammar impairments.
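The negative correlation between variability and token frequency follows from the fixed amount of exposure; assuming three A_B frames, as in the original Gomez (2002) design, the figures above can be reproduced directly:

```python
# Total exposure is held constant at 432 tokens; with three A_B frames
# (an assumption about the design), the string count is 3 * set_size.
TOTAL_TOKENS = 432
for set_size in (2, 12, 24):
    n_strings = 3 * set_size
    token_frequency = TOTAL_TOKENS // n_strings
    print(set_size, n_strings, token_frequency)  # -> (2, 6, 72), (12, 36, 12), (24, 72, 6)
```

Raising variability therefore necessarily lowers how often any single string is heard, which is what allows the two factors to be pitted against each other.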

8.3.4 Statistical Learning of Linguistic Signals

The literature that presents major support to the PDH has emphasized
statistical learning, and the experimental tasks are described in terms
of procedural learning. Perruchet and Pacton (2006), who emphasized
implicit rather than procedural and statistical learning, have focused
on a different learning process. Within an implicit learning tradition,
grammaticality judgments are said to depend on fragments of strings or
chunks. One may ask whether chunks like words or syllables are primary
in relation to statistical patterns, and whether chunks are learned as
declarative knowledge. Irregular forms of verbs, which more clearly
involve arbitrary sign-referent relationships, have been argued to depend
on declarative processing. Chunks, considered as basic language catego-
ries, depend on “idiosyncratic mappings” and are stored in a memorized
“mental lexicon.” This interpretation also agrees with the general posi-
tion of the primacy of the lexical/semantic system in language evolution
(Bickerton, 2003).
However, chunks are not necessarily different from statistical pat-
terns. Consider, for example, the question of how linguistic chunks are
acquired: we may as well ask how we are capable of segmenting words
out of a continuous stream of speech sounds. The two questions address
one and the same subject matter. In natural languages, the predictive
dependencies between phones within words are always higher than the
predictive dependencies between words. Evans, Saffran, and Robe-Torres
(2009) constructed a language out of CV syllables to form trisyllabic
“words,” for example, dutaba and tutibu. The within word transitional
probabilities ranged from 0.37 to 1.0, and the transitional probabilities
across word boundaries ranged from 0.1 to 0.2. Language-impaired and
NL controls listened to this language for 21 minutes. The children were
asked to draw using a computer-coloring program, while the examiner
monitored that the children sustained interest in the drawing. In a
two-alternative test with 36 trials, the children heard pairs of trisyllables
(consisting of a “word” and a nonword foil). The nonwords were made
up of syllables in the “word” inventory that never followed each other in
the 21-minute speech stream. They were then told to choose the sound
in each pair that sounded more like one they had heard while drawing.
After 21 minutes, only participants in the NL group performed signifi-
cantly above chance. In a 42-minute speech condition with similar stim-
uli and procedures, both groups performed significantly above chance.
This shows that poor implicit learning by language-impaired children
makes segmentation of words (chunking) more difficult and attainable
only after prolonged exposure to the speech stream. This result also sup-
ports other studies showing that SLI children perform poorly in AGL
tasks under nonoptimal conditions. Also a 42-minute tone condition
turned out to be very difficult for the language-impaired children. Evans
et al. (2009) constructed a tone stream out of 11 pure tones from the
same octave (starting at middle C). These were combined into groups
to form “tone words,” which were not separated by any form of acous-
tic markers. The only clues to the beginning and end of a “tone word”
were the transitional probabilities between tones. Again the children were
occupied with a drawing task while listening to the tone stream for 42
minutes. After the implicit learning session, the children were presented
with 36 test-pairs each consisting of a “word” and “nonword.” They were
then asked to choose the sound sequence that sounded most familiar.
Again, the performance of the control group was significantly different
from chance, while the performance of the language-impaired children
did not differ from chance. These studies show that learning of linguistic
signals (words or “basic chunks”) is most likely mediated by the
procedural, not the declarative, system.
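
The segmentation mechanism assumed in these studies can be sketched in a few lines: estimate transitional probabilities from a continuous syllable stream and note that they dip at word boundaries. The toy stream below uses only two invented trisyllabic words, far fewer than in the actual experiments, so its across-boundary probabilities come out higher than the 0.1–0.2 reported above.

```python
import random
from collections import Counter
from itertools import pairwise   # Python 3.10+

# A toy Saffran-style stream built from two invented trisyllabic "words";
# the real studies used larger inventories and minutes of continuous speech.
random.seed(0)
words = [("du", "ta", "ba"), ("tu", "ti", "bu")]
stream = [syll for _ in range(200) for syll in random.choice(words)]

bigrams = Counter(pairwise(stream))
unigrams = Counter(stream[:-1])

def tp(s1, s2):
    """Estimated transitional probability P(s2 | s1)."""
    return bigrams[(s1, s2)] / unigrams[s1]

print("within word:     P(ta|du) =", round(tp("du", "ta"), 2))   # ~1.0
print("within word:     P(ba|ta) =", round(tp("ta", "ba"), 2))   # ~1.0
print("across boundary: P(du|ba) =", round(tp("ba", "du"), 2))   # ~0.5 here
# Word boundaries fall where the transitional probability dips; with the
# larger word inventories used in the experiments the dips reach 0.1-0.2.
```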
Counter-evidence to the PDH. According to the PDH, people with
basal ganglia dysfunction will have problems in learning AG tasks. In
addition, dysfunctions of the cerebellum, in particular the dentate
nucleus, will interfere with AG learning. However, Witt, Nühsman, and
Deuschl (2002) have shown that patients with advanced Parkinson's disease
can accomplish AG learning. This observation provides important
counter-evidence to the PDH. However, whereas people with grammar
impairment tend to have abnormalities in the basal ganglia and/or
cerebellar structures, not all people with abnormalities in these structures
have grammar impairments. The particular interconnections
between these structures and parts of the frontal cortex influence the
way neural abnormalities might interfere with grammar development.
Thus Ullman and Pierpont (2005) argued that not all frontal regions
are involved in procedural memory. The most important parts are the
Supplementary Motor Area and, in part, Broca's area (BA 44 and 45).
More research is needed to show the critical neural circuitry
underlying early grammar learning.
Notice also that anomalies of the brain structures underlying the pro-
cedural system also predict phonological problems. Phonological rep-
resentations of new words, in particular words whose sound structure
is hard to memorize, may not be established, or may be learned only with
great effort. Thus repeated exposure with guided listening and talking
is often necessary for new word learning. However, frequent words may
be spared.
Language-impaired children have great difficulties in tasks which
require repetition of nonwords. This problem has been taken as a diag-
nostic marker of language impairments (see Chap. 2, Sect. 2.3). Also,
it has been shown that one of the affected members of the KE family
acquired phonological structures of English only at an extremely
delayed rate (Fee, 1995). Phonological structures are sequential struc-
tures, the learning of which depends on the neural system underlying
the procedural memory. Therefore, phonological difficulties will be cor-
related with problems in the learning of AG.

8.4 The Declarative Memory System in Language-Impaired Children

The studies reviewed in the above section show that recent research has
given considerable support to the PDH. Thus impairment of the first two
S’s is related to dysfunctions of the procedural system. Now, the question
is whether the declarative system is also impaired or whether it is
relatively spared in children with a procedural language disorder (PLD). This
problem is addressed in a recent study by Bishop and Hsu (2015). They
compared 28 children with SLI (7–11 years) with 28 younger typically
developing children who were matched for raw scores on a test of recep-
tive grammar in two tasks of paired associate learning. The SLI children
were also compared on the same two tasks with another age-matched
group of 20 TD children. In one of the tasks, the children were told to
select one of four pictures of rare animals to match a heard novel name.
This is a vocabulary task which is generally said to involve the declarative
system; however, SLI children have difficulties in vocabulary learning and word
retrieval. Thus Ullman and Pierpont have argued that both declarative
and procedural systems are involved in vocabulary learning; which one of
the systems will be most heavily taxed depends on the methods of
assessment (see Sect. 8.2 above). In the other task, the participants were told
to match a complex nonverbal sound with a visual pattern; that is, a task
which was said to depend on the declarative system without demands of
phonological analysis. In this way, they could compare declarative learn-
ing on verbal and nonverbal paired associate learning tasks.
An errorless learning procedure was followed in both tasks. The child
heard a target word and was told to select a picture by clicking on it.
The picture of the animal was then removed by the robot and, when the
choice was correct, the robot also said the target word. When incorrect, the robot said
nothing and the child was told to try again until the correct picture was
selected. The same procedure was followed in the other task with visual
patterns and meaningless sounds. No spoken responses were needed, and
the errorless procedures were adopted to minimize demands on working
memory.
In the vocabulary task, the age-matched TD children outperformed
the other groups. Their level of performance at the start was higher,
whereas their rate of improvement was the same as that of the other two
groups; only the intercept of the learning curves differed between the groups. In
the nonverbal paired associate task, there were no reliable differences
between the groups. Because the results showed spared declarative
learning by the language-impaired group, they were said to be consis-
tent with the PDH.
The intact declarative system in the cross-modal associate learning
task was given considerable attention by Bishop and Hsu. This finding
suggests that the declarative system may be used more effectively in
treatment. Declarative failure may still be found among language-impaired
children as a consequence of grammatical difficulties. Nevertheless,
the relative sparing of declarative abilities may be exploited in attempts
to develop alternative methods of treatment. The balance between the
procedural and declarative system may tip in favor of the latter system,
but this does not mean that language-impaired children have no lexical/
semantic problems.
8.5 Lexical Problems


So far I have presented research evidence showing that grammar impair-
ment is associated with a dysfunctional procedural system. Impairments
of the lexical/semantic system may be associated with dysfunctions of the
medial temporal lobe structures (as in Wernicke’s aphasia). Thus, when
language-impaired children demonstrate preserved semantic abilities,
these structures are supposed to function normally. However, semantic/
lexical impairments do not arise only from temporal lobe dysfunctions;
they may also be produced by dysfunctions of the procedural system.
As argued above, grammar-impaired children will also have semantic
problems when the meaning of words depends on grammatical analysis,
on the retrieval of long sentences in working memory, or when linguistic
information is presented rapidly. These children tend to be impaired in
nonlinguistic domains as well; for example, in tasks requiring sequenc-
ing, speed, timing and balance. Oro-motor and facial praxis turned out
to be severely impaired in affected members of the KE family. Similarly,
Tallal, Stark, and Mellits (1985) reported that rapid oral movements are
very difficult for language-impaired children. These children may also be
impaired in tasks of mental rotation and working memory.
Some children with an SLI diagnosis show semantic/lexical impair-
ments with normal or near-normal grammatical abilities, and as a
rule, they also perform nonlinguistic tasks at the level of TD children.
Although their impairment involves a failure of declarative function, it
has not been possible to link their impairment to a temporal lobe dys-
function. (However, as described in Chap. 5, Sect. 5.5, the roles of the
hippocampus and para-hippocampal regions, as well as the involve-
ment of the left inferior prefrontal cortex in hard semantic judgments,
are well-documented in neurocognitive research.) Without negating
the relationship between these structures and the declarative system, I
think a neurocognitive framework for an interpretation of semantic/lexi-
cal impairments should be replaced by an associative network approach.
Since the work of Collins and Loftus (1975), the network notion and
the concept of spread of activation have provided a strong impetus in the
study of semantic memory and knowledge representation in cognitive
psychology. According to this approach, a vocabulary rests on an
associative network of ideas, and each idea is represented by a node; the
nodes are connected by associative links. Associative theories that make
use of this general network notion were introduced as a general frame-
work for studying human memory. However, it may also serve as a gen-
eral framework for studying semantic/lexical abilities. The vocabulary
depends on the size of the associative network, and the retrieval of ideas/
words from this network can be described as a “travel” via connections
between related nodes until the target information is reached. A node, for
example, the one representing the target idea/word, will be activated
once it receives a strong enough input signal. It receives input not only
from an external stimulus, but also from other nodes in the network. The
activation of a particular node depends on the strength of connections by
which it is linked to other nodes in the network. Activation travels from
node to node via associative links, and activation at each node may be
subthreshold, but may be summated by subsequent input signals to reach
threshold value. Thus retrieval of a particular word or concept depends
on a spread of activation in the network.
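
A minimal sketch may make the retrieval metaphor concrete. The network below is invented; node names, link strengths, the decay parameter, and the threshold are illustrative values, not empirical ones.

```python
# A minimal spreading-activation sketch over an invented associative
# network; all numbers are illustrative, not empirical values.
links = {
    "dog":    {"animal": 0.8, "bark": 0.7, "cat": 0.5},
    "cat":    {"animal": 0.8, "dog": 0.5},
    "animal": {"dog": 0.8, "cat": 0.8},
    "bark":   {"dog": 0.7},
}

def spread(source, steps=2, decay=0.6):
    """Propagate activation outward from a stimulated node; activation
    summates at each node and fades with distance."""
    activation = {source: 1.0}
    frontier = {source: 1.0}
    for _ in range(steps):
        incoming = {}
        for node, act in frontier.items():
            for neighbour, weight in links.get(node, {}).items():
                incoming[neighbour] = incoming.get(neighbour, 0.0) + act * weight * decay
        for node, act in incoming.items():
            activation[node] = activation.get(node, 0.0) + act  # summation of inputs
        frontier = incoming
    return activation

THRESHOLD = 0.4   # a node is "retrieved" once its summed input reaches threshold
activation = spread("dog")
print({n: round(a, 2) for n, a in activation.items()})
print("retrieved:", [n for n, a in activation.items() if a >= THRESHOLD and n != "dog"])
```

On this picture, a larger and more densely connected network makes more target nodes reachable above threshold, which is one way of restating the claim that vocabulary size depends on the size of the associative network.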
In my view, the development of semantic/lexical abilities is essentially
the same as the development of an associative network of ideas. Modern
network theories stress that network development requires an active role
for the child. It is important that words or other items in the ambient lin-
guistic environment are apprehended in several different ways. Therefore,
communicative interactions between more than two people are needed
for semantic development. The language user must be exposed to a diver-
sity of expressions, but environmental conditions in early childhood and
adolescence do not always warrant this diversity. Sometimes children
grow up in environmental conditions that resemble “the isolated pair
condition” in Fay et al.’s (2010) study (see Chap. 4, Sect. 4.5). For these
children, language exposure is poor, and therefore all aspects of linguistic
communication are affected; primarily, however, the deficient exposure is
noticeable as a small and ineffective vocabulary.
Language environments for children may range from extreme linguistic
poverty on the one hand (for example, family arenas giving rise to home
signs by deaf children) to the “community condition” (in Fay et al.'s
study) on the other, wherein many individuals interact, often with
new interactional partners. Differences in the complexity of language
exposure, as described in relation to the two interactional arenas, deter-
mine the development of an associative network for the child. Therefore,
clinics dealing with language-impaired children have developed “learning
programs” for the acquisition of new words and a fuller comprehension
of general concepts. In dialogues with a teacher, in particular, the child is
encouraged to reflect on the meaning of words, and thereby to strengthen
his/her semantic/lexical abilities. In this way, these “learning programs”
also extend and strengthen the child's associative network.
To simulate the community condition, learning programs for language-
impaired children should include a group of participants who will be
trained to communicate with each other with the objective of building
shared knowledge about a particular subject. In this setting, it will be
important that children teach each other, thereby raising the general level
of knowledge in the group.

8.6 Language Impairment and the Processing of Prosodic and Paralinguistic Features

In Evans et al. (2009), Experiment 2b, language-impaired children performed
poorly in a task on tone-word segmentation. Although this task
was constructed as a nonlinguistic task, the transitional probabilities
between tones can be said to mimic some prosodic features of natural
languages. Does this mean that prosodic features are poorly detected by
language-impaired children? Maybe these children also have problems in
processing other prosodic and paralinguistic characteristics of language.
TD infants are capable of detecting metrical stress patterns in an artificial
language context. This ability is age-dependent and is not equally demon-
strable by typical adults (Bahl, Plante, & Gerken, 2009). Thus, we may
ask if language-impaired children and adults may have passed a critical
period of sensitivity to metrical patterns. However, Plante, Bahl, Vance,
and Gerken (2010) have shown that language-impaired children, mean
age 55.5 months, show rapid implicit learning of stress assignment rules.
Thus, language-impaired children may acquire metrical stress patterns
on the level of normally developing children, and yet have problems in
detecting other prosodic features. In my opinion, more research is needed
to explicate the relationship between developmental language impair-
ments and the acquisition of prosodic and paralinguistic patterns.
Is sensitivity to prosodic and metrical stress patterns necessary for the
development of a grammar? This question also pertains to the role of
prosody in the evolution of language. In natural languages nonsyntactic
information, such as metrical stress, correlates with syntactic structure.
Prosodic cues may serve to bracket words into phrases, and may therefore
serve as a precondition to grammar development. However, the statisti-
cal patterns of speech sounds may be equally important; the question
is whether such patterns always entail some bracketing information of
prosody. According to Saffran (2001), the two types of information are
not necessarily linked, because bracketing information of prosody may
sometimes be unavailable. In these situations, statistical patterns, in
particular within-phrase dependencies, may become elusive. However,
Saffran’s further research has convincingly shown that the statistical pat-
terns, in particular the predictive dependencies, may themselves serve as a
cue to phrase structure (see Chap. 2). On this account, grammar develop-
ment may take place with minor support from prosodic and paralinguis-
tic information.
Statistical patterns of speech sounds (and of gestural movements in
sign language) form a universal prerequisite for language acquisition.
Prosodic patterns were also an evolutionarily early factor in language
evolution, but statistical patterns have gained priority. The sta-
tistical patterns of natural languages vary, but access to the predominant
statistical pattern in the ambient linguistic environment is essential for
development, regardless of the availability of prosodic patterns. Hence,
I consider statistical learning to be a universal prerequisite to language
acquisition and the mechanisms underlying this learning as the primary
factor that triggered language evolution. However, by taking this posi-
tion I do not downplay the role of prosodic and nonlinguistic infor-
mation. By accessing this informational content of the linguistic input,
children more easily become socialized in the group, tribe or commu-
nity; an event which favors, but does not guarantee, the acquisition of
language.
8.7 A Renewed Discussion of Diagnostic Terminology

The complex symptomatology of developmental language impairments
means that we may never be able to represent them all with a single
generic term. Likewise, we shall meet with similar difficulties when trying
to define subgroups of language-impaired children. There are reasons why
language impairments should not be classified into two types, grammar
and semantic impairments. First, dysfunctions of the procedural system
may affect language behavior in different ways. Some children with pro-
cedural dysfunctions will have phonological problems; others will develop
normally in this respect. Some children with these dysfunctions will have
pragmatic problems, and others may have semantic problems which indi-
rectly are linked to their grammatical problems, not necessarily to a poor
linguistic environment. However, these children also show similarities,
which means that they can be referred to by one clinical term (see below).
At the same time, there are children with semantic problems which can
only be linked to a cultural and educational setting. This shows that clini-
cians and researchers must deal with an etiological diversity which makes
a classification of language impairments extremely difficult.
Perhaps we do not need to define one generic term which represents all
types of language impairments, but one term which refers to a substantial
number of children; that is, a term which indicates that their problems
are developmental, in contrast to acquired, impairments. Also, there
may be no sense in categorizing language impairments into subgroups of
impaired children merely based on linguistic characteristics. The review
of research literature presented above and also in the previous chapters
shows a basis of contemporary research for introducing the Ullman and
Pierpont term “procedural language disorder” (PLD) as the new term. It
represents many, but not all types of language impairments, and although
great individual differences of impairments are associated with this term,
its pros outweigh its cons. The considerable support given the PDH in
contemporary research is the main reason why I prefer PLD as the main
diagnostic term. However, I have a number of other arguments for using
this term:
1. PLD refers to dysfunctions in evolutionarily old structures of the human brain and is therefore vested in a theory of language evolution.
2. The prospects for linking the new term to genetic etiology are good.
3. The term may be linked to interactional/dialogic dysfunctions in early
childhood (see Chap. 4).

I admit that the new term cannot be used as a diagnostic category
unless it is associated with a set of diagnostic tests. As far as I know, these
do not exist, but can be designed from the research tasks which most suc-
cessfully have been applied to test the PDH. Candidate examples will be
ASRT and AGL tasks (see above); that is, tasks which of course need stan-
dardization and construction of norms. Based on contemporary research
on dysfunctions in nonlinguistic domains, it will be possible to provide
guidelines (to be included in the next version of DSM?) for checking
co-morbid deficits of other motor and cognitive skills. The reason is that
anomalies of the neural structures underlying the procedural system are
associated with impairments in both language and nonlanguage domains.
The position taken here means that Bishop’s (2014) 10 questions can
be answered in the following way:

1. My concern about children's language problems means that I focus on causal factors which have been studied in recent research (tests of the PDH).
2. I abandon diagnostic terms such as language disorder and specific language impairment for the very same reasons explained in Bishop's paper.
3. Although PLD can be linked to anomalies in the pre-frontal basal
ganglia circuitry, the new term does not refer to a disorder with an
equally well-known etiology like Down syndrome. Hence the new
term does not “medicalize” children’s difficulties, and rather than
introducing a medical model, PLD rests on a cognitive model.
4. The appropriate criteria for identifying (many) children’s language
problems are defined by the PDH.
5. PLD involves a wide spectrum of problems, in both language and nonlanguage domains, rather than any “specific” problems with language.
6. The new term means that language impairments share some charac-
teristics with other neurodevelopmental disorders.
7. Other labels for unexplained language problems generally do not
have a link to evolutionary theory.
8. The consequences of the “lack of agreed terminology” are severe. To
avoid misunderstanding and “doubts of reality” the new term also
needs “marketing” in the field of public health.
9. The new term, PLD, means there are good reasons why impaired
children “should also undergo an evaluation to identify areas of
strength: activities they may enjoy and have the possibility of suc-
ceeding at” (Bishop, 2014, p. 390).
10. The proposed term, PLD is the answer.

By using the proposed term, I suggest a categorization which focuses
on causal factors of language impairment. Also the term “procedural”
is linked to the learning and memory of skills and therefore PLD is a
category of learning impairments. Thus it implies a developmental
rather than an acquired impairment. When based on research related to
the PDH, this term should also be easier to explain to the wider public
(Bishop: http://psyweb.psy.ox.ac.uk/oscci/). However, children may also
have language difficulties which are not represented by the new term, and
which will be addressed on a general basis in the following section.

8.8 Language Difficulties and Social Disengagement

The feasibility of social interactions in dialogues or similar linguistic
scenarios differs tremendously among children and adults. This ability
can be demonstrated as early as infancy in turn-taking behavior, and it
appears later in development as a readiness to get involved in linguistic
interactions. Turn-taking and involvement in
dialogues are generally considered to be pre-conditions to language
acquisition. However, the feasibility of linguistic dialogues does not
guarantee other linguistic skills; for example, AG learning. Thus,
grammar and social-linguistic competence may have different
evolutionary origins. The Old World monkeys (rhesus macaques) out-
performed the New World monkeys (marmosets) in AG learning (see
Chap. 3, Sect. 3.2), while marmoset monkeys are the only subhuman
species which have demonstrated turn-taking behavior (Chap. 4, Sect.
4.2.1). Given that grammar and social-linguistic aspects of language
are relatively independent components of language, we may account
for children who do not fit the PLD category but are nonetheless lin-
guistically handicapped.
Willingness to involve oneself in linguistic dialogues and other lin-
guistic scenarios has been looked upon as a personality trait and may
therefore have received less attention in the field of speech and language
disorders. However, children who are language-impaired due to lack of
social-linguistic competence may be helped in various ways, for example,
by teaching them efficient address codes. Therefore these children should
be recognized as a subgroup of language-impaired children, and not as a
special category within clinical child psychology.
The relative independence between grammar and social-linguistic
aspects of language also means that some children and adults may excel
in the latter component while being relatively impaired in grammar and
lexical skills. It may seem like these people have a “disguised” form of
language impairment.

8.9 Approaches to Remedial Treatment


Today, there is a vast number of experientially based techniques and
methods used by clinicians and teachers to give remedial treatment to
children with developmental language impairments. All children with
developmental language impairment need to develop “language awareness,”
which enables them to comprehend important structures of language.
In the present section, I will describe a few methods, with both
linguistic and nonlinguistic tasks, which are relevant to most children with
developmental language impairment. The first one is a general procedure
followed by most institutions where children with language difficulties
are treated; the second one makes use of computer games which form a
“family” of methods with mostly nonlinguistic materials.
Semantic coaching. This method is relevant for most children with
language problems, because in many cases they also struggle with social
and emotional problems which accompany their language difficulties.
Therefore, the solution to these problems requires the creation of an edu-
cational setting where the teacher gains the child’s trust, while awakening
a curiosity for words. This is of course a task for the devoted teacher or cli-
nician skilled in special education, and cannot be outlined in detail here.
Its objective will be a dialogue about the meaning of words: incite the
child to talk, or to take active part in dialogues about concepts/events/
objects, while the same words are repeatedly used in different linguis-
tic contexts. The face-to-face dialogic setting is important, but semantic
training may as well be undertaken in small (selected) groups of children.
Different institutions or resource centers have gained practical and clini-
cal experience in organizing this form of treatment; that is, professional
experience that may easily be shared with others.
For children with a low vocabulary, we should also take into consid-
eration Fay et al.’s (2010) research on the evolution of new communi-
cative systems (see Chap. 5, Sect. 5.6.1). These researchers stressed the
importance of interactions in a community setting where communica-
tion between new partners takes place. In consequence, therapists, as
part of a coaching program, should encourage communication between
same-generation members. Thus semantic coaching by teachers or clinical
workers is not enough, and on its own it may sometimes even be
contraindicated. In addition to semantic coaching in special schools or clinics,
it is important to provide conditions for interactions with other children.
Has the child attended kindergarten or nursery school, and what has the
quality of interactions been in those institutions? Does the child have
same-age friends, and to what extent has the child attended a peer group
in school? If not, it is essential to change the environmental conditions to
make the most out of language learning in peer groups of other children.
Some language-impaired children may also perform poorly on cogni-
tive, nonlinguistic tasks, and some may have a symptomatology of co-
morbidity with other cognitive and behavioral disorders. In these cases,
the semantic coaching discussed above will be an insufficient
remediation, and may be replaced by a cognitive remediation program which
targets more basic neurocognitive functions that evolved early in the his-
tory of mankind.
Cognitive remediation. The nonspecific language-impaired chil-
dren have procedural dysfunctions which prevent the building of rule-
governed structures in language. These dysfunctions, which also mean a
disadvantage for semantic learning, are associated with functional devia-
tions of brain substrates discussed in Chap. 3, and in many cases may be
genetically based (see Chap. 2). Because the learning of sequential and
hierarchical structures depends on working memory (for example, the
correct repetition of sounds in a nonword), impairments of executive and
rehearsal functions may also be implicated. The following procedures are
therefore applicable to most children with language difficulties regardless
of whether they conform to the criteria of PLD. There are computer games
which in general invoke interest and support adherence, such as Brain Age,
BrainWare Safari, and CogniFit Personal Coach (CPC). The latter is a
home-based, computerized and individualized training program (www.
cognifit.com/). It includes tasks of working memory, divided attention,
eye-hand coordination, planning and others. Executive functions are
critical factors for solving most of these tasks. A baseline cognitive evalu-
ation is undertaken with the Neuropsychological Examination–CogniFit
Personal Coach (N-CPC). This test is also administered after training, and
it has been validated against several other standard neuropsychological tests;
for example, the Cambridge Neuropsychological Test Automated Battery
(CANTAB). The child starts training at a level of difficulty which rests
upon the results of the N-CPC evaluation. During all sessions, the CPC
uses an adaptive-interactive system, making sure that the child always
works in his/her comfort zone and does not experience high levels of
frustration.
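
The adaptive-interactive logic that such programs share can be illustrated by a generic one-up/one-down staircase, in which difficulty rises after success and falls after failure so that the child stays near his or her comfort zone. The sketch below illustrates the general principle only; it is not CogniFit's or Cogmed's proprietary algorithm.

```python
import random
random.seed(1)

def staircase(p_correct_at, trials=30, start=3, lo=1, hi=10):
    """Generic one-up/one-down staircase: difficulty rises one step after
    a correct response and falls one step after an error, so performance
    settles near a fixed success rate."""
    level, history = start, []
    for _ in range(trials):
        correct = random.random() < p_correct_at(level)
        history.append((level, correct))
        level = min(hi, level + 1) if correct else max(lo, level - 1)
    return history

# A hypothetical child whose success rate falls as difficulty rises:
child = lambda level: max(0.05, 1.0 - 0.1 * level)
for level, correct in staircase(child)[:10]:
    print(f"difficulty {level:2d} -> {'correct' if correct else 'error'}")
```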
Recently, Kronenberger, Pisoni, Henning, Colson, and Hazzard (2011)
reported an intervention study which was designed to test the feasibility
and efficacy of the Cogmed Working Memory Training program (www.
cogmed.com/). The participants were nine children (ages 7–15 years) with
profound bilateral hearing loss and with cochlear implantation prior to
age 3 years. The program contains 12 different kinds of video game-like
computer-based exercises. The tasks involved auditory-visuospatial short-
term memory skills, and combined short-term and working memory
skills. Cogmed Working Memory Training, like CogniFit, uses an adap-
tive training algorithm by which the complexity of forthcoming tasks
is adapted or increased slightly to comply with the participant’s level of
performance. Efficacy measures of working memory and sentence repeti-
tion skills were obtained prior to and after a five-week training period.
The children demonstrated significant improvements in working
memory and sentence-repetition skills. “Improvements in working mem-
ory decreased slightly at the 1-month follow-up and more substantially
at 6-month follow-up. However, sentence repetition continued to show
marked improvement at 6-month follow-up” (p. 1182).
The work of Kronenberger et al. (2011) and others shows that
computer-based exercises are viable options for cognitive remediation of
language difficulties by children with CI. However, such exercises may
also be redesigned and opted for cognitive remediation of hearing chil-
dren with language difficulties. The question is whether the video and
computer-based repetitive tasks have been properly tailored for improve-
ment of core functions underlying language comprehension and language
skills. Moreover, the commercial programs mentioned above lack tasks
on complex working memory (see Conway et al., 2005) and
AGL. These should be included in a new program of cognitive remedia-
tion for language-impaired children, but the way this can be done, while
taking into account related observations in clinical settings, is a matter
of applied clinical research. In particular, AGL, which assesses statistical
learning skills, should be followed up in this type of research.

8.10 Statistical Learning and Language Impairment: New Insights into Methods of Treatment

The main objection against the repetitive tasks used in cognitive train-
ing has to do with their statistical structure. Both CogniFit and Cogmed
Working Memory programs make use of an adaptive training algorithm,
which is important, but the serial presentation of stimuli has an
unconstrained statistical structure; that is, input stimuli form pseudo-random
sequences.
The syntax in any natural language implies a statistical structure;
thus sequences of signals are statistically constrained. Also, the sequences
of sounds in words are statistically constrained and differences in tran-
sitional probabilities within and between words/signals form the basis
of word segmentation. Structure at the level of syntax and structure at
the level of signals form the first two S’s in Fitch’s component analysis;
together they form the basic structure of language. Statistical structure is
a modality-independent aspect of language which is equally involved in
speech and sign languages, and as argued by Saffran and her co-workers,
human infants have a wired-in ability to implicitly learn the embedded
patterns in series of stimuli.
Children with PLD are generally impaired in relation to the first two
S’s in the component analysis of language. Therefore it is important to
design interesting tasks or games which guarantee long-term adherence
and which challenge their ability to detect statistical structures in the
ambient environment. Exercises with linguistic materials may have a
negative effect on motivation and adherence by children with language
difficulties. Rather, it will be possible to design nonlinguistic games or
tasks which demand attention to statistical structures and are therefore
tailored to their core difficulties.
Conway, Gremp, Walk, Bauernschmidt, and Pisoni (2012) asked
whether the enhancement of domain-general learning abilities can
improve language function such as nonword repetition. It is well-known
that statistical learning abilities are related to acquisition of language
(see studies of AGL reviewed above). However, no previous research had
shown whether such learning also enhances language function when
training tasks make use of nonlinguistic stimuli. Conway et al. made use
of a working memory (WM) task, which was designed according to an
adaptive-interactive program. In contrast to the WM tasks mentioned
above, the stimuli were not presented randomly, but formed sequences
of structured patterns. The participants see a 4 × 4 matrix of circles which
are lit up in apparently random or pseudo-random sequences. The par-
ticipants, however, do not know that the circles do not appear randomly
but conform to underlying statistical regularities: Any given circle has


only three others that can follow it. After a few trials the subjects will
implicitly detect the regularities, and consequently their recall perfor-
mance will improve. The processing of these regularities has been called
structural sequence processing (SSP).
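
A sketch of how such statistically constrained sequences can be generated may be useful. The successor table below is invented for illustration, but it respects the constraint described above: any given circle can be followed by only three others.

```python
import random
random.seed(2)

N = 16   # the circles in the 4 x 4 matrix, numbered 0..15

# An invented successor table: each circle may be followed by only three
# others, mimicking the hidden constraint described in the text.
successors = {i: random.sample([j for j in range(N) if j != i], 3)
              for i in range(N)}

def structured_sequence(length):
    """A sequence that obeys the hidden successor constraint."""
    seq = [random.randrange(N)]
    while len(seq) < length:
        seq.append(random.choice(successors[seq[-1]]))
    return seq

def pseudo_random_sequence(length):
    """A control sequence with no learnable structure."""
    return [random.randrange(N) for _ in range(length)]

print("structured:   ", structured_sequence(8))
print("pseudo-random:", pseudo_random_sequence(8))
# Both look arbitrary to the participant, but only the structured sequences
# contain regularities (each transition has probability 1/3 rather than
# 1/16), which is what implicit learning gradually exploits.
```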
Conway et al. (2012) ran two experiments, one with healthy adults
and one with deaf or hard of hearing children. In both experiments
the sequences, which obeyed particular statistical patterns, were
re-randomized for each participant on the following day. Therefore improve-
ments in recall performance could be attributed to an enhancement of
the ability to detect statistical regularities and not the learning of one
specific set of regularities. In the first experiment, the adult participants
were randomly assigned one of three groups. In Group 1 participants
were given an adaptive and statistically constrained version of the task.
In Group 2 the task was an adaptive one with pseudo-random sequences
of stimuli. In Group 3 the sequences were nonadaptive and statistically
nonconstrained. Pre- and post-training scores were obtained for Forward
digit span and the Stroop Color and Word test, and finally pre- and post-
training scores were obtained for a nontrained task of implicit statistical
learning. Some enhancement of working memory and executive control
was reported for all groups, but only Group 1 showed an improvement
on a nontrained sequential learning task. For the other two groups, the
results showed that “training participants to interact with random pat-
terns actually hampers their ability to learn structured patterns following
training” (p. 323).
The second experiment addressed the problem of whether delayed lan-
guage development can be linked to poor statistical learning. In a pre-
liminary report of an ongoing study, they described a training task with
23 hard-of-hearing children (mean age 8:2). “Among this group, 10 had
bi-lateral CI, 8 were fitted with one implant and one hearing aid, and
the remaining 5 children wore hearing aids in both ears” (pp. 323–324).
The children were assigned one of two groups matched for chronologi-
cal age. In Group 1 the training condition was adaptive and sequences
conformed to underlying statistical regularities. In Group 2 the condition
was nonadaptive and the sequences were pseudo-random. The following
pre- and post-training measures were obtained for participants in both
groups: Children's Test of Nonword Repetition (Gathercole & Baddeley,
1990) and a nontrained measure of visual sequential learning. Only chil-
dren in Group 1 showed a significant reduction in the mean number
of syllable errors in the Nonword Repetition test, and only children in
this group showed a significant improvement from pre- to post-training
sessions on the number of correctly reproduced statistically constrained
sequences.
The studies of Conway et al. have strongly supported the hypothesis
that language acquisition relies on a domain-general learning mechanism,
rather than a dedicated domain-specific mechanism. Several researchers
have argued that SSP constitutes the domain-general learning mechanism
underlying language acquisition. In an ERP study, Christiansen,
Conway, and Onnis (2011) demonstrated that structural irregularities
in an SSP task and syntactic violations had similar effects on the P600
component. Therefore they argued that the same neural mechanisms
underlie syntactic processing and SSP learning. Smith, Conway,
Bauernschmidt, and Pisoni (2015) investigated the mechanisms underlying
the transfer effects of SSP training. Sixty-six adult participants were
quasi-randomly assigned one of three groups: Group 1 formed an SSP
training group who were involved in viewing and reproducing visual-
spatial structured sequences (in a 4 × 4 matrix of circles like the one used
in the Conway et al. study). Group 2, which was called a WM group, viewed
and reproduced nonstructured sequences. Both groups followed an
adaptive-interactive program. Group 3 was given a nonadaptive program
with nonstructured sequences of stimuli, which was expected to give no
cognitive improvements. All participants received a battery of cognitive
tests, including Speech Recognition in Noise and Statistical Sequence
Learning on day 1. In the former test, which was used to assess language
ability, participants listened to spectrally degraded sentences and were
told to write down the last word they heard. In half of the sentences the
last word was highly predictable, whereas the anomalous sentences in
the other half had last words of low predictability. The language score
was defined as the number of correct words in the high predictability
minus the number of correct words in the low predictability condition.
Sequence training was run on days 4–5, and on day 6 they were again
assessed with the same tests from pre-training.
An overall MANOVA showed no significant effect of time of testing,
and no interaction with group which could have shown an effect of SSP
training on SSP or language. However, SSP training could improve
language processing through its enhancement of SSP. Thus a mediational
model analysis showed two competing effects, one indirect and one direct
effect of adaptive sequence training. In the former this training had a
positive effect on SSP performance, which in turn improved language
processing. This indirect effect was said to motivate a “novel intervention
to treat language impairment” and was therefore highly valued in clinical
as well as theoretical contexts. However, one puzzling problem remained:
why does SSP training have a negative direct effect on language perfor-
mance? The authors indicated one possible answer by arguing that only
in Group 1 (adaptive and structured sequence training) did scores on high
predictability sentences worsen from pre- to post-training. Scores on the
anomalous sentences did not worsen for any of the experimental groups.
Thus sequence training with structured sequences may have interfered
with knowledge of language regularities. Therefore we may ask whether
Smith et al.’s choice of language test was a good one.
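
The logic of the mediational analysis may be worth spelling out with simulated data. For a single linear mediator, the total effect of training decomposes exactly into the indirect effect (the product of the two mediating paths) and the direct effect, so a positive indirect path and a negative direct path can coexist, as reported. All coefficients below are invented for illustration.

```python
import numpy as np
rng = np.random.default_rng(3)

# Simulated mediation with the sign pattern discussed above: training X
# raises SSP skill M (path a > 0), M helps language Y (path b > 0), and
# the residual direct path c' is negative. All coefficients are invented.
n = 200
X = rng.integers(0, 2, n).astype(float)        # 0 = control, 1 = adaptive training
M = 0.8 * X + rng.normal(0, 1, n)              # a path
Y = 0.5 * M - 0.6 * X + rng.normal(0, 1, n)    # b path plus negative direct path

a = np.polyfit(X, M, 1)[0]                     # regress M on X
design = np.column_stack([M, X, np.ones(n)])   # regress Y on M and X together
(b, c_prime, _), *_ = np.linalg.lstsq(design, Y, rcond=None)
c_total = np.polyfit(X, Y, 1)[0]               # total effect of X on Y

print(f"indirect a*b = {a*b:+.2f}, direct c' = {c_prime:+.2f}, "
      f"total c = {c_total:+.2f} (= a*b + c')")
```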
By using degraded sentences the test may assess focused auditory
attention in addition to comprehension of sentence structure. Obviously,
degraded sentences were needed to avoid ceiling effects by adult partici-
pants from Indiana University (with little variance of language ability?).
A sample of younger TD children may give rise to an adequate variance
on a standardized test, say a subtest from CELF 4 (for example, Receptive
Language, Phonological awareness, or Language Structure).
The adaptive training procedure in statistical learning tasks used in
both Conway et al.'s and Smith et al.'s studies may prove to have general
relevance for the enhancement of language function in a broader range of
groups with developmental impairments. In short, these studies do support
the claim that SSP represents a domain-general mechanism for the acquisition of
language, but they did not test children with a diagnosis of developmen-
tal language impairment. We know that many children in this category
have procedural difficulties which interfere with sequence learning, and
for whom the SSP training procedure may be helpful.
Thus Lukács and Kemény (2014) tested SLI children and age-matched
TD children on two tasks of sequence learning (SRT and AG learning
tasks), and the WPT. A relatively smaller proportion of the SLI children
showed evidence of learning in the two sequence learning tasks com-
pared to the TD children. In contrast to their previous study (Kemény
& Lukács, 2010) there was an equal proportion of learners in the two
groups on the WP task. (By the way, this task can also be solved by
declarative strategies.) The
two sequence learning tasks were not directly comparable to the adaptive
training procedures used in the Conway et al. study; however, I agree that
they may be linked to a domain-general mechanism of learning.
Gabay, Thiessen, and Holt (2015) have also reported impaired statisti-
cal learning by children with developmental dyslexia (DD). These children
performed significantly more poorly than a control group on a statisti-
cal learning task with both linguistic and nonlinguistic stimuli. Gabay
et al. therefore concluded that the reading problems of the DD children
did not arise from phonological impairment but from a “more general
procedural learning deficit.” Does this mean that dyslexia and developmental
language impairment are similar disorders? It may be that PLD, due to
different developmental trajectories, gives rise to surface impairments
that differ while being etiologically the same disorder.
In summary, I find SSP training to be the most adequate method of
treatment for children (and adults) with PLD. SSP training as defined in
the Conway et al. and Smith et al. studies represents a remarkable
improvement in treatment methodology, because it applies to groups which show
superficially different impairments (reading difficulties, delayed language
in hard-of-hearing people). However, much research remains to be done to define
the specific mechanisms involved in SSP; that is, the distinctive factors
for typical versus anomalous development of language.
In view of the research reviewed in the present chapter, I will address
policy-making in the field: Institutions which offer remedial work for
children with developmental disorders, in particular children with PLD,
cannot improve practice unless they have experts who engage themselves
in clinically oriented research. These will be experts who are familiar
with most of the research works reviewed in this chapter, and who are
also involved in clinical assessment and treatment of children and adults
with developmental disorders. The design and testing of new remedial
programs will have to be done stepwise in constant interaction between
research and clinical practice.
References
Bahl, M., Plante, E., & Gerken, L. A. (2009). Processing prosodic structure by
adults with language based disability. Journal of Communication Disorders,
42, 313–323.
Bickerton, D. (2003). Symbol and structure: A comprehensive framework for
language evolution. In M. H. Christiansen & S. Kirby (Eds.), Language evo-
lution: The states of the art. Oxford: Oxford University Press.
Bishop, D. V. (2014). Ten questions about terminology for children with unex-
plained language problems. International Journal of Language &
Communication Disorders, 49, 381–415.
Bishop, D. V., & Hsu, H. J. (2015). The declarative system in children with
specific language impairment: A comparison of meaningful and meaningless
auditory-visual paired associate learning. BMC Psychology, 3(1), 3.
doi:10.1186/s40359-015-0062-7.
Christiansen, M. H., Conway, C. M., & Onnis, L. (2011). Similar neural cor-
relates for language and sequential learning: Evidence from event-related
brain potentials. Language & Cognitive Processes, 27, 231–256.
Collins, A. M., & Loftus, E. F. (1975). A spreading activation theory of seman-
tic processing. Psychological Review, 82, 407–428.
Conway, A. R. A., Kane, M. J., Bunting, M. F., Hambrick, D. Z.,
Wilhelm, O., & Engle, R. W. (2005). Working memory span tasks: A
methodological review and user’s guide. Psychonomic Bulletin & Review,
12, 769–786.
Conway, C. M., Gremp, M. A., Walk, A. M., Bauernschmidt, A., & Pisoni, D. B. (2012). Can we enhance
domain-general learning abilities to improve language function? In
P. Rebuschat & J. N. Williams (Eds.), Statistical learning and language acqui-
sition. Berlin: De Gruyter Mouton.
Evans, J.  L., Saffran, J.  R., & Robe-Torres, K. (2009). Statistical learning in
children with specific language impairment. Journal of Speech, Language, and
Hearing Research, 52, 321–335.
Fay, N., Garrod, S., Roberts, L., & Swoboda, N. (2010). The interactive evolu-
tion of human communication systems. Cognitive Science, 34, 351–386.
Fee, E. J. (1995). The phonological system of a specifically language-impaired
population. Clinical Linguistics and Phonetics, 9, 189–209.
Gabay, Y., Thiessen, E. D., & Holt, L. (2015). Impaired statistical learning in
developmental dyslexia. Journal of Speech, Language, and Hearing Research,
58, 934–945.
Gabriel, A., Maillart, C., Guillaume, M., Stefaniak, N., & Meulemans, T.
(2011). Exploration of serial structure procedural learning in children with
language impairment. Journal of the International Neuropsychological Society,
17, 336–343.
Gathercole, S. E., & Baddeley, A. D. (1990). Phonological memory deficits in
language disordered children: Is there a causal connection? Journal of Memory
and Language, 29, 336–360.
Gomez, R.  L. (2002). Variability and detection of invariant structure.
Psychological Science, 13, 431–436.
Grunow, H., Spaulding, T. J., Gómez, R. L., & Plante, E. (2006). The effects of
variation on learning word order rules by adults with and without language-
based learning disabilities. Journal of Communication Disorders, 39,
158–170.
Hedenius, M., Persson, J., Tremblay, A., Adi-Japha, E., Veríssimo, J., Dye,
C. D., et al. (2011). Grammar predicts procedural learning and consolida-
tion deficits in children with specific language impairment. Research in
Developmental Disabilities, 32, 2362–2375.
Hsu, H. J., & Bishop, D. V. (2011). Grammatical difficulties in children with
specific language impairment: Is learning deficient? Human Development, 55,
264–277.
Hsu, H. J., Tomblin, J. B., & Christiansen, M. H. (2008). The effect of vari-
ability in learning nonadjacent dependencies in typically-developing indi-
viduals and individuals with language impairments. In A. Owen (Chair),
The role of input variability on language acquisition and use. Symposium
presented at the XI International Congress for the Study of Child Language
(IASCL), Edinburgh.
Kemény, F., & Lukács, Á. (2010). Impaired procedural learning in language
impairment: Results from probabilistic categorization. Journal of Clinical and
Experimental Neuropsychology, 32, 249–258.
Knowlton, B. J., Squire, L. R., & Gluck, M. A. (1994). Probabilistic category
learning in amnesia. Learning & Memory, 1, 106–120.
Kronenberger, W. G., Pisoni, D. B., Henning, S. C., Colson, B. G., & Hazzard,
L. M. (2011). Working memory training for children with cochlear implants:
A pilot study. Journal of Speech, Language, and Hearing Research, 54,
1182–1196.
Lukács, A., & Kemény, F. (2014). Domain-general sequence learning deficit in
specific language impairment. Neuropsychology, 28, 472–483.
Lum, J. A., Conti-Ramsden, G., Morgan, A. T., & Ullman, M. T. (2014).
Procedural learning deficits in specific language impairment (SLI): A meta-
analysis of serial reaction time task performance. Cortex, 51, 1–10.
Lum, J. A., Gelgic, C., & Conti-Ramsden, G. (2010). Procedural and declara-
tive memory in children with and without specific language impairment.
International Journal of Language and Communication Disorders, 45, 96–107.
Perruchet, P., & Pacton, S. (2006). Implicit learning and statistical learning: One
phenomenon, two approaches. Trends in Cognitive Sciences, 10, 233–238.
Petersson, K. M., Folia, V., & Hagoort, P. (2010). What artificial grammar learn-
ing reveals about the neurobiology of syntax. Brain & Language. doi:10.1016/j.
bandl.2010.08.003.
Plante, E., Bahl, M., Vance, R., & Gerken, L. A. (2010). Children with specific
language impairment show rapid implicit learning of stress assignment rules.
Journal of Communication Disorders, 43, 397–406.
Plante, E., Gomez, R., & Gerken, L. (2002). Sensitivity to word order cues by
normal and language/learning disabled adults. Journal of Communication
Disorders, 35, 453–462.
Saffran, J. R. (2001). The use of predictive dependencies in language learning.
Journal of Memory and Language, 44, 483–515.
Saffran, J., Hauser, M., Seibel, R., Kapfhamer, J., Tsao, F., & Cushman, F.
(2008). Grammatical pattern learning by human infants and cotton-top tam-
arin monkeys. Cognition, 107, 479–500.
Shohamy, D., Myers, C. E., Onlaor, S., & Gluck, M. A. (2004). Role of the basal
ganglia in category learning: How do patients with Parkinson’s disease learn?
Behavioral Neuroscience, 118, 676–686.
Smith, G. N. L., Conway, C. M., Bauernschmidt, A., & Pisoni, D. B. (2015).
Can we improve structured sequence processing? Exploring the direct and
indirect effects of computerized training using a mediational model. PLoS
One, 10, e0127148. doi:10.1371/journal.pone.0127148.
Squire, L. R., Knowlton, B., & Musen, G. (1993). The structure and organiza-
tion of memory. Annual Review of Psychology, 44, 453–495.
Tallal, P., Stark, R., & Mellits, E. (1985). Identification of language-impaired
children on the basis of rapid perception and production skills. Brain and
Language, 25, 314–322.
Tomblin, J. B., Mainela-Arnold, E., & Zhang, X. (2007). Procedural learning in
adolescents with and without specific language impairment. Language
Learning and Development, 3, 269–293.
Ullman, M. T., & Pierpont, E. I. (2005). Specific language impairment is not
specific to language: The procedural deficit hypothesis. Cortex, 41,
399–433.
von Koss Torkildsen, J., Dailey, N. S., Aguilar, J. M., Gómez, R., & Plante, E.
(2013). Exemplar variability facilitates rapid learning of an otherwise unlearn-
able grammar by individuals with language-based learning disability. Journal
of Speech, Language, and Hearing Research, 56, 618–629.
Witt, K., Nühsman, A., & Deuschl, G. (2002). Intact artificial grammar learning
in patients with cerebellar degeneration and advanced Parkinson’s disease.
Neuropsychologia, 40, 1534–1540.
Index

A articulatory buffer, 58
Ackermann, H., 64 artificial grammar learning (AGL),
affective resonance, 85 83, 97, 259, 266–8
Aguilar, J.M., 267 artificial language, 146–9
Alfonso-Reese, L.A., 172 Ashby, F.G., 172
Alibali, M.W., 95 Askelof, S., 205
alphabets, 197, 199 Asperger syndrome, 71, 220–1
Alternating Serial Reaction Time asymmetric relationship, 143
(ASRT) task, 263 Attention Deficit Hyperactivity
Alvarez, P., 102–3 Disorder (ADHD), 52
American Sign Language (ASL), 5, auditory cortex, 244, 245
12, 186, 240 Auditory Repetition Test (ART), 58
a-modal language rhythm, 241 Augustine
anarthria, 59 Confessions, 149
Anderson, J.R., 133 autism spectrum disorder (ASD), 52,
aphasia, 49, 120, 201, 205, 217, 68–71
242, 243
Arbib, M.A., 20, 21, 115–7, 182,
212, 248 B
Ardila, A., 108, 119, 204, 205 babbling, 42, 50, 144–6, 236–8
Armstrong, D.F., 116 Baby signs, 239

© The Editor(s) (if applicable) and The Author(s) 2016 293


A. Lian, Language Evolution and Developmental Impairments,
DOI 10.1057/978-1-137-58746-6
294 Index

Baddeley, A.D., 27, 58, 207
Badre, D., 180
Bahl, M., 275
basal ganglia, 64, 104, 105, 107, 108, 187, 270
Bauernschmidt, A., 284, 286
behavioral systems, 135
Bellugi, U., 231
Benson, A.M., 105
Beran, M.J., 31
Berg, M.E., 173
Berwick, R.C., 112, 233
Bickel, B., 110, 111
Bickerton, D., 5, 15, 66, 89, 92, 93, 108, 155
bimodal bilinguals, 245
Bishop, 50–3, 56, 65, 268, 272
Blanco, N.J., 64
Bolhuis, J.J., 112, 233
book-keeping system, 195
bootstrapping process, 143
Borjon, J.I., 138
Bornkessel-Schlesewsky, I., 110
Bornstein, M.H., 139
Botting, N., 60
Bradshaw, J.L., 79, 80
brain regions, 21, 187, 201–2, 205, 217
Briscoe, J., 56
British Sign Language (BSL), 54, 55
British Sign Language Production Test, 55–6
British Sign Language Receptive Skills Test (BSL-RST), 55
Broca’s area, 20, 41, 105, 115, 116, 118, 154, 270
Brown, B.B., 59, 69, 70
BSL version of British Picture Vocabulary Scale (BPVS), 55
bullae, 195
Bunge, S.A., 179–80

C
Call, J., 31
Cambridge Neuropsychological Test Automated Battery (CANTAB), 282
candidate genes, 214, 215
Carreiras, M., 205
Castiello, U., 22, 114
Castro-Caldas, A., 205, 207
categorical perception, 169–71
Centers for Disease Control and Prevention (CDC), 68
central nervous system (CNS), 212
cerebrospinal fluid (CSF), 61
Cerri, G., 118
Chandrasekaran, B., 64
childhood apraxia of speech (CAS), 63
Children’s Test of Nonword Repetition (CN REP), 58, 286
Chinese written language, 197
Chomsky, Noam, 4, 7, 12, 18, 108, 112, 233
Choudhary, K.K., 110
Christian Bible, 209–10
Clark, J., 179
Clifford, A., 171
Clinical Evaluation of Language Fluency (CELF), 52
Clinical Evaluation of Language Fundamentals (CELF-R), 62
cochlear implant (CI), 245, 282
Cogmed Working Memory Training program, 282, 283
CogniFit Personal Coach (CPC), 282, 283
cognitive neuroscience approach, 23, 176
cognitive position, 8
cognitive remediation, 282–3
Collins, A.M., 273
Colson, B.G., 282
common code, 147
communicative difficulties, 70, 71
communicative learning, 124, 141
communicative skills, 81, 82, 85, 89, 231, 246
community, 33, 37, 91, 93, 99, 131, 139, 165, 171, 185, 188, 248–50, 281
community condition, 184, 185, 188, 275
community effect, 139, 184
computer-mediated communication, 225, 232
Connolly, C., 180
consonant-vowel (CV) syllable, 121–2, 236, 237
constrained statistical learning framework, 41, 42, 81, 94–100, 259
continuity in evolutionary time and across domains
  declarative procedural model, 101–10
  displacement, road to language, 89–92
  mirror neurons, 112–9
  motor system, 121–3
  pre-adaptation of grammar, 110–2
  protolanguage, 92–4
  symbolic threshold, 83–9
  ventral and dorsal pathways, 119–21
  vertical transmission mechanism, 94–100
  ways of, 81–2
Conti-Ramsden, G., 60, 262, 264
Conway, C.M., 72, 284–7
Corballis, M.C., 21–3, 117, 154, 230
Corina, D.P., 25, 231, 242, 243
cortical colonization, 245
Cote, L.R., 139
Craighero, L., 154
Creanza, N., 17, 222
creole languages, 33–5, 92–4, 110
cri du chat syndrome, 27
Crossley, M.J., 173, 174
cross-modal reorganization, 244–6
Crutchley, A., 60
cultural preconceptions, 42, 211, 218–20
culture, 9, 18, 38–40, 165–7, 195, 251

D
Dailey, N.S., 267
Danemark, B., 251
Darwin, Charles, 13, 15, 18
Davies, I.R.L., 171
Davis, B.L., 241
Deacon, T., 83–6, 88, 178, 189, 230, 231
deaf babies, 42, 236–8, 241
de Araújo, I., 87
declarative memory system, 37, 64, 100, 102–3, 107, 133, 178, 271–2
declarative procedural (DP) model, 60, 259
  declarative memory system, 102–3
  procedural-deficit hypothesis, 109–10
  procedural memory system, 103–9
DeeChee, 145, 146
de Lange, F.B., 22, 116, 176
de Saussure, F., 24, 83
design features, 5, 39, 89, 159
Deuschl, G., 270
developmental dyslexia (DD), 288
developmental language impairment, 1, 3
  artificial grammar learning and, 266–8
  child and caregiver, early interactions, 65–8
  critical markers, 57–61
  deaf children to sign language from birth, 53–7
  declarative memory system, 271–2
  diagnostic labels and specificity of impairment problem, 51
  diagnostic terminology, renewed discussion, 277–9
  differential diagnostics, problems of, 68–71
  evolutionary perspective, 257–60
  genetic etiology, 62–5
  language difficulties and social disengagement, 279–80
  lexical problems, 273–5
  linguistic signals, statistical learning, 268–71
  methodological implications, 260–1
  perspectives for research, 71–3
  and processing of prosodic and paralinguistic features, 275–6
  remedial treatment, 280–3
  Serial Reaction Time task, 262–4
  statistical learning and, 283–8
  and subsystems of language, 10–1
  Weather Prediction Task, 264–5
dialogues, 67, 131–2
  easy dialogues, 152–5
  infant–caregiver interactions, turn-taking in, 139–41
  language acquisition models, 144–9
  language games and pidgin languages, 149–51
  and language-impaired child, 151–2
  procedural skills and early dialogues, 132–4
  signaling, 141–4
  small talk, 155–6
  vocal turn-taking, 135–8
Di Pellegrino, G., 19
discrete infinity, 112, 196
Disembodied Cognition Hypothesis, 176
displacement, 38, 89–92
Dolata, J.K., 241
domain-specific language, 36, 38
Down, K., 151
duality of patterning, 5, 6, 9
dual-route model, 120, 121
dysarthria, 59, 259
dyslexia, 49, 63, 194, 211–5

E
Early Care and Education (ECE), 30
East Africa, 90
Egan, G.F., 104
Egyptian Madonna Isis, 219
Ehlich, K., 195
Eichenbaum, H., 177
Elder, J.H., 69, 70
Embodied Cognition Framework, 176
Emmorey, K., 117, 170, 171, 231, 239, 242, 245, 246
epigenetics, 15, 16, 67, 92
episodic memories, 133
equipotential articulators, 246–7
Evans, J.L., 95, 269, 270, 275
event-related potentials (ERP), 70, 110
evolutionary biology
  conceptual framework, 13–4
  evolutionary-developmental biology, 14–7
  language evolution and language change, 11–3
  niche construction theory, 17–8
evolutionary-developmental biology (Evo-Devo), 14–7
Eysenck, M.W., 172

F
Faculty of Language in a Broad sense (FLB), 4
Faculty of Language in a Narrow sense (FLN), 4
Fadiga, L., 19
F5 area, 20, 41, 115, 154
fast mapping, 31, 230
Faust, M., 70, 220
Fay, N., 41, 184–6, 274, 281
Feldman, M.W., 17
Ferris, S.P., 223
Fisher, J., 31, 62
Fitch, W.T., 4, 6, 9, 13–6, 82, 159
Fogarty, L., 17
Fogassi, L., 20
Folia, V., 107
Footprint Reading Test, 213
FOXP2, 62, 64
Fox, P.T., 201
Franklin, A., 171
French Sign Language (FSL), 250
frequency-lag hypothesis, 245
Friederici, A.D., 182
Fujii, S., 241

G
Gabay, Y., 288
Gabriel, A., 263
Galantucci, B., 122
Gallagher, S., 69
Gallese, V., 20
Garcia-Marti, G., 61
Garrod, S., 41, 141, 149, 153–5, 184
Gathercole, S.E., 59, 207
Gelb, I.J., 199
Gelgic, C., 262
Gellerstedt, L.C., 251
Gerken, L.A., 266, 275
Gervain, J., 72
gesticulatory movements, 26
gestural theory, 24, 42, 118, 230, 231
Ghazanfar, A.A., 72, 136, 138
Girbau-Massana, D., 61
Gluck, M.A., 264, 265
Glynn, D., 86–7, 88
Gold, R., 70, 220
Goldstein, A., 70, 220
Gollan, T.H., 245
Gómez, R.L., 266, 267
Gonzales-Castilla, J., 176
Graf Estes, K., 95
grammar learning, 6, 88–9, 110–2
grammaticalization, 231
Grapheme-Phoneme Converter, 58
Gremp, M.A., 284
Grice, H.P., 7, 160
Grigorenko, E.L., 215
Grunow, H., 266
Gudwin, R., 87
Guillaume, M., 263

H
Hage, S.R., 64
Hagoort, P., 22, 107, 116, 176
Halle, M., 6
Hamzei, F., 114
Hauser, M.D., 4, 86–7, 88
Hawaiian pidgins, 35
Haynes, O.M., 139
Hazzard, L.M., 282
Headturn Preference Procedure, 96
Hedenius, M., 263, 264
Henderson, L., 200
Henning, S.C., 282
Herman, R., 54
Hickok, G., 120, 121
hippocampus, 64, 102, 103, 133, 177–8
Hirokawa, K., 64
Hitch, G.J., 27, 58
Hockett, C.D., 5, 24, 39, 89, 230
Hoffman, 169
Holmes, S., 54, 171
Holowka, S., 237
Holt, L., 288
home signs, 32–6, 91, 93, 152, 156, 248
homonymy, 38, 169
Homo sapiens, 80, 81, 87, 88, 90, 92, 108, 182
horizontal transmission, 81, 134
Hsu, H.J., 267, 268, 272
Hudson, S., 151
Hulme, C., 59
Humphreys, G.W., 180
Hurst, J.A., 62
hymenoptera, 90
hyperlexia, 70, 215–7

I
ideographics, 197
ideographs, 197, 213
if-then rules, 133, 216
impaired procedural learning, 100
infant-caregiver interactions, 50, 139–41
information–integration (II) tasks, 173
Ingvar, M., 205, 207
instinct to learn, 13, 19, 66, 67, 141, 247
intention, 69, 70, 141–4, 160–1
interactional synchrony, 85, 141
interaction theory, 69
interactive alignment, 67, 141, 153–5
Islam, 210
isolated pair condition, 184, 185, 274

J
Jackendoff, R., 33
Jakobson, R., 6
Jin, Z., 201
Judaism, 210

K
Kaminski, J., 31, 32
Keane, M.T., 172
Kemény, F., 265, 287–8
Kemmerer, D., 176
key attribute, 233, 234, 249
KIAA0319, 63, 214
King, B.J., 85
Kirby, S., 141
Klima, E.S., 231
Klin, A., 215
Knoors, H., 245
knowledge, 14, 31, 35, 41, 70, 98, 106, 134, 160, 188, 240, 263, 287
  meaning of meaning, 162–4
  symbolic reference, 164
Knowlton, B.J., 35, 107, 259, 264
Knox, B.M.W., 210
Kosmidis, M.H., 207
Krentz, U.C., 25, 231
Kronenberger, W.G., 282, 283

L
Lai, C.S.L., 62
Laird, A.R., 201
language, 11, 79–80, 159, 160, 193
  categorical perception, 169–71
  communicating meaning, 30–8
  communicative interactions, importance of diversity, 183–5
  concepts and categories, 171–5
  conceptual framework, 13–4
  cultural preconceptions, 218–20
  developmental impairments, 10–1
  dominance of, 247–51
  evolution and change, 11–3
  evolutionary-developmental biology, 14–7
  future of, 223–5
  gestural theory of, 230
  intention, 160–1
  knowledge, 161–4
  language-culture interactions, 38–40
  literal meaning and Asperger syndrome, 220–1
  mirror neurons, discovery of, 19–24
  neurobiology of lexical meaning, 175–83
  niche construction, invention of writing, 221–2
  niche construction theory, 17–8
  pre-literate languages, 165–8
  pre-semantic signaling and role in vertical transmission, 24–30
  and subsystems, 4–10
  writing, 217–8
language acquisition model, 25, 26, 29, 31, 32, 40, 41, 50, 51, 58, 63, 64
  artificial language, learning an, 146–9
  babbling in deaf and hearing babies, 236–8
  babbling to conceivable word forms, 144–6
  critical period hypothesis, 239–40
  developmental milestones, 238–9
  implications for, 91
  task, 235–6
language awareness, 280
language-based learning disability (LLD), 267
language bias, 231
language-culture interactions, 38–40
language deprivation, 29–30, 54
language difficulties, 55, 71, 279–80, 282, 283
language disorder, 50, 53, 62, 99, 212
language games, 34, 37, 149–51
language-general bias, 232
language-impaired children, 271–2, 275, 277, 280–2
language instinct, 12, 19, 66
language-learning device, 27
language-like stimuli, 25–7, 29, 94
language modalities, 42, 112, 243, 247
language rhythm, 241, 251–2
language universal, 5, 19, 89
larynx, 16, 17, 80
Lashley, K.S., 112
last common ancestor (LCA), 29, 37
latent capacities, 82
law of replacement, 37
learning constraints, 22, 25, 27, 28, 51, 95, 124, 125, 151, 155, 252
learning programs, 275
Lee, E., 202
left inferior prefrontal cortex (LIPC), 179, 180, 273
Lenneberg, E., 239
Levickis, P., 151
Levy, B., 237
lexeme, 162–3, 175–83
lexical/semantic system, 109, 273–4
lexigrams, 84, 86
Liberman, A.M., 122
Lieberman, P., 14, 233, 234, 249
Li, K., 201
linearity index (L), 97
linguistic community, 99, 131, 152, 186, 188, 240
linguistic signals, 19, 21, 25, 26, 28, 51, 72, 258, 268–71
linguistic skills, 72, 134, 252
  displacement, 89–92
  protolanguage, 92–4
  symbolic threshold, 83–9
linguistic symbols, 37, 38, 81, 83, 88, 169, 233
linguistic utterances, 117, 242
Linell, Per, 193
literacy, 3, 14, 38, 42, 71, 193
  brain regions, 201–2
  cognitive research, 202–8
  dyslexia and hyperlexia, 211–7
  future of, 223–5
  reading without interpretation, 209–11
  threshold of writing, 195–7
  writing systems, 197–201
literal meaning, 187, 220–1
Liu, F.-C., 64
Loftus, E.F., 273
logographies, 197–201, 213
logographs, 197–9, 213
logosyllabic writing system, 199
Loula, A., 87
Lukács, A., 265, 287–8
Lum, J.A., 262, 264
Lyon, J., 93, 145
Lyons, J., 7, 38, 39, 41, 162–4
  Semantics, 172

M
MacNeilage, P.F., 241
Maddox, W.T., 64
Maillart, C., 263
Mainela-Arnold, E., 262
Manns, J.R., 177
MANOVA, 287
Marentette, P.F., 236–7
marmoset monkeys, 41, 72, 73, 135–8, 161, 280
Marschark, M., 245
Marti-Bonmati, L., 61
McGeary, J.E., 64
meaningful units, 6, 111
meaning in language, 159, 160
  categorical perception, 169–71
  communicative interactions, importance of diversity, 183–5
  concepts and categories, 171–5
  intention, 160–1
  knowledge, 161–4
  neurobiology of lexical meaning, 175–83
  pre-literate languages, 165–8
Meck, W.H., 105
medial temporal lobe (MTL), 64, 102, 103, 107, 181, 259, 261
medium transferability, 39, 40, 167, 168
Mehler, J., 72
Mellits, E., 273
memory systems, 35, 37, 64, 101–9, 259, 271–2
Menenti, L., 153
mental lexicon, 102, 108, 258, 269
Meulemans, T., 263
Milner, A.D., 102
Mini-Mental State Examination, 206
mirror neurons, 19–24, 51, 112–9, 154, 176, 235
Miyamoto, R.T., 244
modality-independent capacity, 231
  a-modal language rhythm, 241
  babbling in deaf and hearing babies, 236–8
  communication, equipotential articulators, 246–7
  critical period hypothesis, 239–40
  cross-modal reorganization, 244–6
  developmental milestones, 238–9
  dominance of spoken languages, 247–51
  language acquisition task, 235–6
  language mode revisited, 251–2
  signed and spoken languages, 242–4
  symbolic reference, cross-modal nature, 233–4
Modern languages, 8, 28, 33, 34, 93, 111, 160, 169, 193
Monaco, A.P., 62
Morgan, A.T., 54, 55, 57, 264
morphography, 197
Morse code, 28
motherese, 239
motor action, 21, 22, 34, 51, 112, 166, 167, 176
motor system, 121–3
Musen, G., 35, 107, 259
mutilations, 231
mutual attunement, 85
Myers, C.E., 265

N
Narayanan, D.Z., 72, 136
natural languages, 94, 143, 148, 169, 269, 284
natural meaning, 160
Nehaniv, C.L., 93, 145
Neo-Darwinism, 15, 17
neuroanatomical structures, 60, 61
neurobiological approach, 23, 115, 117
neurobiological model, 64
Neuropsychological Examination–CogniFit Personal Coach (N-CPC), 282
Newport, E.L., 240
Nicaraguan Sign Language (NSL), 33, 34, 36, 248
niche construction theory, 17–8, 221–2
Nicholls, R., 151
Nieder, A., 83
nonadjacent dependencies, 266, 267
nondeclarative systems, 103
non-natural meaning, 160, 161
nonpredictive (NP) languages, 95, 96
nonword repetition test, 59, 207, 286
Noordzij, M.L., 22, 116, 176
Norbury, C.F., 56
noun phrase (NP), 7, 110, 111
novel interactions, 184
Nühsman, A., 270
Nystrom, P., 235

O
object-related action, 114
Olson, D.R., 196, 200, 202, 203, 209, 218, 219
Ong, W., 39, 166, 204, 219, 223, 224
Onlaor, S., 265
orthography-to-phonology mapping (O – P), 198, 202, 206, 207, 216, 217, 220
orthography-to-semantics mapping (O – S), 198, 202, 213, 215, 216, 218, 220
Ostry, D.J., 237

P
Pacton, S., 268–9
pantomime recognition, 212, 213
Papagno, C., 207
para-hippocampal region, 177–8
parasitic model, 189
Pare-Blagoev, E.J., 179
Peirce, 83, 160, 161, 233
Perfetti, C.A., 201
Perruchet, P., 268–9
Petersson, K.M., 107, 205, 207
Petitto, L.A., 236–7, 241, 244, 247
Petrich, J.A., 245
Philosophical Investigations, 34, 149
phonemes, 118, 144, 170, 171, 198, 199, 206, 210, 216
phonographies, 197
phonological awareness, 214, 217
phonological storage, 58
phonology, 6, 10, 159, 202, 239, 259
Pickering, M.J., 141, 149, 153–5
pictographs, 198
pidgin languages, 33, 35–7, 92, 108, 149–51, 248
Pierno, A.C., 22, 114
Pierpont, E.I., 60–1, 64, 100, 109, 119, 272
Pilling, M., 171
Pinker, S., 12
Pisoni, D.B., 72, 244, 282, 284, 286
Plante, E., 266, 267, 275
plastic song, 66
Podzebenko, K., 104
Poeppel, D., 120, 121
Poldrack, R.A., 179
Politimou, N., 207
positron emission tomography (PET) studies, 113, 114
pragmatics, 8–9
predictive languages (P-languages), 28, 94, 95
prefrontal cortex, 173, 178–83, 273
pre-linguistic behavior, 124
prelinguistic communicative gestures, 239
pre-literate languages, 24, 39–41, 165–8
pre-semantic processing of meaning, 175
primary language impairment, 50
procedural declarative (PD) model, 41, 42, 168
procedural deficit hypothesis (PDH), 65, 82, 109–10, 259–61
  AGL and language impairment, 266–8
  linguistic signals, statistical learning, 268–71
  Serial Reaction Time (SRT) task, 262–4
  Weather Prediction Task (WPT), 264–5
procedural dialogues, 41, 132
  infant–caregiver interactions, turn-taking in, 139–41
  vocal turn-taking, 135–8
procedural language disorder (PLD), 42, 65, 109, 271, 277
procedural memory system, 64, 73, 103–9, 148
procedural skills, 132–4
  easy dialogues, 152–5
  infant–caregiver interactions, turn-taking in, 139–41
  language acquisition models, 144–9
  language games and pidgin languages, 149–51
  and language-impaired child, 151–2
  procedural skills and early dialogues, 132–4
  signaling, 141–4
  small talk, 155–6
  vocal turn-taking, 135–8
productivity/openness, 5
protolanguage, 21, 22, 33, 37, 50, 79, 80, 92–4, 193
Putnick, D.L., 139

Q
Queiroz, J., 87
Quran, 209, 211, 216
R
reading disability (RD), 58, 61
Recalling Sentences and Sentence Structure, 264
reflexivity, 41, 163–5, 164, 187, 193, 217
Reilly, S., 50, 56
Reis, A., 205, 207
Rendall, D., 85
Ribeiro, S., 87–8
Ritchie, G.R., 141
Rizzolatti, G., 20, 115–7, 154
Roberts, L., 41, 184
Robe-Torres, K., 269
Rogers, T.T., 169
Ruhlen, M., 80
rule-based (RB) category, 34, 106, 172–3
Rumbaugh, E., 84, 86, 230
Russenorsk, 35
Rydberg, E., 251
Ryle, G., 31, 101

S
Saffran, Jenny, 19, 28, 41, 42, 81, 94, 95, 97, 98, 266, 269, 276
Sally–Anne Test, 69, 70
saltations, 13
Samson, D., 180, 181
Sasanuma, S., 201
Saunders, 93, 145
Savage-Rumbaugh, E.S., 84–6, 230
Schlesewsky, M., 110
Schmandt-Besserat, D., 196
Schwartz, R.G., 61
Scott-Phillips, T.C., 141–3, 160
Scoville, W.B., 102
Searle, John, 132
Selten, R., 147–9
semantic coaching, 281–2
semantics, 5, 7–8, 160, 181–2, 258
Senghas, A., 183, 185
Sergio, L.E., 237
sesquipedalian, 164
Shanker, S.G., 85
Shohamy, D., 265
sign–sign relationships, 83, 233
Silberberg, N., 215
simulation theory, 69
Singer, W., 182
Siok, W.T., 201
small talk, 31, 33, 138, 143, 155–6
Smith, K., 173, 174, 286, 288
social disengagement, 279–80
social mobility, 32
spandrels, 14
Spaulding, T.J., 266
special-purpose instrument, 36
specific language impairment (SLI), 40, 42, 50–6, 57–61, 99–100
Squire, L.R., 35, 101–3, 107, 259, 264
Stark, R., 273
statistical/artificial grammar learning, 124–5
Stefaniak, N., 263
Stroop effect, 198
structural impairment, 258
structural sequence processing (SSP), 43, 285–8
Subject-Verb-Object, 92, 110, 111
subsong, 66
supplementary motor area (SMA), 105, 270
Suwalsky, T.J., 139
Swoboda, N., 41, 184
syllabaries, 197, 199
symbolic awareness, 195
symbolic communication, 84, 87, 118, 178, 232
symbolic reference, 83, 86, 164, 179, 181, 231–4
symbolic species theory, 23, 81, 231
symbolic threshold, 83–9, 124
symbol–symbol relationship, 164
synchrony, 140
syntactic writing, 84–5, 196
syntax, 6–7, 14, 70, 83, 92, 111, 159, 196, 284

T
Takahashi, K., 64, 72, 136, 137
Tallal, P., 59, 273
Tan, L.H., 201
Tattersall, I., 112, 233
Teoh, Wooi, 244, 245
Test for Reception of Grammar (TROG), 52, 62
The Evolution of Language, 12
Thiessen, E.D., 288
Three S, 9–10
ToM, 69, 71, 213
Tomasello, M., 14
Tomblin, B., 50, 262, 267
Toni, I., 22, 116, 121, 123, 176
Torrance, N., 202
track articulation, 122
traditional literate culture, 224
transcranial magnetic stimulation (TMS) studies, 113, 180
transitional probabilities (TP), 94, 95, 269, 270, 275, 284
Tubaldi, F., 22, 114
Turella, L., 22, 114, 115
Turken, A.U., 172
turn-taking, 139–41, 144, 155, 279
typical language development, 40
typically developing (TD) children, 100, 262
typically language developing (TLD) children, 61
Tzeng, J.L., 198

U
Ullman, M.T., 41, 42, 60–1, 64, 81–2, 100–10, 119, 259, 264, 272
unacceptably arbitrary, 53
universal grammar (UG), 18, 66, 111, 200, 233

V
van Balkom, H., 73, 151
Vance, R., 275
van Weerdenburg, M., 73, 151
Varga, S., 69
Vargha-Khadem, F., 62
Varney, N.R., 14, 212, 213
Vasey, P., 85
ventral and dorsal pathways, 119–21
ventro-lateral prefrontal cortex (VL-PFC), 103
Verhoeven, L., 73, 151
vertical transmission, 24–30, 81, 94–100, 124, 125, 134, 144, 151
visual information, 119
vocabulary learning, 109, 260, 271, 272
vocal babbling, 42, 123, 236, 237
vocal communication, 230
vocal turn-taking, 72, 135–8
Volkmar, F., 215
von Koss Torkildsen, J., 267
Vouloumanos, A., 25, 231
voxel-based morphometry (VBM), 61

W
Wagner, A.D., 179, 180
Wake, M., 151
Waldron, E.N., 172
Walk, A.D., 284
Wan, C.Y., 241
Wang, W. S.-Y., 198
Warglien, M., 147–9
Watson, J.D.J., 104
Weather Prediction Task (WPT), 100, 148, 261, 264–5
Wendelken, C., 180
Werker, J.F., 25, 231
Wernicke’s areas, 80, 179, 212, 242
Whorf, B.L., 167
Whorfian hypothesis, 167
Wilcox, S.E., 116
Wilson, B., 59, 97, 138
window of opportunity, 239, 240
Witt, K., 270
Wittgenstein, Ludwig, 34, 149, 150
Witzlack-Makarevich, A., 110
Woll, B., 54
Word Structure, 264
working memory (WM), 27, 59, 105, 131, 172, 207, 249, 263, 273, 284

Y
Yi, H.G., 64

Z
Zafiri, M., 207
Zhang, X., 262
Zhao, J., 201, 202
Ziegler, W., 64
