Linguistic Inquiry Monograph

Samuel Jay Keyser, general editor
1. Word Formation in Generative Grammar, Mark Aronoff
2. X̄ Syntax: A Study of Phrase Structure, Ray Jackendoff
3. Recent Transformational Studies in European Languages, Samuel Jay Keyser,
editor
4. Studies in Abstract Phonology, Edmund Gussmann
5. An Encyclopedia of AUX: A Study in Cross-Linguistic Equivalence, Susan
Steele
6. Some Concepts and Consequences of the Theory of Government and Binding,
Noam Chomsky
7. The Syntax of Words, Elisabeth Selkirk
8. Syllable Structure and Stress in Spanish: A Nonlinear Analysis, James W.
Harris
9. CV Phonology: A Generative Theory of the Syllable, George N. Clements and
Samuel Jay Keyser
10. On the Nature of Grammatical Relations, Alec Marantz
11. A Grammar of Anaphora, Joseph Aoun
12. Logical Form: Its Structure and Derivation, Robert May
13. Barriers, Noam Chomsky
14. On the Defnition of Word, Anna-Maria Di Sciullo and Edwin Williams
15. Japanese Tone Structure, Janet Pierrehumbert and Mary Beckman
16. Relativized Minimality, Luigi Rizzi
17. Types of Ā-Dependencies, Guglielmo Cinque
18. Argument Structure, Jane Grimshaw
19. Locality: A Theory and Some of Its Empirical Consequences, Maria Rita
Manzini
20. Indefinites, Molly Diesing
21. Syntax of Scope, Joseph Aoun and Yen-hui Audrey Li
22. Morphology by Itself: Stems and Inflectional Classes, Mark Aronoff
23. Thematic Structure in Syntax, Edwin Williams
24. Indices and Identity, Robert Fiengo and Robert May
25. The Antisymmetry of Syntax, Richard S. Kayne
26. Unaccusativity: At the Syntax-Lexical Semantics Interface, Beth Levin and
Malka Rappaport Hovav
27. Lexico-Logical Form: A Radically Minimalist Theory, Michael Brody
28. The Architecture of the Language Faculty, Ray Jackendoff
The Architecture of
the Language Faculty
Ray Jackendoff
The MIT Press
Cambridge, Massachusetts
London, England
© 1997 Massachusetts Institute of Technology
All rights reserved. No part of this book may be reproduced in any form by any
electronic or mechanical means (including photocopying, recording, or
information storage and retrieval) without permission in writing from the publisher.
This book was set in Times Roman by Asco Trade Typesetting Ltd., Hong Kong.
Printed and bound in the United States of America.
Library of Congress Cataloging-in-Publication Data
Jackendoff, Ray S.
The architecture of the language faculty / Ray Jackendoff.
p. cm. - (Linguistic inquiry monographs; 28)
Includes bibliographical references.
ISBN 0-262-10059-2. - ISBN 0-262-60025-0 (pbk.)
1. Grammar, Comparative and general. 2. Thought and thinking.
I. Title. II. Series.
P151.J25 1996
415-dc20
96-25903
CIP
Contents

Series Foreword xi

Preface xiii

Acknowledgments xv

Chapter 1
Questions, Goals, Assumptions 1
1.1 Universal Grammar 2
1.1.1 The Mentalist Stance 2
1.1.2 The Notion of Mental Grammar 3
1.1.3 Learnability and Universal Grammar 4
1.1.4 Innateness 6
1.1.5 Relation of Grammar to Processing 7
1.1.6 Relation of Grammar to Brain 8
1.1.7 Evolutionary Issues 10
1.2 Necessities and Assumptions 11
1.3 Syntactocentrism and Perfection 15

Chapter 2
Interfaces; Representational Modularity 21
2.1 The "Articulatory-Perceptual" Interfaces 21
2.2 The Phonology-Syntax Interface 25
2.3 The "Conceptual-Intentional" Interface 31
2.4 Embedding Mismatches between Syntactic Structure and Conceptual Structure 36
2.5 The Tripartite Parallel Architecture 38
2.6 Representational Modularity 41

Chapter 3
More on the Syntax-Semantics Interface 47
3.1 Enriched Composition 47
3.2 Aspectual Coercions 51
3.2.1 Verbal Coercions 51
3.2.2 Mass-Count Coercions 53
3.3 Reference Transfer Functions 54
3.3.1 Pictures, Statues, and Actors 56
3.3.2 Cars and Other Vehicles 56
3.3.3 Ham Sandwiches 57
3.4 Argument Structure Alternations 58
3.4.1 Interpolated Function Specified by General Principle 58
3.4.2 Interpolated Function Specified by Qualia of Complement 60
3.5 Adjective-Noun Modification 62
3.5.1 Adjectives That Invoke the Qualia Structure of the Noun 62
3.5.2 Adjectives That Semantically Subordinate the Noun 64
3.5.3 Modification That Coerces the Reading of the Head Noun 65
3.6 Summary 66
3.7 Anaphora 67
3.7.1 Binding inside Lexical Items 68
3.7.2 Control Determined by LCS 70
3.7.3 Ringo Sentences 73
3.7.4 Bound Pronouns in Sloppy Identity 75
3.8 Quantification 78
3.8.1 Quantification into Ellipted Contexts 78
3.8.2 Quantificational Relations That Depend on Internal Structure of LCS 79
3.9 Remarks 81

Chapter 4
The Lexical Interface 83
4.1 Lexical Insertion versus Lexical Licensing 83
4.2 PIL = CIL 91
4.3 PIL and CIL Are at S-Structure 99
4.4 Checking Argument Structure 101
4.5 Remarks on Processing 103
4.6 The Lexicon in a More General Mental Ecology 107

Chapter 5
Lexical Entries, Lexical Rules 109
5.1 Broadening the Conception of the Lexicon 109
5.2 Morphosyntax versus Morphophonology 110
5.3 Inflectional versus Derivational Morphology 113
5.4 Productivity versus Semiproductivity 115
5.5 Psycholinguistic Considerations 121
5.6 "Optimal Coding" of Semiproductive Forms 123
5.7 Final Remarks 130

Chapter 6
Remarks on Productive Morphology 131
6.1 Introduction 131
6.2 The Place of Traditional Morphophonology 133
6.3 Phonological and Class-Based Allomorphy 139
6.4 Suppletion of Composed Forms by Irregulars 143
6.5 The Status of Zero Inflections 147
6.6 Why the Lexicon Cannot Be Minimalist 150

Chapter 7
Idioms and Other Fixed Expressions 153
7.1 Review of the Issues 153
7.2 The Wheel of Fortune Corpus: If It Isn't Lexical, What Is It? 154
7.3 Lexical Insertion of Idioms as X⁰s 158
7.4 Lexical Licensing of Units Larger than X⁰ 161
7.5 Parallels between Idioms and Compounds 164
7.6 Syntactic Mobility of (Only) Some Idioms 166
7.7 Idioms That Are Specializations of Other Idiomatic Constructions 171
7.8 Relation to Construction Grammar 174
7.9 Summary 177

Chapter 8
Epilogue: How Language Helps Us Think 179
8.1 Introduction 179
8.2 Brain Phenomena Opaque to Awareness 180
8.3 Language Is Not Thought, and Vice Versa 183
8.4 Phonetic Form Is Conscious, Thought Is Not 186
8.5 The Significance of Consciousness Again 193
8.6 First Way Language Helps Us Think: Linguistic Communication 194
8.7 Second Way Language Helps Us Think: Making Conceptual Structure Available for Attention 194
8.7.1 Introduction 194
8.7.2 The Relation between Consciousness and Attention 196
8.7.3 Language Provides a Way to Pay Attention to Thought 200
8.8 Third Way Language Helps Us Think: Valuation of Conscious Percepts 202
8.9 Summing Up 205
8.10 The Illusion That Language Is Thought 206

Appendix
The Wheel of Fortune Corpus 209

Notes 217

References 241

Index 257
Series Foreword
We are pleased to present this monograph as the twenty-eighth in the
series Linguistic Inquiry Monographs. These monographs will present new
and original research beyond the scope of the article, and we hope they
will benefit our field by bringing to it perspectives that will stimulate
further research and insight.
Originally published in limited edition, the Linguistic Inquiry Monograph
series is now available on a much wider scale. This change is due
to the great interest engendered by the series and the needs of a growing
readership. The editors wish to thank the readers for their support and
welcome suggestions about future directions the series might take.
Samuel Jay Keyser
for the Editorial Board
Preface
"Rut it comes time to stop and admire the view before pushing on
again." With this remark, intended mainly as a rhetorical fourish, I
cnded Semantic Structures, my 1990 book on conceptual structure and its
relati on to syntax. The present study is in large part a consequence of
laking my own advice: when I stepped back to admire the view, I sensed
that not all was well across the broader landscape. This book is therefore
an attempt to piece together varous fragments of linguistic theory into a
lIlore cohesive whole, and to identify and reexamine a number of standard
assumptions of linguistic theory that have contributed to the lack of ft.
The architecture developed here already appeared in its essentials in my
II}R7 book Consciousness and the Computational Mind, where I situated
IIIL' language capacity in a more general theory of mental representations,
l·onnccted this view with processing, and stated the version of the modu­
larity hypothesis here called Representational Modularity. At that time I
assumed that the architecture could b connected with just about any
theory of syntax, and left it at that. However, two particular issues led to
the real i zati on that my assumption was too optimistic.
The (irst concers the nature of lexical insertion. In 1991, Joe Emonds
published a critique of my theory of conceptual structure, in which it
''ellcd to me that much of the disagreement turned on difering presup­
posi tions about how lexical items enter into syntactic structure. Around
tin' same timeq the very same presuppositions surfaced as crucial in a
IIl1ll1hl'r of intense discussi ons with Daniel Biring and Katharina Hart-
Î1Jsfnil. Conhequentlyq in repl ying to Emonds, I found it necessary to un­
earth these presuppositions and decide what I thought lexical insertion is
rt'ally like. A fortuitous invitUtion to the 2nd Ti l burg Idioms Conference
alive îI1L` the opportunity to eÅpand these ideas and work out their im­
.,11Ial ions for idioms, loincidentally bringing me back to questi ons of
lexical content that I had thought about in connection with my 1975
paper on lexical redundancy rules, "Morphological and Semantic Regu­
larities in the Lexicon."
The second major impetus behind this book was the appearance of
Chomsky's Minimalist Program, whose goal is to determine how much of
the structure of language can be deduced from the need for language to
obey boundary conditions on meaning and pronunciation. I found myself
agreeing entirely with Chomsky's goals, but differing radically in what
I considered an appropriate exploration and realization. Bringing these
concerns together with those of lexical insertion led to a 1994 paper en­
titled "Lexical Insertion in a Post-Minimalist Theory of Grammar." This
paper was circulated widely, and I received (what was for me at least) a
flood of comments, mostly enthusiastic, from linguists and cognitive
scientists of all stripes. As a consequence, the paper grew beyond a size
appropriate for a journal, hence the present monograph.
Many of the ideas in this study have been floating around in various
subcommunities of linguistics, sometimes without contact with each other.
The idea that a grammar uses unification rather than substitution as its
major operation, which I have adopted here, now appears in practically
every approach to generative grammar outside Chomsky's immediate
circle; similarly, most non-Chomskian generative approaches have aban­
doned transformational derivations (an issue about which I am agnostic
here). The idea adopted here of multiple, coconstraining grammatical
structures appeared first in autosegmental phonology of the mid-1970s and
continues into the syntax-phonology relation and the syntax-semantics
relation in a wide range of approaches.
What is original here, I think, is not so much the technical devices as
the attempt to take a larger perspective than usual on grammatical struc­
ture and to fit all of these innovations together, picking and choosing
variations that best suit the whole. In fact, for reasons of length and
readability, I have not gone into a lot of technical detail. Rather, my in­
tent is to establish reasonable boundary conditions on the architecture
and to work out enough consequences to see the fruitfulness of the
approach. My hope is to encourage those who know much more than
I about millions of different details to explore the possibilities of this
architecture for their own concerns.
Acknowledgments
I must express my gratitude to those people who offered comments
on materials from which the present work developed, including Steve
Anderson, Mark Aronoff, Hans den Besten, Manfred Bierwisch, Daniel
Büring, Patrick Cavanagh, Marcelo Dascal, Martin Everaert, Yehuda
Falk, Bob Freidin, Adele Goldberg, Georgia Green, Jane Grimshaw, Ken
Hale, Morris Halle, Henrietta Hung, Paul Kay, Paul Kiparsky, Ron
Langacker, Joan Maling, Alec Marantz, Fritz Newmeyer, Urpo Nikanne,
Carlos Otero, Janet Randall, Jerry Sadock, Ivan Sag, André Schenk,
Lindsay Whaley, Edwin Williams, Edgar Zurif, and various anonymous
referees. A seminar at Brandeis University in the fall of 1994 was of great
help in solidifying the overall point of view and developing the material in
chapters 5 and 6; its members included Richard Benton, Piroska Csuri,
Judi Heller, Ludmila Lescheva, Maria Piñango, Philipp Schlenker, and
Ida Toivonen.
Correspondence with Noam Chomsky has been extremely helpful in
clarifying various basic questions. Pim Levelt encouraged me to mention
psycholinguistic issues sufficiently prominently; Dan Dennett and Marcel
Kinsbourne, to include neuropsychological and evolutionary viewpoints.
Beth Jackendoff's energy and enthusiasm in collecting the Wheel of
Fortune corpus was an important spur to the development of chapter 7.
Katharina Hartmann read an early draft of most of the book and sug­
gested some important structural changes. Comments on the penultimate
draft from Robert Beard, Paul Bloom, Daniel Büring, Peter Culicover,
Dan Dennett, Pim Levelt, Ida Toivonen, and Moira Yip were instru­
mental in tuning it up into its present form. Anne Mark's editing, as ever,
smoothed out much stylistic lumpiness and lent the text a touch of class.
My first book, Semantic Interpretation in Generative Grammar, ac­
knowledged my debt to a number of individuals in reaction to whose
work my own had developed. I am pleased to be able to thank again two
of the central figures of that long-ago Generative Semantics tradition,
Jim McCawley and Paul Postal, for their many insightful comments on
aspects of the work presented here. It is nice to know that collegiality can
be cultivated despite major theoretical disagreement.
I also want to thank three people with whom I have been collaborating
over the past few years, on projects that play important roles in the pres­
ent study. Work with Barbara Landau has deeply influenced my views
on the relationship between language and spatial cognition, and conse­
quently my views on interfaces and Representational Modularity (chapter
2). Peter Culicover and I have been working together on arguments that
binding is a relationship stated over conceptual structure, not syntax; I
draw freely on this work in chapter 3. James Pustejovsky's work on the
Generative Lexicon, as well as unpublished work we have done together,
is a major component in the discussion of semantic composition (chapter
3) and lexical redundancy (chapter 5).
It is a pleasure to acknowledge financial as well as intellectual support.
I am grateful to the John Simon Guggenheim Foundation for a fellowship
in 1993-1994 during which several parts of this study were conceived,
though not yet molded into a book; this research has also been supported
in part by National Science Foundation grant IRI 92-13849 to Brandeis
University.
Finally, as usual, Amy and Beth are the foundation of my existence.
Chapter 1
Questions, Goals,
Assumptions
Much research in linguistics during the past three decades has taken place
in the context of goals, assumptions, and methodological biases laid down
in the 1960s. Over time, certain of these have been changed within various
traditions of research, but to my knowledge there has been little thorough
examination of the context of the entire theory. The problem of keeping
the larger context in mind has been exacerbated by the explosion of
research, which, although it is a testament to the flourishing state of the
field, makes it difficult for those in one branch or technological frame­
work to relate to work in another.
The present study is an attempt to renovate the foundations of linguis­
tic theory. This first chapter articulates some of the options available for
pursuing linguistic investigation and integrating it with the other cognitive
sciences. In particular, I advocate that the theory be formulated in such a
way as to promote the possibilities for integration. Chapter 2 lays out the
basic architecture and how it relates to the architecture of the mind more
generally. The remaining chapters work out some consequences of these
boundary conditions for linguistic theory, often rather sketchily, but in
enough detail to see which options for further elaboration are promising
and which are not.
One of my methodological goals in the present study is to keep the
arguments as framework-free as possible - to see what conditions make
the most sense no matter what machinery one chooses for writing gram­
mars. My hope is that such an approach will put us in a better position to
evaluate the degree to which different frameworks reflect similar concerns,
and to see what is essential and what is accidental in each framework's
way of going about formulating linguistic insights.
1.1 Universal Grammar
The arguments for Universal Grammar have by now become almost a
mantra, a sort of preliminary ritual to be performed before plunging into
the technical detail at hand. Yet these arguments are the reason for the
existence of generative grammar, and therefore the reason why most of
today's linguists are in the profession rather than in computer science,
literature, or car repair. They are also the reason why linguistics belongs
in the cognitive sciences, and more generally why linguistics concerns
people who have no interest in the arcana of syllable weight or excep­
tional case marking.
These arguments are due, of course, to Chomsky's work of the late
1950s and early 1960s. Far more than anyone else, Chomsky is responsi­
ble for articulating an overall vision of linguistic inquiry and its place in
larger intellectual traditions. By taking Universal Grammar as my start­
ing point, I intend to reaffirm that, whatever differences surface as we go
on, the work presented here is down to its deepest core a part of the
Chomskian tradition.
1.1.1 The Mentalist Stance
The basic stance of generative linguistics is that we are studying "the
nature of language," not as some sort of abstract phenomenon or social
artifact, but as the way a human being understands and uses language. In
other words, we are interested ultimately in the manner in which language
ability is embodied in the human brain. Chomsky makes this distinction
nowadays by saying we are studying "internalized" language (I-language)
rather than "externalized" language (E-language). Generative grammar is
not the only theory of language adopting this stance. The tradition of
Cognitive Grammar adopts it as well, Lakoff (1990), for instance, calling
it the "cognitive commitment." On the other hand, a great deal of work in
formal semantics does not stem from this assumption. For instance, Bach
(1989) asserts Chomsky's major insight to be that language is a formal
system - disregarding what I take to be the still more basic insight that
language is a psychological phenomenon; and Lewis (1972), following
Frege, explicitly disavows psychological concerns.
What about the abstract and social aspects of language? One can
maintain a mentalist stance without simply dismissing them, as Chomsky
sometimes seems to. It might be, for instance, that there are purely ab­
stract properties that any system must have in order to serve the expres­
sive purposes that language serves; and there might be properties that
language has because of the social context in which it is embedded. The
mentalist stance would say, though, that we eventually need to investigate
how such properties are spelled out in the brains of language users, so that
people can use language. It then becomes a matter of where you want to
place your bets methodologically: life is short, you have to decide what to
spend your time studying. The bet made by generative linguistics is that
there are some important properties of human language that can be
effectively studied without taking account of social factors.
Similar remarks pertain to those aspects of language that go beyond the
scale of the single sentence to discourse and narrative. Generative gram­
mar for the most part has ignored such aspects of language, venturing
into them only to the extent that they are useful tools for examining
intrasentential phenomena such as anaphora, topic, and focus. Again, I
am sure that the construction of discourse and narrative involves a cog­
nitive competence that must interact to some degree with the competence
for constructing and comprehending individual sentences. My assump­
tion, perhaps unwarranted, is that the two competences can be treated as
relatively independent.
Chomsky consistently speaks of I-language as the speaker's knowledge
or linguistic competence, defending this terminology against various
alternatives. I would rather not make a fuss over the terminology; ordi­
nary language basically doesn't provide us with terms sufficiently differ­
entiated for theoretical purposes. Where choice of terminology makes a
difference, I'll try to be explicit; otherwise, I'll use Chomsky's terms for
convenience.
1.1.2 The Notion of Mental Grammar
The phenomenon that motivated Chomsky's Syntactic Structures was the
unlimited possibility of expression in human language, what Chomsky
now calls the "discrete infinity" of language. In order for speakers of a
language to create and understand sentences they have never heard be­
fore, there must be a way to combine some finite number of memorized
units - the words or morphemes of the language - into phrases and sen­
tences of arbitrary length. The only way this is possible is for the speaker's
knowledge of the language to include a set of principles of combina­
tion that determine which combinations are well formed and what they
mean. Such principles are a conceptually necessary part of a theory of
language.
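To make the point concrete, here is a minimal sketch of how a finite stock of memorized units plus a few recursive principles of combination yields discrete infinity. The toy grammar, lexicon, and function names are invented for the illustration and are of course not drawn from the text.

```python
import random

# Toy illustration (not from the text): a finite lexicon plus recursive
# principles of combination generates unboundedly many phrases.
LEXICON = {
    "Det": ["the", "a"],
    "N": ["dog", "carpet", "linguist"],
    "V": ["saw", "admired"],
    "P": ["near", "behind"],
}

def np(depth=0):
    # NP -> Det N (PP); the optional PP reinvokes NP, making the system recursive
    phrase = [random.choice(LEXICON["Det"]), random.choice(LEXICON["N"])]
    if depth < 3 and random.random() < 0.5:
        phrase += pp(depth + 1)
    return phrase

def pp(depth):
    # PP -> P NP
    return [random.choice(LEXICON["P"])] + np(depth)

def sentence():
    # S -> NP V NP
    return " ".join(np() + [random.choice(LEXICON["V"])] + np())

print(sentence())  # e.g. "a dog near the carpet admired a linguist"
```

Even with the depth of recursion artificially capped here so the sampler terminates, the point stands: nothing in the principles themselves sets an upper bound on sentence length.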
The finite set of memorized units is traditionally called the lexicon. The
set of principles of combination has been traditionally called the grammar
(or better, mental grammar) of the language; in recent work Chomsky has
called this set the computational system. Alternatively, the term grammar
has been applied more broadly to the entire I-language, including both
lexicon and computational system. Given that many lexical items have
internal morphological structure of interest, and that morphology has
traditionally been called part of grammar, I will tend to use the term
mental grammar in this broader sense. How the lexicon and its grammat­
ical principles are related to the extralexical (or phrasal) grammatical
principles will be one of the topics of the present study.
Now we come to Bach's point. The major technical innovation of early
generative grammar was to state the combinatorial properties of lan­
guage in terms of a formal system. This confers many advantages. At the
deepest level, formalization permits one to use mathematical techniques
to study the consequences of one's hypotheses, for example the expressive
power (strong or weak generative capacity) of alternative hypothesized
combinatorial systems (e.g. Chomsky 1957; Chomsky and Miller 1963) or
the learnability of such systems (e.g. Wexler and Culicover 1980). At a
more methodological level, formalization permits one to be more ab­
stract, rigorous, and compact in stating and examining one's claims and
assumptions. And, as Chomsky stressed in a much-quoted passage from
the preface to Syntactic Structures, a formalization uncovers conse­
quences, good or bad, that one might not otherwise have noticed.
But formalization is not an unmitigated blessing. In my experience,
an excessive preoccupation with formal technology can overwhelm
the search for genuine insight into language; and a theory's choice of
formalism can set up sociological barriers to communication with re­
searchers in other frameworks. For these reasons, I personally find the
proper formalization of a theory a delicate balance between rigor and
lucidity: enough to spell out carefully what the theory claims, but not
too much to become forbidding. It's one of those methodological and
rhetorical matters that's strictly speaking not part of one's theory, but
only part of how one states it.
1.1.3 Learnability and Universal Grammar
Chomsky's next question is, If linguistic knowledge consists of a mental
grammar, how does the mental grammar get into the speaker's mind?
Clearly a certain amount of environmental input is necessary, since chil­
dren acquire mental grammars appropriate to their linguistic commu­
nities. However, the combinatorial principles of mental grammar cannot
be directly perceived in the environmental input: they must be general­
izations constructed (unconsciously) in response to perception of the
input. Therefore the language learner must come to the task of acquisi­
tion equipped with a capacity to construct I-linguistic generalizations on
the basis of E-linguistic input. How do we characterize this capacity?
The standard lore outside the linguistics community has it that this
capacity is simply general-purpose intelligence of a very simple sort. This
lore is remarkably persuasive and persistent, and over and over it finds its
way into psychological theories of language. Chomsky's (1959) response
to Skinner (1957) and Pinker and Prince's (1988) to Rumelhart and
McClelland (1986) are detailed attempts to dispel it, nearly thirty years
apart. As linguistic theory has demonstrated, the complexities of language
are such that general-purpose problem solving would seem not enough.
I like to put the problem as the "Paradox of Language Acquisition":
If general-purpose intelligence were sufficient to extract the principles of
mental grammar, linguists (or psychologists or computer scientists), at
least some of whom have more than adequate general intelligence, would
have discovered the principles long ago. The fact that we are all still
searching and arguing, while every normal child manages to extract the
principles unaided, suggests that the normal child is using something
other than general-purpose intelligence.
Following standard practice in linguistics, let's call this extra something
Universal Grammar (UG). What does UG consist of? Acquiring a mental
grammar requires (1) a way to generate a search space of candidate
mental grammars and (2) a way to determine which candidate best cor­
responds to the environmental input (i.e., a learning procedure). In prin­
ciple, then, UG could involve either a specified, constrained search space
or a specialized, enriched learning procedure, or both.
Standard practice in linguistics has usually presumed that UG creates
the search space: without considerable initial delimitation, it is unlikely
that any general-purpose learning procedure, no matter how sophisti­
cated, could reliably converge on such curious notions as long-distance
reflexives, reduplicative infixation, and quantifier scope. Even less is it
likely that any learning procedure could converge on the correct choice
among Government-Binding Theory (GB), Lexical-Functional Grammar
(LFG), Head-Driven Phrase Structure Grammar (HPSG), Cognitive
Grammar, Arc-Pair Grammar, and Tagmemics (not to mention the as yet
undiscovered correct alternative). Rather, the usual assumption among
linguists is that the correct theory and the repertoire of choices it presents
must be available to the child in advance. Standard practice is somewhat
less explicit on whether, given the search space, the learning procedure for
I-language is thought of as anything different from ordinary learning. In
any event, it is unclear whether there is such a thing as "ordinary learn­
ing" that encompasses the learning of social mores, jokes, tennis serves,
the faces at a party, an appreciation of fine wines, and tomorrow's lun­
cheon date, not to mention language.
There is a trade-off between these two factors in acquisition: the more
constrained the search space is, the less strain it puts on the learning pro­
cedure. The general emphasis in linguistic theory, especially since the
advent of the principles-and-parameters approach (Chomsky 1980), has
been on constraining the search space of possible grammars, so that the
complexity of the learning procedure can be minimized. Although I am
in sympathy with this general tack, it is certainly not the only possibility.
In particular, the more powerful general-purpose learning proves to be,
the fewer burdens are placed on the prespecification of a search space for
I-language. (One of the threads of Lakoff's (1987) argument is that lin­
guistic principles share more with general cognitive principles than gen­
erative grammarians have noticed. Though he may be right, I don't think
all aspects of linguistic principles can be so reduced.)1
1.1.4 Innateness
The next step in Chomsky's argument is to ask how the child gets UG.
Since UG provides the basis for learning, it cannot itself be learned. It
therefore must be present in the brain prior to language acquisition. The
only way it can get into the brain, then, is by virtue of genetic inheritance.
That is, UG is innate. At a gross level, we can imagine the human genome
specifying the structure of the brain in such a way that UG comes "pre­
installed," the way an operating system comes preinstalled on a com­
puter. But of course we don't "install software" in the brain: the brain just
grows a certain way. So we might be slightly more sophisticated and say
that the human genome specifies the growth of the brain in such a way
that UG is an emergent functional property of the neural wiring.
This is not the place to go into the details. Suffice it to say that, more
than anything else, the claim of innateness is what in the 1960s brought
linguistics to the attention of philosophers, psychologists, biologists - and
the general public. And it is this claim that leads to the potential integra­
tion of linguistics with biology, and to the strongest implications of lin­
guistics for fields such as education, medicine, and social policy. It is also
the one that has been the focus of the most violent dispute. If we wish to
situate linguistics in a larger intellectual context, then, it behooves us to
keep our claims about innateness in the forefront of discourse, to for­
mulate them in such a way that they make contact with available evidence
in other disciplines, and, reciprocally, to keep an eye on what other dis­
ciplines have to offer.
1.1.5 Relation of Grammar to Processing
Let us look more explicitly at how linguistic theory might be connected
with nearby domains of inquiry. Let us begin with the relation of the
mental grammar - something resident in the brain over the long term -
to the processes of speech perception and production. There are at least
three possible positions one could take.
The weakest and (to me) least interesting possibility is that there is no
necessary relation at all between the combinatorial principles of mental
grammar and the principles of processing. (One version of this is to say
that what linguists write as mental grammars are simply one possible way
of formally describing emergent regularities in linguistic behavior, and
that the processor operates quite differently.) To adopt this as one's
working hypothesis has the effect of insulating claims of linguistic theory
from psycholinguistic disconfirmation.
But why should one desire such insulation, if the ultimate goal is an
integrated theory of brain and mental functions? To be sure, psycholin­
guistic results have sometimes been overinterpreted, for example when
many took the demise of the derivational theory of complexity in sentence
processing (Slobin 1966; Fodor, Bever, and Garrett 1974) to imply the
demise of transformational grammar. But surely there are ways to avoid
such conclusions other than denying any relation between competence
and processing. The relation may have to be rethought, but it is worth
pursuing.
A second position one could take is that the rules of mental grammar
are explicitly stored in memory and that the language processor "con­
sults" them or "invokes" them in the course of sentence processing. (One
could think of the grammar as "declarative" in the computer science
sense.) This does not necessarily mean that following rules of grammar
need be anything like following rules of tennis, in particular because rules
of grammar are neither consciously accessible nor consciously taught.
A third possibility is that the rules of mental grammar are "embodied"
by the processor - that is, that they are themselves instructions for con­
structing and comprehending sentences. (This corresponds to the com­
puter science notion of "procedural" memory.) Back in the 1960s, we
were firmly taught not to think of rules of grammar this way. As the story
went, speakers don't produce a sentence by first generating the symbol S;
then elaborating it into NP + VP, and so on, until eventually they put in
words; and finally "semantically interpreting" the resulting structure, so
in the end they find out what they wanted to say. That is, the natural
interpretation of the Standard Theory was not conducive to a procedural
interpretation of the rules. On the other hand, Berwick and Weinberg
(1984) develop an alternative interpretation of transformational grammar
that they do embody in a parser. Moreover, other architectures and tech­
nologies might lend themselves better to an "embodied" interpretation (as
argued by Bresnan and Kaplan (1982) for LFG, for instance).
This is not the place to decide between the two latter possibilities or to
find others - or to argue that in the end they are indistinguishable psycho­
linguistically. The next chapter will propose an architecture for grammar
in which at least some components must have relatively direct proces­
sing counterparts, so (like Bresnan and Kaplan) I want to be able to take
psycholinguistic evidence seriously where available.
One rather coarse psycholinguistic criterion will play an interesting role
in what is to follow. In a theory of performance, it is crucial to distinguish
those linguistic structures that are stored in long-term memory from those
that are composed "on line" in working memory. For example, it is
obvious that the word dog is stored and the sentence Your dog just threw
up on my carpet is composed. However, the status of in-between cases
such as dogs is less obvious. Chapters 5-7 will suggest that attention to
this distinction has some useful impact on how the theory of competence
is to be formulated.
1.1.6 Relation of Grammar to Brain
Unless we choose to be Cartesian dualists, the mentalist stance entails
that knowledge and use of language are somehow instantiated in the
brain. The importance of this claim depends on one's stance on the rela­
tion of mental grammar to processing, since it is processing that the brain
must accomplish. But if one thinks that the rules of mental grammar have
something to do with processing, then neuroscience immediately becomes
relevant. Conversely, if one considers studies of aphasia to be relevant to
linguistic theory, then one is implicitly accepting that the grammar is
neurally instantiated to some degree.
Can we do better than the formal approach espoused by linguistic
theory, and explain language really in terms of the neurons? Searle (1992)
and Edelman (1992), among others, have strongly criticized formal/com­
putational approaches to brain function, arguing that only in an account
of the neurons is an acceptable theory to be found. I agree that eventually
the neuronal basis of mental functioning is a necessary part of a complete
theory. But current neuroscience, exciting as it is, is far from being able to
tackle the question of how mental grammar is neurally instantiated. For
instance:
1. Through animal research, brain-imaging techniques, and studies of
brain damage, we know what many areas of the brain do as a whole. But
with the exception of certain lower-level visual areas, we have little idea
how those areas do what they do. (It is like knowing what parts of a
television set carry out what functions, without knowing really how those
parts work.)
2. We know how certain neurotransmitters affect brain function in a
global way, which may explain certain overall effects or biases such as
Parkinson's disease or depression. But this doesn't inform us about the
fine-scale articulation of brain function, say how the brain stores and
retrieves individual words.
3. We know a lot about how individual neurons and some small sys­
tems of neurons work, but again little is known about how neurons en­
code words, or even speech sounds - though we know they must do it
somehow.
For the moment, then, the formal/computational approach is among
the best tools we have for understanding the brain at the level of func­
tioning relevant to language, and over the years it has proven a prag­
matically useful perspective.2 Moreover, it's highly unlikely that the
details of neural instantiation of language will be worked out in our life­
time. Still, this doesn't mean we shouldn't continue to try to interpret
evidence of brain localization from aphasia and more recently from PET
scans (Jaeger et al. 1996) in terms of overall grammatical architecture.
In addition, as more comes to be known about the nature of neural
computation in general, there may well be conclusions to be drawn about
what kinds of computations one might find natural in rules of grammar.
For example, we should be able to take for granted by now that no theory
of processing that depends on strict digital von Neumann-style compu­
tation can be correct (Churchland and Sejnowski 1992; Crick 1994); and
this no doubt has ramifications for the theory of competence, if we only
keep our eyes open for them. And of course, conversely: one would hope
that neuroscientists would examine the constraints that the complexity of
language places on theories of neural computation.
1.1.7 Evolutionary Issues
If language is a specialized mental capacity, it might in principle be dif­
ferent from every other cognitive capacity. Under such a conception, lin­
guistic theory is isolated from the rest of cognitive psychology. Again, I
find a less exclusionary stance of more interest. The claim that language is
a specialized mental capacity does not preclude the possibility that it is in
part a specialization of preexisting brain mechanisms. That's character­
istic of evolutionary engineering. For example, many mental phenomena,
such as the organization of the visual field, music, and motor control,
involve hierarchical part-whole relations (or constituent structure); so it
should be no surprise that language does too. Similarly, temporal pat­
terning involving some sort of metrical organization appears in music
(Lerdahl and Jackendoff 1983) and probably motor control, as well as in
language. This does not mean language is derived or "built metaphori­
cally" from any of these other capacities (as one might infer from works
such as Calvin 1990 and Corballis 1991, for instance). Rather, like fingers
and toes, their commonalities may be distinct specializations of a com­
mon evolutionary ancestor.
Pushing Chomsky's conception of language as a "mental organ" a little
further, think about how we study the physical organs. The heart, the
thumb, the kidney, and the brain are distinct physical organs with their
own particular structures and functions. At the same time, they are all
built out of cells that have similar metabolic functions, and they all
develop subject to the constraints of genetics and embryology. Moreover,
they have all evolved subject to the constraints of natural selection. Hence
a unified biology studies them both for their particularities at the organ
level and for their commonalities at the cellular, metabolic, and evolu­
tionary level.
I think the same can be said for "mental organs." Language, vision,
proprioception, and motor control can be differentiated by function and
in many cases by brain area as well. Moreover, they each have their own
very particular properties: only language has phonological segments and
nouns, only music has regulated harmonic dissonance, only vision has
(literal) color. But they all are instantiated in neurons of basically similar
design (Crick 1994; Rosenzweig and Leiman 1989); their particular areas
in the brain have to grow under similar embryological conditions; and
they have to have evolved from less highly differentiated ancestral struc­
tures. It is therefore reasonable to look for reflections of such "cellular­
level" constraints on learning and behavior in general.
Such a conclusion doesn't take away any of the uniqueness of language.
Whatever is special about it remains special, just like bat sonar or the
elephant's trunk (and the brain specializations necessary to use them!).
However, to the extent that the hypothesis of UG bids us to seek a unified
psychological and biological theory, it makes sense to reduce the distance
in "design space" that evolution had to cover in order to get to UG from
our nonlinguistic ancestors. At the same time, the Paradox of Language
Acquisition, with all its grammatical, psychological, developmental, and
neurological ramifications, argues that the evolutionary innovation is
more than a simple enlargement of the primate brain (as for instance
Gould (1980) seems to believe).
1.2 Necessities and Assumptions
So far I have just been setting the context for investigating the design
of mental grammar. Now let us turn to particular elements of its organi­
zation. A useful starting point is Chomsky's initial exposition of the
Minimalist Program (Chomsky 1993, 1995), an attempt to reconstruct
linguistic theory around a bare minimum of assumptions.
Chomsky identifies three components of grammar as "(virtual) con­
ceptual necessities":
Conceptual necessity 1
The computational system must have an interface with the articulatory-
perceptual system (A-P). That is, language must somehow come to be
heard and spoken.
Conceptual necessity 2
The computational system must have an interface with the conceptual-
intentional system (C-I). That is, language must somehow come to
express thought.
Conceptual necessity 3
The computational system must have an interface with the lexicon. That
is, words somehow have to be incorporated into sentences.
This much is unexceptionable, and most of the present study will be
devoted to exploring the implications.
However, Chomsky also makes a number of assumptions that are not
conceptually necessary. Here are some that are particularly relevant for
the purposes of the present study.
Assumption 1 (Derivations)
"The computational system takes representations of a given for and
modifes them" (Chomsky 1993, 6). That is, the computational system
performs derivations, rather than, for example, imposing multiple
simultaneous constraints.
To be slightly more specific, a derivational approach constructs well­
formed structures in a sequence of steps, where each step adds something
to a previous structure, deletes something from it, or otherwise alters it
(say by moving pieces around). Each step is discrete; it has an input,
which is the output of the previous step, and an output, which becomes
the input to the next step. Steps in the middle of a derivation may or may
not be well formed on their own; it is the output that matters. By contrast,
a constraint-based approach states a set of conditions that a well-formed
structure must satisfy, without specifying any alterations performed on
the structure to achieve that well-formedness, and without any necessary
order in which the constraints apply.
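The contrast can be made vivid with a schematic sketch; the miniature "grammar" below is invented purely for illustration and stands in for no particular theory.

```python
# Derivational style: discrete, ordered steps; each step's output is the
# next step's input, and only the final output matters.
def derive():
    tree = ["S"]                        # start symbol
    tree = ["NP", "VP"]                 # step 1: expand S -> NP VP
    tree = ["the dog", "VP"]            # step 2: rewrite NP
    tree = ["the dog", "barked"]        # step 3: rewrite VP
    return tree

# Constraint-based style: no steps and no order; a candidate structure is
# well formed iff it simultaneously satisfies every condition.
CONSTRAINTS = [
    lambda t: len(t) == 2,              # S has exactly two daughters
    lambda t: t[0] == "the dog",        # first daughter is a licensed NP
    lambda t: t[1] == "barked",         # second daughter is a licensed VP
]

def well_formed(tree):
    return all(check(tree) for check in CONSTRAINTS)

assert well_formed(derive())            # both routes sanction the same structure
```

The design difference is not what is licensed but how: the derivational version commits to a sequence of intermediate objects, while the constraint-based version evaluates a finished structure all at once.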
Derivations and constraints are of course not mutually exclusive. In
particular, such important parts of grammar as binding theory are usually
conceived of as constraints on some level of syntactic derivation-though
they are clearly parts of the computational system (see chapter 3). In
addition, as was noted by Chomsky (1972), it is possible to construe a
derivation simply as a series of constrained relations among a succession
of phrase markers (e.g. leading from D-Structure to S-Structure). What
distinguishes a derivation (even in this latter construal) from the sort of
alternative I have in mind is that a derivation involves a sequence of
related phrase markers, of potentially indefinite length. In more strictly
constraint-based theories of grammar (e.g. LFG, HPSG, Generalized
Phrase Structure Grammar (GPSG), Categorial Grammar, Construction
Grammar, Optimality Theory), constraints determine the form of a single
level or a small, theory-determined number of distinct levels. Much earlier
in the history of generative grammar, McCawley (1968a) proposed that
phrase structure rules be thought of as constraints on well-formed trees
rather than as a production system that derives trees.
Chomsky (1993, 10; 1995, 223-224) defends the derivational con­
ception, though he is careful to point out (1995, 380, note 3) that "the
ordering of operations is abstract, expressing postulated properties of the
language faculty of the brain, with no temporal interpretation implied."
That is, derivational order has nothing to do with processing: "the terms
output and input have a metaphorical flavor...." We will return to
Chomsky's defense of derivations here and there in the next three
chapters.
The next assumption is, as far as I can tell, only implicit.
Assumption 2 (Substitution)
The fundamental operation of the computational system is substitution
of one string or structural complex for another in a phrase marker. The
Minimalist Program adds the operations "Select" and "Merge," meant
to take the place of phrase structure expansion in previous Chomskian
theories.
Notably excluded from the repertoire of operations is any variety of
unification (Shieber 1986), in which features contributed by different
constituents can be combined (or superimposed) within a single node.
Unification plays a major role in all the constraint-based approaches
mentioned above.
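For readers unfamiliar with the operation, here is a minimal sketch of unification over feature structures in roughly the sense of Shieber 1986. It is deliberately simplified (real unification grammars use reentrant structures and typed features), and the sample features are invented for the example.

```python
def unify(fs1, fs2):
    """Superimpose two feature structures; return None on a feature clash."""
    result = dict(fs1)
    for feature, value in fs2.items():
        if feature not in result:
            result[feature] = value                 # fs2 contributes new information
        elif isinstance(result[feature], dict) and isinstance(value, dict):
            sub = unify(result[feature], value)     # recurse into substructures
            if sub is None:
                return None
            result[feature] = sub
        elif result[feature] != value:
            return None                             # incompatible values: failure
    return result

# Features contributed by different constituents combine within a single node:
verb_demands = {"agr": {"person": 3, "number": "sing"}}
subject      = {"cat": "NP", "agr": {"number": "sing"}}
print(unify(verb_demands, subject))
# -> {'agr': {'person': 3, 'number': 'sing'}, 'cat': 'NP'}
print(unify(verb_demands, {"agr": {"number": "plur"}}))  # -> None (clash)
```

The crucial property is visible in the last line: where substitution would simply overwrite one value with another, unification combines compatible information and detects the clash when the information is incompatible.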
The next assumption concerns the nature of the lexical interface.
Assumption 3 (Initial lexical insertion)
The lexical interface is located at the initial point in the syntactic
derivation. This assumption takes two forms.
a. From Aspects (Chomsky 1965) through GB, the interface is mediated
by a rule of lexical insertion, which projects lexical entries into X-bar
phrase structures at the level of D(eep)-Structure.
b. In the Minimalist Program, lexical items are combined with each other
by the operation Merge, building up phrase structure recursively.
(This approach resembles the conception of Tree-Adjoining Grammar
(Joshi 1987; Kroch 1987).)
Excluded are a number of other possibilities. One is the idea of "late
lexical insertion," which crops up from time to time in the literature -
starting with Generative Semantics (McCawley 1968b) and continuing
through Distributed Morphology (Halle and Marantz 1993). In this
alternative, syntactic structures are built up and movement rules take
place before lexical items (or, in some variants, phonological features of
lexical items) are inserted into them. We will explore a more radical
alternative in chapter 4.
Another aspect of the standard conception of the lexical interface con­
cerns what lexical insertion inserts.
Assumption 4 (X⁰ lexical insertion)
Lexical insertion/Merge enters words (the X⁰ categories N, V, A, P) into
phrase structure.
Traditionally excluded is the possibility of inserting lexical entries that
are larger than words, for instance lexical VPs. This leads to difficulties
in the treatment of idioms such as kick the bucket, which superficially
look like VPs. Also excluded is the possibility of inserting lexical entries
smaller than words, for instance causative affixes or agreement markers;
these categories must head their own X-bar projections. I will examine
the consequences of assumption 4 in chapters 5-7 and formulate an
alternative.
The next assumption is stated very explicitly by Chomsky (1993, 2).
Assumption 5 (Nonredundancy)
"A working hypothesis in generative grammar has been that ... the
language faculty is nonredundant, in that particular phenomena are not
'overdetermined' by principles of language."
Chomsky observes that this feature is "unexpected" in "complex bio­
logical systems, more like what one expects to find ... in the study of the
inorganic world." He immediately defends this distancing of generative
grammar from interaction with related domains: "The approach has,
nevertheless, proven to be a successful one, suggesting that [it is] more
than just an artifact reflecting a mode of inquiry."
One might question, though, how successful the approach actually has
proven when applied to the lexicon, as Chomsky advocates elsewhere
(1995, 235-236).
I understand the lexicon in a rather traditional sense: as a list of "exceptions,"
whatever does not follow from general principles.... Assume further that the lex­
icon provides an "optimal coding" of such idiosyncrasies.... For the word book,
it seems the optimal coding should include a phonological matrix of the familiar
kind expressing exactly what is not predictable....
As Aronoff (1994) points out, this conception goes back at least to
Bloomfield (1933), who asserts, "The lexicon is really an appendix of the
grammar, a list of basic irregularities" (p. 274).
However, there is a difference between assuming that all irregularities
are encoded in the lexicon and assuming that only irregularities are
encoded in the lexicon, the latter apparently what Chomsky means by
"optimal coding." Chapters 5-7 will evaluate this latter assumption,
suggesting that a certain amount of redundancy is inevitable. (As a hint of
what is to come, consider how one might code the word bookend, so as to
express "exactly what is not predictable" given the existence of the words
book and end.)
Although perhaps formally not so elegant, a linguistic theory that in­
corporates redundancy may in fact prove empirically more adequate as a
theory of competence, and it may also make better contact with a theory
of processing. As a side benefit, the presence of redundancy may bring
language more in line with other psychological systems, to my way of
thinking a desideratum. For instance, depth perception is the product of a
number of parallel systems such as eye convergence, lens focus, stereopsis,
occlusion, and texture gradients. In many cases more than one system
comes up with the same result, but in other cases (such as viewing two­
dimensional pictures or random-dot stereograms) the specialized abilities
of one system come to the fore. Why couldn't language be like that?
1.3 Syntactocentrism and Perfection
Next comes an important assumption that lies behind all versions of
Chomskian generative grammar from the beginning, a view of the lan­
guage faculty that I will dub syntactocentrism.
Assumption 6 (Syntactocentrism)
The fundamental generative component of the computational system is
the syntactic component; the phonological and semantic components are
interpretive.
This assumption derives some of its intuitive appeal from the conception
of a grammar as an algorithm that generates grammatical sentences. Par­
ticularly in the early days of generative grammar, serial (von Neumann­
style) algorithms were the common currency, and it made sense to gen­
erate sentences recursively in a sequence of ordered steps (assumption 1).
Nowadays, there is better understanding of the possibilities of parallel
algorithms - and of their plausibility as models of mental function. Hence
it is important to divorce syntactocentrism from considerations of efficient
computation, especially if one wishes to integrate linguistics with the
other cognitive sciences.
Chomsky, of course, has always been careful to characterize a gen­
erative grammar not as a method to construct sentences but as a formal
way to describe the infinite set of possible sentences. Still, I suspect that
his syntactocentrism in the early days was partly an artifact of the formal
techniques then available.
Another possible reason for syntactocentrism is the desire for non­
redundancy (assumption 5). According to the syntactocentric view, the
discrete infinity of language, which Chomsky takes to be one of its essen­
tial and unique characteristics, arises from exactly one component of the
grammar: the recursive phrase structure rules (or in the Minimalist Pro­
gram, the application of Select and Merge). Whatever recursive properties
phonology and semantics have, they are a reflection of interpreting the
underlying recursion in syntactic phrases.
But is a unique source of discrete infinity necessary (not to mention
conceptually necessary)?
As an alternative to syntactocentrism, chapter 2 will develop an archi­
tecture for language in which there are three independent sources of
discrete infinity: phonology, syntax, and semantics. Each of these com­
ponents is a generative grammar in the formal sense, and the structural
description of a sentence arises from establishing a correspondence among
structures from each of the three components. Thus the grammar as a
whole is to be thought of as a parallel algorithm. There is nothing espe­
cially new in this conception, but it is worth making it explicit.
Is there anything in principle objectionable about such a conception?
Recalling Chomsky's injunction against construing the grammar as the
way we actually produce sentences, we should have no formal problem
with it; it is just a different way of characterizing an infinite set. Does it
introduce unwanted redundancy? I will argue that the three generative
components describe orthogonal dimensions of a sentence, so that it is not
in principle redundant. On the other hand, as we will see, redundancies
are in practice present, and probably necessary for learnability of the
grammar. Is that bad? This remains to be seen.
From the point of view of psychology and neuroscience, of course,
redundancy is expected. Moreover, so are multiple sources of infinite
variability, each with hierarchical structure. One can understand an un­
limited number of hierarchically organized visual scenes and conjure
up an unlimited number of visual images; one can plan and carry out an
action in an unlimited number of hierarchically organized ways; one can
appreciate an unlimited number of hierarchically organized tunes. Each
of these, then, requires a generative capacity in a nonlinguistic modality.
Moreover, animal behavior is now understood to be far more complex
and structured than used to be believed; its variety, especially in higher
mammals, calls for a generative capacity that can interpret and act upon
an unlimited number of different situations. So language is hardly the
only source of generativity in the brain. Why, then, is there any prima
facie advantage in claiming that language is itself generated by a single
source?3
If we wish to explain eventually how language could have arisen
through natural selection, it is also of interest to consider an evolutionary
perspective on syntactocentrism. An obvious question in the evolution of
language is, What are the cognitive antecedents of the modern language
capacity? Bickerton (1990) offers an intriguing and (to me at least) plau­
sible scenario. He asks, How much could be communicated by organisms
that could perceive articulated sounds and establish sound-to-meaning
associations (i.e. words), but that had no (or very rudimentary) capacity
for syntax? His answer is, Quite a lot, as we can see from the speakers of
pidgin languages, from children at the two-word stage, and from agram­
matic aphasics. Moreover, if the claims are to be believed, the apes that
have been taught a form of sign language do manage to communicate
fairly successfully, despite (according to Terrace 1979) lacking any ap­
preciable syntactic organization.4 Bickerton proposes, therefore, that
along the evolutionary pathway to language lay a stage he calls proto­
language, which had meaningful words but lacked (most) syntax. Bick­
erton claims that the ability for protolanguage can still manifest itself in
humans when full language is unavailable, for example in the cases men­
tioned above.
Let us suppose something like Bickerton's story: that earlier hominids
had a capacity to take vocal productions as conventional symbols for
something else - a rudimentary sound-meaning mapping.5 Now of course,
simply stringing together a sequence of conventionalized symbols leaves it
up to the hearer to construct, on grounds of pragmatic plausibility, how the
speaker intends the concepts expressed to be related to one another. That
is, protolanguage is essentially words plus pragmatics. As Pinker and
Bloom (1990) argue, one can see the selective advantage in adding gram­
matical devices that make overt the intended relationships among words -
things like word order and case marking. Having syntax takes much of
the burden of composing sentence meanings off of pure pragmatics.
If one accepts some version of this scenario (and I don't see any alternatives other than "unsolved/unsolvable mystery" waiting in the wings), then syntax is the last part of the language faculty to have evolved. Not that the evolution of syntax is inevitable, either: Bickerton thinks that hominids had protolanguage for a million years or more, but that language appeared only with Homo sapiens, 200,000 years ago or less. (See Bickerton 1990 for the reasons behind this claim; also see Pinker 1992 for a skeptical appraisal.)
On this view, the function of syntax is to regiment protolanguage; it serves as a scaffolding to help relate strings of linearly ordered words (one kind of discrete infinity) more reliably to the multidimensional and relational modality of thought (another kind of discrete infinity). Syntax, then, does not embody the primary generative power of language. It does indeed lie "between" sound and meaning, as in the syntactocentric view, but more as a facilitator or a refinement than as a creative force.
I am the first to acknowledge that such an argument is only suggestive, though I think it does have the right sort of flavor. It does not force us to explain all properties of syntax in evolutionary terms, for instance the existence of adverbs or the particular properties of reflexives, any more than the claim that the human hand evolved forces us to explain the selective advantage of exactly five fingers on each hand. The solutions that natural selection finds are often matters of historical contingency; they have only to be better than what was around before in order to confer a selective advantage. In this sense, the design of language may well be an accretion of lucky accidents.
Chomsky, as is well known, has expressed skepticism (or at least agnosticism) about arguments from natural selection, one of his more extreme statements being, "We might reinterpret [the Cartesians' 'creative principle,' specific to the human species] by speculating that rather sudden and dramatic mutations might have led to qualities of intelligence that are, so far as we know, unique to man, possession of language in the human sense being the most distinctive index of these qualities" (1973, 396). Chomsky might of course be right: especially in the principled absence of fossil vowels and verbs, we have no way of knowing exactly what happened to language a million years ago. Evolutionary arguments, especially those concerning language, cannot usually be as rigorous as syntactic arguments: one cannot, for example, imagine trusting an evolutionary argument that adverbs couldn't evolve, and thereupon modifying one's syntactic theory.
However, there is no linguistic argument for syntactocentrism. To be sure, syntactocentrism has successfully guided research for a long time, but it is still just an assumption that itself was partly a product of historical accident. If syntactic results can be rigorously reinterpreted so as to harmonize with psychological and evolutionary plausibility, I think we should be delighted, not dismissive.
In Chomsky's recent work (especially Chomsky 1995), a new theme has surfaced that bears mention in this context.
Assumption 7 (Perfection)
Language approximates a "perfect" system.
It is not entirely clear what is intended by the term perfect, but hints come from this passage (Chomsky 1995, 221):
If humans could communicate by telepathy, there would be no need for a phonological component, at least for the purposes of communication. . . . These requirements [for language to accommodate to the sensory and motor apparatus] might turn out to be critical factors in determining the inner nature of CHL [the computational system] in some deep sense, or they might turn out to be "extraneous" to it, inducing departures from "perfection" that are satisfied in an optimal way. The latter possibility is not to be discounted.
In other words, language could be perfect if only we didn't have to talk. Moreover, Chomsky speculates that language might be the only such "perfect" mental organ: "The language faculty might be unique among cognitive systems, or even in the organic world, in that it satisfies minimalist assumptions . . . . [T]he computational system CHL [could be] biologically isolated" (1995, 221). This view, of course, comports well with his skepticism about the evolutionary perspective.
I personally find this passage puzzling. My own inclination would be to say that if we could communicate by telepathy, we wouldn't need language. What might Chomsky have in mind? An anecdote that I find revealing comes from Dennett's (1995, 387) recounting of a debate at Tufts University in March 1978.
There were only two interesting possibilities, in Chomsky's mind: psychology could turn out to be "like physics," its regularities explainable as the consequences of a few deep, elegant, inexorable laws; or psychology could turn out to be utterly lacking in laws, in which case the only way to study or expound psychology would be the novelist's way. . . . Marvin Minsky [offered a] "third 'interesting' possibility: psychology could turn out to be like engineering." . . . Chomsky . . . would have no truck with engineering. It was somehow beneath the dignity of the mind to be a gadget or a collection of gadgets.
But of course it is characteristic of evolution to invent or discover "gadgets": kludgy ways to make old machinery over to suit new purposes. The result is not "perfection," as witness the fallibility of the human back. If we were designing a back for an upright walker on optimal physical principles, we would likely not come up with the human design, so susceptible to strain and pain. But if we had to make it over from a mammalian back, this might be about the best we could come up with. If we were designing reproduction from scratch, would we come up with sex as we practice it?
Similarly, I would expect the design of language to involve a lot of Good Tricks (to use Dennett's term) that make language more or less good enough, and that get along well enough with each other and the preexisting systems. But nonredundant perfection? I doubt it. Just as the visual system has overlapping systems for depth perception, language has overlapping systems for regulating θ-roles, among them word order, case, and agreement, as well as overlapping systems for marking topic such as stress, intonation, case, and position. Conversely, many systems of language serve multiple purposes. For instance, the same system of case morphology is often used in a single language for structural case (signaling grammatical function), semantic case (argument or adjunct semantic function), and quirky case (who knows what function). The very same Romance reflexive morphology can signal the binding of an argument, an impersonal subject, the elimination of an argument (decausativization), or nothing in particular (with inherently reflexive verbs).
This is not to say we shouldn't aim for rigor and elegance in linguistic analysis. Admitting that language isn't "perfect" is not license to give up attempts at explanation. In particular, we still have to satisfy the demands of learnability. It is just that we may have to reorient our sense of what "feels like a right answer," away from Chomsky's sense of "perfection" toward something more psychologically and biologically realistic. It may then turn out that what looked like "imperfections" in language, such as the examples just mentioned, are not reason to look for more "perfect" abstract analyses, as Chomsky and his colleagues often have done; rather, they are just about what one should expect.
These issues may seem quite distant from day-to-day linguistic research. Nevertheless, I find it important to work through them, in an effort to uncover some of the biases that form the background for empirical analysis. Much of what is to follow here depends on laying these biases bare and questioning them.
Chapter 2
Interfaces; Representational Modularity
Let us look more closely at the first two "conceptual necessities" of the previous chapter: the interfaces between language and the "articulatory-perceptual" system on one hand and between language and the "conceptual-intentional" system on the other. A curious gap in Chomsky's exposition of the Minimalist Program is the absence of any independent characterization of these interfaces, which now become all-important in constraining the theory of syntax. This chapter will dig the foundations a little deeper and ask what these interfaces must be like.

We will begin with the better understood of the two interfaces, that of language with the "articulatory-perceptual" system. We will then briefly consider the nature of the more controversial "conceptual-intentional" interface, continuing with it in more detail in chapter 3. The present chapter will conclude by discussing a more general conception of mental organization, Representational Modularity, in terms of which these interfaces emerge as natural components. Chapter 4 will take up Chomsky's third interface, between the combinatorial system and the lexicon.
2. 1 The "Artculatory-Perceptual" Iterfaces
Consider speech production. In order to produce speech, the brain must, on the basis of linguistic representations, develop a set of motor instructions to the vocal tract. These instructions are not part of linguistic representations. Rather, the usual claim is that phonetic representations are the "end" of linguistic representation (they are the strictly linguistic representations that most closely mirror articulation) and that there is a nontrivial process of conversion from phonetic form to motor instructions.1 Similarly for speech perception: the process of phonetic perception is a nontrivial conversion from some sort of frequency analysis to phonetic form (if it were trivial, Haskins Laboratories would have been out of business years ago!). That is, the auditory system per se is not a direct source of linguistic representations.
Moreover, the signals coming from the auditory system are not, strictly speaking, compatible with those going to the motor system. If an evil neurosurgeon somehow wired your auditory nerve up directly to the motor strip, you would surely not end up helplessly shadowing all speech you heard. (Who knows what you would do?) It is only at the abstract level of phonetic representation, distinct from both auditory and motor signals, that the two can be brought into correspondence to constitute a unified "articulatory-perceptual" representation.
We must also not forget the counterpart of phonetic representation for signed languages: it must represent the details of articulation and perception in a fashion that is neutral between gestural articulation and visual perception. It must therefore likewise be an abstraction from both.
In fact, phonetic representation (spoken or signed) is not in one-to-one correspondence with either perceptual or motor representations. Consider the relation of auditory representation to phonetic representation. Certain auditory distinctions correspond to phonetic distinctions; but others, such as the speaker's voice quality, tone of voice, and speed of articulation, do not. Likewise, certain phonetic distinctions correspond to auditory distinctions; but others, such as division into discrete phonetic segments, analysis into distinctive features, and the systematic presence of word boundaries, do not.
Similarly for speech production. In particular, phonetic representation does not uniquely encode motor movements. For example, the motor movements involved in speech can be modulated by concurrent nonlinguistic tasks such as holding a pipe in one's mouth, without changing the intended phonetics.
I spoke above of a "conversion" of phonetic representations into motor instructions during speech production. Given just the elementary observations so far, we see that such a conversion cannot be conceived of as a derivation in the standard sense of generative grammar. It is not like, for instance, moving syntactic elements around in a tree, or like inserting or changing phonological features, that is, a series of operations on a formal structure that converts it into another formal structure built out of similar elements. Rather, phonetic representations are formally composed out of one alphabet of elements and motor instructions out of another. Thus the conversion from one to the other is best viewed formally as a set of principles of the general form (1).
(1) General form of phonetic-to-motor correspondence rules
Configuration X in phonetic representation
corresponds to
configuration Y in motor instructions.
That is, these principles have two structural descriptions, one in each of
the relevant domains.
Two important points. First, the principles (1) applying to a phonetic representation underdetermine the motor instructions: recall the example of speaking with a pipe in one's mouth. Moreover, many aspects of coarticulation between speech sounds are none of the business of phonetic representation; they are determined by constraints within motor instructions themselves.
Second, some of the principles (1) may be "permissive" in force. For example, a pause for breath (a motor function, not a linguistic function) is much more likely to occur at a word boundary than within a word or especially within a syllable, but a pause is hardly necessary at any particular word boundary. In other words, the presence of breath intake in motor output is influenced by, but not determined by, word boundaries in phonetic representations. To accommodate such circumstances, the general form (1) must be amplified to (2), where the choice of modality in the rule specifies whether it is a determinative rule (must), a "permissive" rule (may), or a default rule (preferably does).2
(2) General form of phonetic-to-motor correspondence rules
Configuration X in phonetic representation
{must/may/preferably does} correspond to
configuration Y in motor instructions.
Parallel examples can easily be constructed for the relation of auditory perception to phonetic representation and for perception and production of signed languages.
Thus the phonetic-motor interface formally requires three components:
(3) a. A set of phonetic representations available for motor interpretation (the motor interface level of phonology/phonetics)
b. A set of motor representations into which phonetic representations are mapped (the phonetic interface level of motor representation)
c. A set of phonetic-motor correspondence rules of the general form (2), which create the correlation between the levels (3a) and (3b)
This prescription can be generalized to the structure of any interface between two distinct forms of representation.
(4) Components of an interface between system A and system B (an A-B interface)
a. A set of representations in system A to which the interface has access (the system B interface level(s) of system A, or BIL_A)
b. A set of representations in system B to which the interface has access (the system A interface level(s) of system B, or AIL_B)
c. A set of A-to-B correspondence rules, of the general form illustrated in (5), which create the correlation between BIL_A and AIL_B
(5) General form of correspondence rules
Configuration X in BIL_A
{must/may/preferably does} correspond to
configuration Y in AIL_B.
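To make this schema concrete, here is a minimal sketch in Python. It is my own illustrative encoding, not part of the theory; all the names (Modality, Rule, Interface) are invented for the example, and the breath-pause rule discussed above is recast in this format.

```python
# A minimal sketch of the interface components in (4)-(5); my own encoding.
from dataclasses import dataclass
from enum import Enum
from typing import Any, Callable, List

class Modality(Enum):
    MUST = "must"                   # determinative: violation is fatal
    MAY = "may"                     # permissive: licenses, never requires
    PREFERABLY = "preferably does"  # default: violable under other pressures

@dataclass
class Rule:
    """A correspondence rule pairs a structural description in BIL_A
    with one in AIL_B under a modality, as in (5)."""
    matches_A: Callable[[Any], bool]   # configuration X in BIL_A
    matches_B: Callable[[Any], bool]   # configuration Y in AIL_B
    modality: Modality

@dataclass
class Interface:
    """An A-B interface: correspondence rules relating BIL_A to AIL_B."""
    rules: List[Rule]

    def fatal_violations(self, a_struct: Any, b_struct: Any) -> List[Rule]:
        # Only MUST rules can make a pairing ill formed; MAY rules never
        # require anything, and violated PREFERABLY rules are merely
        # dispreferred.
        return [r for r in self.rules
                if r.modality is Modality.MUST
                and r.matches_A(a_struct)
                and not r.matches_B(b_struct)]

# The "permissive" breath-pause rule discussed above, in this format:
pause = Rule(matches_A=lambda phon: "#" in phon,        # a word boundary
             matches_B=lambda motor: "PAUSE" in motor,  # a breath intake
             modality=Modality.MAY)
iface = Interface(rules=[pause])
assert iface.fatal_violations("big#house", ["ARTICULATE"]) == []
```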
Crucially, the correspondence rules do not perform derivations in the standard sense of mapping a structure within a given format into another structure within the same format: they map between one format of representation and another. Furthermore, they are not "translations" or one-to-one mappings from A to B; a given representation in BIL_A constrains, but does not uniquely determine, the choice of representation in AIL_B. Thus, when Chomsky (1993, 2) speaks of the "interface level A-P" as "providing the instructions for the articulatory-perceptual . . . system[s]," this is an oversimplification. Phonetic representation is only indirectly a set of instructions to the motor system; and it is not at all a set of instructions to the auditory system, but rather an indirect consequence of input to the auditory system.
In fact, it is clear that there cannot be a single interface of language with the articulatory-perceptual system. The principles placing language in correspondence with articulation are by necessity quite different from those placing language in correspondence with auditory perception. Thus Chomsky's assertion of the "virtual conceptual necessity" of an articulatory-perceptual interface must be broken into two parts: language must have an articulatory interface and a perceptual interface.
However, it is a conceptual possibility that one component of these two interfaces is shared: that the articulatory interface level of language is the same as the perceptual interface level. The broadly accepted assumption, of course, is that there is such a shared level: phonetic representation. This is the force of Chomsky's assertion that there is "an" interface, which he calls PF (phonetic form). But the existence of such a shared representation is (at least in part) an empirical matter.3
One further point bears mention. Chomsky asserts the importance of a principle he calls Full Interpretation.
We might . . . [say] that there is a principle of full interpretation (FI) that requires that every element of PF and LF . . . must receive an appropriate interpretation. . . . None can simply be disregarded. At the level of PF, each phonetic element must be licensed by some physical interpretation. The word book, for example, has the phonetic representation [buk]. It could not be represented [fburk], where we simply disregard [f] and [r]; that would be possible only if there were particular rules or general principles deleting these elements. (1986, 98)
There is an important ambiguity in this statement. On the one hand, it is surely correct that every element of PF that can be related to auditory and motor representations must be so related. We clearly do not want a theory of PF in which the correspondence rules can arbitrarily disregard particular segments.4 On the other hand, there may be aspects of PF that are present for the purpose of level-internal coherence but that are invisible to the correspondence rules. For example, as noted earlier, word boundaries are not consistently interpretable in auditory or motor terms; nevertheless they are present in PF so that the phonetic string can be related to the lexicon and ultimately to syntax and meaning. Nor do phonetic features necessarily correspond perfectly consistently to motor representation (much less to "physical interpretation"). Thus the principle FI needs a certain amount of modulation to be accurate with respect to its role in PF.
This would not be so important, were it not for the fact that Chomsky uses FI at the PF interface to motivate an analogous principle at the LF interface with more crucial consequences, to which we return in chapter 4.
2.2 The Phonology-Syntax Interface
In this section we will look more closely within the "computational system."

First, though, a terminological point. The term syntax can be used generically to denote the structural principles of any formal system, from human language as a whole to computer languages, logical languages, and music. Alternatively, it can be used more narrowly to denote the structural principles that determine the organization of NPs, VPs, and so on. In the present study I will use the term only in the latter, narrow sense, often calling it (narrow) syntax as a reminder; I will use the term formal system or generative system for the general sense.

With this clarification of the terminology, let us ask, Is PF part of (narrow) syntax? The general consensus seems to be that it is not. The basic intuition is that, for example, noun, an element of (narrow) syntax, and +voice, an element of PF, should not belong to the same component of grammar. PF is of course a level of phonological structure. Now it used to be thought (Chomsky 1965; 1980, 143; Chomsky and Halle 1968) that phonological structure is simply a low (or late) level of syntactic structure, derived by erasing syntactic boundaries, so that all that is visible to phonological rules is the string of words. Hence phonological structure (and therefore PF) could be thought of as derived essentially by a continuation of (narrow) syntactic derivation. But the revolution in phonological theory starting in the mid-1970s (e.g., Liberman and Prince 1977; Goldsmith 1976) has altered that view irrevocably.
On current views, which are as far as I know nearly universally accepted among phonologists, the units of phonological structure are such entities as segments, syllables, and intonational phrases, which do not correspond one to one with the standard units of syntax. For instance, syllabification and foot structure (phonological entities) often cut across morpheme boundaries (lexical/syntactic entities).
(6) a. Phonological: [or+ga+ni][za+tion]
b. Morphological: [[[organ]iz]ation]
English articles form a phonological unit with the next word (i.e., they cliticize), whether or not they form a syntactic constituent with it.
(7) a. Phonological: [abig] [house], [avery] [big] [house]
b. Syntactic: [[a] [[big] [house]]], [[a] [[[very] big] [house]]]
And intonational phrasing cuts across syntactic phrase boundaries.
(8) a. Phonological: [this is the cat] [that ate the rat] [that ate the cheese]
b. Syntactic: [this is [the cat [that [ate [the rat [that [ate [the
cheese]]]]]]]]
Consequently, the constituent structure of PS cannot be produced by
simply erasing syntactic boundaries.
To make this point clearer, consider the unit [abig] in (7a). There seems little sense in trying to identify its syntactic category, in trying to decide whether it is a determiner or an adjective, since as a unit it has no identifiable syntactic properties. Nor can it be produced by any standardly sanctioned syntactic operation, since the determiner can't lower to the adjective position and the adjective can't undergo head-to-head movement to the determiner (or if it can, the adverb in a very big house can't). Rather, [abig] is just a phonological word, a nonsyntactic unit.
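The mismatch can also be put in simple data-structure terms. The toy fragment below (my own encoding, purely illustrative) gives the same word string the two analyses in (7) and checks that both exhaust the string while bracketing it differently:

```python
# Two parallel constituent analyses of "a big house", as in (7); my encoding.
words = ["a", "big", "house"]

# Phonological constituency: the article cliticizes to the next word.
phonological = [["a", "big"], ["house"]]            # [abig] [house]

# Syntactic constituency: the article is sister to the rest of the NP.
syntactic = [["a"], [["big"], ["house"]]]           # [[a] [[big] [house]]]

def leaves(tree):
    """Flatten a nested-list constituent structure to its word string."""
    if isinstance(tree, str):
        return [tree]
    return [w for sub in tree for w in leaves(sub)]

# Both analyses exhaustively analyze the same string...
assert leaves(phonological) == leaves(syntactic) == words
# ...but the bracketings differ, so neither analysis can be obtained
# from the other by mere bracket erasure.
assert phonological != syntactic
```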
Similarly, intonational phrases such as those in (8a) cannot always be identified with syntactic units such as NP, VP, or CP. Although it is true that intonational phrasing may affect the interpretation of focus (which is often taken to be syntactic), such phrasing may cut across syntactic constituency. For example, the italicized intonational phrases in (9) are not complete syntactic constituents (capitals indicate focal stress).
(9) What did Bill think of those DOGS?
[Well], [he very much LIKED] [the collie and the SPANiel], [but he didn't much CARE] [for the boxer and the GREYhound].
Consider also the sentence in (10), marked with two different intonational phrasings (both of which are attested on television). Not only are there constituents here that are not syntactic constituents, there is no detectable semantic difference between the two versions.
(10) a. [Sesame Street is a production of] [the Children's Television Workshop].
b. [Sesame Street] [is a production] [of the Children's Television Workshop].
Still, intonational phrasing is not entirely arbitrary with respect to syntax; an intonation break cannot be felicitously added after Children's in (10b), for instance.
The upshot is that intonational structure is constrained by syntactic structure but not derived from it; some of its aspects are characterized by autonomous phonological principles whose structural descriptions make no reference to syntax, for instance a general preference for the intonational phrases of a sentence to be approximately equal in length (see, among others, Gee and Grosjean 1983; Selkirk 1984; Jackendoff 1987a, appendix A; Hirst 1993).
More generally, there appears to be some consensus that phonological rules cannot refer directly to syntactic categories or syntactic constituency. Rather, they refer to prosodic constituency, which, as we have just seen, is only partially determined by syntactic structure. (See, for instance, Zwicky and Pullum 1983 and the papers in Inkelas and Zec 1990.) Conversely, syntactic rules do not refer to phonological domains or to the phonological content of words.5
Stepping back from the details, what have we got? It appears that phonological structure and syntactic structure are independent formal systems, each of which provides an exhaustive analysis of sentences in its own terms.
1. The generative system for syntactic structure (SS) contains such primitives as the syntactic categories N, V, A, P (or their feature decompositions) and functional categories (or features) such as Number, Gender, Person, Case, and Tense. The principles of syntactic combination include the principles of phrase structure (X-bar theory or Bare Phrase Structure or some equivalent), the principles of long-distance dependencies, the principles of agreement and case marking, and so forth.
2. The generative system for phonological structure (PS) contains such primitives as phonological distinctive features, the notions of syllable, word, and phonological and intonational phrase, the notions of stress, tone, and intonation contour. The principles of phonological combination include rules of syllable structure, stress assignment, vowel harmony, and so forth.
Each of these requires a generative grammar, a source of "discrete infinity," neither of which can be reduced to the other.
Given the coexistence of these two independent analyses of sentences, it is a conceptual necessity, in precisely Chomsky's sense, that the grammar contain a set of correspondence rules that mediate between syntactic and phonological units. Such rules must be of the general form given in (11), pairing a structural description in syntactic terms with one in phonological terms.
(11) General form for phonological-syntactic correspondence rules (PS-SS rules)
Syntactic structure X
{must/may/preferably does} correspond to
phonological structure Y.
Two simple examples of such rules are given in (12).
(12) a. A syntactic X0 constituent preferably corresponds to a phonological word.
b. If syntactic constituent X1 corresponds to phonological constituent Y1, and syntactic constituent X2 corresponds to phonological constituent Y2, then the linear order of X1 and X2 preferably corresponds to the linear order of Y1 and Y2.
These rules contain the modality "preferably" because they are subject to exception. (12a), which asserts the (approximate) correspondence of syntactic to phonological words, has exceptions in both directions. Clitics such as a in (7) are syntactic words that are not phonological words and so must adjoin to adjacent phonological words. Conversely, compounds such as throttle handle are syntactic words, X0 categories that act as single syntactic units, but they consist of more than one phonological word.
(12b), which asserts the correspondence of linear order in phonological and syntactic structure, is normally taken to be an absolute rather than a default principle; that is why earlier versions of syntax were able to assume that phonological structure results from syntactic bracket erasure, preserving order.6 However, there prove to be certain advantages in allowing (12b) to have exceptions forced by other principles. I have in mind clitic placement in those dialects of Serbo-Croatian where the clitic cluster can follow the first phonological constituent, as in (13) (from Halpern 1995, 4).
(13) Taj je covek svirao klavir.
that AUX man played piano
Here the auxiliary appears in PF between the determiner and the head noun of the subject, syntactically an absurd position. Halpern (1995) and Anderson (1995) suggest that such clitics are syntactically initial but require phonological adjunction to a host on their left; the resulting placement in PF is the optimal compromise possible between their syntactic position and rule (12b).
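For concreteness, this "optimal compromise" can be sketched as a tiny optimization over candidate placements. The encoding below is mine, not Halpern's or Anderson's; the point is only that an inviolable demand (a phonological host on the left) outranks the violable preference of (12b) for staying in initial position.

```python
# A toy sketch of clitic placement in (13); my own illustrative encoding.
words = ["taj", "covek", "svirao", "klavir"]   # "that man played piano"
clitic = "je"                                   # the auxiliary clitic

def cost(position: int) -> tuple:
    hostless = 1 if position == 0 else 0   # hard: needs a host on its left
    displacement = position                # soft: distance from initial slot
    return (hostless, displacement)        # compared lexicographically

best = min(range(len(words) + 1), key=cost)
placed = words[:best] + [clitic] + words[best:]
print(" ".join(placed))   # taj je covek svirao klavir  (second position)
```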
This overall conception of the phonology-syntax interface leads to the following methodology: whenever we find a phonological phenomenon in which syntax is implicated (or vice versa), a PS-SS correspondence rule of the form (11) must be invoked. In principle, of course, PS-SS correspondence rules might relate anything phonological to anything syntactic, but in fact they do not. For example, syntactic rules never depend on whether a word has two versus three syllables (as stress rules do); and phonological rules never depend on whether one phrase is c-commanded by another (as syntactic rules do). That is, many aspects of phonological structure are invisible to syntax and vice versa. The empirical problem, then, is to find the most constrained possible set of PS-SS rules that both properly describes the facts and is learnable. That is, UG must delimit the PS-SS correspondence rules, in addition to the rules of phonology proper and the rules of syntax proper. Such considerations are essentially what we see going on in discussions of the phonology-syntax interface such as those cited above, though they are rarely couched in the present terms.
Returning to the initial question of this section, we conclude that PF, the articulatory-perceptual interface level of language, is part of phonological structure, and therefore it is not a level of (narrow) syntactic structure. In fact, (narrow) syntax, the organization of NPs and VPs, has no direct interface with the articulatory-perceptual system; rather, it interfaces with phonological structure, which in turn interfaces with articulation and perception. In trying to formulate a theory of (narrow) syntax, then, we may ask what point in the derivation of syntactic structure is accessible to the PS-SS correspondence rules. Leaving the question open until chapter 4, I will use the term Phonological Interface Level of syntax (PIL_SS) for this level of syntax. (This presumes there is only a single syntactic level that interfaces with phonology; chapter 4 will show why it is unlikely that there is more than one.)
Likewise, there has to be a particular level of phonological structure that is available to the PS-SS correspondence rules. Let us call this the Syntactic Interface Level of phonology (SIL_P). The full phonology-syntax interface, then, consists of PIL_SS, SIL_P, and the PS-SS correspondence rules.
In Chomsky's exposition of the Minimalist Program, the closest counterpart of this interface is the operation Spell-Out. The idea is that "before" Spell-Out (i.e., on the (narrow) syntactic side of a derivation, or what Chomsky calls the overt syntax), phonological features are unavailable to rules of grammar. Spell-Out strips off the features of phonological relevance and delivers them to "the module Morphology, which constructs wordlike units that are then subjected to further phonological processes . . . and which eliminates features no longer relevant to the computation" (Chomsky 1995, 229); "in particular, the mapping to PF . . . eliminates formal [i.e., syntactic] and semantic features" (Chomsky 1995, 230). In other words, phonological features are accessible only to phonology-morphology, which in turn cannot access syntactic and semantic features; and Spell-Out, the analogue of the PS-SS correspondence rules in the present proposal, accomplishes the transition.
From the present perspective, the main trouble with Spell-Out (beyond its vagueness) is that it fails to address the independent generative capacity of phonological structure: the principles of syllable and foot structure, the existence and correlation of multiple tiers, and the recursion of phonological constituency that is in part orthogonal to syntactic constituency, as seen in (6)-(10). When the phonological component is invested with an independent generative capacity, this problem drops away immediately.
2.3 The "Conceptual-Intentional" Interace
Next let us consider the "conceptual-intentional" system. I take it that Chomsky intends by this term a system of mental representations, not part of language per se, in terms of which reasoning, planning, and the forming of intentions take place. Most everyone assumes that there is some such system of mind, and that it is also responsible for the understanding of sentences in context, incorporating pragmatic considerations and "encyclopedic" or "world" knowledge. Let us provisionally call this system of representations conceptual structure (CS); further differentiation will develop in section 2.6.
Whatever we know about this system, we know it is not built out of nouns and verbs and adjectives; I will suppose, following earlier work (Jackendoff 1983, 1990), that its units are such entities as conceptualized physical objects, events, properties, times, quantities, and intentions. These entities (or their counterparts in other semantic theories) are always assumed to interact in a formal system that mirrors in certain respects the hierarchical structure of (narrow) syntax. For example, where (narrow) syntax has structural relations such as head-to-complement, head-to-specifier, and head-to-adjunct, conceptual structure has structural relations such as predicate-to-argument, category-to-modifier, and quantifier-to-bound variable. Thus, although conceptual structure undoubtedly constitutes a syntax in the generic sense, its units are not NPs, VPs, etc., and its principles of combination are not those used to combine NPs, VPs, etc.; hence it is not syntax in the narrow sense. In particular, unlike syntactic and phonological structures, conceptual structures are (assumed to be) purely relational, in the sense that linear order plays no role.7
Conceptual structure must provide a formal basis for rules of inference and the interaction of language with world knowledge; I think it safe to say that anyone who has thought seriously about the problem realizes that these functions cannot be defined using structures built out of standard syntactic units. In fact, one of the most famous of Chomsky's observations is that sentence (14) is perfect in terms of (narrow) syntax and that its unacceptability lies in the domain of conceptualization.
(14) Colorless green ideas sleep furiously.
Similarly, the validity of the inferences expressed in (15) and the invalidity of those in (16) is not a consequence of their syntactic structures, which are parallel; it must be a consequence of their meaning.
(15) a. Fritz is a cat. Therefore, Fritz is an animal.
b. Bill forced John to go. Therefore, John went.
(16) a. Fritz is a doll. #Therefore, Fritz is an animal.
b. Bill encouraged John to go. #Therefore, John went.
If conceptual structure, the form of mental representation in which contextualized interpretations of sentences are couched, is not made out of syntactic units, it is a conceptual necessity that the theory of language contain rules that mediate between syntactic and conceptual units. These rules, the SS-CS correspondence rules, mediate the interface between syntax and the conceptual-intentional system. Their general form can be sketched as in (17).8
(17) General form of SS-CS correspondence rules
Syntactic structure X
{must/may/preferably does} correspond to
conceptual structure Y.
The theory of the SS-CS interface must also designate one or more levels of syntactic structure as the Conceptual Interface Level(s) or CIL_SS, and one or more levels of conceptual structure as the Syntactic Interface Level(s) or SIL_CS. Early versions of the Extended Standard Theory such as Chomsky 1972 and Jackendoff 1972 proposed multiple CILs in the syntax. However, a simpler hypothesis is that there is a single point in the derivation of syntactic structure that serves as the interface level with conceptual structure. This is essentially Chomsky's (1993) assumption, and I will adopt it here, subject to disconfirming evidence.
Chomsky identifies the level of language that interfaces with the conceptual-intentional system as Logical Form (LF), a level built up out of syntactic units such as NPs and VPs. Thus, in our more detailed exposition of the interface, LF plays the role of the CIL_SS. In his earlier work at least, Chomsky explicitly dissociates LF from inference, making clear that it is a level of (narrow) syntax and not conceptual structure itself: "Determination of the elements of LF is an empirical matter not to be settled by a priori reasoning or some extraneous concern, for example codification of inference" (Chomsky 1980, 143; an almost identical passage occurs in Chomsky 1986, 205). In part, Chomsky is trying to distance himself here from certain traditional concerns of logic and formal semantics, an impulse I share (Jackendoff 1983, chapters 3, 4, 11, for instance). Still, there has to be some level of mental representation at which inference is codified; I take this to be conceptual structure, a level that is not encoded in (narrow) syntactic terms. (Incidentally, contra one way of reading this quotation from Chomsky, it is clear that speaker intuitions about inference are as "empirical" as intuitions about grammaticality; consequently theories of a level that codifies inference are potentially as empirically based as theories of a level that codifies grammaticality.)
I agree with Chomsky that, although conceptual structure is what language expresses, it is not strictly speaking a part of the language faculty; it is language independent and can be expressed in a variety of ways, partly depending on the syntax of the language in question. I take conceptual structure to be a central cognitive level of representation, interacting richly with other central cognitive capacities (of which more in section 2.6 and chapter 8). Language is not necessary for the use of conceptual structure: it is possible to imagine nonlinguistic organisms such as primates and babies using conceptual structures as part of their encoding of their understanding of the world. On the other hand, the SS-CS correspondence rules are part of language: if there were no language, such rules would have no point; it would be like ending a bridge in midair over a chasm.
There has been constant pressure within linguistic theory to minimize the complexity of the SS-CS interface. For example, Generative Semantics sought to encode aspects of lexical meaning in terms of syntactic combination (deriving kill from the same underlying syntactic structure as cause to become not alive, to cite the most famous example (McCawley 1968b)). Montague Grammar (and more generally, Categorial Grammar) seeks to establish a one-to-one relation between principles of syntactic and semantic combination. More recently, GB often appeals to Baker's (1988) Uniformity of Theta Assignment Hypothesis (UTAH), which proposes a one-to-one relation between syntactic configurations and thematic roles (where a thematic role is a conceptual notion: an Agent, for instance, is a character playing a role in an event); and the field is witnessing a revival of the syntacticization of lexical semantics (e.g., Hale and Keyser 1993), a topic to which we return in chapter 5.
In my opinion, the evidence is mounting that the SS-CS interface has many of the same "sloppy" characteristics as the PS-SS interface. First of all, it is widely accepted that syntactic categories do not correspond one to one with conceptual categories. All physical object concepts are expressed by nouns, but not all nouns express physical object concepts (consider earthquake, concert, place, redness, laughter, justice). All verbs express event or state concepts, but not all event or state concepts are expressed by verbs (earthquake and concert again). Prepositional phrases can express places (in the cup), times (in an hour), or properties (in the pink). Adverbs can express manners (quickly), attitudes (fortunately), or modalities (probably). Thus the mapping from conceptual category to syntactic category is many-to-many, though with interesting skewings that probably enhance learnability. (18) schematizes some of the possibilities (see Jackendoff 1983, chapters 3 and 4, for justification of this ontology in conceptual structure).
(18) N: Object (dog), Situation (concert), Place (region), Time (Tuesday), etc.
V: Situation (Events and States)
A: Property
P: Place (in the house), Time (on Tuesday), Property (in luck)
Adv: Manner (quickly), Attitude (fortunately), Modality (probably)
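The many-to-many character of (18) can be displayed as a simple relation; the encoding below is mine and purely illustrative.

```python
# The category mapping in (18) as a relation; my own toy encoding.
syn_to_cs = {
    "N":   {"Object", "Situation", "Place", "Time"},
    "V":   {"Situation"},
    "A":   {"Property"},
    "P":   {"Place", "Time", "Property"},
    "Adv": {"Manner", "Attitude", "Modality"},
}

# One conceptual category can be expressed by several syntactic categories...
assert {cat for cat, cs in syn_to_cs.items() if "Place" in cs} == {"N", "P"}
# ...and one syntactic category can express several conceptual categories,
# so the mapping is many-to-many rather than one-to-one.
assert len(syn_to_cs["N"]) > 1
```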
Second, other than argument structure, much of the conceptual material bundled up inside a lexical item is invisible to syntax, just as phonological features are. As far as syntax can see, cat and dog are indistinguishable singular count nouns, and eat and drink are indistinguishable verbs that take an optional object. That is, the syntactic reflexes of lexical meaning differences are relatively coarse. (See Pinker 1989 for much data and elaboration.)
Third, some syntactic distinctions are related only sporadically to conceptual distinctions. Consider grammatical gender. In the familiar European languages, grammatical gender classes are a hodgepodge of semantic classes, phonological classes, and brute force memorization. But from the point of view of syntax, we want to be able to say that, no matter for what crazy reason a noun happens to be feminine gender, it triggers the very same agreement phenomena in its modifiers and predicates.9
Fourth, a wide range of conceptual distinctions can be expressed in terms of apparently identical syntactic structure. For instance, the syntactic position of direct object can express the thematic roles Theme, Goal, Source, Beneficiary, or Experiencer, depending on the verb (Jackendoff 1990, chapter 11); some of these NPs double as Patient as well.
(19) a. Emily threw the ball. (object = Theme/Patient)
b. Joe entered the room. (object = Goal)
c. Emma emptied the sink. (object = Source/Patient)
d. George helped the boys. (object = Beneficiary)
e. The story annoyed Harry. (object = Experiencer)
f. The audience applauded the clown. (object = ?)
To claim dogmatically that these surface direct objects must all have different underlying syntactic relations to the verb, as required by UTAH, necessarily results in increasing unnaturalness of underlying structures and derivations. We return to this issue in the next section.
Fifth, a wide range of syntactic distinctions can signal the very same conceptual relation. For instance, as is well known (Verkuyl 1972; Dowty 1979; Declerck 1979; Hinrichs 1985; Krifka 1992; Tenny 1994; Jackendoff 1991, 1996e), the telic/atelic distinction (also called the temporally delimited/nondelimited distinction) can be syntactically differentiated through choice of verb (20a), choice of preposition (20b), choice of adverbial (20c), and choice of determiner in subject (20d), object (20e), or prepositional object (20f).
(20) a. John destroyed the cart (in/*for an hour). (telic)
John pushed the cart (for/*in an hour). (atelic)
b. John ran to the station (in/*for an hour). (telic)
John ran toward the station (for/*in an hour). (atelic)
c. The light flashed once (in/*for an hour). (telic)
The light flashed constantly (for/*in an hour). (atelic)
d. Four people died (in/*for two days). (telic)
People died (for/*in two days). (atelic)
e. John ate lots of peanuts (in/*for an hour). (telic)
John ate peanuts (for/*in an hour). (atelic)
f. John crashed into three walls (in/*for an hour). (telic)
John crashed into walls (for/*in an hour). (atelic)
So far as I know, there have been no successful attempts to find a uniform underlying syntactic form that yields this bewildering variety of surface forms; rather, it is nearly always assumed that this phenomenon involves a somewhat complex mapping between syntactic expression and semantic interpretation.10
In short, the mapping between syntactic structure and conceptual structure is a many-to-many relation between structures made up of different sets of primitive elements. Therefore it cannot be characterized derivationally, that is, by a multistep mapping, each step of which converts one syntactic structure into another. As in the case of the PS-SS interface, then, the empirical problem is to juggle the content of syntactic structure, conceptual structure, and the correspondence rules between them, in such a way as to properly describe the interaction of syntax and semantics and at the same time make the entire system learnable.
2.4 Embedding Mismatches between Syntactic Structure and Conceptual Structure
Let me illustrate the way expressive power can be juggled between syntax and the SS-CS interface. Perhaps the most basic principle of the SS-CS correspondence is the approximate preservation of embedding relations among maximal syntactic phrases. This may be stated as (21).
(21) If syntactic maximal phrase X1 corresponds to conceptual constituent Z1, and syntactic maximal phrase X2 corresponds to conceptual constituent Z2, then, if X1 contains X2, Z1 preferably contains Z2.
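Stated procedurally, (21) is a check over paired syntactic and conceptual trees. The sketch below uses my own nested-structure encoding and is illustrative only; since (21) is a default, a failed check marks a dispreferred pairing rather than an ill-formed one.

```python
# A minimal sketch of the default principle (21); my own encoding.
def contains(outer: dict, inner: dict) -> bool:
    """True if constituent `inner` occurs somewhere inside `outer`."""
    return any(child is inner or contains(child, inner)
               for child in outer.get("children", []))

def respects_21(x1: dict, x2: dict, z1: dict, z2: dict) -> bool:
    """X1 ~ Z1 and X2 ~ Z2: if X1 contains X2, Z1 should contain Z2.
    A False result is dispreferred, not fatal, since (21) is a default."""
    return contains(z1, z2) if contains(x1, x2) else True

# The unmarked case: syntactic embedding matched by conceptual embedding.
np = {"label": "NP the cat", "children": []}
vp = {"label": "VP saw the cat", "children": [np]}
cat = {"label": "CAT", "children": []}
see = {"label": "SEE(X, CAT)", "children": [cat]}
assert respects_21(vp, np, see, cat)   # the default is satisfied
```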
A subcase of (21) is the standard assumption that semantic arguments of a proposition normally appear as syntactic arguments (complements or specifier) of the clause expressing that proposition. This leaves open which semantic argument corresponds to which syntactic position. Do Agents always appear in the same syntactic position? Do Patients? Do Goals? UTAH and the Universal Alignment Hypothesis of Relational Grammar (Rosen 1984; Perlmutter and Postal 1984) propose that this mapping is biunique, permitting the SS-CS connection to be absolutely straightforward. On the other hand, the consequence is that the sentences in (19) must be syntactically differentiated at some underlying level in order to account for their semantic differences. Their superficial syntactic uniformity, then, requires complexity in the syntactic derivation.
Various less restrictive views of argument linking, including those of Carter (1976), Anderson (1977), Ostler (1979), Marantz (1984), Foley and Van Valin (1984), Carrier-Duncan (1985), Grimshaw (1990), Bresnan and Kanerva (1989), Jackendoff (1990, chapter 11), and Dowty (1991), appeal to some sort of hierarchy of semantic argument types that is matched with a syntactic hierarchy such as subject-direct object-indirect object. On this view, there need be no syntactic distinction among the sentences in (19), despite the semantic differences; the matching of hierarchies is a function of the SS-CS rules. Hence syntax is kept simple and the complexity is localized in the correspondence rules.11
The mapping of argument structure is not the only place where there is a trade-off between complexity in the syntax and complexity in the correspondence rules. Traditional generative grammar has assumed that (21) is not a default rule but a rigid categorical rule (i.e., preferably is omitted). Much of the tradition developed in response to apparent counterexamples. Let us briefly consider a few progressively more problematic examples.
First, a standard argument, part of the canon, is that (22a) must be derived from an underlying syntactic structure something like (22b), so that Bill can appear within the subordinate clause in consonance with its role as an argument of the subordinate proposition.
(22) a. [Bill seems [to be a nice fellow]].
b. [e seems [Bill to be a nice fellow]]
However, (21) as stated is also consistent with the sort of analysis adopted within the LFG and HPSG traditions: Bill simply appears syntactically outside the clause expressing the embedded proposition, and it is associated with the embedded proposition by a principle that overrules (21). The traditional solution indeed simplifies the SS-CS interface, but at the cost of adding complexity to syntax (namely the application of a raising rule). The alternative instead simplifies the syntax by "base-generating" (22a); the cost is an additional principle in the correspondence rules. At the moment I am not advocating one of these views over the other: the point is only that they are both reasonable alternatives.
Similarly, traditional generative grammar has held that the final relative clause in (23a) is extraposed from an underlying structure like (23b), in consonance with its being a modifier of man.
(23) a. A man walked in who was from Philadelphia.
b. [A man who was from Philadelphia] walked in.
Alternatively, (23a) could be "base-generated," and the relative clause could be associated with the subject by a further correspondence rule that overrides the default case (21). (See Culicover and Rochemont 1990 for one version of this.) That the latter view has some merit is suggested by the following example, of a type pointed out as long ago as Perlmutter and Ross 1970:
(24) A man walked in and a woman walked out who happened to look sort of alike.
Here the relative clause cannot plausibly be hosted in underlying structure by either of the noun phrases to which it pertains (a man and a woman), because it contains a predicate that necessarily applies to both jointly.
(22a) and (23a) are cases where a syntactic constituent is "higher" than it should be for its semantic role. (25a,b) are cases where the italicized syntactic constituent is "lower" than it should be.
(25) a. Norbert is, I think, a genius.
(= I think Norbert is a genius.)
b. An occasional sailor walked by.
(= Occasionally a sailor walked by.)
Earlier work in transformational grammar (e.g., Ross 1967) did not hesitate to derive these by lowering the italicized phrase into its surface position from the position dictated by its sense. More recent traditions, in which lowering is anathema, have been largely silent on such examples. Again, it is possible to imagine a correspondence rule that accounts for the semantics without any syntactic movement, but that overrides (21).
This is not the point at which these alternatives can be settled. I present them only as illustrations of how the balance of expressiveness can be shifted between the syntax proper and the correspondence rules. The point in each case is that some mismatch between meaning and surface syntax exists, and that the content of this mismatch must be encoded somewhere in the grammar, either in syntactic movement or in the SS-CS correspondence. Many more such situations will emerge in chapter 3.
2.5 The Tripartite Parallel Architecture
Let us now integrate the four previous sections. The traditional hypothesis of the autonomy of syntax amounts to the claim that syntactic rules have no access to nonsyntactic features except via an interface. Given the distinctness of auditory, motor, phonological, syntactic, and conceptual information, we can expand this claim, and see phonology and conceptual structure as equally autonomous generative systems.

We can regard a full grammatical derivation, then, as three independent and parallel derivations, one in each component, with the derivations imposing mutual constraints through the interfaces. The grammatical structure of a sentence can be regarded as a triple, (PS, SS, CS). Following Chomsky (1993), we can think of a (narrow) syntactic derivation as "converging" if it can be mapped through the interfaces into a well-formed phonological structure and conceptual structure; it "crashes" if no such mapping can be achieved. Figure 2.1 sketches the layout of such a grammar. In addition, phonological structure is connected through other sets of correspondence rules with auditory and motor information, as we will see shortly.

[Figure 2.1 The tripartite parallel architecture: phonological, syntactic, and conceptual formation rules each generate their own structures; PS-SS correspondence rules link phonological and syntactic structures, and SS-CS correspondence rules link syntactic and conceptual structures.]
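The organization in figure 2.1 can also be summarized schematically; the rendering below is mine, not a formalism proposed in the text. Each component licenses its own structure, and a triple "converges" only if the interface constraints are met as well.

```python
# A schematic sketch of convergence in the tripartite architecture; mine.
from typing import Any, Callable, NamedTuple

class Triple(NamedTuple):
    ps: Any   # phonological structure
    ss: Any   # syntactic structure
    cs: Any   # conceptual structure

def converges(t: Triple,
              ps_ok: Callable[[Any], bool],       # phonological formation rules
              ss_ok: Callable[[Any], bool],       # syntactic formation rules
              cs_ok: Callable[[Any], bool],       # conceptual formation rules
              ps_ss: Callable[[Any, Any], bool],  # PS-SS correspondence rules
              ss_cs: Callable[[Any, Any], bool],  # SS-CS correspondence rules
              ) -> bool:
    # Three parallel derivations, each licensed by its own component...
    licensed = ps_ok(t.ps) and ss_ok(t.ss) and cs_ok(t.cs)
    # ...mutually constrained through the two interfaces.
    return licensed and ps_ss(t.ps, t.ss) and ss_cs(t.ss, t.cs)

# Trivial usage: with vacuous components, every triple converges.
ok = lambda *args: True
assert converges(Triple("PS", "SS", "CS"), ok, ok, ok, ok, ok)
```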
A similar conception of parallel, mutually constraining derivations is found in Autolexical Syntax (Sadock 1991), though it deals primarily only with the phonology-syntax interface. Shieber and Schabes (1991) likewise set up parallel derivations in a Tree-Adjoining Grammar formalism, but they deal only with the syntax-semantics interface.12 What I think is relatively novel here is looking at both interfaces at once, observing that (1) they have similar structure, and (2) as will be seen in chapter 4, we must also invoke what is in effect a direct interface from phonology to semantics.13
As promised in chapter 1, this conception abandons Chomsky's syntactocentric assumption (assumption 6), in that phonology and semantics are treated as generative completely on a par with syntax. The syntactocentric architecture can be seen as a special case of figure 2.1, one in which phonological and conceptual structures have no properties of their own, in which all their properties are derivable from syntax. On such a view, the phonological and conceptual formation rules in figure 2.1 are in effect vacuous, and the correspondence rules are the only source of properties of phonological and conceptual structure. However, we have seen in sections 2.2 and 2.3 that phonological and conceptual structures do have properties of their own, not predictable from syntax, so we must abandon the syntactocentric picture.
For fun, suppose instead that syntactic structures had no properties of their own, and that all their properties were derivable from phonology and conceptual structure. Then the syntactic formation rules in figure 2.1 would be in effect vacuous, and we would have a maximally minimalist syntax: the sort of syntax that the Minimalist Program claims to envision. However, there are certainly properties of syntactic structure that cannot be reduced to phonology or semantics: syntactic categories such as N and V; language-specific word order, case marking, and agreement properties; the presence or absence in a language of wh-movement and other long-distance dependencies; and so on. So syntactic formation rules are empirically necessary as well, and the parallel tripartite model indeed seems justified.
We have also abandoned the assumption (assumption 1) that the relation among phonology, syntax, and semantics can be characterized as derivational. There may well be derivations in the formation rules for each of these components, but between the components lie correspondence rules, which are of a different formal nature.
The notion of correspondence rules has led some colleagues to object to the tripartite model: "Correspondence rules are too unconstrained; they can do anything." What we have seen here, though, is that correspondence rules are conceptually necessary in order to mediate between phonology, syntax, and meaning. It is an unwarranted assumption that they are to be minimized and that all expressive power lies in the generative components. Rather, as we have seen, the issue that constantly arises is one of balance of power among components. Since the correspondence rules are part of the grammar of the language, they must be acquired by the child, and therefore they fall under all the arguments for UG. In other words, correspondence rules, like syntactic and phonological rules, must be constrained so as to be learnable. Thus their presence in the architecture does not change the basic nature of the theoretical enterprise. (For those who wish to see correspondence rules in action, much of Jackendoff 1990 is an extended exposition of the SS-CS correspondence rules under my theory of conceptual structure.)
Other colleagues have found it difficult to envision how a tripartite parallel model could serve in a processing theory. However, this should be no reason to favor the syntactocentric model, which, as stressed in chapter 1, is recognized not to be a model of processing: imagine starting to think of what to say by generating a syntactic structure. In section 4.5, after I have built the lexicon into the architecture, I will sketch how the tripartite architecture might serve in processing.
Finally, some people simply see no reason to abandon syntactocentrism despite nearly two decades of evidence for the (relative) autonomy of phonology (I leave semantics aside, on the grounds that I have a personal interest). I urge readers with such inclinations to consider again the possibility that the syntactocentric model is just a habit dating back to the 1960s.
2.6 Representational Modularity
This way of looking at the organization of grammar fits naturally into a larger hypothesis of the architecture of mind that might be called Representational Modularity (Jackendoff 1987a, chapter 12; 1992a, chapter 1). The overall idea is that the mind/brain encodes information in some finite number of distinct representational formats or "languages of the mind." Each of these "languages" is a formal system with its own proprietary set of primitives and principles of combination, so that it defines an infinite set of expressions along familiar generative lines. For each of these formats, there is a module of mind/brain responsible for it. For example, phonological structure and syntactic structure are distinct representational formats, with distinct and only partly commensurate primitives and principles of combination. Representational Modularity therefore posits that the architecture of the mind/brain devotes separate modules to these two encodings. Each of these modules is domain specific (phonology and syntax respectively); and (with certain caveats to follow shortly) each is informationally encapsulated in Fodor's (1983) sense.
Representational modules differ from Fodorian modules in that they are individuated by the representations they process rather than by their function as faculties for input or output; that is, they are at the scale of individual levels of representation, rather than being an entire faculty such as language perception. The generative grammar for each "language of the mind," then, is a formal description of the repertoire of structures available to the corresponding representational module.
A conceptual difficulty with Fodor's account of modularity is that it fails to address how modules communicate with each other and how they communicate with Fodor's central, nonmodular cognitive core. In particular, Fodor claims that the language perception module derives "shallow representations" - some form of syntactic structure - and that the central faculty of "belief fixation" operates in terms of the "language of thought," a nonlinguistic encoding. But Fodor does not say how "shallow representations" are converted to the "language of thought," as they must be if linguistic communication is to affect belief fixation. In effect, the language module is so domain specific and informationally encapsulated that nothing can get out of it to serve cognitive purposes.¹⁴
The theory of Representational Modularity addresses this difficulty by positing, in addition to the representation modules proposed above, a system of interface modules. An interface module communicates between two levels of encoding, say L1 and L2, by carrying out a partial translation of information in L1 form into information in L2 form (or, better, imposing a partial homomorphism between L1 and L2 information).¹⁵ In the formal grammar, then, the correspondence rule component that links L1 and L2 can be taken as a formal description of the repertoire of partial translations accomplished by the L1-L2 interface module. A faculty such as the language faculty can then be built up from the interaction of a number of representational modules and interface modules. The tripartite architecture in figure 2.1 represents each module as a rule component.
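As a computational analogy only (this sketch is mine, not part of the text, and every name in it is invented for illustration), one can picture a representation module as a repertoire of structures in its own proprietary format, and an interface module as a partial function between two formats, defined only on the fragments of information it knows how to translate:

    # A minimal sketch, assuming nothing beyond the prose above:
    # an interface module is a *partial* translation between two
    # representational formats; information it does not know about
    # (here, the intonation "tune") is simply not carried over.
    from typing import Callable, Optional

    class InterfaceModule:
        def __init__(self, name: str,
                     translate: Callable[[dict], Optional[dict]]):
            self.name = name
            self.translate = translate

        def __call__(self, structure: dict) -> Optional[dict]:
            # Informationally encapsulated: consults only its input.
            return self.translate(structure)

    # Hypothetical toy correspondence from phonology to syntax.
    phon_to_syn = InterfaceModule(
        "phonology-to-syntax",
        lambda phon: {"words": phon["words"]} if "words" in phon else None)

    print(phon_to_syn({"words": ["the", "light", "flashed"], "tune": "H*L"}))
    # {'words': ['the', 'light', 'flashed']} - the tune is left behind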
An interface module, like a Fodorian module, is domain specific: the phonology-to-syntax interface module, for instance, knows only about phonology and syntax, not about visual perception or general-purpose audition. Such a module is also informationally encapsulated: the phonology-to-syntax module dumbly takes whatever phonological inputs are available in the phonology representation module, maps the appropriate parts of them into syntactic structures, and delivers them to the syntax representation module, with no help or interference from, say, beliefs about the social context. More generally, interface modules communicate information from one form of representation only to the next form up or downstream. In short, the communication among languages of the mind, like the languages themselves, is mediated by modular processes.¹⁶
The language faculty is embedded in a larger system with the same overall architecture. Consider the phonological end of language. Phonetic information is not the only information derived from auditory perception. Rather, the speech signal seems to be delivered simultaneously to four or more independent interfaces: in addition to phonetic perception (the audition-to-phonology interface), it is used for voice recognition, affect (tone-of-voice) perception, and general-purpose auditory perception (birds, thunder, bells, etc.). Each of these processes converges on a different representational "language" - a different space of distinctions; and each is subject to independent dissociation in cases of brain damage. At the same time, phonological structure (of which phonetic representation is one aspect) takes inputs not only from the auditory interface, but also from a number of different interfaces with the visual system, including at least those involved in reading and sign language.¹⁷ Moreover, phonological structure is not the only faculty feeding motor control of the vocal tract, since vocal tract musculature is also involved in eating, in facial expression, and in actions like grasping a pipe in the mouth. Figure 2.2 sketches these interactions; each arrow stands for an interface module.

[Figure 2.2: Some modules that interact with auditory input, vocal tract instructions, and phonology. Auditory input feeds general-purpose audition, voice recognition, affect recognition, and phonological structure; phonological structure links to syntactic structure and, via reading and sign language, to the visual system; vocal tract instructions are shared with nonlinguistic motor control (chewing, smiling, etc.).]
There is some evidence for the same sorts of multiple interactions at the cognitive end of language as well. Consider the question of how we talk about what we see, posed by Macnamara (1978). Macnamara argues that the only possible answer is that the brain maps information from a visual/spatial format into the propositional format here called conceptual structure. Barbara Landau and I have elaborated Macnamara's position (Jackendoff 1987b, 1996a; Landau and Jackendoff 1993). We observe that certain types of conceptual information (such as the type-token distinction and type taxonomies) cannot be represented visually/spatially; conversely, certain types of visual/spatial information (such as details of shape) cannot be represented propositionally/linguistically. Consequently visual/spatial representation must be encoded in one or more modules distinct from conceptual structure, and the connection from vision to language must be mediated by one or more interface modules that establish a partial relation between visual and conceptual formats, parallel to the interfaces interior to the language faculty.
For the same reason, conceptual structure must be linked to all the other sensory modalities of which we can speak. In Jackendoff 1983 I hypothesize that conceptual structure is the link among multimodal perceptual representations, for example enabling us to identify the sound of a bell with its appearance. However (in contradiction to Jackendoff 1983), there is reason to believe that some multimodal information is integrated elsewhere in the system. For instance, the visual system is supposed to derive information about object shape and location from visual inputs. Landau and Gleitman (1985) make the obvious point that the haptic system (sense of touch) also can derive shape and location information; and we can be surprised if vision and touch fail to provide us with the expected correlation. Also, body senses such as proprioception can give us information about location of body parts; and an integrated picture of how one's body is located in the environment is used to guide actions such as reaching and navigation. Landau and I therefore suggest that the upper end of the visual system is not strictly visual, but is rather a multimodal spatial representation that integrates all information about shape and location (Landau and Jackendoff 1993). Figure 2.3 shows all this organization.¹⁸

[Figure 2.3: Some modules that interact with conceptual structure. Conceptual structure connects with syntactic structure, auditory information, smell, emotion, and action, and is linked by a two-way interface to spatial representation, which in turn interacts with visual, haptic, and proprioceptive representations.]
Within the visual system itself, there is abundant evidence for similar organization, deriving more from neuropsychology than from a theory of visual information structure (though Marr (1982) offers a distinguished example of the latter). Numerous areas in the brain are now known to encode different sorts of visual information (such as shape, color, and location), and they communicate with each other through distinct pathways. For the most part there is no straight "derivation" from one visual level to another; rather, the information encoded by any one area is influenced by its interfaces with numerous others (see Crick 1994 for an accessible survey).
The idea behind the tripartite architecture emerged in my own thinking from research on musical grammar (Lerdahl and Jackendoff 1983; Jackendoff 1987a, chapter 11). In trying to write a grammar that accounted for the musical intuitions of an educated listener, Lerdahl and I found it impossible to write an algorithm parallel to a transformational grammar that would generate all and only the possible pieces of tonal music with correct structures. For example, the rhythmic organization of music consists of a counterpoint between two independent structures, grouping structure and metrical structure, each of which has its own primitives and principles of combination. In the optimal case, the two structures are in phase with each other, so that grouping boundaries coincide with strong metrical beats. But music contains many situations, the simplest of which is an ordinary upbeat, in which the two structures are out of phase. Consequently, Lerdahl and I found it necessary to posit separate generative systems for these structures (as well as for two others) and to express the structural coherence of music in terms of the optimal fit of these structures to the musical surface (or sequence of notes) and to each other. Figure 2.4 sketches the organization of the musical grammar in terms of representation and interface modules.

[Figure 2.4: Modules of musical grammar. The auditory signal maps onto a musical surface, which interfaces with grouping structure, metrical structure, time-span reduction, and prolongational reduction.]
Following the approach of Representational Modularity, one can envision a theory of "gross mental anatomy" that enumerates the various "languages of the mind" and specifies which ones interface most directly with which others. Combining figures 2.1-2.4 results in a rough sketch of part of such a theory. A "fine-scale mental anatomy" then details the internal workings of individual "languages" and interfaces. Linguistic theory, for example, is a "fine-scale anatomy" of the language representation modules and the interface modules that connect them with each other and with other representations. Ideally, one would hope that mental anatomy and brain anatomy will someday prove to be in an interesting relationship.
Representational Modularity is by no means a "virtual conceptual necessity." It is a hypothesis about the overall architecture of the mind, to be verified in terms of not only the language faculty but other faculties as well. I therefore do not wish to claim for it any degree of inevitability. Nevertheless it appears to be a plausible way of looking at how the mind is put together, with preliminary support from many different quarters.
Returning to the issue with which this chapter began, the syntactocentric architecture of the Minimalist Program is envisioned within a view of language that leaves tacit the nature of interfaces and isolates language from the rest of the mind: recall Chomsky's remark that language might well be unique among biological systems. Here, by asking how interfaces work, we have arrived at an alternative architecture for grammar, the tripartite parallel model. This model proves to integrate nicely into a view of the architecture of the mind at large. I take this to be a virtue.
Chapter 3
More on the Syntax-Semantics Interface

3.1 Enriched Composition
Let us next focus further on the Conceptual Interface Level (CIL) of (narrow) syntax. As observed in chapter 2, GB identifies this level as Logical Form (LF). It is worth taking a moment to examine the basic conception of LF. According to one of the earliest works making detailed use of LF, May 1985, it is supposed to encode "whatever properties of syntactic form are relevant to semantic interpretation - those aspects of semantic structure that are expressed syntactically. Succinctly, the contribution of grammar to meaning" (May 1985, 2). This definition is essentially compatible with the definition of CIL: it is the level of syntax that encodes whatever syntactic distinctions are available to (or conversely, expressed by) conceptual distinctions.

Chomsky (1979, 145) says, "I use the expression logical form really in a manner different from its standard sense, in order to contrast it with semantic representation[,] . . . to designate a level of linguistic representation incorporating all semantic properties that are strictly determined by linguistic rules." Chomsky emphasizes here that LF, unlike the underlying structures of Generative Semantics, is not supposed to encode meaning per se: other aspects of meaning are left to cognitive, nonlinguistic systems (such as conceptual structure). Crucially, LF does not purport to encode those aspects of meaning internal to lexical items. For example, although it does encode quantifier scope, it does not encode the lexical meaning differences among quantifiers such as all and some, or three and four. These differences appear only in conceptual structure.

In later statements Chomsky is less guarded: "PF and LF constitute the 'interface' between language and other cognitive systems, yielding direct representations of sound on the one hand and meaning on the other . . ."
(Chomsky 1986, 68). Here LF seems to be identified directly with conceptual structure, and the differences between syntactic elements and truly conceptual elements (discussed here in section 2.3) have been elided. In fact, looking back to the earlier quotation, we find some signs of the same elision in the phrase "a level of linguistic representation incorporating . . . semantic properties." If semantic properties are incorporated in a linguistic level, but if conceptual structure, the locus of semantic properties, is not a linguistic level, then in terms of what primitives are the semantic properties of LF to be encoded? Given that LF is supposed to be a syntactic level, the assumption has to be that some semantic properties are encoded in syntactic terms - in terms of configurations of NPs and VPs. However, this assumption cannot be strictly true, given that semantic properties do not involve configurations of NPs and VPs at all; rather, they involve whatever units conceptual structures are built out of. So in what sense can we understand Chomsky's statement?

It appears that Chomsky, along with many others in the field, is working under a standard (and usually unspoken) hypothesis that I will call syntactically transparent semantic composition (or simple composition for short).¹
(1) Syntactically transparent semantic composition
a. All elements of content in the meaning of a sentence are found in the lexical conceptual structures (LCSs) of the lexical items composing the sentence.
b. The way the LCSs are combined is a function only of the way the lexical items are combined in syntactic structure (including argument structure). In particular,
i. the internal structure of individual LCSs plays no role in determining how the LCSs are combined;
ii. pragmatics plays no role in determining how LCSs are combined.
Under this set of assumptions, LCSs themselves have no effect on how they compose with each other; composition is guided entirely by syntactic structure. Hence one can conceive of syntactic structure as directly mirrored by a coarse semantic structure, one that idealizes away from the internal structure of LCSs. From the point of view of this coarse semantic structure, an LCS can be regarded as an opaque monad. One effect is that the SS-CS interface can be maximally simple, since there are no interactions in conceptual structure among syntactic, lexical, and pragmatic effects. Under such a conception, we can speak elliptically of syntax "incorporating semantic properties," meaning that syntax precisely mirrors coarse semantic configurations. I suggest that Chomsky's characterization of LF should be thought of as elliptical in precisely this way.
Note, incidentally, that most work in formal semantics shares Chomsky's assumptions on this issue. In particular, because lexical items are for the most part regarded as semantically undecomposable entities, there can be no interaction between their internal structure and phrasal composition. Moreover, the standard treatment of compositionality requires a disambiguated syntax; hence no aspects of a sentence's interpretation can arise from outside the sentence itself. (See Partee 1995 for a summary of these issues in formal semantics.)
The hypothesis of syntactically transparent semantic composition has the virtue of theoretical elegance and constraint. Its effect is to enable researchers to isolate the language capacity - including its contribution to semantics - from the rest of the mind, as befits a modular conception. It can therefore be seen as a potentially positive contribution to psychological theory.

However, it is only a hypothesis. This chapter will suggest that, whatever its a priori attractiveness, it cannot be sustained. Rather, a more appropriate hypothesis treats simple composition as a default in a wider range of options that I will call enriched composition. (Much of this hypothesis and many of the examples to follow are derived from Pustejovsky 1991a, 1995.)
(2) Enriched composition
a. The conceptual structure of a sentence may contain, in addition to the conceptual content of its LCSs, other material that is not expressed lexically, but that must be present in conceptual structure either (i) in order to achieve well-formedness in the composition of the LCSs into conceptual structure (coercion, to use Pustejovsky's term) or (ii) in order to satisfy the pragmatics of the discourse or extralinguistic context.
b. The way the LCSs are combined into conceptual structure is determined in part by the syntactic arrangement of the lexical items and in part by the internal structure of the LCSs themselves (Pustejovsky's cocomposition).
According to (2), the internal structure of LCSs is not opaque to the principles that compose LCSs into the meaning of the sentence; rather, composition proceeds through an interaction between syntactic structure and the meanings of the words themselves. If this is so, it is impossible to isolate a coarse semantic structure that idealizes away from the details of LCSs and that thus mirrors precisely the contribution of syntax to meaning. One therefore cannot construe LF as a syntactic level that essentially encodes semantic distinctions directly. Rather, the SS-CS interface is more complex: the effect of syntactic structure on conceptual structure interleaves intimately with the effects of word meanings and pragmatics.
A theory of enriched composition represents a dropping of constraints long entrenched in linguistic theory; syntactically transparent composition is clearly simpler. However, a more constrained theory is only as good as the empirical evidence for it. If elevated to the level of dogma (or reduced to the level of presupposition), so that no empirical evidence can be brought to bear on it, then it is not being treated scientifically. I am proposing here to treat it as a hypothesis like any other, and to see what other, possibly more valuable, generalizations will be lost if we insist on it.²
In sections 3.2-3.6 we will briefly look at several cases of enriched composition. Any one of them alone might be dismissed as a "problem case" to be put off for later. However, the number and variety of different cases that can be attested suggest a larger pattern inconsistent with syntactically transparent composition.
After looking at these cases, we will see how enriched composition bears on problems of binding and quantification. One of the important functions of LF - in fact the original reason it was introduced - is to encode quantifier scope relations. In addition, since the theory of quantifier scope includes the binding of variables by quantifiers, anaphoric binding in general also falls under the theory of LF. In sections 3.7 and 3.8 we will see that certain instances of anaphora and quantification depend on enriched composition, drawing on the internal structure of LCSs and even on semantic material not present in LCSs at all. This means that the phenomena of binding and quantification that LF is supposed to account for cannot be described in their full generality by means of a syntactic level of representation; rather, they can only be most generally encoded at conceptual structure, a nonsyntactic level. In turn, if the full range of relevant facts is to be encoded in conceptual structure, it misses a generalization to encode a subset of them in syntax as well, as the standard conception of LF does. The conclusion will be that it serves no purpose to complicate (narrow) syntax by including a syntactic level of LF distinct from S-Structure.
3.2 Aspectual Coercions

3.2.1 Verbal Coercions
A case of enriched composition that has been widely studied (see, e.g., Pustejovsky 1991b; Jackendoff 1991; Talmy 1978; Hinrichs 1985; Verkuyl 1993) concerns sentences like those in (3).

(3) a. The light flashed until dawn.
b. Bill kept crossing the street.

(3a) carries a sense of repeated flashing not to be found in any of its lexical items. (One might propose that flash is vague with respect to whether or not it denotes a single flash; I would deny this, on the grounds that The light flashed does not seem vague on this point.) (3b) is ambiguous with respect to whether Bill crossed the street repeatedly or continued in his effort to cross once. This ambiguity does not appear to arise through any lexical ambiguity in the words of the sentence: cross is not ambiguous between crossing repeatedly and crossing partially, and, since Bill kept sleeping is not ambiguous, it seems odd to localize the ambiguity in keep or -ing. Rather, these readings arise through the interactions of the lexical meanings with each other.
In the analysis of (3a) offered by most authors cited above, until semantically sets a temporal bound on an ongoing process. However, flash does not itself denote an ongoing process. Therefore the sentence is interpreted so as to involve a sequence of flashes, which forms a suitable process for until to bound. The semantic content "sequence" is an aspectual coercion, added by a general principle that is capable of creating aspectual compatibility among the parts of a sentence.
(3b) presents a parallel story. The verb keep requires its argument to be an ongoing process; but cross the street is not an ongoing process, because it has a specified endpoint. There are two ways it can be reinterpreted as a process. The first is to construe it as repeated action, like (3a). The other is to conceptually "zoom in" on the action, so that the endpoint disappears from view - one sees only the ongoing process of Bill moving in the direction of the other side of the street. Such a process appears in the second reading of (3b); the sentence asserts that this process has continued over some period of time. Such a "zooming in" is a consequence of enriched composition, which makes the accomplishment cross the street aspectually compatible with keep.
Such aspectual coercions do not depend on particular lexical items. Rather, similar repetition with until appears with any point-action verb phrase, such as slap Bill, clap, or cross the border; and it is produced not just with until, but by other "bounding" phrases as well, for instance in an hour. The choice between repetition and partial completion occurs with any gerundive complement to keep that expresses an accomplishment, such as walk to school (but not walk toward school) or eat an apple (but not plain eat); and this choice appears not just in complements of keep but also with stop, continue, and for an hour.
As these examples show, the need for aspectual coercion can be determined only after the composition of the VP has taken place. It depends, for example, on whether there is a delimited object or completed path (to) that renders the entire VP a completed action; or alternatively, whether there is no object, an unbounded object (eat custard), or an incomplete path (toward) that renders the VP an atelic action. Hence the aspectual ambiguity after keep cannot be localized in lexical polysemy.
Aspectual coercions therefore are most generally treated as the introduction of an extra specialized function, interpolated in the course of semantic composition in order to ensure semantic well-formedness, along the lines of (4).

(4) a. Repetition
Interpret VP as [REPETITION OF VP].
b. Accomplishment-to-process
Interpret VP as [PROCESSUAL SUBEVENT OF VP].
The idea behind (4) is as follows: when the reading of VP is composed with that of its context, the interpretation of VP alone is not inserted into the appropriate argument position of the context, as would occur in simple composition. Rather, VP is treated as the argument of the special functions REPETITION OF X or PROCESSUAL SUBEVENT OF X, and the result is inserted into the argument position of the context. More schematically, simple composition would produce a function-argument structure F(X), where F is the function expressed by the syntactic head and X is the argument expressed by the syntactic complement. However, in cases such as (3), X does not serve as a suitable argument for F. Hence the process of composition interpolates a "coercing function" G to create instead the structure F(G(X)), where X is a suitable argument for G, and G(X) is a suitable argument for F.
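To make the schema concrete, here is a small computational sketch (mine, not the author's formalism; the aspect features and names are invented for illustration) of interpolating a coercing function G when a head F rejects its argument X:

    # A minimal sketch of coercion interpolation: try F(X); if X is not
    # a suitable argument for F, look for a coercing function G such
    # that F(G(X)) is well formed.
    def compose(f, x, coercions):
        if f["selects"](x):                      # simple composition: F(X)
            return f["wrap"](x)
        for g in coercions:                      # enriched: F(G(X))
            if g["selects"](x) and f["selects"](g["wrap"](x)):
                return f["wrap"](g["wrap"](x))
        raise ValueError("semantically ill formed")

    # 'until dawn' selects a process; 'flash' denotes a point event.
    until_dawn = {"selects": lambda x: x["aspect"] == "process",
                  "wrap":    lambda x: {"aspect": "bounded",
                                        "sem": ("UNTIL", "DAWN", x)}}
    repetition = {"selects": lambda x: x["aspect"] == "point",
                  "wrap":    lambda x: {"aspect": "process",
                                        "sem": ("REPETITION-OF", x)}}

    flash = {"aspect": "point", "sem": "FLASH"}
    print(compose(until_dawn, flash, [repetition]))
    # UNTIL(DAWN, REPETITION OF FLASH): the repeated-flashing reading of (3a)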
3.2.2 Mass-Count Coercions
Many sources point out a formal parallelism between the process-accomplishment distinction in sentences and the mass-count distinction in noun phrases. And of course, the idea of a sequence of events can be conveyed in noun phrases by pluralizing a noun that expresses an event, as in many flashes of light. Given such parallelisms, one might expect a counterpart of aspectual coercion in NPs, and indeed many are well known, for instance (5).

(5) a. I'll have a coffee/three coffees, please.
b. We're having rabbit for dinner.

(5a) is a case where count syntax is attached to a normally mass noun, and the interpretation is inevitably 'portion(s) of coffee'. (5b) is a case where an animal name is used with mass syntax and understood to mean 'meat of animal'.
Although these coercions are often taken to be perfectly general principles (under the rubrics "Universal Packager" and "Universal Grinder" respectively), I suspect that they are somewhat specialized. For instance, the (5a) frame applies generally to food and drink, but it is hard to imagine applying it to specified portions of cement (*He poured three cements today) or to the amount of water it takes to fill a radiator (*It takes more than three waters to fill that heating system). That is, the coerced reading actually means something roughly as specific as 'portion of food or drink suitable for one person'.³ Similarly, the coercion in (5b) can apply to novel animals (We're having eland for dinner), but it cannot easily apply to denote the substance out of which, say, vitamin pills are made (*There's some vitamin pill on the counter for you). So it appears that, although these cases include as part of their meaning a mass-to-count or count-to-mass coercing function, one's knowledge of English goes beyond this to stipulate rather narrow selectional restrictions on when such coercion is possible and what it might mean. At the same time, the fact that the phenomenon is general within these rather narrow categories (for instance eland) suggests that lexical polysemy is not the right solution. Rather, these appear to be specialized coercions in which the source reference is mass and the shifted reference is count (5a) or vice versa (5b), but further content is present in the coercing function as well.
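A toy sketch of the point (my illustration; the categories and entries are invented, and real selectional restrictions are of course far subtler):

    # The "Packager" and "Grinder" as *specialized* coercing functions
    # with narrow selectional restrictions, per the discussion above.
    def package(np):                 # mass -> count: 'a coffee'
        if np["category"] in {"food", "drink"}:
            return ("PORTION-FOR-ONE-PERSON-OF", np["form"])
        return None                  # *three cements, *three waters

    def grind(np):                   # count -> mass: 'rabbit for dinner'
        if np["category"] == "animal":
            return ("MEAT-OF", np["form"])
        return None                  # *some vitamin pill (as substance)

    print(package({"form": "coffee", "category": "drink"}))      # portion reading
    print(grind({"form": "eland", "category": "animal"}))        # meat reading
    print(package({"form": "cement", "category": "substance"}))  # None: blocked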
The general conclusion, therefore, is that the correspondence rules in the SS-CS interface permit conceptual structure to contain these specialized bits of content that are unexpressed in syntax. Like the verbal coercions in section 3.2.1, such a possibility violates condition (1a), the claim that semantic content is exhausted by the LCSs of the lexical items in the sentence. Rather, this condition must be replaced with condition (2a), the premise of enriched composition.
3.3 Reference Transfer Functions
A superficially different case of enriched interpretation turns out to share certain formal properties with the cases just discussed: the case of reference transfer, first made famous in the "counterpart" theory of McCawley and Lakoff (see Morgan 1971), then explored more systematically by others (see, e.g., Nunberg 1979; Fauconnier 1985; Jackendoff 1992b). The classic case is Nunberg's, in which one waitress says to another something like (6).

(6) The ham sandwich in the corner wants some more coffee.
The subject of (6) refers to a customer who has ordered or who is eating a ham sandwich. It would be absurd to claim that the lexical entry of ham sandwich is polysemous between a ham sandwich and a customer. The alternative is that the interpretation of (6) in context must involve a principle that Nunberg calls reference transfer, which allows one to interpret ham sandwich (the "source reading") to mean something roughly like 'person contextually associated with ham sandwich' (the "shifted reading").
An advocate of syntactically transparent composition will take one of two tacks on reference transfer: either (1) it is a rule of pragmatics and takes place "after" semantic composition; or (2) the subject of (6) contains some phonologically null syntactic elements that have the interpretation 'person contextually associated with'. In Jackendoff 1992b I argue that neither of these accounts will suffice.
Consider first the possibility that a reference transfer is "mere pragmatics." A principle similar to that involved in (6) is involved in (7). Imagine Richard Nixon has attended a performance of the opera Nixon in China, in which the character of Nixon was sung by James Maddalena.

(7) Nixon was horrified to watch himself sing a foolish aria to Chou En-lai.
Here, himself has a shifted reading: it refers not to Nixon himself but to the character onstage. That is, a reference transfer makes Nixon and the anaphor himself noncoreferential. On the other hand, such noncoreference is not always possible.

(8) *After singing his aria to Chou En-lai, Nixon was horrified to see himself get up and leave the opera house.

That is, the shifted reading cannot serve as antecedent for anaphora that refers to the real Nixon. No matter how one accounts for the contrast between (7) and (8), it cannot be determined until "after" the shifted reading is differentiated from the original reading. Thus, under the standard assumption that binding is a grammatical principle, the reference transfer cannot be relegated to "late principles of pragmatics"; it must be fully integrated into the composition of the sentence. In addition, looking ahead to the discussion of binding in section 3.7, we see that, if reference transfer is a matter of pragmatics, the contrast between (7) and (8) poses an insuperable problem for a syntactic account of binding.

Note, by the way, that I agree that reference transfer is a matter of pragmatics - that it is most commonly evoked when context demands a nonsimple reference (though see section 3.5.3). I am arguing, though, that it is nevertheless part of the principles of semantic composition for a sentence - in other words, that one cannot "do semantic composition first and pragmatics later."
However, one might claim that reference transfer has a syntactic reflex, thereby preserving syntactically transparent composition. For instance, ham sandwich in (6) and himself in (7) might have the syntactic structures (9a) and (9b) respectively under one possible theory and (10a) and (10b) under another. Such an account would have the virtue of making the distinction between (7) and (8) present in syntax as well as semantics, so that binding theory could be preserved in syntax.

(9) a. person with a [ham sandwich]
b. person portraying [himself]

(10) a. [NP e [ham sandwich]]
b. [NP e [himself]]
The trouble with the structures in (9) is that they are too explicit: one would need principles to delete specified nouns and relationships - principles of a sort that for 25 years have been agreed not to exist in generative grammar (not to mention that (9b) on the face of it has the wrong reading). So (9) can be rejected immediately. What of the second possibility, in which a null element in syntax is to be interpreted by general convention as having the requisite reading? The difficulty is that even a general convention relies on enriched composition: e has no interpretation of its own, so a special rule of composition must supply one. Moreover, as shown in Jackendoff 1992b, reference transfers are not homogeneous and general, so several special rules are necessary. Let us briefly review three cases here.
3.3.1 Pictures, Statues, and Actors
The use of Nixon to denote an actor portraying Nixon, or to denote the character in the opera representing Nixon, falls under a more general principle that applies also to pictures and statues.

(11) Look! There's King Ogpu hanging on the wall/standing on top of the parliament building.

But this principle does not apply to stories about people.

(12) Harry gave us a description/an account of Ringo. ≠ *Harry gave us Ringo.

Furthermore, the nouns picture and sketch can be used to denote either a physical object containing a visual representation or a brief description. Only on the former reading can they be the target of a reference transfer.

(13) a. Bill was doing charcoal sketches of the Beatles, and to my delight he gave me Ringo.
b. *Bill was giving us quick (verbal) sketches of the new employees, and to my surprise he gave us Harry.

That is, the reference transfer responsible for (7) is not perfectly general; it occurs only with a semantically restricted class.
3.3.2 Cars and Other Vehicles
Another frequently cited reference transfer involves using one's name to denote one's car, as in (14a). (14b) shows that reflexivization is available for such shifted expressions. However, (14c-e) show that the shift is available only in describing an event during the time one is in control of the car.

(14) a. A truck hit Bill in the fender when he was momentarily distracted by a motorcycle.
b. Bill squeezed himself [i.e. the car he was driving] between two buses.
c. ?A truck hit Ringo in the fender while he was recording Abbey Road. (OK only if he drove to the studio!)
d. *A truck hit Bill in the fender two days after he died.
e. *Bill repainted himself [i.e. his car].

Moreover, this shift applies to cars and perhaps bicycles, but definitely not to horses.

(15) Ringo suffered smashed spokes/handlebars/*hooves in the accident. (where it's Ringo's bike/horse)

Unlike the shifted referent in the statue/actor/picture case (16a), the shifted referent in the vehicle case cannot comfortably be the target of a discourse pronoun (16b).

(16) a. Hey, that's Ringo hanging over there [pointing to portrait of Ringo]! Isn't he/it beautifully painted!
b. Hey, that's Ringo parked over there [pointing to Ringo's car]! Isn't *he/?it beautifully painted!
3.3.3 Ham Sandwiches
The ham sandwich cases require as source of the transfer some salient property.

(17) The ham sandwich/The big hat/The foreign accent over there in the corner wants some more coffee.

But it cannot be something that independently would do as the subject of the sentence.

(18) #The blonde lady/The little dog over there in the corner wants a hamburger. (i.e. the man with the blonde lady/little dog)

Unlike in the previous two cases, there is no possibility of using a shifted pronoun or reflexive whose antecedent is the original referent.

(19) a. *I gave the ham sandwich to itself [i.e. to the person who ordered it].
b. *The ham sandwich pleased itself.

Nor can a pronoun or reflexive referring to the source have the shifted noun as antecedent.
(20) *The ham sandwich [i.e. the person who ordered it] put it(self) in his pocket.
In short, each of these reference transfers has its own peculiar properties. Evidently, a speaker of the language must have acquired something in order to use them properly and to have the intuitions evoked above. My sense therefore is that a speaker's knowledge includes a principle of enriched composition, along the lines of the specialized mass/count coercions in section 3.2.2. The three cases sketched here have roughly the forms in (21).

(21) a. Interpret an NP as [VISUAL REPRESENTATION OF NP].⁵
b. Interpret an NP as [VEHICLE CONTROLLED BY NP].
c. Interpret an NP as [PERSON CONTEXTUALLY ASSOCIATED WITH NP].

Notice the similarity of these principles to those in (4): they involve pasting a coercing function around an NP's simple compositional reading X to form an enriched reading G(X), in violation of the condition of exhaustive lexical content.
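Continuing the earlier computational analogy (again my sketch, with invented feature names), the three transfers in (21) can be pictured as coercing functions G, each carrying its own narrow selectional restriction:

    # Reference transfers as a family of coercing functions, each with
    # its own selectional restriction (cf. the contrasts in (11)-(20)).
    TRANSFERS = [
        ("VISUAL-REPRESENTATION-OF", lambda np: np.get("depictable")),
        ("VEHICLE-CONTROLLED-BY",    lambda np: np.get("can_drive")),
        ("PERSON-ASSOCIATED-WITH",   lambda np: np.get("salient_prop")),
    ]

    def shifted_readings(np):
        """Enumerate the enriched readings G(X) available for an NP."""
        return [(g, np["form"]) for g, ok in TRANSFERS if ok(np)]

    ringo = {"form": "RINGO", "depictable": True, "can_drive": True}
    print(shifted_readings(ringo))
    # [('VISUAL-REPRESENTATION-OF', 'RINGO'), ('VEHICLE-CONTROLLED-BY', 'RINGO')]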
3.4 Argument Structure Alternations
Let us next consider two cases of argument structure alternations in which a theory of enriched composition offers the potential of better capturing generalizations.

3.4.1 Interpolated Function Specified by General Principle
Consider these argument structure alternations ((22) from Grimshaw 1979; see also Dor 1996; (23) from Jackendoff 1996c). The subscripts on the verbs discriminate between argument structure/subcategorization frames.

(22) a. Bill asked_a who came/what happened.
b. Bill asked_b the time/her name.

(23) a. Bill intended_a to come/to bring a cake.
b. Bill intended_b that Sue come/that Sue bring a cake.

A paraphrase reveals the semantic relationship between the frames.

(24) a. Bill asked_b the time. = Bill asked_a what the time was.
b. Bill intended_b that Sue come. = Bill intended_a to bring about that Sue come. (or to have Sue come)
These alternations can easily be treated as lexical polysemy. However, consider ask. Grimshaw (1979) observes that whenever a verb takes an indirect question complement in alternation with a direct object (e.g. tell, know), it also permits certain direct objects to be interpreted in the manner shown in (24a), as part of an elliptical indirect question what NP is/was. The NPs that can be so interpreted are limited to a class including such phrases as the time, the outcome, NP's name, NP's height. Most NPs are not subject to such an interpretation: ask a question ≠ ask what a question is; tell the story ≠ tell what the story is; know his wife ≠ know who his wife is; and so on.

As Grimshaw observes, this distribution suggests that the (b)-type interpretation is not a lexical property of ask, but rather follows from the fact that ask takes an indirect question complement alternating with a direct object, plus a general principle of interpretation, which applies to all verbs like ask, tell, and know.
A similar result obtains with intend, which alternates between a controlled VP complement denoting a volitional action (23a) and a that-subjunctive complement (23b). Verbs with the same syntactic alternation and the same interpretation of the complement as a volitional action show the same relationship between the readings.

(25) Bill agreed/arranged that Sue bring a cake. = Bill agreed/arranged to have/bring about that Sue bring a cake.

This interpretation however does not appear with verbs such as prefer and desire, which satisfy the same syntactic conditions as intend, but whose VP complements need not denote volitional actions. Again, it would be possible to treat these alternations as lexical polysemy. However, the character of the alternation suggests a more general principle of interpretation that supplies the interpolated content italicized in (24b) and (25).
I wish to argue, therefore, that ask and intend (as well as the families of verbs to which they belong) are not lexically polysemous; rather, the readings in the (b) frames are in part the product of special SS-CS correspondence rules, stated informally in (26).

(26) a. Interpret an NP such as the time, the outcome, NP's name, as [Indirect question WHAT NP BE].⁶
b. Interpret that [S . . . subjunctive . . .] as [Voluntary action BRING ABOUT THAT S].
Just in case a verb takes an NP syntactically and an indirect question semantically, (26a) allows the NP to be interpreted as an indirect question and thereby to satisfy the verb's indirect question argument. Similarly, just in case a verb takes a that-subjunctive syntactically and a voluntary action semantically, (26b) allows the that-subjunctive to satisfy the voluntary action argument.

This solution reduces lexical polysemy: the (b) frames are no longer lexically specialized readings. The price is the specialized correspondence rules - rules that go beyond syntactically transparent composition to allow implicit interpolated functions. Notice again that the effect of these special rules is to paste a coercing function around the reading of the complement, in this case to make it conceptually compatible with the argument structure of the verb (i.e. to make the resulting composition well formed). Again we are faced with a violation of the condition of exhaustive lexical content, (1a).
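The licensing condition - coerce only when the verb semantically selects an indirect question but is syntactically given a bare NP of the right closed class - can be sketched as follows (my illustration; the lexical entries and the membership of the closed class are invented stand-ins, and the real frames of ask are of course richer):

    # Sketch of a rule like (26a): a bare NP may satisfy an
    # indirect-question argument only for a closed class of NPs.
    QUESTIONY_NPS = {"the time", "the outcome", "her name", "his height"}

    def interpret_object(verb, np):
        if "indirect-question" in verb["sem_selects"]:
            if np in QUESTIONY_NPS:
                return ("WHAT-BE", np)   # elliptical indirect question
            raise ValueError(f"'{np}' cannot be read as an indirect question")
        return np                        # simple composition

    ask = {"form": "ask", "sem_selects": {"indirect-question"}}
    print(interpret_object(ask, "the time"))   # ('WHAT-BE', 'the time')
    # interpret_object(ask, "a question")      # would raise: no coerced
    #                                          # reading 'ask what a question is'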
3.4.2 Interpolated Function Specified by Qualia of Complement
Another type of argument structure alternation, discussed by Pustejovsky (1991a, 1995) and Briscoe et al. (1990), appears with verbs such as begin and enjoy.

(27) a. Mary began_a to drink the beer.
b. Mary began_b the beer.

(28) a. Mary enjoyed_a reading the novel.
b. Mary enjoyed_b the novel.

As in the previous cases, it is possible to find a paraphrase relationship between the two frames. However, there is a crucial difference. In the previous cases, the appropriate material to interpolate in the (b) frame was a constant, independent of the choice of complement. In these cases, though, the choice of appropriate material to interpolate varies wildly with the choice of complement.

(29) a. Mary began_b the novel. = Mary began_a to read/to write/*to drink/*to appreciate the novel.
b. Mary began_b the beer. = Mary began_a to drink/?to bottle/*to read/*to appreciate the beer.
c. Mary enjoyed_b the novel. = Mary enjoyed_a reading/writing/*drinking/*appreciating the novel.
d. Mary enjoyed_b the beer. = Mary enjoyed_a drinking/?bottling/*reading/*appreciating the beer.
A general solution to these alternations, then, cannot invoke lexical polysemy: each verb would have to have indefinitely many different readings. Rather, the generalization is as follows: these verbs select semantically for an activity. When their complement directly expresses an activity (i.e. the (a) frame or an NP such as the dance), simple composition suffices. However, if the complement does not express an activity (e.g. the novel), then two steps take place. First, a rule not unlike those in (4), (21), and (26) interpolates a general function that permits the NP to satisfy the activity variable of the verb.

(30) Interpret NP as [Activity F (NP)]. (i.e. an unspecified activity involving NP, "doing something with NP")

However, (30) leaves open what the activity in question is.
Pustejovsky (1991a, 1995) claims that the second step of interpretation specifies the activity by examining the internal semantic structure of the complement. He posits that part of the LCS of a noun naming an object is a complex called its qualia structure, a repertoire of specifications including the object's appearance, how it comes into being, how it is used, and others. For example, the qualia of book include its physical form, including its parts (pages, covers, spine); its role as an information-bearing object; something about its (normally) being printed; something specifying that its proper function is for people to read it. Within its role as an information-bearing object, there will be further specification of how the information comes into being, including the role of author as creator.
The acceptable interpolated predicates in (29) stipulate either a characteristic activity one performs with an object or the activity one performs in creating these objects. Hence the desired activity predicates can be found in the qualia structures of the direct objects. However, this requires violating condition (1b), the idea that the content of an LCS plays no role in how it combines with other items. Rather, we must accept condition (2b): the conceptual function of which the direct object of begin or enjoy is an argument is determined by looking inside the semantic content of the object. Only the general kind of function (here, activity) is determined by the verb begin or enjoy itself. This is a case of what Pustejovsky calls cocomposition.⁷ This approach systematically eliminates another class of lexical polysemies, this time a class that would be impossible to specify completely.
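A toy rendering of the two steps (a sketch under my own invented representations, not Pustejovsky's formalism) might give each noun a fragment of qualia structure and let begin fill in its activity variable from it:

    # Sketch of cocomposition: 'begin' fixes only that its argument is
    # an activity; *which* activity is read off the complement's qualia.
    from dataclasses import dataclass

    @dataclass
    class Qualia:
        telic: str      # proper function: what one does with the object
        agentive: str   # mode of creation: how it comes into being

    QUALIA = {
        "novel": Qualia(telic="READ", agentive="WRITE"),
        "beer":  Qualia(telic="DRINK", agentive="BREW"),
    }

    def begin(noun):
        q = QUALIA[noun]
        # Both qualia roles supply candidate activities, as in (29a-b).
        return [("BEGIN", (act, noun.upper()))
                for act in (q.telic, q.agentive)]

    print(begin("novel"))  # [('BEGIN', ('READ', 'NOVEL')), ('BEGIN', ('WRITE', 'NOVEL'))]
    print(begin("beer"))   # [('BEGIN', ('DRINK', 'BEER')), ('BEGIN', ('BREW', 'BEER'))]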
It might be claimed that qualia structure is really "world knowledge" and does not belong in linguistic theory. To be sure, the fact that books are read is part of one's "world knowledge." But it is also surely part of one's knowledge of English that one can use the sentence Mary began the novel to express the proposition that Mary began to read the novel. To me, examples like this show that world knowledge necessarily interacts with the SS-CS correspondence rules to some degree: the correspondence rules permit certain aspects of a proposition to be systematically elided if reconstructible from the context - which includes world knowledge encoded within an LCS.
3.5 Adjective-Noun Modification

3.5.1 Adjectives That Invoke the Qualia Structure of the Noun
The modification of nouns by adjectives is usually considered to be formally a predicate conjunction; a red house is something that is a house and that is red. Pustejovsky (1991a, 1995) demonstrates a wide range of cases (some previously well known) where such an analysis fails. (31)-(33) present some examples.

(31) a. a fast typist = someone who types fast
≠ someone who is a typist and who is fast
b. a fast car = a car that travels fast
?= something that is a car and that is fast
c. a fast road = a road on which one travels fast
≠ something that is a road and that is fast

(32) a. a sad woman = a woman experiencing sadness
?= someone who is a woman and who is sad
b. a sad event = an event that makes its participants experience sadness
?= something that is an event and that is sad
c. a sad movie = a movie that makes its viewers experience sadness
?= something that is a movie and that is sad

(33) a. a good knife = a knife that is good for cutting
≠ something that is a knife and that is good
b. a good road = a road that is good for traveling on
≠ something that is a road and that is good
c. a good typist = someone who is good at typing
≠ someone who is a typist and who is good
In each case, the (likely) meaning of the adjective-noun combination is richer than a simple conjunction of the adjective and noun as predicates. It would miss the point to respond by proposing that the adjectives are multiply polysemous, for then they would need different readings for practically every noun they modify. A better solution is to make the adjectives closer to monosemous, but to adopt a richer notion of adjective-noun modification. For example, suppose fast simply pertains to the rate of a process (relative to some standard for that process). If it modifies a noun that denotes a process, such as waltz, then it can be composed with it by simple composition. But as it stands, it cannot pertain to a typist (a person, not a process) or a road (an object, not a process). The principles of composition must therefore find a process for fast to modify, and they will look for such a process within the qualia structure of the host noun. The qualia structure of typist specifies most prominently that such a person types; hence this is the most prominent activity to which fast can pertain. Similarly, the qualia structure of road specifies that it is an object over which one travels; thus traveling becomes the activity to which fast pertains.
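A companion sketch in the same invented style as the one in section 3.4.2 can illustrate the difference in mechanism: here the quale is not an argument-filler but the host to which the adjective's content attaches.

    # Sketch: 'fast' pertains to a process; if the noun denotes no
    # process, composition finds one in the noun's qualia structure.
    LEXICON = {
        "waltz":  {"kind": "process"},
        "typist": {"kind": "person", "telic": "TYPE"},
        "road":   {"kind": "object", "telic": "TRAVEL-ON"},
        "rock":   {"kind": "object"},
    }

    def modify_fast(noun):
        entry = LEXICON[noun]
        if entry["kind"] == "process":
            return ("FAST", noun.upper())       # simple composition
        if "telic" in entry:                    # host found in qualia
            return (noun.upper(), ("FAST", entry["telic"]))
        return None                             # no process available: anomaly

    print(modify_fast("waltz"))   # ('FAST', 'WALTZ')
    print(modify_fast("typist"))  # ('TYPIST', ('FAST', 'TYPE'))
    print(modify_fast("rock"))    # None: no host process in this toy entry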
Similarly, sad pertains to a person's emotional experience. It therefore can modify woman by simple composition, but not event or movie. Rather, it must pertain to the emotional experience of characters found in the qualia structure of event and movie, as reflected in the paraphrases in (32). Good evaluates an object in its capacity to serve some function; in the absence of a specified function (e.g. This is a good knife for throwing), the default function is chosen from the specification of proper function in the qualia structure of the noun. (Katz (1966) analyzes (33) similarly, without drawing the general conclusion about adjective modification pursued here.)
This use of qualia structure differs from that in section 3.4.2. There, a characteristic activity was copied from qualia structure to serve as a coercion function in argument structure; one enjoys reading the book, for instance. In this case, an element of qualia structure is instead used as host for the content of the adjective. The fact that similar elements of the noun emerge in both cases, for instance characteristic activities, suggests that qualia structure has an independently motivated existence.
One can imagine two alternatives to admitting enriched composition in these examples. As mentioned above, one would be to claim that each of these adjectives is polysemous. The difficulty with this (as pointed out by Katz (1966)) is that an adjective such as good would need as many different readings as there are functions for objects. The other alternative would be simply to deny the phenomenon, to claim that a fast road simply is 'a road that is fast', and to further claim that the speed is attributed to travel on the road as a matter of pragmatics - "world knowledge of roads" or the like. I take the latter position to be an assertion that pragmatics is unconstrained and autonomous from linguistic semantics. Yet the fact that similar bits of "world knowledge" play various roles in understanding various constructions suggests that something systematic and constrained is going on. I wish the theory of conceptual structure and of conceptual composition to be able to characterize this systematicity instead of ignoring it.
3.5.2 Adjectives That Semantically Subordinate Their Nouns
As seen in (34), a well-known class of adjectives fails altogether to make sense in paraphrases of the sort in (31)-(33). More appropriate paraphrases look like those in (35).⁸

(34) a. a fake gun ≠ something that is a gun and that is fake
b. an alleged killer ≠ someone who is a killer and who is alleged
c. a toy horse ≠ something that is a horse and that is a toy

(35) a. a fake gun = something that is intended to make people think it is a gun
b. an alleged killer = someone who is alleged to be a killer
c. a toy horse = something whose function in play is to simulate a horse

The details of the paraphrases in (35) are not crucial. What is important is that the noun does not appear in the usual frame 'something that is an N and . . .', but rather inside a clause whose content is determined by the adjective (e.g. fake N = 'something that is intended to make people think it is N'). Thus these adjectives compose with nouns in an unusual way: syntactically they are adjuncts, but semantically they are heads, and the N functions as an argument. In other words, we have two quite different principles of composition for the very same syntactic configuration.
When faced with two distinct ways of composing A-N combinations, someone who advocates (or, more often, presupposes) simple composition will conclude that two distinct syntactic structures give rise to these interpretations. In particular, one might want to claim that in the latter form of composition, the adjective is actually the syntactic head as well. However, I see no evidence internal to syntax to substantiate such a claim: it is driven only by the assumption that transparent composition of the purest sort is the only possible realization of the syntax-semantics relationship. In the present approach, the choice of means of combination is driven by the LCS of the adjective: if it is a normal attributive adjective, it will integrate with the qualia structure of the noun; if it is a fake-type adjective, it will take the noun as an argument. Such LCS-driven choice is in line with hypothesis (2b).
Interestingly, the adjective good is polysemous (as are other evaluative adjectives such as excellent, terrific, lousy). One reading, shown in (33), was already discussed in section 3.5.1. However, as pointed out by Aronoff (1980), another reading emerges in sentences like This stump makes a good chair. The presupposition is that the stump is not a chair, but it is being used as one. That is, makes a good N means something like 'adequately fulfills the normal function of an N (even though it isn't one)'. This form of composition thus parallels the fake-type adjectives. Notice that the criteria placed on a good knife that is a knife (composition as in (33)) are considerably more stringent than those on a screwdriver that makes a good knife - and a knife can't make a good knife. This shows that the two readings are distinct, though related. (The verb makes, although it forces this reading, is not necessary: This stump is a good chair is perfectly acceptable as well.)
Notice that this type of composition also invokes the qualia structure of the noun. The qualities that the stump has that make it a good chair depend on the proper function of chair. Hence, as in the other reading of good, the proper function of the noun must be pulled out of its qualia structure.
3.5.3 Modification That Coerces the Reading of the Head Noun
Consider modification by the adjective wooden. If applied to some physical
object X, it undergoes simple composition to create a reading 'X made
of wood' (wooden spoon = 'spoon made of wood'). However, suppose we
have a combination like wooden turtle. Turtles are not made of wood,
they are made of flesh and blood. So simple composition of this phrase
yields an anomaly, just like, say, *wooden chalk. However, wooden turtle
turns out to be acceptable because we understand turtle to denote not a
real turtle but a statue or replica of a turtle. Notice that wooden itself does
not provide the information that the object is a statue or replica; otherwise
wooden spoon would have the same interpretation, which of course it
does not. And *wooden chalk is bad presumably because we don't normally
conjure up the idea of models of pieces of chalk. (Perhaps in the
context of a dollhouse it would be appropriate.)
The possibility of understanding turtle to mean 'replica of a turtle', of
course, is now not just a matter of "convention" or pragmatics; rather, it
is provided explicitly by the reference transfer function (21a). (21a) was
proposed as a principle to make linguistic expression compatible with
contextual understanding, including nonlinguistic context. However, here
it is invoked to provide a well-formed interpretation of an otherwise
anomalous phrase. Moreover, the extra material provided by (21a) provides
the host for modification by the adjective, so it cannot be "added
later," "after" simple composition takes place. This coercion of the reading
of the head noun is yet another example where a theory of enriched
interpretation is necessary, where not all semantic information in an
interpretation comes from the LCSs of the constituent words.
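The ordering point, that the transferred reading must be in place before
wooden composes, can be pictured in a small Python sketch (the function
names and the toy lexicon are hypothetical):

    # Sketch: if simple composition is anomalous (turtles are not made of wood),
    # the reference transfer (21a) applies to the noun's reading first, and the
    # adjective then modifies the transferred reading; it cannot be added later.
    FLESH_AND_BLOOD = {"turtle", "horse"}

    def simple_composition(noun: str) -> str:
        if noun in FLESH_AND_BLOOD:
            raise ValueError("anomaly: simple composition fails")
        return f"{noun} made of wood"

    def wooden(noun: str) -> str:
        try:
            return simple_composition(noun)        # wooden spoon
        except ValueError:
            coerced = f"replica of a {noun}"       # coercion via (21a)
            return simple_composition(coerced)     # coerced reading hosts the modifier

    print(wooden("spoon"))   # 'spoon made of wood'
    print(wooden("turtle"))  # 'replica of a turtle made of wood'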
3.6 Summary
We have surveyed a considerable number of situations that appear to call
for enriched principles of composition. Others will emerge in sections 7.7
and 7.8. Across these situations a number of commonalities emerge,
summarized in table 3.1.
Table 3.1
Summary of instances of enriched composition

                                     a    b    c    d
Aspectual coercions                  ✓              ✓
Mass-count coercions                 ✓              ✓
Pictures                             ✓         ✓    ✓
Cars                                 ✓         ✓    ✓
Ham sandwiches                       ✓         ✓    ✓
Ask-/intend-type alternations        ✓              ✓
Begin-/enjoy-type alternations       ✓              ✓
Fast                                 ✓    ✓
Fake                                 ✓    ✓
Wooden                               ✓              ✓

a = Not all semantic information comes from lexical entries
b = Composition guided by internal structure of LCSs
c = Pragmatics invoked in composition
d = Composition involves adding interpolated coercion function
Questions raised by these cases include:
1. What is the overall typology of rules of enriched composition? Can
they insert arbitrary material in arbitrary arrangements, or are there
constraints? Do the similarities among many of these cases suggest the
flavor of tentative constraints?
2. Are these rules subject to linguistic variation?
3. In the discussion above, I have frequently used paraphrase relations to
argue for enriched composition. How valid is this methodology in general?
To what extent can a paraphrase employing simple composition
reliably reveal material concealed by enriched composition?
4. In designing a logical language, it would be bizarre to sprinkle it so
liberally with special-purpose devices. What is it about the design of natural
language that allows us to leave so many idiosyncratic bits of structure
unexpressed, using, as it were, conventionalized abbreviations?
Keeping in mind the pervasive nature of enriched composition and its
implications for the complexity of the SS-CS interface, let us now turn to
the two major phenomena for which LF is supposed to be responsible:
binding and quantification.
3.7 Anaphora
This section will briefly consider several cases where the criteria for
binding cannot be expressed in syntactic structure. However, they can be
expressed in conceptual structure, either because of the internal structure
of LCSs, or because of material supplied by principles of enriched composition,
or because of other semantic distinctions that cannot be reduced to
syntactic distinctions. Thus, I will claim, binding is fundamentally a relation
between conceptual constituents, not between syntactic constituents.
What does this claim mean? The idea is that the traditional notation for
binding, (36a), is not the correct formalization. Rather, (36a) should be
regarded as an abbreviation for the three-part relation (36b).
(36) a. NPi binds [NP, anaphor]i
     b. NPa            [NP, anaphor]b      (SS)
          |                   |
     corresponds to      corresponds to
          |                   |
        [X]^α    binds       [α]           (CS)
That is, the antecedent NP expresses a conceptual structure X, the
anaphor NP expresses a conceptual structure Y, and the actual binding
relation obtains between X and Y. The binding is notated by the Greek
letters: a superscripted Greek letter indicates a binder, and the corresponding
Greek letter within brackets indicates a bindee.
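Read as a data structure, the three-part relation amounts to the following
Python sketch (all names hypothetical): the NPs are linked to CS
constituents by correspondence indices, and the binding relation itself is
checked on the CS side.

    # Sketch of (36b): binding holds between CS constituents; syntax enters only
    # through the correspondence of each NP to its CS constituent.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CS:
        content: str
        binder: Optional[str] = None     # superscripted Greek letter, e.g. "alpha"
        bindee_of: Optional[str] = None  # bracketed Greek letter marks a bindee

    @dataclass
    class NP:
        form: str
        corresponds_to: CS               # the SS-CS correspondence in (36b)

    def binds(np1: NP, np2: NP) -> bool:
        # 'NP1 binds NP2' abbreviates: their CS correspondents carry matching
        # Greek indices, one as binder, one as bindee.
        x, y = np1.corresponds_to, np2.corresponds_to
        return x.binder is not None and x.binder == y.bindee_of

    ringo = NP("Ringo", CS("RINGO", binder="alpha"))
    himself = NP("himself", CS("", bindee_of="alpha"))
    assert binds(ringo, himself)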
Under this conception of binding, what should the counterpart of GB
binding theory look like? If the relation between syntactic anaphors and
their antecedents is mediated by both syntax and conceptual structure, as
shown in (36b), we might expect a "mixed" binding theory that involves
conditions over both structures. More specifically, when the anaphor and
the antecedent are syntactically explicit (as in standard binding examples),
we might expect both syntactic and conceptual conditions to be invoked.
On the other hand, (36b) presents the possibility of anaphoric relations
that are not expressed directly in the syntax, but only implicitly, in the
meaning of the sentence. Under such conditions, we might expect only
conceptual structure conditions to apply. It is conceivable that all apparent
syntactic conditions on binding can be replaced by conceptual structure
conditions (as advocated by Kuno (1987) and Van Hoek (1995),
among others), but I will not explore this stronger hypothesis here.
In order to show that (36b) rather than (36a) is the proper way to conceive
of binding, then, it is necessary to find cases in which syntactic
mechanisms are insufficient to characterize the relation between anaphors
and their antecedents. Here are some.
3.7.1 Binding Inside Lexical Items
What distinguishes the verb buy from the verb obtain? Although there is
no noticeable syntactic difference, X buys Y from Z carries an inference
not shared by X obtains Y from Z, namely that X pays money to Z. This
difference has to come from the LCS of the verb: whereas the LCS of
both verbs contains the information in (37a), buy also contains (37b),
plus further information that does not concern us here. (The argument
does not depend on formalism, so I characterize this information totally
informally.)
(37) a. Y changes possession from Z to X
     b. money changes possession from X to Z
Thus the LCS of buy has to assign two distinct semantic roles (θ-roles) to
the subject and to the object of from.
Notice that these θ-roles are present, implicitly, even when the relevant
argument is not expressed syntactically, as in (38).
(38) a. X bought Y.
     b. Y was bought from Z.
In (38a) we still understand that there is someone (or something such as
a vending machine) from whom Y was obtained, and that this person
received money from X; in (38b) we still understand that there is someone
who obtained Y, and that this person gave money to Z. One could
potentially say that (38a) has a null PP whose object receives these θ-roles,
but accounting syntactically for the absence of from proves tricky. Similarly,
one could potentially say that (38b) has a null underlying subject,
but it is now widely accepted that passives do not have underlying syntactic
subjects. So there are technical difficulties associated with claiming
that such "understood" or "implicit" θ-roles are expressed by null arguments
in syntax (see Jackendoff 1993b for more detailed discussion of
Emonds's (1991) version of this claim).
This leaves us with the problem of how implicit characters are understood
as having multiple θ-roles. In Jackendoff 1990 I proposed that the
LCS of buy encodes the subevents in the way shown in (39). (I omit the
larger relation that unifies these subevents into a transaction rather than
just two random transfers of possession.)
(39) Y changes possession from Z^α to X^β
     money changes possession from β to α
The Greek letters in the second line function as variables, bound to the
superscripts in the first line. Consequently, whoever turns over Y receives
money, and whoever turns over money receives Y, regardless of whether
these characters are mentioned in syntax.
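The force of (39) can be glossed in a short sketch (a hypothetical
encoding): the two subevents share participants through CS-internal
variables, so the inference survives even when the from-phrase is
syntactically absent.

    # Sketch of (39): Greek-letter binding inside the LCS of 'buy'. The seller
    # slot of both subevents is the same CS variable, whether or not a
    # from-phrase appears in the syntax.
    def buy_lcs(x, y, z=None):
        seller = z if z is not None else "<implicit source>"
        return [
            ("change-possession", y, ("from", seller), ("to", x)),
            ("change-possession", "money", ("from", x), ("to", seller)),
        ]

    # 'Bill bought a book.' -- no from-phrase, yet whoever supplies the book
    # is understood to receive the money:
    sub1, sub2 = buy_lcs("Bill", "a book")
    assert sub1[2][1] == sub2[3][1]   # source of Y = recipient of the money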
Such binding of variables in LCS has precisely the effects desired for
standard bound anaphora. For example, Bill bought a book from each
student has the reading 'For each student x, a book changed possession
from x to Bill and money changed possession from Bill to x'. The problem
is that the two occurrences of the variable x are inside the LCS of buy
and never appear as separate syntactic constituents. Thus there is no way
to express their binding at LF, the syntactic level at which binding relations
are supposed to be made explicit. LF thus misses an important
semantic generalization about bound anaphora. On the other hand, at the
level of conceptual structure, these bound variables are explicit, and their
relation looks just like the conceptual structure associated with bound
anaphora: a necessary coreference between two characters with different
θ-roles. Hence conceptual structure, the lower level of (36b), is the
proper level at which to encode the properties of bound anaphora most
generally.
3.7.2 Control Determined by LCS
One of the earliest cited types of evidence that enriched composition is
involved in binding concerns the control of complement subjects, obviously
a case that falls within the general account of binding. In Jackendoff
1972, 1974 I presented evidence that many cases of complement subject
control are governed by the thematic roles of the verb that takes the
complement, where thematic roles are argued to be structural positions
in the verb's LCS. Similar arguments have since been adduced by Cattell
(1984), Williams (1985), Chierchia (1988), Farkas (1988), and Sag and
Pollard (1991).
The original arguments turn on examples like these (where I use traditional
subscripting to indicate coreference):
(40) a. Johnj got [PROj to leave].
     b. Bill got Johnj [PROj to leave].
(41) a. Johnj promised [PROj to leave].
     b. Johnj promised Bill [PROj to leave].
When get switches from intransitive to transitive, control switches from
object to subject; but when promise switches from intransitive to transitive,
control does not shift. In Jackendoff 1972 I observed that the
switch in control with get parallels the switch in the Recipient role in (42),
whereas the lack of switch with promise parallels that with a verb such as
sell.
(42) a. John got a book.           (John is Recipient)
     b. Bill got John a book.      (John is Recipient)
(43) a. John sold a book.          (John is Source)
     b. John sold Bill a book.     (John is Source)
The proposal therefore is that the controller of get is identified with its
Goal role, which switches position depending on transitivity; the controller
of promise is identified with its Source role, which does not switch
position. (The subject is also Agent, of course. I choose Source as the
determining role for reasons to emerge in a moment.)
Verbs that behave like promise are rare in English, including vow and a
couple of others. However, in nominalized form there are some more that
have not been previously pointed out in the literature.
(44) a. Johnj's promise/vow/offer/guarantee/obligation to Bill [PROj to
        leave]
     b. Johnj's agreement/contract with Bill [PROj to leave]
Other than promise and vow, these nominals do not correspond to verbs
that have a transitive syntactic frame, so they have not been recognized as
posing the same problem. In contrast, note that in the syntactically identical
(45), control goes with the object of to, the thematic Goal.
(45) John's order/instructions/encouragement/reminder to Billj [PROj to
     leave]
The need for thematic control is made more evident by the use of these
nominals with light verbs.
(46) a. Johnj gave Bill a promise [PROj to leave].
     b. John got from Billj a promise [PROj to leave].
     c. John gave Billj an order [PROj to leave].
     d. Johnj got from Bill an order [PROj to leave].
Here control switches if either the light verb or the nominal is changed.
The principle involved matches thematic roles between light verb and
nominal. In (46a) the subject of give is Source; therefore the Source of the
promise is the subject of give. Since the Source of promise controls the
complement subject, the controller is John. If the light verb is changed to
get, whose Source is in the object of from, control changes accordingly, as
in (46b). If instead the nominal is changed to order, whose Goal is controller,
then control switches over to the Goal of the verb: the indirect
object of give (46c) and the subject of get (46d).
There are exceptions to the thematic generalization with promise, the
best-known of which is (47).
(47) Billj was promised [PROj to be allowed to leave].
Here control goes with the Goal of promise. However, this case takes us
only deeper into enriched composition, since it occurs only with a passive
complement in which the controller receives permission or the like.
(48) Billj was promised [PROj   to be permitted to leave
                               *to permit Harry to leave
                               *to get permission to leave
                               *to leave the room
                               *to be hit on the head   ].
In other words, control cannot be determined without first composing the
relevant conceptual structure of the complement, which cannot be localized
in a single lexical item or structural position.
More complex cases along similar lines arise with verbs such as ask and
beg.
(49) a. Bill asked/begged Harryj [PROj to leave].
     b. Billj asked/begged Harry [PROj to be permitted to leave].
One might first guess that the difference in control has to do simply with
the change from active to passive in the subordinate clause. But the facts
are more complex. Here are two further examples with passive subordinate
clauses, syntactically parallel to (49b):
(50) a. Bill asked Harryj [PROj to be examined by the doctor].
     b. *Billi asked Harryj [PROi/j to be forced to leave].
In (50a) PRO proves to be Harry, not Bill; (50b) proves to be ungrammatical,
on grounds that seem to be rooted in the semantics of the complement
clause. It appears, then, that the interpretation of PRO is a
complex function of the semantics of the main verb and the subordinate
clause.
It so happens that all the verbs and nominals cited in this subsection
have a semantic requirement that their complements denote a volitional
action, much like the intend-type verbs cited in section 3.4.1. It is therefore
interesting to speculate that part of the solution to the switches of control
in (47)-(50) is that a rule of coercion is operating in the interpretation of
these clauses, parallel to rule (26b). For example, (47) has an approximate
paraphrase, 'Someone promised Bill to bring it about that Bill would be
allowed to leave', and a similar paraphrase is possible for (49b). These
being the two cases in which control is not as it should be, it may be that a
more general solution is revealed more clearly once the presence of the
coerced material is acknowledged. (This is essentially the spirit of Sag and
Pollard's (1991) solution.)
In short, control theory, which is part of binding theory, cannot be
stated without access to the interior structure of lexical entries and to
the composed conceptual structure of subordinate clauses, possibly even
including material contributed by coercion. Under the assumption of syntactic
binding theory (at S-Structure or LF) and the hypothesis of syntactically
transparent semantic composition, this is impossible. A proper
solution requires a binding theory of the general form (36b), where LCSs
are available to binding conditions, and a theory of enriched composition,
where not all semantic information is reflected in syntactic structure.
3.7.3 Ringo Sentences
Section 3.3 has already alluded to a class of examples involving the
interaction of binding with reference transfer. Here is another example.
Suppose I am walking through the wax museum with Ringo Starr, and
we come to the statues of the Beatles, and to my surprise,
(51) Ringo starts undressing himself.
This has a possible reading in which Ringo is undressing the statue. In
other words, under special circumstances, binding theory can apply even
to noncoreferential NPs.
However, as observed above, there is an interesting constraint on this
use. Suppose Ringo stumbles and falls on one of the statues, in fact
(52) Ringo falls on himself.
The intended reading of (52) is perhaps a little hard to get, and some
speakers reject it altogether. But suppose instead that I stumble and bump
against a couple of the statues, knocking them over, so that John falls on
the floor and
(53) *Ringo falls on himself.
This way of arranging binding is totally impossible, far worse than in
(52). But the two are syntactically and even phonologically identical. How
do we account for the difference?
In Jackendoff 1992b I construct an extended argument to show that
there is no reasonable way to create a syntactic difference between (52)
and (53), either at S-Structure or at LF. In particular, severe syntactic
difficulties arise if one supposes that the interpretation of Ringo that
denotes his statue has the syntactic structure [NP statue of [Ringo]] or even
[NP Ø [Ringo]]. Rather, the difference relevant to binding appears only at
the level of conceptual structure, where the reference transfer for naming
statues has inserted the semantic material that differentiates them. In
other words, certain aspects of the binding conditions must be stated over
conceptual structure (the lower relation in (36b)), not syntactic structure;
binding depends on enriched composition.
As it happens, though, the difference in conceptual structure between
(52) and (53) proves to be identical to that between (54a) and (54b).
(55a,b) show the two forms.
(54) a. Ringo fell on the statue of himself.
     b. *The statue of Ringo fell on himself.
(55) a. [FALL ([RINGO]^α, [ON [VISUAL-REPRESENTATION [α]]])]
     b. [FALL ([VISUAL-REPRESENTATION [RINGO]^α], [ON [α]])]
The syntactic difference between (52)-(53) and (54) is that in the latter
case, the function VISUAL-REPRESENTATION is expressed by the
word statue, whereas in the former case it arises through a reference
transfer coercion.
Given that both pairs have (in relevant respects) the same conceptual
structure, the conceptual structure binding condition that differentiates
(52) and (53) (one version of which is proposed in Jackendoff 1992b) can
also predict the difference between these. However, in standard accounts,
(54a) and (54b) are differentiated by syntactic binding conditions. Clearly
a generalization is being missed: two sets of conditions are accounting for
the same phenomenon. Because the syntactic conditions cannot generalize
to (52)-(53), they are the ones that must be eliminated. In short, LF, the
level over which binding conditions are to be stated, is not doing the work
it is supposed to.
To the extent that there are syntactic conditions on anaphora, in the
proposed approach they are stated as part of the correspondence rules
that map anaphoric morphemes into conceptual structure variables. That
is, syntactic conditions on binding appear in rules of the form (56).
(56) If NP1 maps into conceptual structure constituent X^α, then
     [NP2, +anaphor] can map into conceptual structure constituent [α]
     (i.e. NP1 can bind NP2) only if the following conditions obtain:
     a. Conditions on syntactic relation of NP1 and NP2: ...
     b. Conditions on structural relation of X^α and α in conceptual
        structure: ...
In other words, syntactic conditions on binding are not altogether excluded;
it is just that they are part of the interface component rather
than the syntax.
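Formally, a rule of the shape (56) is just a conjunction of an SS-side
condition and a CS-side condition on the correspondence; a sketch follows
(the condition bodies are deliberate stubs, since their content is exactly
(56a,b)):

    # Sketch of (56) as part of the SS-CS interface: an anaphoric NP may map
    # onto a bound CS variable only if both kinds of conditions obtain.
    def syntactic_conditions(np1, np2) -> bool:
        ...  # conditions on the syntactic relation of NP1 and NP2 (56a)
        return True

    def conceptual_conditions(x, alpha) -> bool:
        ...  # conditions on the structural relation of X and alpha in CS (56b)
        return True

    def may_bind(np1, np2, x, alpha) -> bool:
        # NP1 can bind NP2 only if the conditions in (56a) and (56b) both hold
        return syntactic_conditions(np1, np2) and conceptual_conditions(x, alpha)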
3.7.4 Bound Pronouns in Sloppy Identity
The next case, presented more fully in Culicover and Jackendoff 1995,
concerns the binding of pronouns in "sloppy identity" contexts. Consider
(57).
(57) John took his kids to dinner, but
     a. Bill didn't.
     b. Bill would never do so.
     c. nobody else did.
     d. nobody else would ever do so.
These are all subject to the well-known strict versus sloppy identity
ambiguity, whereby (57a), for example, can be interpreted as either 'Bill
didn't take John's kids to dinner' or 'Bill didn't take his own kids to dinner'.
In order to account for this difference, the interpretation of the null
VP must contain some variable or pointer that can be bound either to
John or to Bill. Moreover, this variable behaves like a bound variable, in
that it is bound in the standard way by the quantifier nobody else in
(57c,d). That much is "conceptually necessary."
The question is at what level of representation this binding is represented.
It is not at the phonological interface level, since the relevant
pronouns are not present in the overt utterance. Usually (e.g. Fiengo and
May 1994) some process of "reconstruction" is suggested, whereby the
null VP in (57a,c) and do so in (57b,d) are replaced by take his kids to
dinner at LF. Then his is bound by normal mechanisms of binding.
Chomsky (1993) suggests instead that there is no reconstruction.
Rather, the null VP in (57a), for instance, is to be produced by a deletion
of unstressed take his kids to dinner in the PF component (i.e. in the
course of Spell-Out), but this phrase is present throughout the rest of the
derivation. However, Chomsky's proposal does not address how to deal
with the replacement of take his kids to dinner by do so (or other VP
anaphors like do it and do likewise). Thus the invocation of the PF component
here is ill advised, and we can discard it immediately.
The problem with the reconstruction solution has to do with how the
grammar in general determines the antecedent of "reconstruction." Consider
sentences such as (58a-c), a type discussed by Akmajian (1973).
(58) a. John patted the dog, but Sam did it to the cat.
     b. Mary put the food in the fridge, then Susan did the same thing
        with the beer.
     c. John kissed Mary, and then it happened to Sue too.
There is no ready syntactic expression of the ellipted material. (59) presents
some possibilities, all miserable failures.
(59) a. *John patted the dog, but Sam patted (the dog/it) to the cat.
     b. Mary put the food in the fridge, then Susan put the food/it in
        the fridge with the beer. (wrong meaning)
     c. *John kissed Mary, and then John kissed (Mary/her) happened
        to Sue too.
Akmajian shows that the interpretation of such examples depends on
extracting contrasting foci (and paired contrasting foci such as John/the
dog and Sam/the cat in (58a)), then reconstructing the ellipted material
from the presupposition of the first clause (in (58a), x patted y).
Can this ellipted material be reconstructed at LF? One might suggest,
for instance, that focused expressions are lifted out of their S-Structure
positions to Ā-positions at LF, parallel to the overt movement of topicalized
phrases. Chomsky (1971) anticipates the fundamental difficulty
for such an approach: many more kinds of phrases can be focused than
can be lifted out of syntactic expressions.11 (Most of the examples in (60)
are adapted from Chomsky 1971; capitals indicate focusing stress.)
(60) a. Was he warned to look out for a RED-shirted ex-con?
        *[redi [was he warned to look out for a ti-shirted ex-con]]
     b. Is John certain to WIN the election?
        *[wini [is John certain to ti the election]]
     c. Does Bill eat PORK and shellfish?
        *[porki [does Bill eat ti and shellfish]]
     d. John is neither EASY to please nor EAGER to please.
        *[easyi eagerj [John is neither ti to please nor tj to please]]
        (Note: The conjoined VPs here present unusually severe
        problems of extraction and derived structure.)
     e. John is more concerned with AFfirmation than with
        CONfirmation.
        *[affirmationi [confirmationj [John is more concerned with ti
        than with tj]]]
Suppose one tried to maintain that LF movement is involved in (60a-e).
One would have to claim that constraints on extraction are much
looser in the derivation from S-Structure to LF than in the derivation
from D-Structure to S-Structure. But that would undermine the claim that
LF movement is also an instance of Move α, since it is only through
common constraints that the two can be identified. In turn, the motivation
for LF as a syntactic level in the first place was the extension of Move
α to covert cases. (See Koster 1987 and Berman and Hestvik 1991 for
parallel arguments for other LF phenomena.) In short, either the differentiation
of focus and presupposition is present in conceptual structure
but not in LF, or else LF loses all its original motivation, in which case
the village has been destroyed in order to save it.
To sum up the argument, then:
1. The strict/sloppy identity ambiguity depends on ambiguity of binding;
therefore such ambiguity within ellipsis, as in (57), must be resolved at a
level where the ellipted material is explicit.
2. Because of examples like (58a-c), ellipsis in general must be reconstructed
at a level of representation where focus and presupposition can
be explicitly distinguished.
3. Because of examples like (60a-e), focus and presupposition in general
cannot be distinguished in LF, but only in conceptual structure.
4. Therefore the strict/sloppy identity ambiguity cannot be stated in syntax,
but only in conceptual structure.
The following example puts all these pieces together and highlights the
conclusion:
(61) JOHN wants to ruin his books by smearing PAINT on them, but
     BILL wants to do it with GLUE.
Bill's desire is subject to a strict/sloppy ambiguity: whose books does he
have in mind? In order to recover the relevant pronoun, do it must be
reconstructed. Its reconstruction is something like ruin his books by
smearing y on them, where y is a variable in the presupposition, filled in
the second clause by the focus glue. However, there is no way to produce
ruin his books by smearing y on them in the logical form of the first clause,
because paint cannot be syntactically lifted out of that environment, as we
can see from (62).12
(62) *It is PAINTj that John is ruining his books by smearing tj on them.
In short, there is no way for LF to encode the strict/sloppy ambiguity in
complete generality. Therefore LF, which is supposed to be the locus of
binding relations, cannot do the job it is supposed to. Rather, the phenomena
in question really belong in conceptual structure, a nonsyntactic
representation.
We thus have four distinct cases where aspects of binding theory must
apply to "hidden" elements. In each case these hidden elements can be
expressed only in conceptual structure; they cannot be expressed at a
(narrow) syntactic level of LF, no matter how different from S-Structure.
We conclude that LF cannot be justified on the basis of its supposed
ability to encode syntactic properties relevant to anaphora that are not
already present in S-Structure. Rather, such properties turn out not to be
syntactic at all; the relation "x binds y" is properly stated over conceptual
structure, where the true generalizations can be captured.
3.8 Quantification
The other main function attributed to LF is expressing the structure of
quantification. Of course, the structure of quantification is involved in
inference (the rules of syllogism, for instance), so it is necessarily represented
at conceptual structure, the representation over which rules of
inference are formally defined. The question, then, is whether scope of
quantification needs to be represented structurally in syntax as well.
3.8.1 Quantification into Ellipted Contexts
The theory of quantification has to account for (at least) two things: how
quantifiers bind anaphoric expressions, and how they take scope over
other quantifiers. The evidence of section 3.7 shows that the binding of
anaphoric expressions by quantifiers cannot in general be expressed in
syntax. For instance, (57c) and (57d) contain a quantifier binding a covert
anaphor within an ellipted VP, and we have just seen that such anaphors
cannot in general be reconstructed in syntactic structure; they appear
explicitly only in conceptual structure. To push the point home, we can
construct a parallel to (61) that uses a quantifier.
(63) JOHN is ruining his books by smearing PAINT on them, but
     everyone ELSE is doing it with GLUE.
In general, then, the binding of variables by quantifiers can be expressed
only at conceptual structure.
What about the relative scope of multiple quantifiers? It is easy enough
to extend the previous argument to examples with multiple quantifiers.
(64) a. Some quantifiers in English always take scope over other
        quantifiers, but none in Turkish do that.
     b. Some boys decided to ruin all their books by smearing PAINT on
        them, because none of them had succeeded in doing it with
        GLUE.
In order for (64a) to be interpreted, do that must be reconstructed as
always take scope over other quantifiers, and none must take scope over
always. But we have just seen that reconstruction of do that is accomplished
in general only at conceptual structure, not at a putative syntactic
level of LF. The somewhat baroque but perfectly understandable (64b),
which parallels (61) and (63), amplifies this point. Therefore LF cannot
express all quantifier scope relations, as it is supposed to.
3.8.2 Quantificational Relations That Depend on Internal Structure of
LCS
A more subtle argument concerns the semantic nature of "measuring out"
in examples like (65) (Tenny 1987, 1992; Verkuyl 1972, 1993; Dowty
1991; Krifka 1992; Jackendoff 1996e).
(65) Bill pushed the cart down Scrack St. to Fred's house.
In (65) the position of the cart is said to "measure out" the event. If the
cart's path has a definite spatial endpoint (to Fred's house in (65)), then
the event has a definite temporal endpoint; that is, it is telic and passes the
standard tests for telicity.
(66) Bill pushed the cart down Scrack St. to Fred's house in/*for an hour.
If, on the other hand, the cart's path has no definite endpoint (delete to
Fred's house from (65)), the event has no definite temporal endpoint and
passes the standard tests for atelicity.
(67) Bill pushed the cart down Scrack St. for/*in an hour.
These facts about telicity have typically been the main focus of discussions
of measuring out.13 But they follow from a more general relationship
observed by most of the authors cited above: for each point in time
that the cart is moving down Scrack St. to Fred's house, there is a corresponding
position on its trajectory.14 That is, verbs of motion impose
a quantification-like relationship between the temporal course of the event
and the path of motion: X measures out Y can be analyzed semantically
as 'For each part of X there is a corresponding part of Y'.
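Under that analysis, measuring out is a part-to-part correspondence
between two structured domains; a toy sketch with discretized parts (the
particular parts are hypothetical):

    # Sketch: 'X measures out Y' = for each part of X there is a corresponding
    # part of Y. A definite final part of the path yields a definite temporal
    # endpoint, i.e. telicity.
    times = ["t1", "t2", "t3", "t4"]                      # parts of the event
    path = ["start", "mid1", "mid2", "Fred's house"]      # parts of the trajectory

    correspondence = dict(zip(times, path))               # part-to-part mapping
    telic = correspondence[times[-1]] == "Fred's house"   # definite endpoint
    assert len(correspondence) == len(times) and telic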
Exactly which constituents of a sentence stand in a measuring-out
relationship depends on the verb.
1. With verbs of motion, the temporal course of the event measures out
the path, as we have just seen.
2. With verbs of performance, the temporal course of the event measures
out the object being performed. For instance, in Sue sang an aria, for each
point in time during the event, there is a corresponding part of the aria
that Sue was singing, and the beginning and the end of the aria correspond
to the beginning and the end of the event respectively.
3. With verbs of extension, a path measures out an object. For instance,
in Highway 95 extends down the East Coast from Maine to Florida, for
each location on the East Coast, there is a corresponding part of Highway
95; and the endpoints of the path, Maine and Florida, contain the endpoints
of the road. In this case time is not involved at all. A parallel two-dimensional
case is verbs of covering: in Mud covered the floor, there is a
bit of mud for each point of floor. (Of the works cited above, these cases
are discussed only in Jackendoff 1996e.)
4. Finally, there are verbs involving objects, paths, and times in which no
measuring out appears at all. For instance, in Bill slowly pointed the gun
toward Harry, the gun changes position over time, but its successive positions
do not correspond to successive locations along a path toward Harry.
In particular, toward Harry with a verb of motion induces atelicity: Bill
pushed the cart toward the house for/*in ten seconds. But Bill slowly
pointed the gun toward Harry may be telic: its endpoint is when the gun is
properly oriented. (Again, these cases are discussed only in Jackendoff
1996e.)
In other words, the verb has to say, as part of its meaning, which arguments
(plus time) measure out which others; the quantificational relationship
involved in measuring out has to be expressed as part of the LCS
of the verb.
But this means that there are quantifier scope relationships that are
imposed by the LCS of a verb. By hypothesis, LF does not express
material interior to the LCS of lexical items. Hence LF does not express
all quantifier scope relationships.15
An advocate of LF might respond that measuring-out relationships are
not standard quantification and so should not be expressed in LF. They
are interior to LCS and are therefore not "aspects of semantic structure
that are expressed syntactically," to repeat May's (1985) terminology. But
such a reply would be too glib. One can only understand which entities
are measured out in a sentence by knowing (1) which semantic arguments
are marked by the verb as measured out, (2) in what syntactic positions
these arguments can be expressed, and (3) what syntactic constituents
occupy these positions. That is, the combinatorial system of syntax is
deeply implicated in expressing the combinatorial semantics of measuring
out.
This argument thus parallels the argument for buy with anaphora
(section 3.7.1). It shows that differences in quantifier scope, if properly
generalized, are also imposed by the LCS of lexical items and therefore
cannot be encoded in the most general fashion at any (narrow) syntactic
level of LF. Combining this with the argument from ellipsis, we conclude
that LF cannot perform the function it was invented to perform, and
therefore there is ultimately no justification for such a level.
3.9 Remarks
The present chapter has examined the claim that the Conceptual Interface
Level in syntax is LF, distinct from S-Structure and invisible to phonology,
which syntactically encodes binding and quantification relationships.
This claim has been defused in two ways (to some degree at least).
First, by showing that the relation of syntax to conceptual structure is
not as straightforward as standardly presumed, I have shown that it is not
so plausible to think of any level of syntax as directly "encoding semantic
phenomena." Second, by looking at nonstandard examples of binding
and quantification, I have shown that no syntactic level can do justice to
the full generality of these phenomena. Rather, responsibility for binding
and quantification lies with conceptual structure and the correspondence
rules linking it with syntactic structure. There therefore appears no advantage
in positing an LF component distinct from S-Structure that is
"closer" to semantics in its hierarchical organization: it simply cannot get
syntax "close" enough to semantics to take over the relevant work.
This is not to say that I have a solution for all the many phenomena for
which LF has been invoked in the literature; these clearly remain to be
dealt with. However, these phenomena have been discussed almost exclusively
in the context of GB, a theory that lacks articulated accounts of
conceptual structure and of the interface between syntax and conceptual
structure. Without a clear alternative semantic account of binding and
quantification, it is natural for an advocate of LF to feel justified in
treating them as syntactic. But consider, for instance, the treatment of
languages with in situ wh (e.g., Huang 1982), which has been taken as a
strong argument for LF. The ability of verbs to select for indirect questions
does not depend on whether the wh-word is at the front of the
clause: it depends on the clause's being interpretable as a question, a fact
about conceptual structure.16 In turn, the ability of a wh-word to take
scope over a clause, rendering the clause a question, is indeed parallel to
the ability of a quantifier to take scope over a clause. But if quantifier
scope is a property of conceptual structure rather than syntax, then there
is no problem with treating wh-scope similarly. Hence there is no need for
syntactic LF movement to capture the semantic properties of these languages.17
The challenge that such putative LF phenomena pose to the
present approach, then, is how to formulate a rigorous theory of conceptual
structure that captures the same insights. But there are many different
traditions in semantics that can be drawn upon in approaching this
task.18
Chapter 4
The Lexical Interface
From a formal point of view, the hypothesis of Representational Modularity
claims that the informational architecture of the mind strictly segregates
phonological, syntactic, and conceptual representations from each
other. Each lives in its own module; there can be no "mixed" representations
that are partly phonological and partly syntactic, or partly syntactic
and partly semantic. Rather, all coordination among these representations
is encoded in correspondence rules.
This chapter explores the consequences of this position for the lexical
interface, the means by which lexical entries find their way into sentences.
It also proposes a simple formalization of how the tripartite structures are
coordinated. In the process, important conclusions will be derived about
the PS-SS and SS-CS interfaces as well.
4.1 Lexical Insertion versus Lexical Licensing
A perhaps startling consequence of Representational Modularity is that
there can be no such thing as a rule of lexical insertion. In order to understand
this statement, let us ask, What is lexical insertion supposed to be?
Originally, in Chomsky 1957, lexical items were introduced into syntactic
trees by phrase structure rules like (1) that expanded terminal symbols
into words.
(1) N → dog, cat, banana, ...
In Chomsky 1965, for reasons that need not concern us here, this mechanism
was replaced with a lexicon and a (set of) rule(s) of lexical insertion.
Here is how the process is described in Chomsky 1971, 184:
Let us assume ... that the grammar contains a lexicon, which we take to be a class
of lexical entries each of which specifies the grammatical (i.e. phonological,
semantic, and syntactic) properties of some lexical item.... We may think of each
lexical entry as incorporating a set of transformations[!] that insert the item in
question (that is, the complex of features that constitutes it) in phrase-markers.
Thus
[2] a lexical transformation associated with the lexical item I maps a phrase-marker
P containing a substructure Q into a phrase-marker P' formed by
replacing Q by I.
Unpacking this, suppose the lexical entry for cat is (3a) and we have the
phrase marker (3b) (Chomsky's P). Lexical insertion replaces the terminal
symbol N in (3b) (Chomsky's substructure Q) with (3a), producing the
phrase marker (3c) (Chomsky's P').
(3) a. /kæt/                                   (PS)
       [+N, -V, singular]                      (SS)
       [Thing CAT, TYPE OF ANIMAL, etc.]       (CS)

    b.      NP
           /  \
         Det   N'
          |    |
         the   N

    c.      NP
           /  \
         Det   N'
          |    |
         the   N
               |
             /kæt/
             [+N, -V, singular]
             [Thing CAT, TYPE OF ANIMAL, etc.]
This leads to the familiar conception of the layout of grammar, in which
lexical insertion feeds underlying syntactic form. Let us see how the architecture
of Chomsky's theories of syntax, outlined in figure 4.1, develops
over the years. (I will drop the subscripts on PILss and CILss, because we
are dealing with syntactic structure throughout.)
The constant among all these changes is the position of the lexical
interface. Throughout the years, the lexicon has always been viewed as
(a) Standard Theory (Chomsky 1965)
        Lexicon
           |
        Deep Structure (= CIL) --> CS
           |
    PS <-- Surface Structure (= PIL)

(b) Revised Extended Standard Theory (Chomsky 1975)
        Lexicon
           |
        Deep Structure
           |
    PS <-- Surface Structure (= CIL, PIL) --> CS

(c) Government-Binding Theory (Chomsky 1981)
        Lexicon
           |
        D-Structure
           |
        S-Structure
          /     \
        PF       LF (= CIL) --> CS
    (= PIL? = PS?)

(d) Minimalist Program (Chomsky 1993)
        Lexicon
           |   ... Generalized transformations
        Spell-Out   ... --> ...   LF movement
          /     \
        PF       LF (= CIL) --> CS
    (= PIL? = PS?)

Figure 4.1
The architecture of some of Chomsky's theories
feeding the initial point of the syntactic derivation, and its phonological
and semantic information is interpreted "later." It is this assumption
(assumption 3 of chapter 1) that I want to examine next.
What is a lexical item? As seen in the above quotation, a lexical item is
regarded as a triple of phonological, syntactic, and semantic features (i.e.
as a structure <PS, SS, CS>), listed in long-term memory.
This view of lexical items creates an uncomfortable situation for any of
the layouts of grammar sketched in figure 4.1, one that could have been
noticed early on. The operation of lexical insertion is supposed to insert
lexical items in their entirety into syntactic phrase structures. This means
that the phonological and conceptual structures of lexical items are
dragged through a syntactic derivation, inertly. They become available
only once the derivation crosses the appropriate interface into phonetic
or semantic form.
Alternatively, in the Minimalist Program, a lexical item in its entirety
is said to project initial syntactic structures (through the operation of
Merge). For present purposes this amounts to the same thing as traditional
lexical insertion, for, as mentioned in chapter 2, the operation
Spell-Out is what makes lexical phonology available to further rules. This
phonological information has to be transmitted through the derivation
from the point of lexical insertion; otherwise Spell-Out could not tell, for
instance, whether to spell out a particular noun in the (narrow) syntactic
structure as /kæt/ or /bæt/. Thus the Minimalist Program too drags the
lexical phonology invisibly through the syntax. (Alternatively, people
sometimes advocate a strategy of "going back to the lexicon" at Spell-Out
to retrieve phonological information. See section 4.2 for discussion.)
Such a conception of lexical insertion, however, reduces the much-vaunted
autonomy of syntax essentially to the stipulation that although
lexical, phonological, and semantic information is present in syntactic
trees, syntactic rules can't see it. (As Ivan Sag (personal communication)
has suggested, it is as if the syntax has to lug around two locked suitcases,
one on each shoulder, only to turn them over to other components to be
opened.)
Some colleagues have correctly observed that there is nothing formally
wrong with the traditional view; no empirical difficulties follow. Although
I grant their point, I suspect this defense comes from the habits of the
syntactocentric view. To bring its artificiality more into focus, let us turn
the tables and imagine that lexical items complete with syntax and
semantics were inserted into phonological structure, only to be "interpreted
later" by the syntactic and conceptual components. One could
make this work formally too, but would one want to?
Quite a number of people (e.g. Otero 1976, 1983; Den Besten 1976;
Fiengo 1980; Koster 1987; Di Sciullo and Williams 1987; Anderson 1992;
Halle and Marantz 1993; Büring 1993) have noticed the oddity of traditional
lexical insertion and have proposed to remove phonological information
(and in some versions, semantic information as well) from initial
syntactic structures. On this view, the syntactic derivation carries around
only the lexical information that syntactic rules can access: syntactic features
such as category, person, number, case-marking properties, the
mass/count distinction, and syntactic subcategorization (i.e. the φ-features
of Chomsky 1981). Di Sciullo and Williams (1987, 54) characterize this
approach as follows:
    D-Structure
       |
    S-Structure <-- Lexicon
      /     \
    PF       LF (= CIL) --> CS

Figure 4.2
The architecture of Government-Binding Theory but with late lexical insertion
[W]hat is syntax about? We could say "sentences" in the usual sense, but another
answer could be "sentence forms," where ... syntactic forms are "wordless"
phrase markers. "Sentence," then, as ordinarily understood, is not in the purview
of syntax, although the syntactic forms that syntax is about certainly contribute to
the description of "sentences."
Of course, there comes a point in the derivation where the grammar has
to know, both phonologically and semantically, whether some noun in
the syntactic structure is cat or bat, whose strictly syntactic features are
identical. It therefore eventually becomes necessary to perform some version
of lexical insertion. The distinctive characteristic of all these approaches
is that they move lexical insertion to some later point in the
syntactic derivation. Grafted onto an otherwise unchanged GB theory,
they might look in skeleton something like figure 4.2.²
Although late lexical insertion keeps phonological and semantic features
out of the derivation from D-Structure to S-Structure, it does not
keep them out of syntax entirely; they are still present inertly in S-Structure,
visible only after the derivation has passed through the relevant
interface. However, according to the hypothesis of Representational
Modularity, such "mixed" representations should be impossible. Rather,
phonological, syntactic, and conceptual representations should be strictly
segregated, but coordinated through correspondence rules that constitute
the interfaces.3
The problem this raises for any standard version of lexical insertion is
that a lexical item is by its very nature a "mixed" representation, a
<PS, SS, CS> triple. Therefore it cannot be inserted at any stage of a syntactic
derivation without producing an offending mixed representation.
To put this more graphically, the traditional notation for syntactic trees
(4a) is necessarily a mixed representation, violating Representational
Modularity: the cat, at the bottom, is phonological, not syntactic, information.
(4a) is traditionally taken as an abbreviation for (4b), where the
informal notation the cat has been replaced by an explicit encoding of the
full lexical entries for these items.
(4) a.      NP
           /  \
         Det   N
          |    |
         the  cat

    b.      NP
           /      \
        Det            N
         |             |
       /ðə/          /kæt/
      [+Det]         [+N, -V, count singular]
    [DEFINITE]       [Thing CAT, TYPE OF ANIMAL, etc.]
In a conception of grammar that adopts Representational Modularity as an
organizing principle, things work out differently: a representation like (4b)
is entirely ill formed. Rather, we must replace it with a triple of structures
along the lines shown in (5), each of which contains only features from its
proper vocabulary. (I utilize conceptual structure formalism in the style of
Jackendoff 1990; readers should feel free to substitute their own favorite.)
(5) PS:  [Phrase [Cl ðə]a [Word kæt]b]c
    SS:  [NP [Det]a [N, count sing]b]c
    CS:  [Thing DEFa [Type: CAT]b]c
These three structures do not just exist next to one another. They are
explicitly linked by subscripted indices. The clitic the corresponds to the
Det and the definiteness feature, the word cat corresponds to the N and
the Type-constituent of conceptual structure, and the whole phrase corresponds
to the NP and the whole Thing-constituent.4
If (5) is the proper way of representing the cat, there cannot be a rule
that inserts all aspects of a lexical item in syntactic structure. Rather, the
only part of the lexical item that appears in syntactic structure is its syntactic
features. For example, the word cat is represented formally something
like (6).
(6) PS:  [Word kæt]b
    SS:  [N, count sing]b
    CS:  [Thing TYPE: CAT]b
The proper way to regard (6) is as a small-scale correspondence rule. It
lists a small chunk of phonology (the lexical phonological structure or
LPS), a small chunk of syntax (the lexical syntactic structure or LSS), and
a small chunk of semantics (the lexical conceptual structure or LCS), and
it shows how to line these chunks up when they are independently generated
in the parallel phonological, syntactic, and conceptual derivations.
Lexical items, then, are not "inserted" into syntactic derivations; rather,
they license the correspondence of certain (near-)terminal symbols of
syntactic structure with phonological and conceptual structures.
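On this view a lexical entry like (6) can be rendered as a record of three
small structures that travel together under a shared linking index; a
hypothetical Python rendering:

    # Hypothetical rendering of (6): a lexical item as a small-scale
    # correspondence rule pairing chunks of PS, SS, and CS.
    from dataclasses import dataclass

    @dataclass
    class LexicalItem:
        lps: str     # lexical phonological structure, e.g. /kaet/
        lss: dict    # lexical syntactic structure: category plus features
        lcs: str     # lexical conceptual structure

    cat = LexicalItem(
        lps="kaet",
        lss={"category": "N", "count": True, "number": "sing"},
        lcs="[Thing TYPE: CAT]",
    )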
Lexical correspondence rules are of course not all there is to establishing
a correspondence. For instance, the index c in (5) comes from a larger-scale
correspondence rule that coordinates maximal syntactic phrases
with conceptual constituents. The PS-SS rules (12a,b) and the SS-CS rule
(21) in chapter 2 are schemata for more general larger-scale rules; the
principles of coercion and cocomposition in chapter 3 are more specific
phrasal SS-CS rules; the rules of Argument Fusion and Restrictive Modification
in Jackendoff 1990 (chapter 2) and the principles of argument
structure linking (chapter 11) are further examples of phrasal SS-CS rules.
In short, a lexical item is to be regarded as a correspondence rule, and
the lexicon as a whole is to be regarded as part of the PS-SS and SS-CS
interface modules. On this view, the formal role of lexical items is not that
they are "inserted" into syntactic derivations, but rather that they license
the correspondence of certain (near-)terminal symbols of syntactic structure
with phonological and conceptual structures.5 There is no operation
of insertion, only the satisfaction of constraints. This means there is no
"ordering" of lexical insertion in the syntactic derivation. What it formally
means to say that lexical items are "inserted" at some level of S-Structure
is that the licensing of syntactic terminal symbols by lexical
items is stated over this level of syntax. In turn, what is "carried through"
the syntactic derivation, visible to syntactic rules, is not the whole lexical
item, but only its syntactic features, in this case [N, count sing] and perhaps
its linking subscripts.
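Licensing is then checking rather than inserting; continuing the
LexicalItem sketch above (again hypothetical), a triple of independently
generated fragments is simply tested against the entry:

    # Sketch: lexical licensing as constraint satisfaction. Nothing is inserted;
    # the item licenses a PS chunk, an SS terminal, and a CS chunk standing in
    # correspondence.
    def licenses(item: LexicalItem, ps_chunk: str, ss_node: dict, cs_chunk: str) -> bool:
        return (item.lps == ps_chunk
                and item.lss == ss_node
                and item.lcs == cs_chunk)

    assert licenses(cat, "kaet",
                    {"category": "N", "count": True, "number": "sing"},
                    "[Thing TYPE: CAT]")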
This approach, which might be called lexical licensing, is meant to
replace assumption 3 of chapter 1. Something much like it is found in
the formalisms of HPSG, Tree-Adjoining Grammar, and Construction
Grammar. Interestingly, the view that syntax carries only syntactic features
is also proposed by Halle and Marantz (1993). However, their
approach has a rule called Vocabulary Insertion that has the same effect
as lexical insertion, except that it applies as part of the mapping between
S-Structure and PF in figure 4.2; the idea is that phonological realization
of lexical items best awaits the insertion of syntactic inflection.6 On preliminary
examination, it appears that Halle and Marantz's theory can be
recast in terms of lexical licensing, in particular reinterpreting their claim
of late lexical insertion as a claim that the licensing of the PS-SS correspondence
involves the syntactic level of surface structure. (On the other
hand, it is not clear to me how semantic interpretation takes place in
Halle and Marantz's approach.)
Lexical licensing eliminates an assumption that is tacit in the way conceptual
necessity 3 of chapter 1 is stated: the idea that the computational
system has a distinct interface with the lexicon. On the present view, there
are not three interfaces (a phonological, a semantic, and a lexical one).
Rather, the lexical interface is part of the other two. Notice that, although
a lexical interface is a "conceptual necessity," nothing requires it to be a
separate interface!
Why should one consider replacing lexical insertion with lexical licensing?
For one thing, it should be recalled that no argument has been given
for lexical insertion except that it worked in the Aspects framework;
it is just a 30-year-old tradition. At the time it was developed, symbol
rewriting was the basic operation of formal grammar (assumption 2 of
chapter 1), and lexical licensing would not have made formal sense. Now
that constraint satisfaction is widely acknowledged as a fundamental
operation in grammatical theory, lexical licensing looks much more
natural.
At the same time, I see lexical licensing as embodying a more completely
worked out version of the autonomy thesis, in that it allows us to
strictly segregate information according to levels of representation. Of
course, the traditional architecture in practice enforces this separation of
information (though by stipulation, not on principled grounds). So it is
possible to leave syntactic theory otherwise unaltered, at least for a first
approximation. (In particular, for many purposes we can continue drawing
trees the traditional way, as long as we recognize that the line connecting
N and cat in (4a) is not a domination relation but really an
abbreviation for the linking relations shown in (6).)
4.2 PIL = CIL
Next I want to ask what levels of syntax should be identified as the
Phonological Interface Level (PIL) and the Conceptual Interface Level
(CIL). But I am going to approach the question in a roundabout way,
with a different question: Should PIL and CIL be different levels of syntax,
as is the case in all the diagrams above except (b) of figure 4.1?
I raise the issue this way for an important reason. Lexical items are very
finely individuated in phonology and in semantics, but not in syntax.
Since syntax can see only syntactic features, all the words in (7a) are
syntactically identical, as are those in (7b-d).
(7) a. dog, cat, armadillo
    b. walk, swim, fly
    c. gigantic, slippery, handsome
    d. on, in, near
At the same time, the information that a particular word is cat rather than
dog has to be communicated somehow between phonology and conceptual
structure, via the PS-SS and SS-CS interfaces, so that one can say
what one means. This is accomplished most simply if these interfaces
involve the very same level of syntax, that is, if PIL = CIL. Then it is
possible to check a lexical correspondence among all three representations
simultaneously, in effect going directly from finely individuated lexical
phonological structure /kæt/ to finely individuated lexical conceptual
(a) PIL = CIL

                          syntactic derivation
                              (PIL/CIL)
    /kæt/ <--PS-SS--> [N, count sing] <--SS-CS--> [CAT]
       correspondence rules           correspondence rules

(b) PIL ≠ CIL

              syntactic derivation
          (PIL)                   (CIL)
    /kæt/ <--PS-SS--> [N*, count sing] --> ... --> N** <--SS-CS--> [CAT]
       correspondence rules                correspondence rules

Figure 4.3
Tracking a lexical item through the syntax (a) if PIL = CIL and (b) if PIL ≠ CIL
structure [CAT], at the same time making sure the syntactic structure is a
singular count noun.
What if PIL is not the same level as CIL? Then it is necessary to connect
/kæt/ to some noun N* in PIL, connect N* through the syntactic
derivation to its counterpart N** in CIL, then connect N** to [CAT]. The
problem is that once we are in syntax, we lose track of the fact that N* is
connected to /kæt/. Consequently, when we get to its counterpart N**, we
have no way of knowing that it should be connected to [CAT] rather than
[ARMADILLO]. In other words, if PIL ≠ CIL, some extra means has to
be introduced to track lexical items through the syntactic derivation.
Figure 4.3 sketches the problem.
Let me outline the two ways I can think of to track lexical items
through the syntax. The first way involves giving each lexical item a
unique syntactic index that accompanies it through the derivation. Suppose
the lexical item cat has the index 85. Then /kæt/ maps not just to N
at PIL, but to N85. In turn, at CIL, reference back to the lexicon enables
N85 to map to [CAT] in conceptual structure. This solution works, but the
price is the introduction of this lexical index that serves to finely individuate
items in syntax, a lexical index that syntactic rules themselves cannot
refer to. Since lexical items are already adequately individuated by
their phonology and semantics, this lexical index plays no role in the lexicon
either. Its only function is to make sure lexical identity is preserved
through the syntactic derivation from PIL to CIL. In line with minimalist
assumptions (to use Chomsky' s tur of phrase), we should try to do
without it.
One might suggest that the lexical item's linking index (e.g., the subscripted b in (6)) might serve the purpose of a lexical syntactic index. But the linking index's only proper role is as an interface function: it serves to link syntax to phonology at PIL and to semantics at CIL. If we were also to use it to keep track of lexical items through a syntactic derivation, this would in effect be admitting that all the steps of the derivation between PIL and CIL are also accessible to phonology and semantics: each of these steps can see which phonology and semantics go together. That is, this would indirectly sneak the unwanted richness of mixed representations back into syntax, right after we found a way to eliminate them.
Another way to track lexical items through the derivation is to give an index not to the individual lexical items, but to the lexical nodes in the syntactic tree.7 For example, the N node of PIL matched with /kæt/ might have the index 68. In turn, this index would be identifiable at CIL. However, it does not uniquely identify cat; rather, a well-formedness condition must be imposed on the (PS, SS, CS) triple for a sentence, something like (8).

(8) If the phonological structure of a lexical item W maps to node n at PIL, then the conceptual structure of W must map to node n at CIL.
Again, this would work, but it too invokes a mechanism whose only purpose is to track otherwise unindividuated lexical items through the derivation from PIL to CIL.
Part of the reason for removing lexical phonology and semantics from the syntax is to eliminate all unmotivated syntactic individuation of lexical items. These mechanisms to track items through the derivation are basically tricks to subvert the problems created by such removal. Accordingly, minimalist assumptions demand that we try to do without them.8
In short, I will try to work out a theory in which phonology and semantics interface with syntax at the very same level. This way, the PS-SS and SS-CS interfaces can be checked at the same time, and the fine individuation of lexical items can be regarded as passing directly from phonology to semantics without any syntactic intervention.
I can think of three reasons other than simplicity why this is an attractive theory. First, there is a class of lexical items that have no syntactic
structure. Some of these have only phonology, for instance fiddle-de-dee, tra-la-la, e-i-e-i-o, and ink-a-dink-a-doo. Others have both phonology and semantics (or pragmatics), for instance hello, ouch, wow, yippee, and dammit.9
A syntactician might dismiss these items as "outside language," since they do not participate in the "combinatorial system," except in quotational contexts like (9a), where virtually anything is possible. Note, however, that they do not occur in an environment such as (9b), reserved (in nonteenage dialects) for nonlinguistic expressions.10 This suggests that they are regarded as linguistic items.
(9) a. "Hello," he said.
    b. Then John went, "[belching noise]" / *"Hello."
In fact, these items are made of standard phonological units, they observe normal syllable structure constraints and stress rules, and they undeniably carry some sort of meaning, albeit nonpropositional. Thus the most appropriate place in the mind to house them would seem to be the natural language lexicon, especially under the assumption of Representational Modularity. They are indeed outside the combinatorial system of (narrow) syntax, but not outside language. (A more complex case, arguably of the same sort, is expletive infixation, which also has phonology and semantics/pragmatics but no detectable syntax. See chapter 5.)
Now if the use of lexical items involved inserting them in an X⁰ position in syntactic structure, then dragging them through a derivation until they could be phonologically and semantically interpreted, we could not use these particular items at all: they could never be inserted, except possibly under a totally vacuous syntactic node Exclamation⁰ or the like.
Suppose instead that we adopt lexical licensing theory and treat these peculiar words as defective lexical items: tra-la-la is of the form (PS, 0, 0), and hello is of the form (PS, 0, CS). They therefore can be unified with contexts in which no syntax is required, for example as exclamations and in syntactic environments such as the phrase yummy yummy yummy or "Ouch," he said that place no syntactic constraints on the string in question.11
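In the licensing framework, such defective items need no special machinery; a toy rendering of the triples (with None standing for an absent level, and the conceptual structure of hello merely assumed) makes the point:

```python
# Toy triples for defective lexical items; None marks a missing level.

tra_la_la = {"PS": "/tralala/", "SS": None, "CS": None}
hello = {"PS": "/helo/", "SS": None, "CS": "[GREETING]"}  # CS is a stand-in

def unifies_with(item, context_demands_syntax):
    # An item licenses a context only if the context makes no demands
    # on a level of structure the item lacks.
    return item["SS"] is not None or not context_demands_syntax

assert unifies_with(hello, context_demands_syntax=False)   # "Hello," he said
assert not unifies_with(hello, context_demands_syntax=True)
```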
Next suppose that the PS-SS interface and the SS-CS interface were to access different levels of syntax, as in (b) of figure 4.3. In this case there would be no way to map the phonology of hello into its semantics, since the intermediate levels of syntactic derivation would be absent. By contrast, if PIL = CIL, and all the interface conditions are applied at once, the phonology of hello can be mapped into its semantics directly, simply bypassing syntax. That is, on this approach the defective nature of these lexical items follows directly from saying they simply have no syntax; beyond that no extra provision has to be made for them.12
This leads directly to a more crucial case. What happens in language acquisition as the child passes from the one-word stage to the development of syntax?13 Presumably one-word speakers can construct a sound-to-meaning association for their lexical items, but cannot construct syntactic structures with which to combine words meaningfully. (We might think of them as having a "protolanguage" in Bickerton's (1990) sense; see section 1.3.) In a theory where PIL = CIL, we can say that all the words of a one-word speaker behave like hello: there is no syntactic structure that imposes further well-formedness conditions on the pairing of sound and meaning, and that offers opportunities for phrasal modification. The development of syntax, then, can be seen just as the gradual growth of a new component of structure within an already existing framework.
By contrast, in a theory where PIL ≠ CIL, the sound-meaning correspondence has to be mediated through a syntactic derivation. We therefore have to suppose that the one-word speaker either (1) really has a syntax but simply cannot use it or (2) does not have a syntax but undertakes a radical restructuring of linguistic organization when syntax develops. Again, both of these options seem to be rhetorical tricks to avoid unpleasant consequences of an assumed architecture of grammar. The most elegant solution, if possible, is to assume only what is "conceptually necessary": the one-word speaker does not construct syntactic structure, and the acquisition of syntax does not restructure the architecture of the child's previously existing form-meaning correspondences, but simply adds new constraints and possibilities onto them.
A third argument that PIL = CIL comes from the fact that topic and focus in conceptual structure are often marked phonologically by stress and intonation, with no specifically syntactic effects. If PIL ≠ CIL, either (1) topic and focus must be tracked through the syntax by syntactically inert dummy markers (this is the solution in Jackendoff 1972), or (2) an additional interface must be added that bypasses syntactic structure altogether. However, syntactic constructions such as topicalization, left dislocation, and clefting also mark topic and focus, and these redundantly require the appropriate stress and intonation. This means that the syntax of topic and focus must be correlated with both phonology and semantics. Hence the second possibility is unlikely. On the other hand, if PIL = CIL, all three components can cross-check each other simultaneously when necessary.
We thus conclude, tentatively, that PIL = CIL, that is, that phonology and semantics interface with syntax at the same level. This is a major boundary condition on satisfactory theories of the interfaces, one that, as we can see in figure 4.1, has not been explored in the context of Chomskian varieties of generative grammar. In Chomsky's spirit of minimalism, we will tentatively adopt this hypothesis and see what follows.
The first consequence is a simple theory of lexical insertion/licensing. We can regard lexical insertion/licensing as an operation of unification (in the sense of Shieber 1986). However, it differs from standard unification in that an item such as (6) is unified simultaneously with independently generated phonological, syntactic, and conceptual structures, along lines explored by Shieber and Schabes (1991). By virtue of this unification, it contributes its linking indices, which are not generated in the independent derivations. It thereby helps establish the correspondence among the three structures.
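As a rough illustration of how such simultaneous unification might go, here is a schematic Python sketch; the toy dictionaries stand in for real phonological, syntactic, and conceptual structures, and none of the names are intended as formal detail.

```python
# Schematic lexical licensing as simultaneous unification: the item's
# content must match each independently generated structure, and in
# matching it contributes its linking index.

def unify(generated, entry_level, index):
    # entry_level: dict from structural positions to required content.
    merged = dict(generated)
    for position, content in entry_level.items():
        if generated.get(position) != content:
            return None  # mismatch: unification fails
        merged[position] = (content, index)  # add the linking index
    return merged

def license(item, index, ps, ss, cs):
    # PIL = CIL: all three unifications are checked at the same time;
    # if any fails, there is no complete correspondence.
    results = [unify(ps, item["PS"], index),
               unify(ss, item["SS"], index),
               unify(cs, item["CS"], index)]
    return results if all(r is not None for r in results) else None
```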
To illustrate, suppose the three independent sets of formation rules create the structures in (10).

(10) [diagram: a phonological Phrase consisting of a Clitic (ðə) and a Word (kæt); a syntactic NP consisting of a Det and an N marked singular and count; and a conceptual structure [OF [TYPE: CAT]] carrying the feature [DEF], none of them yet linked by indices]
Simultaneous unification of the lexical item cat (shown in (6)) with all three structures will have the effect of adding indices to produce (11).

(11) [the same three structures, now coindexed: the phonological Word kæt, the syntactic N (singular, count), and the conceptual constituent [TYPE: CAT] all bear the linking index b]
This is on its way to the complete correspondence among these structures shown in (5); other linking indices will come from the lexical item the and from larger-scale (phrasal) principles of correspondence. On the other hand, suppose the phonological structure in (10) were instead /bæt/; then cat could not unify with it and the derivation would crash for lack of complete correspondence.
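Continuing the sketch above, the licensing of (10) and the crash with /bæt/ come out as follows (the toy structures are again purely illustrative):

```python
# The toy entry for 'cat', to be indexed b at all three levels.
cat = {"PS": {"Wd": "kaet"},
       "SS": {"N": "sing count"},
       "CS": {"TYPE": "CAT"}}

ps = {"Cl": "dhe", "Wd": "kaet"}        # independently generated, as in (10)
ss = {"Det": "def", "N": "sing count"}
cs = {"DEF": "+", "TYPE": "CAT"}

assert license(cat, "b", ps, ss, cs) is not None   # indices added, as in (11)
ps_bad = {"Cl": "dhe", "Wd": "baet"}               # /baet/ instead of /kaet/
assert license(cat, "b", ps_bad, ss, cs) is None   # the derivation crashes
```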
The present proposal, then, is an alternative to assumption 2 of chapter 1, the assumption that the fundamental operation of the grammar is substitution of one symbol (or symbol complex) for another, and in particular that lexical insertion is a substitution operation.
Note, incidentally, that the lexically listed phonological structure may lack syllabification and stress, as is generally assumed in the phonological literature. It can still unify with a syllabified and stressed phonological structure; the grammar need not do syllabification and stress "later," or "after" lexical insertion. In fact, a monostratal theory of phonology, in which surface output (PF) does not necessarily correspond to lexically listed phonology (such as Optimality Theory; Prince and Smolensky 1993), could be accommodated by relaxing the conditions on unification, so that less than a perfect fit is necessary for its satisfaction. We will take this issue up briefly in section 6.2.
Alternatively, the present approach is also consistent with traditional phonological derivations, which account for the disparity between lexical phonological form and PF by successive steps of altering phonological structures. Now, one might think that the argument that PIL = CIL in syntax could be extended to a parallel argument that PF = Syntactic Interface Level in phonology, arguing against traditional derivations. But such would not be the case. The argument that PIL = CIL depends on the need for the lexicon to link phonological structure, syntactic structure, and conceptual structure simultaneously. An analogous argument in phonology would push matters one level of representation toward the periphery: it would depend on a need for the lexicon to link auditory-motor information, phonological structure, and syntactic structure simultaneously. But (as far as I know) this need simply does not exist: the lexicon says nothing whatsoever about auditory and motor information. So there is a way that phonology is potentially different from syntax, as Bromberger and Halle (1989) have asserted. Notice, however, that I say "potentially": in fact the architecture proposed here is neutral between derivational and monostratal phonology. (Another alternative is to have two or three discrete levels of phonology related by correspondence rules; this seems to be the spirit of Goldsmith's (1993) "harmonic phonology.")
If indeed PIL = CIL, there are consequences for Chomsky's (1993, 1995) principle of Full Interpretation (FI). As mentioned in chapter 2, the idea behind FI is that the interface levels contain all and only elements that are germane to the interface; for example, PIL contains no extraneous phonological segments. However, such a principle is impossible if PIL = CIL, since in general the syntactic information relevant to phonology is not identical with that relevant to semantics. The proper way of stating the principle, then, is that PIL/CIL contains all syntactic elements accessible to PS-SS and SS-CS correspondence rules, and that those rules must take into account all elements that are "visible" to them.
This version of FI does not preclude the existence of syntactic features that are present just for purposes of syntactic well-formedness, for instance meaningless agreement features. A great deal of the machinery in Chomsky 1995 is developed just to eliminate such formal features from LF, in order to satisfy Chomsky's version of FI. In the present light such machinery seems pointless.
4.3 PIL and CIL Are at S-Structure
Having established that PIL and CIL are the same level of syntax, we now return to the question that started the previous section: What level of syntactic structure should be identified as PIL/CIL?
We know that the phonological interface has to be a level at which syntactic rules have already attached case endings, number agreement, and so forth, so that these can be phonologically realized. It also has to be after all movement rules, so that the proper linear order of lexical items is established. PIL therefore has to be at least as late as S-Structure.
Can PIL be any later than S-Structure? In GB, the derivation diverges after S-Structure; the "PF component" derives syntactic structures that are accessible to phonological but not semantic interpretation. The PS-SS interface level might therefore be somewhere down inside the PF component (as in Halle and Marantz 1993). However, the present hypothesis denies this possibility: whatever syntactic structure is accessible to the phonological interface must also be simultaneously accessible to the conceptual interface. In other words, PIL can be no later than S-Structure. It therefore must be precisely S-Structure.
The immediate consequence is that there can be no syntactic "PF component" of the sort assumed in GB, that is, a set of (narrow) syntactic rules that apply after S-Structure and feed the phonological interface. I do not find this a very radical conclusion, given that in a decade and a half of GB research, no rules of the PF component have been established decisively enough to make them part of the general canon. In particular, Newmeyer (1988) points out that so-called minor movement rules and stylistic rules, often assumed to be part of the PF component, all either feed Move α or affect binding conditions, and therefore must precede S-Structure. Chomsky (1993) suggests that certain processes of ellipsis are part of the PF component; section 3.7.4 has shown that they are not.14
The more radical conclusion concerns the Conceptual Interface Level, which under the present hypothesis must also be S-Structure. Just as there can be no syntactic structures accessible to phonology that are not accessible to semantics, so there can be no syntactic structures accessible to semantics that are not accessible to phonology. There therefore can be no syntactic "LF component" (i.e., no "covert component" in the sense of Chomsky 1995).
Chapter 3 has already discussed many independent reasons why a "covert" syntax cannot get close enough to semantics to capture the generalizations attributed to LF. Section 4.4 will take up one further aspect of the syntax-semantics interface. Here, let me sum up the conclusions of this section by means of a diagram of the architecture of the grammar, figure 4.4.

Figure 4.4
The architecture of "neominimalist grammar": the tripartite parallel architecture with lexical licensing
[diagram: D-Structure feeds S-Structure (= PIL/CIL); PS is linked to S-Structure by the PS-SS correspondence rules, and CS is linked to it by the SS-CS correspondence rules, with the lexicon feeding both sets of correspondence rules]

Notice how similar this is to the Revised Extended Standard Theory (REST) (part (b) of figure 4.1). The one mistake made in REST was leaving lexical insertion where it was in the Standard Theory, instead of moving it to S-Structure along with CIL (though section 4.1 mentioned some researchers who saw this possibility). There is a historical reason for this oversight, to which I will return in section 4.4.
The hypothesis that PIL = CIL is also consistent with Chomsky's approach of 1957, which derives surface syntactic structure without using a distinct level of underlying structure, but instead uses generalized transformations to create complex structures. (A similar approach is found in Tree-Adjoining Grammar (Joshi 1987; Kroch 1987).) All that is necessary for phonological and semantic interpretation is that the tree have its complete form at S-Structure. Indeed, this hypothesis also permits "monostratal" theories of syntax such as LFG,15 GPSG, HPSG, Categorial Grammar, and Construction Grammar. In such theories, figure 4.4 is simplified still further by eliminating D-Structure, and "base-generating" S-Structure instead. In fact, the possibility of base-generating S-Structures was noticed within the Chomskian paradigm by Koster (1978, 1987) and Brame (1978) (however, neither of these authors addressed the consequences for lexical insertion). As I said at the outset, I will remain agnostic here about whether an independent level of D-Structure is necessary; I'll leave it in just in case.
A variant minimalist proposal has recently been offered by Brody (1995), who claims that the only syntactic level is what he calls Lexico-Logical Form (LLF). LLF is "base-generated"; it is the level at which lexical insertion takes place; it also functions as CIL in present terms. In Brody's theory, there is no syntactic Move α, overt or covert; Brody argues that the effects of Move α are redundant with constraints on chains. LLF is also the input to Spell-Out, hence in present terms PIL. Thus in many respects his architecture is similar to the present one; it appears that it could readily be adapted to make use of lexical licensing instead of his more traditional lexical insertion.
4.4 Checking Argument Structure
Why has lexical insertion always applied in D-Structure? Since the earliest days of generative grammar, deep structures were postulated in order to explicitly encode the fact that the "surface subject" of a passive is "understood as really the object" and that a wh-phrase at the front of a sentence is "understood as" having some other underlying grammatical relation. In turn, it was recognized that the underlying grammatical relation is crucial to deriving the semantic argument structure: who did what to whom. In the Standard Theory, where deep structure was equated with CIL, all this looked very natural.
The discovery that some aspects of semantics such as anaphora and quantifier scope could not be encoded in deep structure led to the early EST, with multiple CILs (Chomsky 1971; Jackendoff 1972). In this theory, deep structure represented those aspects of syntactic structure that encode grammatical relations (i.e., thematic roles), and surface structure represented the rest. Here it still made sense to insert lexical items in deep structures, since this was the proper place to check subcategorization and read off thematic roles.
However, the introduction of trace theory in the mid-1970s provided another possibility: that argument structure conditions can be checked after movement, by referring to the traces left behind. In other words, D-Structure checking is now not "conceptually necessary." And in fact GB claims that LF is the sole conceptual interface level, which requires that underlying grammatical relations be recovered via traces.16
Traces are not "conceptually necessary," of course, but let us join Chomsky in assuming them. Then it is possible to recover the "understood" grammatical relations of phrases in terms of the chains of traces they leave behind them as they move through the derivation: the bottom of the chain marks the underlying position. The principle can be stated as follows:
(12) Recovery of Underlying Grammatical Relations (RUGR)
Given two chains CH₁ = [α₁, . . . , αₙ] and CH₂ = [β₁, . . . , βₘ], where α₁ and β₁ contain lexical material and all other elements are traces of α₁ and β₁ respectively:
CH₁ subcategorizes/s-selects/θ-marks CH₂ if α₁ lexically subcategorizes/s-selects/θ-marks β₁ in some context C, and αₙ and βₘ satisfy context C.
RUGR applies at S-Structure to provide all the lexically imposed relations normally assigned to D-Structure. Should it look a little clunky (and therefore unworthy of our minimalist assumptions), it is worth remembering that in GB and the Minimalist Program, where semantic interpretation is performed at LF, verbs and their arguments are not in their D-Structure positions either. Thus GB and the Minimalist Program require essentially the same principle in order to integrate the interpretations of heads with their arguments.17 (RUGR might be thought of as the basis for "reconstruction" effects as well, a possibility I will not explore here.)
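In procedural terms, RUGR amounts to a check over pairs of chains. The following sketch is meant only to make the statement in (12) concrete; the chains are toys, and "context" is deliberately reduced to a required pair of foot positions.

```python
# Toy version of RUGR (12): a chain is a list of positions; its head
# carries the lexical material, its foot is the most deeply embedded
# trace. Context C is modeled as a required pair of foot positions.

def rugr_selects(ch1, ch2, context):
    head1, head2 = ch1[0], ch2[0]            # alpha_1 and beta_1
    feet = (ch1[-1]["pos"], ch2[-1]["pos"])  # alpha_n and beta_m
    return head2["cat"] in head1["selects"] and feet == context

# "Which cat was seen t": the wh-phrase's foot sits in object position,
# so the verb's selectional requirement is satisfied at S-Structure.
seen = [{"pos": "V", "cat": "V", "selects": {"NP"}}]
which_cat = [{"pos": "SpecCP", "cat": "NP"},   # the lexical material
             {"pos": "object", "cat": "NP"}]   # its trace, the foot
assert rugr_selects(seen, which_cat, ("V", "object"))
```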
If lexical structural relations are imposed on chains at S-Structure rather than at D-Structure, we can immediately eliminate what I find one of the more curious artifacts of GB, the Projection Principle (which Chomsky (1995) also rejects). The Projection Principle stipulates that lexically imposed structural relations (e.g., direct object position for a transitive verb) are present at all levels of syntax, whether filled by lexical material or a trace. The reason the Projection Principle is necessary is to make sure that lexical constraints imposed on D-Structure are recoverable at LF, the semantic interface.
In the present approach, lexical constraints on syntax (e.g., subcategorization and quite possibly "quirky" morphological case marking) are imposed by RUGR at S-Structure, where the lexicon interfaces with syntax. In addition, since the lexicon is part of the SS-CS interface, RUGR also imposes at S-Structure those lexical constraints on semantic combination that involve recovery of grammatical relations (e.g., s-selection, θ-marking). Thus there is no need to preserve throughout the derivation all the lexically imposed structural relations. They need to be invoked only at the single level of S-Structure.
In this light, the Projection Principle reveals itself as a way to drag through the derivation not only the semantic information inserted at D-Structure but also whatever syntactic structure reflects that semantic information, so that semantic interpretation can be performed at LF. The present approach, where the lexical and semantic interfaces are at the very same level of syntax, allows us to dispense with such stipulation.
In fact, in the present theory it doesn't matter whether chains are built by movement, by base-generation of traces in S-Structure (Koster 1978, 1987; Brody 1995), or by a single operation Form Chain (Chomsky 1993, 15). Such constraints as the Empty Category Principle, normally taken to be constraints on movement, then can be treated instead as part of the interpretation or licensing of chains, along lines suggested by Rothstein (1991) (see also Bouchard 1995).19
4.5 Remarks on Processing
Ultimately, what does the language processor have to do?20 In speech perception, it has to map an auditory signal into an intended (contextualized) meaning (i.e., a conceptual structure); in speech production, it has to map an intended meaning into a set of motor instructions. The hypothesis of Representational Modularity suggests that the only way this can be accomplished is via the intermediate levels of representation, namely phonology and syntax. Moreover, the principles by which the processor is constrained to carry out the mapping are precisely the principles expressed in the formal grammar as correspondence rules. In other words, at least this part of the grammar (the part that links auditory and motor signals to phonology, phonology to syntax, and syntax to meaning) should have a fairly direct processing counterpart.
It is less clear how the generative grammars for each individual level of representation are realized in processing. As in the syntactocentric model, we certainly should reject the view that active generation is going on. It makes little sense to think of randomly generating a phonology, a syntax, and a meaning, and then seeing if you can match them up: "Shucks! The derivation crashed! Oh, well, I'll try again . . . ."
Looking to the general case of Representational Modularity (figures 2.2 and 2.3), we see that any single level of representation typically may receive fragmentary input from a number of distinct sources. Its job, then, is to integrate them into a unified form so as to be able to pass a single more complete result on to the next level along the information pathway. So perhaps the best way to view the individual generative grammars is as providing a repertoire for integrating and filling out fragmentary information coming from other representations via correspondence rules.
Let's see, very informally, how this might work out in practice. Let's think of linguistic working memory as containing a number of "blackboards," each of which can be written on in only one format (i.e., one level of representation). Consider the syntax "blackboard." An interface processor whose output is syntax (e.g., the PS-to-SS processor) can be thought of as an agent that writes on the syntax blackboard; an interface processor whose input is syntax (e.g., the SS-to-CS processor) can be thought of as an agent that reads whatever happens to be written on the syntax blackboard at the moment. The syntax module proper can be thought of as an integrative agent that does the housekeeping for the syntax blackboard, one that tries to make fragments written by other agents into a coherent whole. It is this integrative processor that makes it possible for multiple inputs from different agents (or fragmentary inputs from a single agent) to coalesce into a unified representation. (Similar constraints obtain, of course, for the phonology and conceptual blackboards.)21
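The division of labor can be caricatured in a few lines of Python; the blackboard, its writers and readers, and the integrative agent below are hypothetical stand-ins for the real processors, not a model of them.

```python
# A caricature of the blackboard picture: interface processors write
# and read fragments; the module proper integrates what is written.

class Blackboard:
    def __init__(self, level):
        self.level = level       # e.g. "phonology", "syntax", "conceptual"
        self.fragments = []

    def write(self, fragment):   # used by processors whose output is
        self.fragments.append(fragment)  # this level of representation

    def read(self):              # used by processors whose input it is
        return list(self.fragments)

def integrate(board, well_formed):
    # The integrative agent: discard fragments that cannot be made into
    # a coherent whole at this level.
    board.fragments = [f for f in board.fragments if well_formed(f)]

syntax = Blackboard("syntax")
syntax.write(["Art", "Adj"])        # one candidate parse fragment
syntax.write(["Art", "Art", "N"])   # another, with two articles in a row
integrate(syntax, lambda f: all(not (a == b == "Art")
                                for a, b in zip(f, f[1:])))
assert syntax.read() == [["Art", "Adj"]]
```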
Let's walk through a bit of speech perception, looking at the logic of the situation. Suppose the auditory-to-phonological processor, in response to a speech signal, dumps a string of phonetic information onto the phonology blackboard. The immediate response of the phonology module is to try to break this string into words in order to make it phonologically well formed. It sends a call to the lexicon: "Do you know of any words that sound like this?" The lexicon, by assumption, is dumb and (as Fodor (1983) tellingly points out) must be prepared to deal with the unexpected, so it just sends back all candidates that have phonological structure compatible with the sample structure sent it. (This is consistent with the well-known literature on lexical priming: Swinney 1979; Tanenhaus, Leiman, and Seidenberg 1979.)
But it doesn't just write these candidates on the phonology blackboard. In fact, the syntactic and conceptual parts of words can't be written on the phonological blackboard; they're in the wrong format. Rather, the lexicon simultaneously sends the syntactic and conceptual parts of the candidate items to the syntactic and conceptual blackboards respectively, stimulating those modules to get busy. Meanwhile, the phonology-to-syntax interface reads information about the relative order of the words on the phonology blackboard and writes this on the syntax blackboard. At this point the syntax module has the information usually assumed available to a parser: candidate words and their order.
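The "dumb lexicon" step can be sketched the same way; the entries below are hypothetical, and the point is only that lookup returns every phonologically compatible candidate, with no filtering by syntax or context.

```python
# Toy lexical lookup: all candidates whose phonology is compatible
# with the sample are returned at once, as in the priming literature.

LEXICON = [
    {"ps": "aparent", "ss": "Adj", "cs": "[APPARENT]"},
    {"ps": "a", "ss": "Art", "cs": "[INDEF]"},
    {"ps": "parent", "ss": "N", "cs": "[PARENT]"},
    {"ps": "dog", "ss": "N", "cs": "[DOG]"},
]

def lookup(sample):
    # The lexicon is dumb: no syntactic or contextual filtering, just
    # phonological compatibility with the sample string.
    return [entry for entry in LEXICON if entry["ps"] in sample]

# /aparent/ activates 'apparent' and also 'a' + 'parent'; their
# syntactic and conceptual parts go off to the other blackboards.
assert [e["ps"] for e in lookup("aparent")] == ["aparent", "a", "parent"]
```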
Suppose the phonological string in question is /əpɛrənt/. The lexicon offers the two possibilities in (13).

(13) a. Phonology: [apparent]a
        Syntax: Adja
     b. Phonology: [a]a [parent]b
        Syntax: Arta Nb
Suppose the immediately preceding context is the string /ðiy/. The resulting possibilities are (14a,b).

(14) a. Phonology: [the]a [apparent]b
        Syntax: Arta Adjb
     b. Phonology: [the]a [a]b [parent]c
        Syntax: Arta Artb Nc
Both of these are phonologically well formed, so the phonology module accepts them. However, (14b) is syntactically ill formed, since it has two articles in a row. It is therefore rejected by the syntax module.
However, for the syntactic parser to reject this analysis is not enough. It must also send information back to the phonology that this analysis has been rejected, so that the phonological segmentation into the words a parent and the word boundary between them can also be rejected: one does not hear a word boundary here (see chapter 8).
Consider an alternative context for the possibilities in (13).

(15) a. Phonology: [it] [was] [only] [apparent]a
        Syntax: . . . Adja
     b. Phonology: [it] [was] [only] [a]a [parent]b
        Syntax: . . . Arta Nb
In this case the syntax can parse both possibilities, so it doesn't send any rejections back to the phonology. But now suppose the following context is (16).

(16) . . . not real

The combination of (16) with either possibility in (15) is phonologically and syntactically well formed. However, the integrative processor for conceptual structure will require the phrases following only and not to form a contrast. Hence the analysis (15a) + (16) will be conceptually well formed and the analysis (15b) + (16) will not. Thus the conceptual processor sends a signal back to the syntax, rejecting the latter analysis. In turn, the syntax sends a signal back to the phonology, rejecting the analysis with a word boundary. (If the continuation were . . . not a teacher, (15a) would be rejected instead.)
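The two-step rejection can be rendered as a small continuation of the caricature above; the contrast check is a placeholder for whatever the conceptual integrative processor really computes, and all names are hypothetical.

```python
# Toy feedback cascade for (15)-(16): a conceptual rejection propagates
# back to syntax, and from syntax back to phonology.

CONTRASTS = {("APPARENT", "REAL")}   # hypothetical contrast pairs

def conceptually_ok(only_phrase, not_phrase):
    # 'only X ... not Y' requires X and Y to form a salient contrast.
    return (only_phrase, not_phrase) in CONTRASTS

def give_feedback(analysis, not_phrase, syntax_board, phonology_board):
    if not conceptually_ok(analysis["only"], not_phrase):
        syntax_board.remove(analysis["parse"])        # step 1: CS -> syntax
        phonology_board.remove(analysis["segments"])  # step 2: syntax -> PS

syntax_board = ["... Adj", "... Art N"]
phonology_board = ["apparent", "a # parent"]          # '#' = word boundary
analysis_15b = {"only": "A PARENT", "parse": "... Art N",
                "segments": "a # parent"}
give_feedback(analysis_15b, "REAL", syntax_board, phonology_board)
assert syntax_board == ["... Adj"] and phonology_board == ["apparent"]
```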
It is important to notice that the rejection in this case is fundamentally semantic, not syntactic. That is, acceptance or rejection is not based on whether the original string and the continuation have parallel parts of speech. For instance, the continuations . . . not purple and . . . not a concert have grammatical structure parallel to (15a,b) respectively, but make no sense at all. The requirement is for a salient semantic contrast.
Because the rejection of (15b) + (16) requires two successive steps, from conceptual structure to syntax, then from syntax to phonology, we might expect it to take somewhat longer than the rejection of (14b), which takes only one step of feedback. Levelt (1989, 278-281) cites experimental results for a parallel situation in speech production: phonological feedback that causes revision of encoding of conceptual structure into syntax is shown to take longer than syntactic feedback, apparently because the former has to pass through the syntactic level on the way.
The fact that a semantic anomaly may influence syntactic and phonological parsing is no reason to believe that we have to give up the notion of modularity. Strict Fodorian modularity is perhaps threatened, but Fodor's basic notion of specialized, fast, mandatory, domain-specific, and informationally encapsulated processors is not. It is just that we have to conceive of the boundaries and interfaces of modules somewhat differently. In particular, look at the syntactic parser. The only way semantics can affect syntax is by rejecting (or perhaps differentially weighting) a putative syntactic parse in response to the properties of the corresponding conceptual structure. And the only way syntax can affect phonology is by doing the same thing to a putative phonological parse. That is, we are permitting feedback of the sort Fodor rejects, but in a very limited (and as far as I can tell, theoretically and experimentally justified) way. (On the rhetorical issues surrounding "autonomous" versus "interactive" parsers, see Boland and Cutler 1996, where it is argued that the distinction in practice is less sharp than often claimed.)
This story just looks at the bare-bones logic of processing. It still leaves a lot of room for interpretation in terms of implementation. But it seems consistent with my general sense of results in current psycholinguistics, namely that (1) there are discrete stages at which different kinds of representations develop, but (2) different representations can influence each other as soon as requisite information for connecting them is available. So, for example, extralinguistic context cannot influence initial lexical access in speech perception, because there is nothing in the unorganized phonetic input for it to influence. But once lexical access has taken place, lexical conceptual structure can interact with context, because it is in compatible format. In turn, depending on the situation, this interaction may take place soon enough to bias later syntactic parsing, as has been found for example by Trueswell and Tanenhaus (1994).
Similarly, evidence is accumulating (Garrett 1975 and especially Levelt 1989) that speech production invokes a number of discrete processes, each with its own time course. These correspond (at least roughly) to selection of lexical items to express intended meaning, construction of a syntactic structure and lexical morphosyntax, construction of a phonological structure with syllabification and stress, and construction of a motor program.22
In other words, the architecture proposed here leads to a logic for processing that has attractive interactions with current psycholinguistic research.
4.6 The Lexicon in a More General Mental Ecology
One of the hallmarks of language, of course, is the celebrated "arbitrariness of the sign," the fact that a random sequence of phonemes can refer to almost anything. This implies, of course, that there could not be language without a lexicon, a list of the arbitrary matches between sound and meaning (with syntactic properties thrown in for good measure). If we look at the rest of the brain, we do not immediately find anything with these same general properties. Thus the lexicon seems like a major evolutionary innovation, coming as if out of nowhere.
I would like to suggest that although the lexicon is perhaps extreme in its arbitrariness, it is not entirely unique as a mental component. Recall again what a word is: a way of associating units from distinct levels of representation. Now consider what it takes to be able to look at a food and know what it tastes like: a learned association between a visual and a gustatory representation. How many of those do we store? A lot, I should think. From a formal point of view these are associations of representations not unlike those between phonological and conceptual structures. And as far as learning goes, they're almost as arbitrary as word-meaning associations. Mashed potatoes and French vanilla ice cream don't look that different.
Perhaps a more telling example comes from vision. Recovering 3D information is essential for identifying objects and navigating through the world (Marr 1982). Yet it turns out to be computationally almost impossible to construct reliable 3D spatial representations from 2D (or 2.5D) projections. Some recent work (Cavanagh 1991) has suggested therefore that long-term memory stores matchings of 2D-to-3D structure for familiar objects; in other words, that one has a sort of "visual vocabulary" that one uses to help solve the difficulties of mapping from 2D to 3D on line. In this case, the mapping is not entirely arbitrary, but it is so difficult that the brain stores large numbers of predigested mappings to use as shortcuts.
A parallel might be seen also in motor control. Suppose you learn a complex motor movement, such as a fancy dive or a particular bowing pattern on the violin. We might think of this learning as the construction of a predigested mapping from a large-scale unit on the level of "intended actions" to a complex pattern on the level of motor instructions. Again it's not arbitrary, but processing is speeded up by having preassembled units as shortcuts.
A final example emerges from speech production. Levelt and Wheeldon (1994) present evidence that speakers use a "mental syllabary" in mapping from phonological structure to motor instructions. This is a stored mapping of all the possible phonological syllables of one's language (anywhere from a few hundred to a few thousand, depending on the language) into their motor realizations. This is a special case of the previous one: it is not an arbitrary mapping (except to the extent that it incorporates subphonetic variation that may play a role in regional accent), but it provides a systematic set of shortcuts that enables speech to be executed rapidly. It thus plays a role in the phonetic-to-motor interface somewhat parallel to the lexicon in the PS-SS and SS-CS interfaces.
What do these examples show? The lexicon may be unique in its size and its utter arbitrariness, but it is not unique in its formal character. Like these other cases, it is a collection of stored associations among fragments of disparate representations. And its role in processing is roughly the same: it is part of the interface between the two (or more) representations it connects. Although this may all be obvious, I think it's important to point it out, because it helps put the theory of language in a more general cognitive context.
Chapter 5
Lexical Entries, Lexical Rules
What is in the lexicon and how does it get into sentences? So far we have dealt only with the lexical insertion/licensing of morphologically trivial words like cat (chapter 4). Now we turn to morphologically complex elements of various sorts. This chapter will be concerned with sorting out a number of issues in morphology within the present framework, in preparation for subsequent chapters on productive morphology (chapter 6) and on idioms and other fixed expressions (chapter 7).
5.1 Broadening the Conception of the Lexicon
Recall why the lexicon is introduced into the theory. In order to produce the unlimited variety of possible sentences of a language, the language user must have in long-term memory not only the combinatorial rules but also something for them to combine. The lexicon is "conceptually necessary," then, as the long-term memory repository of available pieces of language from which the combinatorial system can build up larger utterances.
In the conception developed in chapter 4, the lexicon is not just a long-term repository of pieces of language. Rather, it is a repository of (PS, SS, CS) triples that enable correspondences to be established between pieces of structure derived by the three independent generative systems. What are these pieces? The usual assumption, I think, is that we have to look for a standard "size" for lexical items, say words. This emerges as assumption 4 of chapter 1, "Lexical insertion/Merge enters words (the X⁰ categories N, V, A, P) into phrase structure," where the standard size is specified in terms of (narrow) syntax. This assumption also appears as a fundamental position in Aronoff 1976, 1994, for instance. Alternatively, the standard size is morphemes, or morphemes and words both (as in Halle 1973, for instance).
But is this realistic? The case of idioms immediately brings us up short, as will be argued in more detail in chapter 7. Though one may be able to convince oneself that kick the bucket is a V⁰ (a syntactic word), it is hard to imagine extending the same treatment to idioms such as NPi's got NPj where proi want(s) proj or The cat's got NP's tongue. Idioms with such complex structure strongly suggest that lexically listed units can be larger than X⁰. This chapter and the next will suggest as well that there are lexical units smaller than X⁰, namely productive derivational and inflectional morphemes. Thus the view to be adopted here is that, although the stereotypical lexical item may be an X⁰ such as cat, lexical entries in fact come in a variety of sizes.
A similar position is advocated by Di Sciullo and Williams (1987), who call lexical entries listemes. They distinguish this notion of "word" from the grammatical (more properly, syntactic) notion of word, which I am here identifying with X⁰ constituents; they wish to show that "the listemes of a language correspond to neither the morphological objects nor the syntactic atoms" (p. 1). Although I agree with them that it is of interest for the theory of grammar to distinguish the internal properties of X⁰ from the properties of phrases, I do not agree that the theory of listemes is "of no interest to the grammarian" (p. 1). As we have seen, it is important to the grammarian to know how listemes find their way into sentences. If listemes come in sizes other than X⁰, then the standard theory of lexical insertion must be reconsidered.1
5.2 Morphosyntax versus Morphophonology
There seems to be a recurrent problem about where morphology fits into the theory of grammar: whether it is or is not distinct from syntax and/or phonology. The tripartite architecture, I think, makes the situation clearer. A lexical item is a triple of phonological, syntactic, and conceptual information that serves as a correspondence rule among the three components. Therefore we expect it to partake of all three. Consequently we should also expect morphological relations, which relate words to words, likewise to partake of all three.
For a simple example, a compound like doghouse requires concatenating the phonology of the two constituent words and stressing the first; it also specifies that the result is syntactically a noun formed of two nouns; and finally, it specifies that the result means 'a house for a dog'. Thus in general we must assume that morphology may be divided into morphophonology, morphosyntax, and morphosemantics, plus the correlations among the three.
Just as we expect bracketing mismatches between phonology and syntax at the phrasal level (chapter 2), so we expect bracketing mismatches within words, for example in atomic scientist.2 (I will concentrate here on morphophonology and morphosyntax, omitting morphosemantics for the time being.)
(1) [double diagram, Morphophonology and Morphosyntax: in the morphophonology, atomic scientist consists of two phonological words, [ætəmɪk] and [sayəntɪst], with the affix -ɪst inside the second; in the coindexed morphosyntax, the affix attaches outside the combination atomic scient-, producing the bracketing mismatch]
(1) is the closest equivalent in the present notation to Sadock's (1991) "autolexical" approach, which might look something like (2).

(2) [Sadock-style double tree: a "morphological structure" above and a "syntactic structure" below, both composed of N and Af nodes, sharing the terminal string atom ic scient ist]
nWJ" C arc two di ferences between the two notations. The most obvious
Î^ t hat the bracketing that Sadock cal l b "morphology" is assigne here
·
Chapter 5
to morphophonology. There is no reason for it to contain syntactic cate­
gory labels-and in fact syntactic category labels dominating segmental
phonology are outlawed by Representational Modularity. Lieber (1 992,
chapters 4 and 5) also makes use of a double tree, the lower tree like
Sadok's, but the upper tree composed of phonological units such as
words, feet, and syllables. This is closer to what is desired here.
The second diference between (I ) and (2) is that in (I) no segental
phonology is attached to the morphosyntax (Sadock's "syntax"), again
because of Representational Modularity. Here ( I ) difers also from
Lieber's notation. Instead of matching the two representations through
a shared segental phonology, (1 ) matches them through coindexation
of various constituents. The bracketing mismatch is possible because
coindexation is not complete.
A more extreme case of mismatch appears in Anderson's (1992) account of Kwakwala, in which articles, which are syntactic left sisters, are treated phonologically as suffixes to the preceding word. (3) is Anderson's autolexical representation (p. 20). This looks so strange because it is a massive violation of the overwhelming preference for syntactic words to correspond to phonological words (correspondence rule (12a) of chapter 2).
(3) [Anderson's double tree for a Kwakwala sentence: the upper bracketing attaches each article, demonstrative, and case suffix phonologically to the preceding word, while the lower bracketing gives the syntactic constituency (S, V, NPs, PP)]
This parallels Sadock's representations in its use of a double tree. However, Anderson's description makes clear that the syntactic labels in the upper bracketing are irrelevant; the principles governing this bracketing are those of phonological words. Thus on present principles the representation would be more like (4), observing strict Representational Modularity.
(4) [diagram, Syntax and Phonology: the syntax of the Kwakwala sentence (S dominating V, NP, and PP) linked by indices to a phonology consisting of a sequence of phonological words, each made up of a Wd plus the suffixes corresponding to the syntactically following article and demonstrative]
Under this approach, the question of where morphology belongs becomes somewhat more differentiated. Morphosyntax is the syntactic structure of constituents of X⁰ nodes in syntax; morphophonology is the phonological structure of constituents of phonological words. The issue of the morphology-syntax interface, that is, the question "Does morphology interact with syntax?", now becomes "Does morphosyntax interact with phrasal syntax?" At the same time, issues such as allomorphy and how affixes affect stress belong to morphophonology.
Table 5.1 shows how phonological structure, syntactic structure, and conceptual structure exist in parallel at phrasal and "lexical" scales ("lexical" is in scare quotes because the lexicon does in fact include units larger than X⁰).
Table 5.1
Division of levels of structure at phrasal and "lexical" scales

            Phonological structure        Syntactic structure    Conceptual structure
Above X⁰    Phrasal phonology             Phrasal syntax         Phrasal semantics
Below X⁰    "Lexical" (morpho)phonology   Morphosyntax           "Lexical" semantics
5.3 Inflectional versus Derivational Morphology
A traditional issue in morphology has been to distinguish inflectional morphology from derivational morphology. Intuitively, affixes like tense, plural, and case are considered inflectional, whereas nominalizing affixes like -tion and negative affixes like un- are considered derivational. Typically, inflectional affixes fall outside of derivational affixes. Typically, inflection is more regular than derivation. Yet it has proven difficult on grounds of form alone to distinguish the two (Anderson 1982, 1992; Spencer 1991).
Some of the problem comes from a failure to distinguish morphosyntax from morphophonology. The existence of massive phonological irregularity in an inflectional paradigm, for instance, makes it no less inflectional; nor does the phonological regularity of, say, -ness make it any less derivational. I am essentially going to adopt Anderson's (1982, 1992) characterization of the distinction: inflectional morphology consists of those aspects of word structure (morphosyntax) that interact with phrasal syntax. For example, a sentence, in order to be a sentence, must have a tense marker (or to be an infinitival clause, must lack it); the presence of case markers is determined by phrasal context; and plurality on a noun can condition plurality on its determiner and/or on a verb. Thus it is necessary for purposes of phrasal syntax that nouns appear in both singular and plural and in all case forms, and that verbs appear in all tenses and participial forms and with a full paradigm of agreement markers.
To be sure, derivational morphology affects the syntactic behavior of lexical items. For instance, it can change syntactic category. However, phrasal syntax does not care that recital is derived from a verb; in phrasal syntax, recital behaves the same way as the underived concert. Similarly, derivational morphology certainly can change argument structure, which affects the contexts in which an item can appear in phrasal syntax. For instance, out-prefixation drastically changes a verb's argument structure, changing all inputs into transitive verbs (e.g., outswim from intransitive swim, outspend from transitive spend, but with a different argument as object). But every lexical item presents an argument structure to phrasal syntax. As far as argument structure in phrasal syntax is concerned, out-V behaves pretty much like the underived verb beat ('win over X in a contest'). That is, the derivational status of outswim is not at issue in phrasal syntax.
In other words, derivational morphology can be regarded as essentially invisible to phrasal syntax, whereas inflectional morphology cannot. This is the sense in which Di Sciullo and Williams (1987) can regard the output of derivational morphology as producing "syntactic atoms." This is also the sense intended by the Lexical Integrity Principle of Bresnan and Mchombo (1995), according to which syntactic rules do not have access to the interior of words. Going back further, this is the essential idea behind Chomsky's (1970) lexicalist hypothesis.
A consequence of this position is that inflectional morphology must be syntactically productive: words must be available in all possible inflectional forms in order to satisfy the free combinatorial properties of syntax. This does not mean that all forms are phonologically productive, only that they must exist.3 By contrast, derivational morphology may or may not be productive. For syntactic purposes, it is not necessary to have a consistent output for derivational processes; in fact, phrasal syntax cannot tell whether it is operating with a derivationally complex item or a simple one.
5.4 Productivity versus Semiproductivity
It seems possible to distinguish two different kinds of morphological relationships obtaining among lexical items. One kind is the sort of productive regularity found in, say, the English plural. Given a count noun, the speaker automatically knows that it has a plural form and what that form means. The speaker also has a default value for its pronunciation, which can, however, be blocked by learned irregular cases.
This may be contrasted with a "semiproductive regularity" such as the creation of English denominal verbs like butter, saddle, shelve, and pocket (see section 5.6). The syntactic regularity is that the output of the process is a transitive verb; semantically, the output may mean roughly 'put N* in/on NP', 'get N* out of/off NP', or 'put NP in/on N*' (where N* is the root noun); phonologically, the output is pronounced (almost) like the noun. This differs from a regular process in two respects. First, we don't know exactly what the output of the rule is in a particular case; for instance, saddle means more than 'put a saddle on N', and shelve ends with a voiced instead of an unvoiced consonant. Second, we have to know whether the output is actually a word or not. For instance, there could be a verb mustard, 'to spread mustard on NP', and we could understand it in the appropriate context, but it's not a word in the stored lexicon. That is, semiproductive regularities create potential words, but actual words obeying the regularity still have to be acquired one by one.
In other words, the word bananas does not have to be listed in the lexicon, but the word shelve does. We could say that bananas is a word but not a lexical item (or listeme). By contrast, an irregular plural has to be listed; for instance, we have to assume that the existence of children blocks the productive formation of childs.4 (I return to the issue of blocking in chapter 6.)
Even if shelve is listed, we somehow must also recognize its relation to shelf. How is this relation encoded? Since Chomsky's (1970) "Remarks on Nominalization," generative theory has included the notion of a "lexical rule" or a "lexical redundancy rule" that captures regularities among lexical items. For instance, the lexical rule responsible for shelve says how it is related to shelf phonologically, syntactically, and semantically, and how this fits into a regularity found with many other denominal verbs. The rule either (1) permits the lexicon to list only the nonredundant parts of the word (the "impoverished entry" theory) or (2) reduces the informational cost of listing the word in full (the "full entry" theory). I discuss this choice in section 5.6. By the time the lexical entry is inserted into a sentence, though, it is assumed to be fully filled out. That is, lexical rules are taken to be internal to the lexicon, and invisible to the phrasal part of the grammar.
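For concreteness, here is a toy rendering of a semiproductive rule and the full-entry option; the denominal-verb rule, the entries, and the measure of "cost" are all schematic assumptions, not an analysis.

```python
# Toy semiproductive rule: denominal verb formation characterizes
# potential words; actual words like 'shelve' are listed, with their
# idiosyncrasies (e.g. the voiced final consonant).

def denominal_verb(noun):
    # The rule's default output: a transitive verb, pronounced like the
    # noun, meaning roughly 'put N* in/on NP' (among other readings).
    return {"ps": noun["ps"], "ss": "V(trans)",
            "cs": f"[PUT {noun['cs']} ON NP]"}

shelf = {"ps": "Self", "ss": "N", "cs": "SHELF"}  # 'S' stands in for sh
predicted = denominal_verb(shelf)

# Full-entry theory: 'shelve' is listed in full; the rule reduces the
# cost of listing it, measured here as what departs from the prediction.
shelve = {"ps": "Selv", "ss": "V(trans)", "cs": "[PUT SHELF ON NP]"}
idiosyncrasy = {k for k in shelve if shelve[k] != predicted[k]}
assert idiosyncrasy == {"ps"}   # only the final /v/ must be learned
```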
To sum up so far: On one hand, a word like shelve must be listed because the language user has to know that it exists. And it is the word shelve, not the word shelf, that helps license the sentence Let's shelve these books. On the other hand, somewhere in the grammar the relation of shelve to the noun shelf must be encoded. This is what lexical rules are designed to do.
However, are lexical rules the proper way to encode productive regularities? I have in the past taken the position that the regular English past tense is formed by a lexical rule, so that all regular past tenses are listed along with the irregulars (Jackendoff 1975); this is also the position taken by Halle (1973) and the standard position in LFG and HPSG. Wasow (1977), however, argues that one wants to distinguish the properties of lexical rules from those of productive rules (in his framework, transformational relations); he contrasts English adjectival passive participles (lexical and semiproductive) with verbal passives (transformational and productive).
Hankamer (1989) presents a strong argument that productive rules are not lexical in the sense defined above, by drawing attention to languages with rich agglutinative morphology such as Turkish or Navajo. In such a language every verb has tens of thousands of possible forms, constructed through productive processes. It makes little sense to claim that each form is individually listed in the lexicon, especially since the forms are immediately available when a Turkish or Navajo speaker acquires a new verb stem. Hankamer calculates in fact that a full listing of the possibilities for Turkish words would likely exceed the information storage limits of the entire brain.
In response to this problem, it is sometimes advocated that lexical rules create a "virtual lexicon," the space of possible derived forms, and that this is distinguished from the "actual lexicon," the list of occurring items. (This is in a sense the position of Lees (1960), the idea that the lexical rules determine what is possible but not what is actual. I think it is the position of Di Sciullo and Williams (1987) as well.) There are two difficulties with such a position.
The first is that it fails to distinguish between productive and semiproductive rules. Lexical rules must still fall into two classes: those for which we know that an output exists and those for which we must learn whether an output exists. Without such a diacritic distinguishing lexical rule types, there is no way of knowing why shelve must fall into the actual lexicon but bananas need only be in the virtual lexicon.
The second difficulty lies in what is meant by the "virtual lexicon" when it applies to productive rules. Banana is part of the actual lexicon, but bananas is supposed to be part of the virtual lexicon, something that is available to be inserted into phrase structures even though not listed. Notice the terminological flimflam, though: we could equally well speak of phrasal syntax in the very same terms, saying that the language contains a set of "virtual phrases" that are not listed but are available to be inserted as parts of larger phrases. Such terminology would be very curious indeed. Why should "virtual lexical items" be any different from "virtual phrases"? Seen in this light, the "virtual lexicon" generated by productive rules looks like just the output of another generative system: a combinatorial system that applies within X⁰ constituents.
Consequently, there seems to me no reason to distinguish productive lexical rules from productive phrasal rules, other than that productive lexical rules apply to constituents smaller than X⁰. Under such a construal, all the above issues vanish: banana is listed, as are verb stems in Turkish, say; but bananas and all the thousands of Turkish verb forms are not. Rather, they are the product of standard rules of combination that just happen to apply within X⁰ constituents.
On such a view, we will treat a productive inflectional ending such as the English plural as its own lexical entry, for a first approximation looking like (5). (Chapter 6 will deal with it in more detail.)
(5) Phonology: [Wd X_b [Cl /z/]_c]_a
    Syntax: [N [N, sing, count]_b [Af, plur]_c]_a
    Conceptual structure: [Entity PLUR ([Entity ]_b)]_a
In (5) the left-hand tree encodes the morphophonological structure, a clitic pronounced /z/ that attaches to the ends of words. The middle tree encodes the morphosyntactic structure, an affix that when attached to a singular count noun produces a syntactically plural noun. The right-hand component is the LCS of the plural, a conceptual function whose argument is an entity and whose value is a plural entity (see Jackendoff 1991 for a more detailed analysis in terms of Conceptual Semantics, or substitute your own formalism). These three structures together form a correspondence rule because of the subscripted linking indices. The noun that hosts the plural is subscripted b throughout; the resultant Word, plural noun, and plural entity are subscripted a throughout; and the clitic/affix is subscripted c. (I don't see any need to single out a component of the LCS that corresponds directly to the affix, so the LCS contains no subscript c.)
A lexical noun plus (5) will jointly license the correspondence of a plural noun to its semantics and phonology. For instance, (6) will be the phrasal combination for cats, where cat is as specified in chapter 4.
(6) Phonology: [Wd [kæt]_b [Cl /z/]_c]_a
    Syntax: [N [N, sing, count]_b [Af, plur]_c]_a
    Conceptual structure: [Entity PLUR ([Entity TYPE: CAT]_b)]_a
In (6) the linking indices for cat have unified with those for the host noun in (5), so that the phonological and semantic content of the host noun are specified along with the structures in which it is embedded. Such a process of combination thus closely parallels the licensing of a phrasal constituent such as a VP jointly by a verb and its direct object.
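Licensing-by-unification of this sort has a straightforward computational flavor. The sketch below is my own crude rendering, not the book's formalism: entries are feature dictionaries, linking indices are ordinary features, and licensing succeeds when the structures merge without clash.

    # A minimal sketch of lexical licensing as unification, assuming a
    # toy encoding of entry (5) and cat as nested feature dictionaries.

    def unify(x, y):
        """Merge two partial structures; atomic values must match;
        return None on feature clash."""
        if isinstance(x, dict) and isinstance(y, dict):
            out = dict(x)
            for key, val in y.items():
                if key in out:
                    merged = unify(out[key], val)
                    if merged is None:
                        return None
                    out[key] = merged
                else:
                    out[key] = val
            return out
        return x if x == y else None

    # Entry (5): the plural affix, with an open host slot indexed b.
    plural = {"phon": {"host": {"index": "b"}, "clitic": "z"},
              "syn":  {"num": "plur",
                       "host": {"index": "b", "num": "sing", "count": True}},
              "sem":  {"func": "PLUR", "arg": {"index": "b"}}}

    # The lexical noun cat (schematically, as specified in chapter 4).
    cat = {"phon": {"host": {"seg": "kaet"}},
           "syn":  {"host": {"cat": "N"}},
           "sem":  {"arg": {"type": "CAT"}}}

    cats = unify(plural, cat)    # jointly licenses 'cats', as in (6)
    print(cats is not None)      # True: the host indices b line up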
The distinction between productivity and semiproductivity appears in derivational morphology as well. Although most derivational processes are only semiproductive, consider for example the interesting case of English expletive infixation (McCarthy 1982). The literature on this process has concerned the unusual positioning of the affix within a word (e.g. manu-fuckin-facturer). But it has not addressed where such derived words fit into the morphology in general. I think we can take it for granted that auto-fuckin-matic is not a listeme; and to say it is nevertheless part of the "virtual lexicon" is only to say it is the output of a productive process. Expletive infixation is certainly not an inflectional process: there is nothing at all in phrasal syntax that conditions it or depends on it. And in fact the availability of an expletive infix depends on phonological conditions not visible to syntax, so it cannot be available with the uniformity of an inflectional process. If anything, then, this is an instance of derivational morphology, but it is a productive process.
Suppose that we treat the infix as a triple of structures (PS, SS, CS) approximately like (7).
(7) Phonology: [Wd F [F fʌkɪn]_b [F, +stress] ...]_a
    Syntax: [Wd Wd Af_b]_a (arbitrarily, a suffix)
    Conceptual structure: a property-adding modifier, subscripted b
The phonology in (7) is an approximation to whatever is the proper environment for the infix: within a word, after an initial foot, immediately before a stressed foot. What makes it an infix is that the index b dominating the phonological content is dominated by the index a for its host word.5 In the syntax, I have somewhat arbitrarily treated the form as a suffix. But in fact it could be anywhere, or even absent, since it has no syntactic consequences (so that it might be something like hello). In its semantics, the infix behaves like any other modifier, adding a property to its head. (I have not included any account of the fact that this affix is invariably associated with topic/focus stress and semantics.)
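The stated environment, after an initial foot and immediately before a stressed foot, can be pictured procedurally. A toy sketch, with foot parses supplied by hand rather than computed from the phonology:

    # Toy sketch of expletive infixation: the infix is spliced in after
    # an initial foot and immediately before a stressed foot. Feet and
    # stress are hand-supplied here; a real account would derive them.

    def infix(feet, affix="fuckin"):
        """feet: list of (syllables, stressed?) pairs, in order."""
        for i in range(1, len(feet)):           # at least one foot precedes
            if feet[i][1]:                      # next foot bears stress
                before = [s for f in feet[:i] for s in f[0]]
                after = [s for f in feet[i:] for s in f[0]]
                return "-".join(before + [affix] + after)
        return None                             # no licit infixation site

    # manu-fuckin-facturer: the infix lands before the stressed foot
    print(infix([(["ma", "nu"], False), (["fac"], True),
                 (["tur", "er"], False)]))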
(5) and (7) will license matchings of freely generated phonological structure to freely generated syntax and semantics. In that respect they are like any other lexical item. They are however lexical items that are less than an X⁰, that is, smaller than a syntactic word. This appears to be a plausible model for any regular morphological process, inflectional or derivational.
One might want to say that such forms are stored in a special "affixal lexicon," separate from the "word lexicon." But in fact such a distinction is unnecessary, since affixes are inherently distinguished by their phonological and syntactic form. We can freely lump them into the lexicon along with cat and forget, because they can license only the kinds of correspondences they are built to license.
Turning to semiproductive rules, the distinction between actual and virtual lexicon does make some sense here. The "virtual lexicon" here is the set of potential forms that are easier to understand or learn in context by virtue of the existence of lexical rules; these forms are nowhere listed. On the other hand, the actual lexicon contains the forms the language user lists, along with their actual specialized pronunciations and meanings.
I am advocating, therefore, that the fundamental distinction is not between lexical rules and phrasal rules, but between productive and semiproductive rules.8 For instance, we can say that the regular English past tense is a productive rule of free combination. The various subregularities in irregular past tenses, on the other hand, are semiproductive rules, hence lexical rules. The existence of a listed semiproductive form such as sang blocks the use of a free combination such as singed, in a manner that we return to in chapter 6.
Putting all these pieces together: The theory of grammar must distinguish a notion of syntactic word, which can be identified with a maximal X⁰, the X⁰ that dominates morphologically derived words and compounds, and that defines a syntactic head around which phrasal arguments and modifiers cluster.9 The theory must also distinguish a notion of phonological word, which may specify some minimal prosodic conditions such as the presence of at least one foot (McCarthy and Prince 1986). Both these notions are distinct from the notion of a lexical item or listeme, which stereotypically consists of a syntactic/phonological word (e.g. dog), but may also be larger (kick the bucket) or smaller (-s and -fuckin-).
The distinction between inflectional and derivational morphology is a morphosyntactic distinction. A morphological process is inflectional if it involves features that are independently active in productive phrasal syntax; it is derivational otherwise. An inflectional process must provide a set of derived forms for all stems of the requisite sort, whereas a derivational process may or may not.
The distinction between productive and semiproductive processes cuts across the inflectional/derivational distinction. Although an inflectional process must provide derived forms for all stems, at least some of these forms may be provided by semiproductive processes or even suppletion, as is the case with the English past tense. Conversely, a derivational process may be entirely productive, as evidenced by expletive infixation.
The defining characteristic of semiproductive rules is that one needs to know whether each particular derived form exists, as well as (in many cases) particularities of its meaning and pronunciation. That is, the outputs of semiproductive rules must be either individually listed or understood by virtue of context. By contrast, productive processes predict the existence and form of all derived forms, which need not therefore be listed (but still may be). Productive affixes are treated here as lexical items that are smaller than X⁰ and that undergo productive rules of sub-X⁰ combination.10
5.5 Psycholinguistic Considerations
The basic psycholinguistic question about morphology is whether complex morphological forms are stored in long-term memory or composed "on-line" in working memory from stored constituents. The position taken in the last section translates readily into these terms. In the case of semiproductive regularities, one must know whether the derived form exists and exactly what it means. Hence such forms must be stored in long-term memory, and lexical rules are to be regarded as relations among entries in long-term memory. In the case of productive regularities, the affixes are stored as separate lexical entries, and at least some derived forms are composed on the fly in working memory.
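The division of labor just described amounts to a dual-route procedure: consult long-term memory first, and fall back on free combination. A minimal sketch, with a made-up toy lexicon:

    # Minimal dual-route sketch: stored (semiproductive or high-frequency)
    # forms are retrieved from long-term memory; everything else is
    # composed on line by a productive rule. The entries are toy examples.

    LONG_TERM = {
        ("sing", "past"): "sang",    # irregular: must be listed
        ("eye", "plur"): "eyes",     # regular but possibly listed anyway
    }

    RULES = {
        "past": lambda stem: stem + "ed",   # productive free combination
        "plur": lambda stem: stem + "s",
    }

    def produce(stem, feature):
        stored = LONG_TERM.get((stem, feature))
        if stored is not None:
            return stored                   # retrieval from long-term memory
        return RULES[feature](stem)         # on-line composition

    print(produce("sing", "past"))   # sang: the listed form blocks *singed
    print(produce("walk", "past"))   # walked: composed in working memory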
This distinction appears to be borne out in a growing psycholinguistic literature. This research has been particularly stimulated by the connectionist proposal of Rumelhart and McClelland (1986), which claims that all past tenses, regular and irregular, are accounted for by a uniform associationist process, and by the critique of this account by Pinker and Prince (1988). Here is a brief summary of some experimental results, drawn from Pinker and Prince 1991.
The behavior of irregular past tense verbs is affected by their frequency, whereas that of regular past tenses is not. For instance, lower-frequency irregulars are more likely to be overregularized by children and to be uttered incorrectly by adults when under time pressure (Bybee and Slobin 1982). Ratings of naturalness for past tenses of regular verbs correlate significantly with rated naturalness of their stems, but naturalness ratings for irregular past tense forms correlate less strongly with their stem ratings but significantly with past tense form frequency (Ullman and Pinker 1991). Prasada, Pinker, and Snyder (1990) asked subjects to utter past tense forms in response to a stem flashed on a screen. Lower past-frequency irregulars took significantly longer to utter than higher past-frequency irregulars, but there was no such difference with regular-past-tense verbs. In a study reported by Marin, Saffran, and Schwartz (1976), agrammatic aphasics displayed errors with regular inflected forms to a much greater extent than with irregulars. Pinker and Prince argue that all these differences are consistent with irregulars' being stored and regulars' being generated on line, as advocated here.
In addition, Jaeger et al. (1996) had subjects undergoing PET scans produce past tenses in response to verb stems flashed on a screen. The group of subjects producing regular pasts showed a much different involvement of brain areas than those producing irregular pasts. The areas for the regular verbs were almost a subset of those for the irregulars, but not quite. (In addition, reaction time patterns were different, irregulars being generally slower than regulars.) Again this suggests that the brain processes involved in semiregular morphophonology are different from those involved in productive affixation. (See also papers in Feldman 1995, especially Schreuder and Baayen 1995 and Chialant and Caramazza 1995.)
As Pinker and Prince (1991) point out, however, the fact that a word is constructed by a productive process does not preclude its being listed as well. In fact, in learning a productive rule, a child presumably first has to learn a number of derived forms separately in order to extract the generalization. Once the productive process is learned, do the previously listed derived forms simply vanish? That doesn't seem like the brain's way.
Interestingly, some experimental work has verified this speculation. Baayen, Van Casteren, and Dijkstra (1992) found that, in comprehension, reaction times to plurals with a low-frequency stem were in proportion
to the frequency of the stem and not to the frequency of the plural form alone. In contrast, reaction times to plurals with a high-frequency stem showed a significant effect of frequency of the plural form itself. Thus high-frequency nouns (but not low-frequency nouns) behave as if they have independent representations for their singular and plural forms. Baayen, Levelt, and Haveman (1993) have extended this finding to production: in a picture-naming task, nouns (such as eye) whose plural is more frequent than their singular were slower in production than nouns whose singular is more frequent than their plural (factoring out overall frequency effects). Their explanation is that for the "plural-dominant" nouns, the plural is actually listed as a separate lexical item, and so the singular and plural are in competition for word selection, slowing the process down. Stemberger and MacWhinney (1988) arrive at a similar conclusion with respect to English past tenses: low-frequency regular past forms are constructed on line, but high-frequency regular past forms are stored.11
Notice that this approach is consistent with the existence of nouns that appear in plural form without a corresponding singular, such as scissors and troops ('soldiers'). These are just listed, the same as eyes. What is not consistent with this approach would be the existence of singular count nouns that for semantic reasons should have a plural but in fact do not. I don't know of anything like this (though see note 3).
5.6 "Optimal Coing" of Smiproductive Forms
Let us return now to the issue of how forms derived by semiproductive processes, say shelve, are listed in the lexicon.12 We are under the following two constraints. First, because the process is semiproductive, a speaker must list in long-term memory that the word in question exists. Second, by the time the sentence containing shelve is composed, the phonology, syntax, and meaning of shelve must be completely filled out.
There are two possible solutions. The first possibility is that shelve is listed in its entirety in the lexicon, just like, say, the underived form mumble. Let's call this the full entry theory. The second possibility is that shelve is not listed in its entirety: its entry contains less complete information, but, in the process of its being used in a phrasal structure, its full information is filled in from the lexical rule that relates it to shelf. Let's call this the impoverished entry theory. (Henderson (1989)
and Marslen-Wilson et al. (1994) present surveys of parallel positions in psycholinguistics.)
A fairly standard assumption among linguists is that the impoverished entry theory is ultimately correct. This is expressed, for instance, by assumption 5 of chapter 1, namely that the language faculty is nonredundant and that the lexicon appears in terms of an "optimal coding" that expresses "exactly what is not predictable." However, to repeat the point made in chapter 1, although "conceptual necessity" requires that the lexicon encode what is not predictable, it does not require that the lexicon encode only what is not predictable. In fact, section 5.5 cited psycholinguistic evidence that even some regular plurals are lexically listed (in some form or another), casting doubt on the assumption of nonredundancy and optimal coding. (Remember, we are talking about the mental lexicon, not some mathematically optimal abstract object, so psycholinguistic evidence is absolutely relevant.)
Why is optimal coding appealing? The obvious reason is that it translates the notion of "lexical generalization" directly into some measure like number of bits of information encoded in the lexicon. If there is a generalization among a number of items, it takes fewer bits to store them than if they were unrelated (and therefore, in the traditional view of memory as a filing system, less "space"). In principle, then, this is a nice way to conceptualize matters.
However, if we try to work out an impoverished entry theory rigorously, a number of discomfiting formal issues arise. First, notice that the impoverished entry theory cannot simply say that the full information found in the phonological and semantic representation of the lexical item in its sentential context is "filled in." Rather, we must posit a grammatical process that accomplishes this filling in on the basis of lexical rules and/or syntactic context. Let us call this process Lexical Assembly. Presumably Lexical Assembly applies in working memory as part of the on-line composition of sentences. (If it applied to long-term memory, lexical items would be filled in in long-term memory, violating the basic assumption of the impoverished entry theory.)
This said, let us construct an approximate version of the lexical rule for English denominal verbs like saddle and shelve. The rule will have a phonological part that says the phonology of the base form remains (in the default case) unchanged, a syntactic part that says the derived form is a verb dominating the base noun,13 and a conceptual part that specifies a variety of possible meanings built on that of the base.
(8) Base form:    Wd_a    N_a    [X]_a
    Derived form: Wd_b    [V N_a]_b
      a. [PUT ([Y], [X]_a, {i. IN / ii. ON} [Z])]_b
      b. [GET ([Y], [X]_a, {i. OUT-OF / ii. OFF} [Z])]_b
      c. [PUT ([Y], [Z], {i. IN / ii. ON} [X]_a)]_b
In the conceptual structures in (8), Y and Z are the arguments of the derived verb; they are realized syntactically as subject and object respectively. The options in (8a-c) are illustrated by the examples in (9).
(9) a. i. smoke a ham = put smoke in a ham
       ii. roof a house = put a roof on a house
           (also saddle a horse, water flowers)
    b. i. milk a cow = get milk out of a cow
           (also smoke a cigar)
       ii. skin a banana = get the skin off a banana
    c. i. pocket the money = put money in one's pocket
           (also bottle the wine, house the immigrants)
       ii. shelve the books = put the books on a shelf
           (also, in my daughters' dialect, roof a frisbee)
    (halve illustrates yet another semantic option)
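Under the idealization of full semantic regularity, rule (8) is in effect a function from a base noun's LCS to a small family of candidate verb meanings. A toy sketch of that function, with meaning templates simplified from the reconstruction of (8) above (Y the subject, Z the object):

    # Toy sketch of the denominal verb rule (8): from a base noun's LCS,
    # the rule makes available a family of candidate verb meanings.
    # The string templates are simplified stand-ins for real LCSs.

    def denominal_options(noun_lcs):
        return {
            "8a": [f"PUT (Y, {noun_lcs}, {p} Z)" for p in ("IN", "ON")],
            "8b": [f"GET (Y, {noun_lcs}, {p} Z)" for p in ("OUT-OF", "OFF")],
            "8c": [f"PUT (Y, Z, {p} {noun_lcs})" for p in ("IN", "ON")],
        }

    for option, readings in denominal_options("SHELF").items():
        print(option, readings)
    # shelve the books = option 8c with ON: PUT (Y, Z, ON SHELF)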
How will these derived verbs be listed? Suppose they were semantically perfectly predictable. Then, one might suppose, the verb saddle could be listed as shown in (10).
( 1 0) ["saddle" + rl e 8aiil
But what does "saddle" mean in ( 1 0)? It stands for the lexical noun whose
phonology is sadle-but this noun can't b identifed without spelling
out all the information in the noun itself. Hence the verb ends up taking
as many bits as the noun from which it is derived, missing the point of
optimal coding.
The obvious alternative is to have the verb contain a "pointer" to the noun. In computer science-like notation, we might draw something like this:
(11) [x + rule 8aii]
         x → ["saddle"; N, sing, count; ...]
This looks somewhat better. But what is a pointer formally? It is an index attached to x that uniquely identifies the target lexical entry. Which raises the question, What does it take to uniquely identify a lexical entry?
One possibility is that each lexical entry is individuated by an index, say a numerical index, the entry's "address." In this case, if the noun saddle has the index 349, then the pointer reduces to a citation of this index, as in (12).
(12) [349 + rule 8aii]
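In the idealized case, then, Lexical Assembly is just dereferencing an address and applying a rule. A sketch under that idealization (the address 349 and the rule name come from (12); everything else is invented):

    # Sketch of Lexical Assembly under the impoverished entry theory,
    # assuming full semantic regularity: an entry is an address plus a
    # rule, and the complete form is reconstructed in working memory.

    LEXICON = {349: {"phon": "saddle", "syn": "N", "sem": "SADDLE"}}

    def rule_8aii(noun):
        """Denominal verb rule, option (8a.ii): 'put N on Z'."""
        return {"phon": noun["phon"], "syn": "V",
                "sem": f"PUT (Y, {noun['sem']}, ON Z)"}

    def assemble(entry):
        """Expand an impoverished entry [address + rule] into a full triple."""
        return entry["rule"](LEXICON[entry["address"]])

    saddle_verb = assemble({"address": 349, "rule": rule_8aii})
    print(saddle_verb["sem"])   # PUT (Y, SADDLE, ON Z)

The difficulties discussed next begin exactly where this idealization fails: nothing in such an entry says where residual idiosyncratic material like [HORSE] or [IN CANONICAL POSITION] should be spliced in.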
This is all well and good, but how do we go beyond the idealization of semantic regularity and build more realistic lexical entries for this class of verbs? (13) lists some of these verbs with paraphrases corresponding somewhat more closely to their meaning. The italicized material is the part not predictable from the base noun and the lexical rule; the material in brackets designates selectional restrictions on an argument.
(13) a. smoke = put smoke into [food] by hanging in an enclosure over a fire
     b. smoke = get smoke out of [a cigar, pipe, etc.] by putting in the mouth and puffing
     c. saddle = put a saddle on [horse, donkey, etc.] in canonical position
     d. roof = put permanently attached roofing material on roof of [a building]
     e. roof = put object on a roof by throwing, kicking, etc.
     f. plate = put [food] on a plate for serving
Extending the form in (12) to encompass these gives something like (14).
(14) [349 + rule 8aii; [HORSE], [IN CANONICAL POSITION]]
But this is incoherent, because it does not specify how Lexical Assembly is to incorporate the extra semantic information with that provided by the lexical rule.14
Another such case concerns exocentric compounds like yellowjacket ('a bee with a yellow "jacket"'), redcoat ('British soldier of the 1770s with a red coat'), big top ('circus tent with a big top'), redhead ('person with red hair'), blackhead ('pimple with a black "head"'). Assuming a lexical rule something like (15),
(15) Base forms:   Wd_a    A_a    [X]_a
                   Wd_b    N_b    [Y]_b
     Derived form: [Wd_a Wd_b]_c    [N A_a N_b]_c    [Z WITH [Y]_b THAT IS [X]_a]_c
these would have lexical entries like those in (16).
(16) a. [2,491 + 63,120 + rule 15; [BEE]]
     b. [16,246 + 2,489 + rule 15; [BRITISH SOLDIER OF 1770S]]
Like (14), these leave it to Lexical Assembly to figure out that the residual conceptual information such as [BEE] is to be filled in for Z in the derived form of rule (15).
Now, again, let's remove the idealization of semantic regularity and see how redhead is specified as meaning 'a person with red hair on the head'.
(17) [16,246 + 59,821 + rule 15; [PERSON], [HAIR ON ...]]
This is even less coherent than (14). Still worse would be a case like pothead ('person who habitually smokes pot'). How can Lexical Assembly figure out how to put these pieces together?
One might respond to this difficulty by stipulating that these isolated pieces are indexed into their proper places in the structure for the derived form in the rule. (I won't try to find a way to state this.) But this too is problematic. Consider that different lexical items put their idiosyncratic material in different places. Consequently, each lexical entry must pick out the particular parts of the rule to which it adds extra information. In other words, each lexical entry must individuate the relevant parts of the derived form of the rule. In terms of bits of information, this amounts to reconstructing (much of) the derived form of the rule within the lexical entry itself, just what the impoverished entry theory was supposed to avoid.
Matters are complicated still further by the existence of multiple forms related to each other through a nonexistent root (this is a primary argument against the impoverished entry theory in Jackendoff 1975).
(18) a. aggression, aggressive, aggressor (*aggress)
     b. retribution, retributive (*retribute, *retributor)
     c. aviator, aviation (*aviate, *aviative)
     d. fission, fissile (*fiss, *fissive, *fissor)
How is a lexical entry for such items to be formulated? One possibility, suggested by Halle (1973), is that the lexicon does list aggress, retribute, and so on, but marks them with a diacritic [-lexical insertion]. Presumably, the morpheme cran- in cranberry would be similarly marked. Under this assumption, all the words in (18) are related to such hypothetical roots in the normal way.
But consider what such a position entails: aggress has every property of a lexical entry except that of being used as a lexical item in sentences, the defining factor of lexical entries. The claim made by such a diacritic is
that the lexicon would be simpler if aggress just lacked this annoying feature and could be inserted. Of course, we do sometimes find back-formations such as aggress, in response to the need for a verb to express the requisite concept and to avoid the clumsy periphrastic commit aggression. But to say this is due to overriding a feature [-lexical insertion] is a little odd. Consider what it would mean to learn that a lexical item had such a feature. On this analysis, we would say that for a child to overgeneralize a word (say *I fell the horse 'I made the horse fall') would be to create a lexical item; to learn that there is no such causative form would be to mark that form [-lexical insertion]. So the mental lexicon ought to be littered with such failed attempts at generalization, among which happen to be forms like *aggress and *retribute.
A more attractive solution conceptually is that failed lexical forms like causative fall simply wither away (I gather that the reasons this might happen are still obscure to researchers in acquisition; but in any event, they are the same as those that would prompt a child's adding a feature [-lexical insertion]). They therefore become part of the virtual lexicon; that is, they are not listed, though they are potential forms made available by lexical rules. This is, I believe, the correct option for the use of aggress as a neologistic back-formation.
However, this doesn't help us with the relations among aggression, aggressive, and aggressor. A way to relate such words without making use of the hypothetical root is to arbitrarily choose one member of each set as the fully listed form and to list the other members as impoverished entries, making use of multiple lexical rules. Suppose 2,136 is the address for the full entry aggression, rule 75 is the rule creating -tion nominals from verbs, and rule 86 is the rule creating -ive adjectives from verbs. Then (19) might be the entry for aggressive. (Rule 75 is preceded by minus rather than plus to indicate that the rule is invoked in such a way that Lexical Assembly removes rather than adds the -tion affix.)
(19) [2,136 - rule 75 + rule 86]
This works, but it leaves totally indeterminate which member of the set should be the fully specified one. (Aronoff (1976) does give arguments for the priority of one form, though.) We get equally valued solutions by treating aggressive or aggressor as the fully specified form. So here the impoverished entry theory forces upon us an embarrassment of notationally indistinguishable solutions.
Let us turn now to the full entry theory of lexical listing. It is easy to see that none of these formal issues arises. There is no need for a process of Lexical Assembly. There is no problem of integrating idiosyncratic information in a composed lexical entry with predictable information borrowed from its root, because the predictable information is present in the entry of the composed item, where it forms a structural matrix in which the idiosyncratic information is already embedded. There is no need for entries marked [-lexical insertion], because a word like aggression is fully specified and does not need to borrow information from its hypothetical root. And all the members of rootless families like (18) are fully listed, so there is no need to arbitrarily choose which one is basic and therefore fully listed.
What seems to be lost in the full entry theory, though, is any notion that semiregularities "save" anything: shelve takes no fewer bits to list than an underived word of similar complexity. In order to make sense of the full entry theory, then, we need to develop an alternative notion of what constitutes "informational cost" in the lexicon. In Jackendoff 1975 I proposed that the notion of "cost" be measured in terms of nonredundancy or "independent information content." The idea is that lexical entries are fully listed, but that lexical rules render parts of these entries redundant, so the "cost" of learning them and listing them is less. On the other hand, idiosyncratic content not predicted by a lexical rule costs "full price." In the case of derived forms for which there is no lexical root, one "pays" for the information that would constitute the root without having to list the root separately; at the same time one gets the derivational morphology more "cheaply" because of the lexical rule. Finally, in cases like the rootless families in (18), one "pays" for the root information only once, as is desired.
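The arithmetic of "cost" here can at least be made concrete in toy form. In the sketch below, rule-predicted material is charged at a discount rather than being free; all bit counts and the discount factor are invented for illustration.

    # Toy rendering of "independent information content" under the full
    # entry theory: entries are fully listed, but material predicted by
    # a lexical rule costs less than idiosyncratic material. The numbers
    # are hypothetical.

    def cost(total_bits, predicted_bits, discount=0.1):
        """Idiosyncratic content at full price; rule-predicted content
        at a steep discount."""
        return (total_bits - predicted_bits) + discount * predicted_bits

    print(cost(100, 0))    # an underived word like mumble: 100.0
    print(cost(100, 80))   # shelve, largely predicted from shelf: 28.0
    print(cost(100, 60))   # aggressive: morphology predicted, root "paid
                           # for" once across the family: 46.0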
Although such a notion of cost can be made formally coherent (I think), its implementation in a real system admittedly remains somewhat hazy. It seems to me, though, that such relatively recent computational notions as distributed memory (Rumelhart and McClelland 1986) and Pinker and Prince's (1988) theory of irregular verb morphology make the idea a bit more plausible than it was in the heyday of von Neumann-style computational theories of mind. (See also Pollard and Sag 1987 on inheritance hierarchies.) Let me speculate that the appropriate interpretation of "informational cost" will make more sense once we understand more about the neural implementation of long-term memory.
In any event, we have a trade-off between, on one hand, a theory of lexical redundancy that is easy to implement in simple cases but rapidly gets out of hand formally, and, on the other, a theory that is harder to understand in terms of implementation but raises none of the same formal problems. At least the fact that this trade-off exists should lead to some doubt about the assumption that the lexicon is listed in terms of "optimal coding," the most extreme version of the impoverished entry theory.
5.7 Final Remarks
In this discussion of the lexicon I have left many important issues unaddressed, for instance the categorial status of affixes in morphosyntax, the notion of morphosyntactic head in sub-X⁰ syntax (Selkirk 1982; Williams 1981), whether semiregular affixes have separate lexical entries, how derivational morphology affects argument structure (Lieber 1992; Randall 1988), the precise form of lexical rules, the nature of polysemy, and indeed the extent to which there need be any morphosyntactic structure at all (Anderson 1992, chapter 10). I have also ignored the question of how morphophonology works, other than saying it must be to a certain extent independent of morphosyntax, an issue to which I turn in the next chapter.
What I think I have done here is to lay out a framework in terms of which issues of lexical structure can be integrated into a larger theory of the architecture of grammar. Within that, however the structure works out, it works out. Once the issues of syntactic word versus phonological word versus listed lexical item, morphosyntax versus morphophonology, inflection versus derivation, and productivity versus semiproductivity have been set straight, the relation of composed lexical items to the phrasal grammar becomes (for my taste) a great deal clearer.
Chapter 6
Remarks on Productive Morphology
6.1 Introduction
Chapter 5 suggested that, unlike standard Chomskian architectures, the tripartite architecture should support lexical insertion/licensing of items not only of the size of words (X⁰), but also of sizes larger and smaller than words. We now look at smaller-sized lexical entries in more detail; chapter 7 turns to larger-sized items.
To review the problem: Why do we want listemes smaller than X⁰? The argument concerns productive morphology, cases in which, upon learning a new word, a speaker knows automatically that a certain morphological form of it must exist and what that form must be; a typical example is the regular plural of English nouns. Viewing the lexicon as the long-term memory store of listemes (not just as a formal characterization of all possible X⁰ forms of the language), it makes little sense in general to store both singular and plural forms of English nouns as listemes. In fact section 5.5 cited some psycholinguistic evidence suggesting that although high-frequency regular plurals are stored, the rest are not.
However, if plural nouns are not stored as such, then the plural affix must somehow be represented in long-term memory in such a way that it can freely combine with nouns. For a first approximation, this affix conforms easily to the tripartite architecture: it is a correspondence between a phonological form, a syntactic feature (or functional category) in S-Structure, and a conceptual function. In very skeletal form, it would look something like (1) (repeating (5) from chapter 5).1
(1) Phonology: [Wd X_b [Cl /z/]_c]_a
    Syntax: [N [N, sing, count]_b [Af, plur]_c]_a
    Conceptual structure: [Entity PLUR ([Entity ]_b)]_a
This chapter will be devoted to showing where some traditional issues in morphology can be situated in the present framework, so that solutions more sophisticated than (1) can be formulated. I do not intend by any means to break new ground in morphology, or even to come to terms with much of the existing literature. I wish only to provide a place for morphology that makes sense in the present scheme of things. The basic idea, however, is that dividing a morpheme this way into phonological, syntactic, and semantic parts plus the correspondence among them enables us to parcel out various standard problems of morphology in a sensible fashion.
In particular, it has often been noted that the notion of an inflectional or derivational morpheme as a constant matching of phonology, syntax, and semantics, although perhaps a useful first approximation, is not nearly flexible enough for what really occurs in language. This chapter will deal with ways in which such flexibility can be introduced by loosening different aspects of the correspondence between phonological, syntactic, and conceptual structure in (1).
As observed in chapter 5, it is only for productive morphology that the present architecture pushes one toward extracting separate lexical entries for the morphemes in question. Semiproductive morphology, which predicts the possibility but not the existence of forms (and need not completely predict their meaning), is better accounted for by lexical rules than by principles of free combination. Lexical rules, it will be recalled, mediate between fully listed but related forms. It should also be recalled that the productive/semiproductive distinction cuts across the inflectional/derivational distinction; for instance, English expletive infixation and the derivation of -ly adverbs from adjectives are productive but not inflectional. The present chapter therefore will be concerned exclusively with productive morphology, since it is here that the tripartite architecture has somewhat novel consequences for lexical listing.
Readers conversant with LFG and HPSG may find the present approach somewhat foreign, in that these frameworks deal with both productive and semiproductive morphology by means of lexical rules, distinct from rules of phrasal combination. However, I do not see anything essential in those frameworks that prohibits extending rules of free combination to units smaller than the word. Koenig and Jurafsky (1994), for instance, working in a Construction Grammar/HPSG framework, propose the same division of morphology as chapter 5, dealing with semiproductive morphology through lexical rules but with productive morphology through "on-line" composition. The theoretical shift proposed here is to replace the question "Is this process in the 'lexicon' or the 'grammar'?" with the question "Is this process a relation in long-term memory (i.e. among forms listed in the lexicon) or a construction in working memory (i.e. 'on the fly')?"
On the other hand, readers conversant with GB and the Minimalist Program may find this approach foreign for a different reason: inflectional affixes here do not constitute heads of larger functional categories, complete with specifiers and complements. In the present approach, lexical licensing in syntax is established at S-Structure; therefore the lexicon must establish correspondences at a point in syntax where inflections look like affixes adjoined to a major category. Syntax may well contain processes that move verbs to their inflections or vice versa, but if so, these processes have already occurred by the time lexical licensing takes place.
The issues I will take up, all too briefly, are the place of traditional morphophonology (section 6.2), phonological and class-based allomorphy (6.3), suppletion of composed forms by irregulars (6.4), and the status of zero affixation (6.5).
6.2 The Place of Traditional Morphophonology
The SPE model (Chomsky and Halle 1968) took the problem of morphophonology to be (2).
(2) a. What are the underlying forms of morphemes and morpheme sequences?
    b. What are the phonological processes that operate on them to produce surface phonological forms (or phonetic forms)?
Let us try to translate these questions into the present framework; along the way we will see their relation to the approach advocated by Optimality Theory (Prince and Smolensky 1993).
The crucial issue is what is meant by "underlying forms." First of all, for the purposes of deriving phonetic forms, we are interested specifically in phonological underlying forms; their relation to morphosyntax and conceptual structure is a separate issue, which we will take up presently. Where do phonological underlying forms come from?
Each lexical entry in long-term memory lists among other things a piece of phonological structure, the entry's lexical phonological structure or LPS (I ignore allomorphy for the moment, returning to it in section 6.3). In the course of processing a sentence in working memory, there must be a level of phrasal phonological structure in which LPSs can be identified, so that lexical access (and therefore lexical correspondence) can be established. Following the terminology of previous chapters, let us call this the Syntactic Interface Level of phonology (SILps, or SIL for short).2 A first hypothesis, paralleling our discussion of PIL/CIL in chapter 4, is that all lexical and phrasal phonological properties that are relevant to syntax can be localized at a single SIL in working memory. (If lexical correspondences and phrasal correspondences were to turn out to occur at different levels of phonology, it would be too bad, but we shouldn't rule this possibility out in advance.)
Under this construal, the issue of phonological underlying forms breaks into two parts:
(3) a. What phonological information is present in LPSs?
    b. What phonological information is present in SIL?
The standard SPE-type story is that questions (3a) and (3b) have essentially the same answer: SIL is nothing but a concatenation of LPSs. On this picture, phonological structure has no independent generative power of its own; all that is present in SIL is the sequence of LPSs coming out of the lexicon, linearly ordered and partially bracketed by virtue of the syntactic structure from which they are "interpreted." (This view is present also in the Minimalist Program, where the process of "interpretation" is called Spell-Out; recall the discussion in chapter 2.)
Consider also the relation of SIL to the "other end" of phonology, phonetic form (PF). As discussed in chapter 2, PF is to be regarded as the Auditory-Perceptual Interface Level of phonology (A/PILps), connected to auditory and motor representations by further sets of correspondence rules. In the SPE story as standardly construed, all phonological properties of PF not present in the concatenation of LPSs are inserted in the course of phonological derivation, through a sequence of rule applications.
However, the present architecture allows another possibility. For instance, think about the status of rules of syllable structure. It is widely acknowledged that syllable structure is predictable and therefore need not be present in LPS. In an SPE-type model, this structure is not present in SIL, either; rather, syllabic bracketing is inserted in the course of derivation.3 Within the present framework, an alternative is that syllable structure is present in SIL, but it is invisible to the PS-SS correspondence rules, including lexical entries, which are part of these correspondence rules. The status of syllabic bracketing would thus mirror the status of much of the recursion in syntactic bracketing; the latter is present for purposes of relating syntax to semantics but is largely invisible to phonology. In other words, to recall the discussion of chapter 2, one should not view syntactic bracketing as "erased" by derivational rules, "after" which new bracketing is "introduced" by phonology. Rather, the two kinds of bracketing exist autonomously in different modules and are essentially invisible to each other.
Similarly, consider the possibilities for dealing with other kinds of underspecification in LPSs. To take the simplest sort of case, consider Turkish vowel harmony, in which the LPS of an affix specifies only the height of its vowel(s), and the rest of the vowel's features are determined by context on the left. Again there are two possibilities. First, SIL could correspond exactly to LPS and contain underspecified vowels, which are then filled in by the phonological derivation (the SPE-type solution). Alternatively, vowel harmony could be a well-formedness constraint on fully specified structures at SIL. In this case, an LPS with an underspecified vowel will unify with a fully specified vowel in SIL, contributing an index that links SIL to syntax. (4) contrasts the two solutions (I is the underspecified high vowel subject to vowel harmony).
(4) Hypothesis 1
    LPSs: ev, Im
    SIL: evIm → A/PIL (PF): evim

    Hypothesis 2
    LPSs: ev, Im
    SIL: evim = A/PIL (PF): evim
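Hypothesis 2 has a simple computational analogue: an underspecified LPS segment unifies with any fully specified SIL segment that matches its stated features. A sketch with a drastically simplified, invented feature inventory:

    # Sketch of hypothesis 2: the affix's underspecified high vowel I
    # (height only) unifies with a fully specified vowel in SIL; vowel
    # harmony, a constraint on SIL itself, decides which vowel that is.
    # Feature bundles here are drastically simplified.

    def unify_segment(lps_seg, sil_seg):
        """An LPS segment unifies with a SIL segment iff every feature
        it specifies matches; unspecified features are free."""
        return all(sil_seg.get(feat) == val for feat, val in lps_seg.items())

    affix_I = {"high": True}                                 # LPS: just height
    front_i = {"high": True, "back": False, "round": False}  # SIL vowel of evim
    back_u  = {"high": True, "back": True,  "round": True}   # a back harmonic variant

    print(unify_segment(affix_I, front_i))   # True
    print(unify_segment(affix_I, back_u))    # True: the LPS is compatible
                                             # with either; SIL harmony chooses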
A further problem arises in cases where LPSs are intercalated in SIL, as in Semitic morphology. Here it makes little sense to have an SIL in which the LPSs are clearly distinct, as in hypothesis 1; but hypothesis 2 is fairly straightforward.
(5) Hypothesis 1
    LPSs: k-t-b (root), CaCaC (inflection)
    SIL: ? → A/PIL: katab

    Hypothesis 2
    LPSs: k-t-b, CaCaC
    SIL: katab = A/PIL: katab
A by now standard approach to this phenomenon (McCarthy 1982) deconstructs SIL so that the two LPSs are independently identifiable in it.
(6)       a
         / \
    C V C V C      ← LPS of inflection
    |   |   |
    k   t   b      ← LPS of root
The association lines between the root and the CV skeleton could be supplied either by the phonological derivation (as per hypothesis 1) or by phonological conditions on SIL (as per hypothesis 2).
Let us be a little careful about how hypothesis 2 works here. In the course of lexical licensing, both the root and the inflection unify with the entire word. As a consequence, the entire phonological word receives two linking indices, one linked with the syntax of the root and one linked with the syntax of the inflection.4
Now notice that, however this is solved, we have begun to appeal to a somewhat more abstract notion of licensing, in which unification of LPS with SIL is not a simple mapping of contiguous segments onto contiguous segments. A further example arose already in chapter 5, in the formulation of English expletive infixation, where the LPS of the host word (say Susquehanna) had to unify with a discontinuous portion of SIL (Susque-fuckin-hanna).
A consequence of accumulating many such cases is that SIL begins to look less like a simple concatenation of LPSs and more like PF. One might ask, therefore, whether it is possible to go all the way and identify SIL entirely with PF, eliminating SPE-type derivations altogether. The price would have to be a considerably more abstract relation between LPS and SIL. This seems to me the correct way to characterize the approach of Optimality Theory within the present framework. We could think of many of the violable constraints of Optimality Theory as constraining a "soft unification" of LPSs with SIL, a unification in which not all pieces have to match. For instance, the constraints Parse and Fill of Optimality Theory can be taken as together stipulating a one-to-one match between segments in LPS and segments in SIL. But if these constraints are for some reason violated because of higher-ranking constraints, then "soft unification" can proceed anyway, thereby linking SIL with syntax despite the incomplete match of SIL with the lexicon.
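Read this way, soft unification is a matter of scoring candidate matchings rather than requiring a perfect one. The sketch below uses numeric weights as a crude stand-in for Optimality Theory's strict constraint ranking; everything about the encoding is invented for illustration.

    # Sketch of "soft unification": candidate alignments of LPS segments
    # with SIL positions are scored by violable constraints (Parse: every
    # LPS segment is matched; Fill: every SIL position is matched). The
    # best candidate wins even if it violates some constraints. Numeric
    # weights crudely stand in for strict constraint ranking.

    def violations(lps, sil, alignment):
        """alignment: partial map from LPS positions to SIL positions."""
        parse = sum(1 for i in range(len(lps)) if i not in alignment)
        fill = sum(1 for j in range(len(sil)) if j not in alignment.values())
        return {"Parse": parse, "Fill": fill}

    def score(viols, weights={"Parse": 2, "Fill": 1}):
        return sum(weights[c] * n for c, n in viols.items())

    lps, sil = "evIm", "evim"
    perfect = {0: 0, 1: 1, 2: 2, 3: 3}   # one-to-one match: no violations
    partial = {0: 0, 1: 1, 3: 3}         # the vowel left unparsed

    print(score(violations(lps, sil, perfect)))   # 0
    print(score(violations(lps, sil, partial)))   # 3: dispreferred, but
                                                  # still a licit soft match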
Finally, it is necessary to mention "process" views of morphology such as Anderson's (1992) "a-morphous morphology." Anderson argues that because there exist morphological processes that delete or metathesize segments, one cannot in general treat the LPS of an affix as a sequence of segments; rather, it must be thought of as a rule that applies to a stem. Beard's (1988) Separation Hypothesis includes a similar position. In this treatment, the affixes are not independently identified in SIL; rather, one can view the application of affixes as part of the process matching the LPS of the stem to SIL.5 In these views, it is still possible to perform purely phonological manipulations on SIL to derive PF. Thus at this scale of description they are a mixture of the two other architectures.
This is not the place to discuss the relative merits of these treatments of morphophonology. My only intention here is to situate morphophonology within the present architecture; with the proper construal, any of the treatments appears compatible with the framework.6
At the same time, the framework forces us to recognize a clear separation between the long-term memory store of LPSs and the working memory assembly of LPSs in SIL. This seems to me to clear up a certain amount of vagueness that (for me at least) has been inherent in the notion of "phonological underlying form" for some time.
An interesting symmetry obtains between the treatments of morphophonology discussed in this section and the contrast of two approaches to semantic composition discussed in chapter 3. In simple composition, the LCSs of the words in a sentence are combined transparently into a "semantic level," with no further material added in the course of "interpretation" of syntactic structure. Pragmatic information is added "later" by separate processes to provide a level of contextualized interpretation. This parallels the SPE model of phonology, in which LPSs of the words are concatenated to create a level of underlying phonological form, then phonological rules are used to derive the separate level of phonetic form. Parts (a) and (b) of figure 6.1 sketch these architectures.
Figure 6.1
a. Simple conceptual composition
   LCSs → (simple process of unification) → "semantic structure" → (pragmatics) → contextualized meaning
b. SPE morphophonology
   LPSs → (simple process of unification) → underlying PS → (phonological rules) → PF
c. Enriched conceptual composition
   LCSs → (complex process of unification, including pragmatics) → contextualized meaning
d. Optimality Theory
   LPSs (= "Input") → (complex process of unification) → PF
A symmetry between treatments of morphophonology and of semantic composition

By contrast, in the theory of enriched composition advocated in chapter 3, LCSs combine in various intricate ways, so that individual LCSs are
not entirely discrete from each other in phrasal conceptual structure, and features from different LCSs can coconstrain each other in order to promote overall well-formedness of conceptual structure. Moreover, pragmatic information is integrated intimately into the composed form in the process of combining LCSs. Thus there is no separation of levels into one that consists just of combined LCSs and one that incorporates context.
This parallels my construal of the phonological architecture behind Optimality Theory. In Optimality Theory, LPSs can be considerably distorted in the process of unifying with SIL, in order to ensure overall well-formedness of phrasal phonological structure. Moreover, all sorts of phonological information is present in SIL that does not come from LPSs, for instance syllable structure. Finally, the hypothesis of Optimality Theory is that the level at which LPSs are combined (i.e. SIL) and the
level of phonetic form are identical. Parts (c) and (d) of figure 6.1 sketch these architectures.
What is one to make of these parallelisms? At the very least, they should make clearer the symmetry of issues that arise in dealing with the PS-SS interface and the SS-CS interface. It is only by stepping back and considering both interfaces at once, and trying to develop a compatible view of them, that one can recognize these symmetries. Still, it is an empirical question whether the issues should be resolved the same way in both interfaces. Phonology and semantics are certainly different enough on other grounds that it is hard to expect architectural uniformity. Yet we now have interesting bases of comparison, which do push us toward a possible uniformity that could not have been perceived before. The questions thus acquire a richer texture, which I take to be a sign of progress.
6.3 Phonological and Class-Based Allomorphy
The matching of LPS, LSS, and LCS in (1) assumes that there is only a single phonological realization corresponding to a syntactic affix; any variation in its phonological form is accounted for by regular rules of morphophonology. Of course, this assumption is false: it is not uncommon for an affix to have phonologically conditioned allomorphs. A simple example from Spencer (1991, 121), citing Lewis (1967), is the Turkish passive suffix, which is realized phonologically as /In/ after /l/, /n/ after vowels, and /Il/ elsewhere (where /I/ is the high vowel subject to vowel harmony).
(7)     Root    Passive
    a.  al-     alın      /In/ after /l/
    b.  oku-    okun      /n/ after vowels
    c.  yap-    yapıl  }
        sev-    sevil  }  /Il/ elsewhere
        tut-    tutul  }
        gör-    görül  }
If the three allomorphs were entirely independent, there would be no problem: a single morphosyntactic structure could be linked to the three morphophonological structures disjunctively. However, (7) presents a more complex problem. So far we have viewed lexical licensing simply as unifying a lexical item's LPS with SIL in phonology, its LSS with PIL/CIL (S-Structure) in syntax, and its LCS with conceptual structure. By virtue of this unification it adds its linking index between the three levels in working memory. Under this construal, though, the "elsewhere" case
(7c) in principle should be able to unify with any word. The problem,
then, is to properly formulate the blocking of the default allomorph by the
more specialized ones.
In order to do so, I must elaborate the notation for LPS slightly. I will mark the constituent that the lexical item licenses (i.e. the actual affix) with a superscript F (for "figural"), to distinguish it from the allomorph's contextual conditions. Using this notation, (8) presents a possible lexical entry for this affix. (I omit the conceptual structure.)
(8) a. [Wd [Wd ... l]_b [Cl In]^F]_a      /In/ after /l/
    b. [Wd [Wd ... V]_b [Cl n]^F]_a       /n/ after vowels
    c. [Wd [Wd ...]_b [Cl Il]^F]_a        /Il/ elsewhere
We now have to formulate lexical licensing slightly more carefully. In particular, we have to distinguish the figural constituent from the context. We want the lexical item to license only the figural constituent; it just checks the context. Checking can be accomplished formally by attempting to unify the contextual features of the lexical item with SIL and S-Structure (again omitting conceptual structure for the moment). If the contextual features can be unified, I will say the lexical item is anchored in SIL and S-Structure.
Given the notion of anchoring, we can now state lexical licensing involving multiple allomorphs as (9). The idea, parallel to standard formulations of blocking, is that only the allomorph with the most specific context is allowed to license the correspondence of its figural constituent to S-Structure.
(9) Lexical licensing
    a. Allomorphic blocking
       If the LPS of a lexical item L has allomorphs A1, ..., An, and more than one of these can be anchored at the same position in SIL, call the one with the most specific context the designated allomorph, A*. (If only one can be so anchored, it is A* by default.)
    b. Licensing
       If possible, unify the figural constituent of A* with SIL and the LSS of L with S-Structure, adding the linking index of L to both structures.
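The procedure in (9) translates almost line by line into code. The sketch below walks the three allomorphs of (8) through anchoring, designation of A*, and unification of the figural constituent; Turkish ı is written as plain i, and specificity is crudely measured by whether a context is stated at all.

    # Sketch of lexical licensing with allomorphic blocking, as in (9).
    # Each allomorph = (anchoring context: permitted host-final segments,
    # figural constituent). "I" is the underspecified harmonic high vowel.

    ALLOMORPHS = {
        "8a": ("l", "In"),       # /In/ after /l/
        "8b": ("aeiou", "n"),    # /n/ after vowels
        "8c": ("", "Il"),        # /Il/ elsewhere (empty context = default)
    }

    def unifies(figural, sil_chunk):
        """Segment-by-segment unification; I matches any high vowel
        (simplified here to i and u)."""
        return len(figural) == len(sil_chunk) and all(
            f == s or (f == "I" and s in "iu")
            for f, s in zip(figural, sil_chunk))

    def license_passive(host, sil_chunk):
        # (9a) Allomorphic blocking: anchor, then pick the most specific;
        # any stated context beats the empty (default) context.
        anchored = [n for n, (ctx, fig) in ALLOMORPHS.items()
                    if ctx == "" or host[-1] in ctx]
        a_star = max(anchored, key=lambda n: ALLOMORPHS[n][0] != "")
        # (9b) Licensing: A*'s figural constituent must unify with SIL.
        return a_star, unifies(ALLOMORPHS[a_star][1], sil_chunk)

    for host, chunk in [("tut", "ul"), ("tut", "un"), ("al", "in"), ("al", "il")]:
        print(host + chunk, license_passive(host, chunk))
    # tutul ('8c', True)    tutun ('8c', False)
    # alin  ('8a', True)    alil  ('8a', False)

The output reproduces the pattern summarized in table 6.1 below.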
To see how (9) works, consider how it licenses tutul and alın in SIL but fails to license *tutun and *alıl. Suppose SIL contains tutul. Of the allomorphs in (8), only (8c) has the proper contextual features to be anchored by tutul; hence by default it counts as A*. The figural constituent of (8c), /Il/, unifies with ul in SIL and therefore can be linked to Passive in S-Structure.
Suppose instead that SIL contains tutun. Again this anchors only allomorph (8c). But this time the figural constituent of (8c) cannot unify with un. Therefore the affix cannot be linked to Passive in S-Structure.
Now suppose SIL contains alın. This anchors two allomorphs in (8): (8a) and (8c). Of these two, (8a) is more specific, so it is designated as A*. In turn, its figural constituent, In, unifies with ın in SIL and therefore the affix can be linked to Passive in S-Structure. On the other hand, suppose SIL contains alıl. This again anchors (8a) and (8c); (8a), the more specific, is chosen as A*. But this time the figural constituent of (8a) cannot unify with ıl, so the affix cannot be linked to Passive in S-Structure. Table 6.1 sums up these analyses.
Table 6.1
Summary of lexical licensing and blocking in Turkish passive affix

    SIL                               tutul   tutun   alın    alıl
    Anchored allomorphs               (8c)    (8c)    (8a,c)  (8a,c)
    A*                                (8c)    (8c)    (8a)    (8a)
    Figural constituent of A*         Il      Il      In      In
    Corresponding part of SIL         ul      un      ın      ıl
    Link to Passive licensed by (9)   yes     no      yes     no

In (8) the allomorphy is based on phonological conditions. Aronoff (1994) describes a different sort of allomorphy: class-based allomorphy, in which the choice of allomorph is a consequence of an arbitrary lexical
classification of the host. One of Aronoff's simpler examples (pp. 72-74) concerns Russian nominal cases. The phonological realization of case endings depends on the inflectional class of the noun, which in turn is determined not entirely reliably (i.e. semiproductively) by syntactic gender. Class 1 includes most masculine and neuter nouns; class 2 is mostly feminine but includes a few masculines; class 3 is mostly feminine but includes one masculine and a few neuters; class 4 consists of all plurals of all genders. Aronoff argues that such lexical marking of inflectional class is autonomous of both syntactic and phonological structure ("morphology by itself"), though it may be related through semiproductive rules to syntax or phonology (depending on the language). Since the possibility of a word's belonging to an inflectional class depends on its syntactic category, this feature makes most sense as part of lexical syntactic structure. Aronoff's analysis of the dative affix therefore comes out as (10) in the present notation.
(10) a. [Wd Wd_b [Cl u]^F_c]_a      [N N_b [Case, class 1]_c(Dat)]_a
     b. [Wd Wd_b [Cl e]^F_c]_a      [N N_b [Case, class 2]_c(Dat)]_a
     c. [Wd Wd_b [Cl i]^F_c]_a      [N N_b [Case, class 3]_c(Dat)]_a
     d. [Wd Wd_b [Cl am]^F_c]_a     [N N_b [Case, class 4]_c(Dat)]_a
These lexical pairings of morphophonological and morphosyntactic structures are notational variants of Aronoff's "realization pairs." In this particular case there is no default form and therefore no blocking, but in principle such a situation is possible.
6.4 Suppletion of Composed Forms by Irregulars
We next turn to the problem of true irregularities, where the existence of an irregular form blocks the licensing of a regular composed form. Just to be extreme, let us deal with the suppletion of the regular past tense of go, *goed, by went. I will assume that in less extreme cases such as sing/sang, the phonological similarity between the stem and the irregular form is a matter of semiproductive regularity, hence a lexical rule. Thus the problem such cases pose for lexical licensing is exactly the same as for total suppletion.
In working this out correctly, the key will be in loosening the connection between PS-SS linking and SS-CS linking to some degree. As a consequence it will become impossible consistently to list lexical entries that are complete <PS, SS, CS> triples. I therefore diverge from, for example, Lieber's (1992) "lexeme-based" morphology, which in effect treats every lexical entry as a complete triplet. Rather, I concur with Beard's (1987, 1988) Separation Hypothesis and Halle and Marantz's (1993) Distributed Morphology in granting PS-SS linking and SS-CS linking some independence from each other.
To see what is happening in the blocking of *goed by went, we need to state lexical entries for go, past tense, and went. Here is a first approximation for go; I now revert to including LCS, as it is crucial to the story.
(11) Phonology: [Wd go]_a
     Syntax: V_a
     Conceptual structure: [Event GO ([Thing ]A, [Path ]A)]_a
                           [State EXTEND ([Thing ]A, [Path ]A)]_a
                           etc. (other readings)
The two readings in the LCS are those exemplified by John went to LA and The road goes to LA respectively;8 there may well be other readings. The crucial point for what follows is that the past tense of every reading of go is went, and this must follow from the analysis.
Next consider the regular past tense. Since lexical licensing takes place at S-Structure, the morphosyntactic structure relevant for licensing has Tense adjoined to the verb; it is not a separate functional projection. (In the case where Tense is not syntactically adjoined to the verb, it is realized as a form of do.) Thus the relevant case of past tense has the following entry, for a first approximation; note its parallelism to the entry for plural in (1).9
(12) d
     [V, +tense Vc Td [past]]
     [Situation PAST ([ ])]
(13) gives the phonological and syntactic structure for went.

(13) Wde
       |
      went
     e[V, +tense V T[past]]
Notice that some of the morphosyntax is not linked to phonology. Rather, the phonological word went stands in for the whole syntactic complex. This corresponds to the notion of "fusion" in Halle and Marantz 1993.
How does went get its meaning, though? The brute force way would be to link it to a set of meanings parallel to those of go, but incorporating PAST. However, this would miss the generalization that the set of meanings is exactly the same; this is what makes went a form of go.
Just to push the point further, consider the phonologically irregular present does of the verb do. Like go, do has a series of alternative LCSs, including empty (do-support), light verb (do a job), part of various anaphoric forms (do so, etc.), and 'tour' (do London).10 Moreover, the English present tense has a number of alternative LCSs, including present of stative verbs, generic of active verbs, scheduled future (We leave tomorrow morning), "hot news" (Churchill dies!), and stage directions (Hamlet leaves the stage). Does has exactly the product set of the readings of do and present tense. The theory should not have to treat this as a coincidence.
The solution lies in how the entries for go and past are formulated. Until now, linking indices have extended across phonology, syntax, and semantics. In order to account for went, we have to weaken the bonds tying the three components together, separating the PS-SS linking from the SS-CS linking. (14) gives revised versions of go and past; the presubscripts a, b, c, and d notate PS-SS linkings, and the postsubscripts x, y the SS-CS linkings.
(14) a. aWd          aVx
          |
         go          (the LCS readings of (11) now bear the postsubscript x)
     b. dWd          [V, +tense cV dTy [past]]
          |
          d          [Situation PAST ([ ])]y
We can now give the entry for went as (15), in which the syntax corresponds idiosyncratically to the phonology but invokes the SS-CS indices of (14a,b) in perfectly regular fashion.11
(15) eWd          e[V, +tense Vx Ty [past]]
       |
      went
The upshot is that went becomes syntactically and semantically a past tense form of go; only in its phonology is it idiomatic. If one wants to think of (15) as a branch of the entry for go, that is fine. But one should also think of it as a branch of the entry for past tense. Its position with respect to the two is symmetrical.
Having dissociated the PS-SS linking from the SS-CS linking, we now have a place in the theory for a distinction made frequently in the literature. Levelt (1989), for instance, speaks of the distinction between lemmas and lexical forms. In our terms, a lemma is a linkup between an LSS and an LCS. Lemmas are individuated by their postsubscripts. Hence the pairing of the verb and any of the conceptual structures in (14a) is one lemma, and the pairing of the tensed verb and the conceptual structure in (14b) is another. A lexical form is a linkup between an LSS and an LPS. Lexical forms are individuated by their presubscripts. Hence the pairing of the verb and the phonological string /go/ in (14a) is one lexical form, that between past tense and /-d/ in (14b) is another, and that between the tensed verb and /went/ in (15) is still another. Each reading of went thus expresses two lemmas at once with a single lexical form.12
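For readers who find it easier to think in data structures, the lemma/lexical-form distinction can be restated as two different pairings over the same LSS. The sketch below is only an illustrative model of my own, not part of the theory's formalism; all class and variable names are invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class Lemma:                 # an SS-CS linkup, indexed by a postsubscript
    postsubscript: str
    lss: str
    lcs: str

@dataclass(frozen=True)
class LexicalForm:           # a PS-SS linkup, indexed by a presubscript
    presubscript: str
    lps: str
    lss: str

GO_LEMMA   = Lemma("x", "V", "GO/EXTEND readings of go")
PAST_LEMMA = Lemma("y", "T[past]", "PAST(...)")

GO_FORM   = LexicalForm("a", "/go/",   "V")
PAST_FORM = LexicalForm("b", "/-d/",   "T[past]")
WENT_FORM = LexicalForm("e", "/went/", "V+T[past]")

# went: one lexical form expressing two lemmas at once.
print(WENT_FORM.lps, "expresses", GO_LEMMA.lcs, "and", PAST_LEMMA.lcs)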
Notice, by the way, that if a word is morphologically regular, there is never any branching among its linking indices. In such a case, the dissociation of PS-SS and SS-CS linking is effectively invisible, and we return to the stereotypical notion of the morpheme as a unitary mapping of PS, SS, and CS. It is the irregular cases in their great variety that undermine the traditional notion; the present notation makes it feasible to differentiate the possibilities.
It is not enough, of course, to formulate a lexical entry for went. We also have to block *goed. Like allomorphic blocking, this has to be stated as part of lexical licensing. Essentially, lexical licensing must choose the most inclusive lexical form as the only candidate for licensing, so that the composite form never has a chance to unify in SIL. This condition can be added at the beginning of our previous lexical licensing rule (9), which with suitable editing emerges as (16). The idea behind the new clause, (16a), is that a listed lexical form takes precedence over a composite of multiple lexical forms.
(16) Lexical licensing (revision)
a. Blocking of composed forms by irregulars
If one or more sets of lexical forms can exhaustively unify their LSSs with a given syntactic constituent Cs, and one set consists of a single lexical form, call that one the designated lexical form, L*. (If only one set can so unify, its member(s) is/are L* by default.)
b. Allomorphic blocking
If the LPS of L* has allomorphs A1, ..., An, and more than one of these can be anchored at the same position in SIL, call the one with the most specific context the designated allomorph, A*. (If only one can be so anchored, it is A* by default.)
c. Licensing
If possible, unify the figural constituent of A* with SIL and the LSS of L* with S-Structure, adding the linking index of L* to both structures.
How does this block goed? Suppose we have licensed the SS-CS correspondence shown in (17). Then there are two sets of lexical forms that can unify with the tensed verb; one is the set consisting of go and past tense, and the other is the set consisting only of went.

(17) [V, +tense Vx T[past]]

Condition (16a) says that went is the designated lexical form; condition (16b) says that /went/ is the designated allomorph. Hence, if SIL contains /go+d/, it never has a chance to link to the syntax and therefore cannot be licensed.
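The selectional logic of clause (16a) is simple enough to mimic procedurally: among the sets of lexical forms that can exhaustively unify with a constituent, a singleton set wins. The following toy sketch (the representations and the helper name are stand-ins of mine, not the theory's formalism) shows why /go+d/ never reaches SIL.

# Candidate analyses of the tensed verb in (17): a composite of two
# lexical forms (go + -d) versus the single listed form went.
CANDIDATES = [
    {"forms": ["/go/", "/-d/"], "spellout": "goed"},
    {"forms": ["/went/"],       "spellout": "went"},
]

def designated_lexical_form(candidates):
    # Clause (16a): a set consisting of a single lexical form is
    # designated; composites survive only if no singleton exists.
    singletons = [c for c in candidates if len(c["forms"]) == 1]
    return singletons[0] if singletons else candidates[0]

print(designated_lexical_form(CANDIDATES)["spellout"])   # went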
In most inflectional languages, there are inflectional affixes that realize more than one morphosyntactic feature at a time. A simple case is the English verbal suffix /-z/, which corresponds to present tense plus third person singular agreement. We can treat such composite affixes along the same lines as went, except that the compositionality appears within the structure of the affix itself instead of spreading into the verb. (18) gives a possible lexical entry for this affix.13
(18) z
     [T pres + Agr 3sing]
     [Situation PRES ([ ])]z
     [Situation HOT NEWS ([ ])]z
     ... (other readings)
In paradigms that are partially compositional (e.g., separate tense and agreement affixes are distinguishable in some forms but not others), the transparently composed forms can be treated as such; the irregular cases, listed in the manner of (18), will block the licensing of regular composition.
6.5 The Status of Zero Inflections
What about the English present tense, which, except in the third person singular, is realized phonologically with no inflectional ending on the verb? We know that present tense must be realized in the morphosyntax (present is not just the absence of past), since it surfaces as do in environments where tense cannot attach to the verb. So what is its lexical entry?
One possibility is that it is parallel to past tense, except that its LPS contains no segmental information.14

(19) (LPS with no segmental content)
     [V, +tense V T[pres]]
     [Situation PRES ([ ])]z
     [Situation HOT NEWS ([ ])]z
     ... (other readings)
Such a solution, however, grates against one's sense of elegance. A solution that stipulated no LPS at all would be preferable.
(20) b,cWd          b[V, +tense cV T[pres]]z
     [Situation PRES ([ ])]z
     [Situation HOT NEWS ([ ])]z
     ... (other readings)
Notice here that the T node in morphosyntax is completely unlinked: it corresponds to nothing in either conceptual structure or phonology. The double indexation of Wd in (20) indicates that the form of the host verb and the form of the tensed verb are identical. (Alternatively, the LPS could have bWd dominating cWd, but this would be just a notational variant.)
As motivation that (20) is the proper sort of entry, we observe that it is only one of three sorts of situations in which a constituent of a lexical entry is unlinked. A second is the existence of "theme vowels": inflectional endings that depend on inflectional class but that express no morphosyntactic information (i.e., information that is used for syntactic purposes such as agreement). An entry for a theme vowel in the simplest case might look something like (21), where the clitic constituent is not linked to anything.
(21) [Wd aWd Cl]          a[N, class 2]
In addition, there may be pieces of conceptual structure that have no syntactic or phonological realization but must be present as part of one's understanding of a sentence. A clear example is the reference transfer functions discussed in section 3.3. The function allowing use of Ringo to name a picture of Ringo, for instance, could be stated as a correspondence rule like (22), a ⟨SS, CS⟩ pair with structure not unlike that of a lexical entry. Notice how the indexing between syntactic structure and conceptual structure here parallels that between phonological structure and syntactic structure in (20).
(22) NPx,y          y[VISUAL-REPRESENTATION-OF ([ ]x)]
Here the syntactic structure is doubly indexed, so it links both to the thing being represented and to its representation. That is, the conceptual function VISUAL-REPRESENTATION-OF is another kind of zero morph.15
We conclude that it would be useful for the theory to permit unlinked pieces of LPS, LSS, and LCS. This poses a minor difficulty for lexical licensing as conceived so far. Until now, we have considered structures in SIL, S-Structure, and conceptual structure to be licensed just in case they can be linked with each other. Now we have encountered cases in which a lexical item apparently licenses a piece of SIL (a theme vowel), S-Structure (English present tense), or conceptual structure (reference transfer functions) that is not linked to other structures. This suggests that it is necessary to separate the licensing function of the lexicon from its linking function. With normal stereotypical morphemes the two functions go hand in hand; but here we find licensing without linking. In fact, chapter 4 mentioned lexical items that license phonology but do not link to either syntax or semantics, for instance fiddle-de-dee. So such a licensing function without linking is independently necessary.
To make this explicit, I will further revise (16). Its three parts now determine licensing of morphosyntactic structure in S-Structure, morphophonological structure in SIL, and the link between them.
(23) Lexical licensing (further revision)
a. If more than one set of lexical forms can exhaustively unify their LSSs with a given syntactic constituent Cs, and one set consists of a single lexical form, call that one the designated lexical form, L*. (If only one set can so unify, its member(s) is/are L* by default.) L* licenses Cs.
b. If the LPS of L* has allomorphs A1, ..., An, and more than one of these can be anchored at the same position in SIL, call the one with the most specific context the designated allomorph, A*. (If only one can be so anchored, it is A* by default.) If the figural constituent of A* unifies with a constituent Cp of SIL, then A* licenses Cp.
c. If the LSS of L* is linked to A*, add a linking index between Cp and Cs.
Consider how (20) works in this version of licensing. The LSS of L* is [V, +tense V [T pres]]. This licenses the present tense in S-Structure. L* has only one allomorph, but this allomorph lacks a figural constituent. Therefore A* licenses nothing in SIL, and no linking index is added. Similarly, in (21), the LSS of L* is a class 2 noun. Its single allomorph licenses but does not link the theme vowel.
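The effect of separating licensing from linking in (23) can also be caricatured in code: an entry whose allomorph lacks a figural constituent licenses its syntax but adds no linking index. The entries below are crude stand-ins of mine for (20), (21), and the regular past tense; the field names are invented.

ENTRIES = {
    "present tense (20)": {"lss": "[V+tense V [T pres]]", "figural_lps": None},
    "theme vowel (21)":   {"lss": "[N, class 2]",         "figural_lps": None},
    "past tense (14b)":   {"lss": "[V+tense V [T past]]", "figural_lps": "/-d/"},
}

def license(entry):
    # (23a): the LSS licenses its syntactic constituent Cs.
    # (23b): the figural constituent, if any, licenses a PS constituent Cp.
    # (23c): a linking index is added only if both sides are licensed.
    ps = entry["figural_lps"]
    return entry["lss"], ps, ps is not None

for name, entry in ENTRIES.items():
    ss, ps, linked = license(entry)
    print(f"{name}: licenses {ss}; PS={ps}; linked={linked}")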
I should add that the global conditions on licensing and linking of SIL, S-Structure, and conceptual structure have not yet been stated; it is these that determine whether a derivation converges or crashes. Certainly not all the distinct configurations for blocking have yet been enumerated, and the connection of lexical licensing to conceptual structure has not yet been stated. And there are doubtless many pitfalls still to be avoided. However, I leave the matter at this point.
6.6 Why the Lexicon Cannot Be Minimalist
In the traditional view of generative grammar, a lexical item is a matrix of syntactic, phonological, and semantic features that is inserted into a D-Structure representation and "later" interpreted. In the present view, a lexical item is instead a correspondence rule that licenses parts of the three independent phonological, syntactic, and conceptual derivations. We have seen in this chapter that the way this correspondence is stated is not simple and that many of the standard problems of morphology concern the connections among alternative LPSs and the separation of PS-SS connections from SS-CS connections. In the process, we have considerably elaborated the theory of linking indices, which seemed so simple and innocent in previous chapters. It seems to me that whatever theory of the lexicon one adopts, similar complications must necessarily appear. Moreover, in this chapter we have set aside all matters of semiproductive regularities discussed in chapter 5, which add further complexity to the stew.
In this light, the idea of the lexicon as "simply" a list of exceptions is revealed as but a naive hope. To me, the considerations raised in this chapter and in chapter 5 show that no proposal regarding phonology, syntax, or semantics can be taken seriously unless it includes in some reasonable detail an account of the associated LPSs, LSSs, and LCSs and how they combine into phrasal structures.
Chapter 7
Idioms and Other Fixed
Expressions
7.1 Review of the Issues
Every so often in the past chapters, I have asserted that it is necessary to admit idioms as phrasal lexical items, that is, as lexical items larger than X0. This chapter will draw out the arguments in more detail and integrate idioms into the tripartite architecture and lexical licensing theory.1
Let us start by seeing how idioms bear on various assumptions about the lexical interface. First, as pointed out a number of times already, a standard assumption is that the lexicon consists (mostly) of words (assumption 4 of chapter 1). Idioms, which are multiword constructions, are taken to be a relatively marginal and unimportant part of the lexicon.
Second, under the standard architecture, lexical items enter syntax by means of a rule of lexical insertion that inserts them into D-Structure (assumption 3 of chapter 1). By assumptions 2 and 4, lexical insertion substitutes a lexical item for a terminal element (a lexical category or X0 category) in a phrase structure tree. Thus standard lexical insertion creates a problem for idioms, which do not look like X0 categories.
Third, under the standard architecture, the semantic interpretation of syntactic structures is performed compositionally after lexical insertion (at D-Structure in the Standard Theory, S-Structure in EST, LF in GB and the Minimalist Program). This too raises a problem for idioms: how are they marked at the time of lexical insertion so that they receive noncompositional interpretations at this later stage?
Fourth, a lexical entry is often assumed to include only information that cannot be predicted by rules (assumption 5). All redundancies among lexical items are extracted and encoded in lexical rules. But since most idioms are made up of known lexical items (though often with wrong meanings), it is not clear how to formalize the extraction of their redundancies.
As stressed throughout, not everyone holds all four of these views; each one has been questioned many times. But I think it is fair to say they constitute the default assumptions that are adopted when dealing with issues outside the lexicon, as in Chomsky 1993, 1995.
In the course of the chapter we will question all four of these assumptions. We will see that the lexicon contains large numbers of phrasal items; that lexical licensing by unification is the proper way to get idioms into sentences; that idioms provide further evidence that the level at which lexical licensing applies in syntax is also the level of phonological and semantic correspondence; and that the lexicon must contain redundancy.
7.2 The Wheel of Fortune Corpus: If It Isn't Lexical, What Is It?
Before turning to idioms, let us ask in what larger space of linguistic entities they might be embedded. To set the context, it is of interest to consider a new source of evidence. Many readers will be familiar with the U.S. television show Wheel of Fortune or one of its overseas counterparts. This is a game show in which contestants try to win prizes by guessing words and phrases, using a procedure rather like the children's game Hangman. My daughter Beth collected puzzle solutions for me over a few months as she watched the show; she collected about 600, with no repetition.2
This Wheel of Fortune corpus consists mostly of phrases, of all sorts. (A full listing appears in the appendix.) There are compounds, for example (1),

(1) black and white film
Jerry Lewis telethon
frequent flyer program
idioms of all sorts,
(2) they've had their ups and their downs
yummy yummy yummy
I cried my eyes out
a breath of fresh air
trick or treat
names of people (3a), names of places (3b), brand names (3c), and organization names (3d),
(3) a. Clint Eastwood
Count Dracula
b. Boston, Massachusetts
Beverly Hills
c. Jack Daniels whiskey
John Deere tractor
d. American Heart Association
Boston Pops Orchestra
New York Yankees
cliches (which I distinguish from idioms because they apparently have
nothing noncompositional in their syntax or meaning),
(4) any friend of yours is a friend of mine
gimme a break
love conquers all
no money down
name, rank, and serial number
we're doing everything humanly possible
titles of songs, books, movies, and television shows,
(5) All You Need Is Love
City Lights
The Price Is Right
The Pirates of Penzance
quotations,
(6) beam me up, Scotty
may the Force be with you
a day that will live in infamy
and foreign phrases.

(7) au contraire
persona non grata
c'est la vie
All these expressions are familiar to American speakers of English; that is, an American speaker must have them all stored in long-term memory. And there are parallels in any language. The existence of such "fixed expressions" is well known; but what struck me in watching Wheel of Fortune is how many there are. Consider that the show presents about five puzzles a day and is shown six days a week, and that it has been on the air for over ten years. This comes to something on the order of ten to fifteen thousand puzzles with little if any repetition, and no sense of strain: no worry that the show is going to run out of puzzles, or even that the puzzles are becoming especially obscure.
To get a rough idea of the distribution of the puzzle types listed in (1)-(7), the Wheel of Fortune corpus includes about 10% single words, about 30% compounds, 10% idioms, 10% names, 10% meaningful names, 15% cliches, 5% titles, and 3% quotations. Assuming (for the sake of argument) similar proportions in the total distribution of Wheel of Fortune puzzles, a speaker of English must be carrying around thousands of compounds, idioms, names, meaningful names, and cliches, and at least hundreds of titles and quotations that could be potential Wheel of Fortune puzzles.
As for names, think of all the names of people you know that could not be Wheel of Fortune puzzles because they're not famous enough (Noam Chomsky, Otto Jespersen, all your family, colleagues, neighbors, and old classmates): thousands more. As for titles and quotations, think of all the poetry you know, including lyrics of popular songs, folk songs, and nursery rhymes, plus advertising slogans. There are vast numbers of such memorized fixed expressions; these extremely crude estimates suggest that their number is of about the same order of magnitude as the single words of the vocabulary. Thus they are hardly a marginal part of our use of language.
It is worth adding to this list the collocations in the language: uses of particular adjective-noun combinations, for instance, where other choices would be semantically as appropriate but not idiomatic (in a broader sense): heavy/*weighty smoker, strong/*powerful coffee, sharp/?strong/*heavy contrast, weighty/*heavy argument, and so forth. The vast number of such expressions is stressed by Mel'čuk (1995) and Schenk (1995).
How is all this material stored? Received wisdom gives us an immediate reaction: "I don't know how it is stored, but it certainly isn't part of the lexicon. It's some other more general-purpose part of memory, along with pragmatics, facts of history, maybe how to cook." But if it isn't part of language, what is it? Unlike pragmatics, facts of history, and how to cook, fixed expressions are made up of words. They have phonological, syntactic, and semantic structure. When they are integrated into the speech stream as sentences and parts of sentences, we have no sense that suddenly a different activity is taking place, in the sense that we do when we say or hear a sentence like And then he went, [belching noise].
I therefore want to explore the position that fixed expressions are listed in the lexicon. That is, I want to work out a more inclusive view of linguistic knowledge that includes this material, given that it is made out of linguistic parts. (I am hardly the first to suggest this. Langacker (1987) and Di Sciullo and Williams (1987) have done so relatively recently. Weinreich (1969) makes a similar point, citing estimates of over 25,000 fixed expressions; Gross (1982) makes similar estimates for French.)
First, it is worth asking why the "received view" would want to exclude cliches, quotations, and so forth from the lexicon. The likely reason is for the purpose of constraining knowledge of language: to make language modular and autonomous with respect to general-purpose knowledge. But in fact, the boundaries of modules must be empirically determined. One can't just "choose the strongest hypothesis because it's the most falsifiable" and then end up excluding phenomena because they're not "core grammar" (i.e., whatever one's theory can't handle). In order to draw a boundary properly, it is necessary to characterize phenomena on both sides of it, treating phenomena "outside of language" as more than a heterogeneous garbage can.
In the present case, in order to draw a boundary between the theory of words and that of fixed expressions, it is necessary to show what the theory of fixed expressions is like and how it is distinctively different from the theory of words. In fact, the theory of fixed expressions must draw heavily on the theories of phonology, syntax, and semantics in just the way lexical theory does, and it must account for a body of material of roughly the same size as the word lexicon. Hence significant generality is missed if there is such duplication among theories, and little motivated constraint is lost by combining them into a unified theory. In the course of this chapter, we will see that most of the properties of fixed expressions (and idioms in particular) are not that different from properties found in semiproductive derivational morphology.
Turning the question around, why would one want to include fixed expressions in the lexicon? In the context of the present study, the answer has to do with Representational Modularity. According to Representational Modularity, anything with phonological structure is the responsibility of the phonological module; anything with syntactic structure is the responsibility of the syntactic module; anything that correlates phonology, syntax, and meaning is the responsibility of the correspondence rule modules. On this construal, fixed expressions such as cliches, names, and idioms necessarily are part of language: they are built up out of linguistic levels of representation. There is no other faculty of the mind in which they can be located.
7.3 Lexical Insertion of Idioms as X0s
What motivation is there for assuming that the lexicon contains only single words? In part, the reasons go all the way back to Syntactic Structures (Chomsky 1957), where, as mentioned in chapter 4, lexical items are introduced by phrase structure rules like (8).

(8) N → dog, cat, banana, ...

If this is how lexical items are introduced, then an idiom like let the cat out of the bag will have to be introduced by a rule like (9).

(9) VP → let the cat out of the bag, spill the beans, ...
The problem with this is that it gives the idiom no internal structure. As a result, inflectional processes can't apply to let, because it isn't structurally marked as a verb; passive can't move the cat, because it isn't marked as an NP; and so forth. Only if we restrict lexical expansion to the level of lexical categories can we properly create such internal structure.
This convention was carried over into the Aspects lexicon (Chomsky 1965) without alteration. Potentially the Aspects lexicon could have permitted phrasal lexical items: there is nothing to stop an idiom from being substituted for a VP in a phrase marker and then itself having internal syntactic structure that is visible to syntactic rules. But I don't recall such a possibility ever being exploited. Rather, the word-only conventions of Syntactic Structures were retained, and idioms were always taken to require some special machinery.
One version of this special machinery (Chomsky 1981, 146 note 94) treats kick the bucket as a lexical verb with internal structure.

(10) [V [VP [V kick] [NP [Det the] [N bucket]]]]
Because (10) is dominated by a V node, it can be inserted into a V position in syntax. In cases where single-word lexical entries are minimally plausible, such analyses have been accepted with little question. A good example is the English verb-particle construction, for which the standard question is whether (11a) or (11b) is the underlying form.
(11) a. Bill looked up the answer.
(+ rightward particle movement)
b. Bill looked the answer up.
(+ leftward particle movement)

The criterion that lexical entries must be X0 categories forces us to choose (11a) as the underlying form, with a syntactic structure something like (12); this analysis is widely assumed in the lore.

(12) [V [V look] [P up]]
However, on purely syntactic grounds this is the wrong analysis, as pointed out as early as Emonds 1970; for in addition to idiomatic verb-particle constructions like look up, there are compositionally interpreted ones like (13a,b) that have the same alternation.

(13) a. put in/down/away the books
b. put the books in/down/away

The particle-final version (13b) clearly is related to constructions with a full PP: put the book in the box/down the chute/on the table. Since a full PP must appear after the object, generality demands that (13b) be the underlying position for the particle as well. However, aside from idiomatic interpretation, (11) and (13) behave syntactically alike; hence generality also demands (11b) as the underlying form.
This leaves us in the position of requiring a discontinuous lexical item look NP up. But Emonds points out that similar idioms exist that contain full PPs, such as take NP to task, take NP to the cleaners, bring NP to light, haul NP over the coals, eat NP out of house and home, and sell NP down the river. Despite Chomsky's explicit assumption (1981, 146 note 94) that idioms are not "scattered" at D-Structure, it strains credulity to treat these as Vs containing an adjoined PP, along the lines of (14).

(14) [V [VP [V sell] [PP [P down] [NP [Det the] [N river]]]]]

To allow for such a lexical entry, sub-X0 syntax now has to include phrasal categories (as already assumed in (10), of course). But in addition, some of these internal phrasal categories are obligatorily postposed by some new sort of movement rule, to a syntactic position that just happens to make these idiomatic constructions look like ordinary VPs.4 For those who find such a solution unappealing, the only alternative is to find a way to base-generate discontinuous idioms as such from the start, an option to which we will turn in section 7.4.
More generally, a well-known old fact about idioms is that virtually all of them have a syntactic structure that looks like an ordinary syntactic structure. If an idiom is simply a compound under a V node, one part of which undergoes movement, such a situation would be unexpected. Even more generally, many idioms are complete sentences, for instance the jig is up, keep your shirt on, that's the way the cookie crumbles, and the cat's got X's tongue. It is bizarre to think of these as inserted under a V node, or in fact under anything less than a full S: how could we explain why they have garden-variety sentential syntax?
Nevertheless, many theorists, even those sympathetic to the inclusion of idioms in the lexicon, have tried to maintain the received view of lexical insertion, in which every lexical item has to be a word. Consequently, lexical items larger than words have always created a problem. Aspects, for example, ends with an extremely inconclusive discussion of how to encode verb-particle constructions, among the syntactically simplest of idioms; Weinreich (1969) and Fraser (1970) go to great lengths trying to insert idioms in the face of the standard formulation of lexical insertion.
A possibility somewhat different from (10) is to more or less simulate the listing of kick the bucket with monomorphemic lexical entries, by stipulating that bucket has a special interpretation in the context of kick and vice versa (see Everaert 1991 for such an approach). But the information one has to list in such an approach is a notational variant of listing a lexical VP: the entries for bucket and kick have to mention each other, and both have to mention that the two items together form a VP. Under this approach, then, the theory of "contextual specification" for idiomatically interpreted morphemes in effect includes the very same information that would appear if kick the bucket were simply listed lexically as a VP, and in much less direct form. Given a body of fixed expressions as numerous as the single words, such clumsy encoding should be suspect.
Indeed, because of the complications of such "contextual specification," no one (to my knowledge) has really stated the details. In particular, it is (to me at least) totally unclear how to "contextually specify" idioms of more than two morphemes. For instance, in order to specify let the cat out of the bag one word at a time, contextual specifications must be provided for let, cat, out of, and bag, each of which mentions the others (and what about the?). Moreover, the correct configuration must also be specified so that the special interpretation does not apply in, say, let the bag out of the cat. I conclude that this alternative rapidly collapses of its own weight.
7.4 Lexical Licensing of Units Larger than X0
Consider instead the view urged in chapter 4: that a lexical entry enters a grammatical derivation by being unified simultaneously with independently generated phonological, syntactic, and semantic structures. Under this assumption, it is possible to say that kick the bucket, look up, and sell down the river are listed syntactically in the lexicon as ordinary VPs.5 (15a-c) are identical to (10), (12), and (14) except that the outer V0 category is expunged. (For the moment I revert to traditional notation, recalling however that the phonological material is really coindexed with the syntax, not dominated by it.)
(15) a. [VP [V kick] [NP [Det the] [N bucket]]]
b. [VP [V look] [PP [P up]]]
c. [VP [V sell] [PP [P down] [NP [Det the] [N river]]]]
Lexical licensing tries to unify these structures with VPs that are independently generated by the syntactic component. If it cannot, the derivation crashes. Thus the fact that idioms have ordinary syntactic structures follows readily.6
What about the direct object in sell down the river? Suppose we assume that unification preserves sisterhood and linear order but not adjacency.7 Then this leaves open the possibility that the VP with which (15c) unifies contains an NP direct object in the usual position ordained by phrasal syntax. This NP will not be phonologically and semantically licensed by (15c), so it will have to be licensed by other lexical material. On semantic grounds, such an NP proves necessary in order to satisfy the argument structure of the idiom, just like the direct object of an ordinary transitive verb. Furthermore, just as an ordinary transitive verb doesn't have to syntactically stipulate the position of its direct object argument, the idiom doesn't either.
The present theory, of course, also claims that lexical licensing takes place at S-Structure. If this is the case, how does the verb within the idiom receive its agreement features? The same way any verb does: the syntax installs these features in S-Structure, and since they are not contradicted by the lexical item, unification proceeds normally.
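On the working assumption just stated (that unification preserves sisterhood and linear order but not adjacency), the licensing check reduces, in a first approximation, to an ordered-subsequence test on a constituent's daughters. The helper below is a deliberately crude stand-in of mine for whatever the real unification operation is.

# A crude model of licensing (15c): the lexical VP's daughters must
# occur, in order, among the daughters of a generated VP; gaps such
# as a free direct object NP are allowed.

def unifies(lexical_daughters, generated_daughters):
    # True iff lexical_daughters is an ordered subsequence of
    # generated_daughters (sisterhood and order kept, adjacency not).
    it = iter(generated_daughters)
    return all(d in it for d in lexical_daughters)

SELL_DOWN_THE_RIVER = ["V:sell", "PP:down-the-river"]

print(unifies(SELL_DOWN_THE_RIVER,
              ["V:sell", "NP:his-friends", "PP:down-the-river"]))  # True
print(unifies(SELL_DOWN_THE_RIVER,
              ["PP:down-the-river", "V:sell"]))                    # False: order violated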
To be slightly more explicit, we can give an idiom like take NP to the cleaners ('get all of NP's money/possessions') a lexical entry something like (16).

(16) aWd  bWd  cWd  dWd          x[VP aV [PP bP [NP cDet dN]]]
      |    |    |     |
    take  to  the  cleaners
     x[LCS: 'get all of [ ]A's money/possessions', with two argument positions subscripted A]
Here the whole VP is indexed to the LCS (subscript x). The postsubscripts A in the LCS are the positions that are linked to subject and object position by the linking theory developed in Jackendoff 1990, chapter 11. Note that, as in (15c), the syntax of the direct object does not have to be stipulated in the LSS of the idiom. What makes (16) idiomatic, though, is that not all the syntactic constituents correspond to conceptual constituents: the PP and its NP complement are not linked to the LCS. Thus these constituents have no independent interpretation.
(16) as stated does not capture the intuition that the constituent words of the idiom are really the words take, to, the, and cleaners. Using some of the ideas from section 6.4, we can actually do a little better. Recall how we organized the lexical entry for went: it is a lexical form in which a syntactic complex (a verb plus past tense) is linked idiosyncratically with a simple morphophonological unit /went/. However, the parts of this syntactic complex are linked into the regular lemmas that individually would be expressed as go and past tense.
The mirror image of this situation occurs with idioms. Here a VP such as that in (16) is linked idiosyncratically in a single lemma to an LCS. However, the X0 categories that this VP dominates can carry the PS-SS indices associated with the independent words take, to, the, cleaner, and -s; thus the idiom actually needs no independent LPS and independent PS-SS linking. This way of arranging things guarantees that the irregular verb take will inflect properly with no further adjustment; took to the cleaners appears instead of *taked to the cleaners under the very same allomorphic specification that took appears instead of *taked. In other words, irregular inflectional morphology involves a single lexical form
expressing multiple lemmas; an idiom involves a single lemma expressed
by multiple lexical forms.
On the other hand, the licensing conditions of idioms are not quite symmetrical with those of irregular inflections. In the case of inflections, the lexically listed form completely blocks licensing of the composed form: *taked never appears, for instance. However, in the case of idioms, the idiomatic (i.e., lexically listed) reading does not block the composed (i.e., literal) form if there is one. Go for broke has no literal reading, but kick the bucket does, and John kicked the bucket may indeed mean that John struck the pail with his foot. Hence the conditions on blocking for SS-CS connections must be more liberal than those for PS-SS connections. I leave the proper formulation open.
A number of questions arise for a licensing theory that includes items like (16). First, how large a unit is it possible to lexically license? This corresponds to the question within traditional theory of how large idioms can be: whether they can for instance include the subject of a sentence but leave the object (or something smaller than the object) as an argument. (See the debate between Bresnan (1982) and Marantz (1984), for example.) Idioms like the cat's got NP's tongue suggest that IP-sized units are possible, including inflection, as this idiom does not exist in alternative tenses and aspects. I don't think that any constraints on size or argument structure follow from lexical licensing theory per se. On the other hand, admitting phrasal units into the lexicon at least permits one to state the constraints formally, which is not the case in a theory where the formal status of phrasal idioms is relegated to a vaguely stated "idiom rule."
A second question for licensing theory is exactly how a phrasal lexical item is exempt from the need to link subphrases to conceptual structure. An answer to this would take us into more detail about licensing and linking than is possible here. But the basic idea is that full linking is necessary only for productive syntactic composition, that is, syntactic composition in the usual generative sense. In productive syntactic composition, the meaning of a phrase is a rule-governed function of the meanings of its parts. However, when a syntactic phrase is lexically listed, there is no need to build it up semantically from its parts: the meaning is already listed as well, so full linking of the parts is unnecessary.
Once we allow lexical licensing by listemes larger than X0, it is possible to include in the lexicon all sorts of fixed expressions: not only verb-particle constructions and larger idioms, but also cliches and quotations of the sort listed in the Wheel of Fortune corpus. So this part of the goal, to incorporate fixed expressions into "knowledge of language," has been accomplished. The next step is to show how some of the peculiar properties of idioms are possible within lexical licensing theory.
7.5 Parallels between Idioms and Compounds
Another part of the goal is to show that the theory of fixed expressions is more or less coextensive with the theory of words. Toward that end, it is useful to compare fixed expressions with derivational morphology, especially compounds, which everyone acknowledges to be lexical items. I want to show that the lexical machinery necessary for compounds is sufficient for most of the noncompositional treatment of idioms. Thus, once we admit phrases into the lexicon, existing lexical theory takes care of most of the rest of the problems raised by idioms.
Consider three classes of compounds. First, compounds of the simplest type just instantiate a productive pattern with a predictable meaning, redundantly. The Wheel of Fortune corpus includes such cases as opera singer and sewage drain. There is nothing special about these items except that they are known as a unit. Similar phrasal units are cliches/quotations such as whistle while you work. Often such memorized units have some special situational or contextual specification; for instance, whistle while you work means literally what it says, but at the same time it evokes Disneyesque mindless cheerfulness through its connection with the movie Snow White.8
In a second class of compounds, an instance of a productive pattern is lexically listed, but with further semantic specification. For instance, a blueberry is not just a berry that is blue, but a particular sort of blue berry. The difference between snowman and garbage man is not predictable: a snowman is a replica of a man made of snow, and a garbage man is a man who removes garbage.9 Similarly but more so with exocentric compounds: a redhead is a person with red hair on his/her head, but a blackhead is a pimple with a black (figurative) head. As pointed out in section 5.6, the italicized parts here must be listed in the lexical entry, above and beyond the meanings of the constituent nouns.
A third type of compound instantiates a productive pattern, except that some of the structure of the constituent parts is overridden phonologically, syntactically, or semantically. This case has three subcases:
1. Some part may not be a lexical item at all, as in the case of cranberry, 'particular sort of berry', in which the morpheme cran exists only in the context of this compound (or did, until cranapple juice was invented).
2. Some part may be a word, but of the wrong syntactic category to fit into the pattern. An example is aloha shirt (from the Wheel of Fortune corpus), in which the greeting aloha is substituted for the first N or A in the compound pattern. The meaning, however, is semicompositional: 'shirt of style associated with place where one says aloha'.
3. Some part may be a phonological word of the right syntactic category but the wrong meaning. An example is strawberry, 'particular kind of berry', which has nothing to do with straw.
Specifying the lexical entry of a compound thus involves two parts: listing a fixed phrase, and distinguishing which elements of the phrase are nonredundant and which are partially overridden. Under the full entry theory of section 5.6, the redundant parts (the elements that are independently existing words, and the morphological/syntactic structures that bind the elements together) have to be listed as part of the entry; but because they are predictable, they do not contribute to the independent information content or "cost" of the item.
Let us apply this approach to idioms. As observed a moment ago, cliches are cases in which combinations of independently listed items are listed redundantly; but they are listed, so that whistle while you work is a known phrase, a possible Wheel of Fortune puzzle, but hum as you eat is not. Idioms differ from cliches, of course, in that their meaning is not entirely a function of the meanings of their constituents. In other words, idioms always have overrides and therefore fall in with the third class of compounds described above.
Within this class, we find the same three subclasses. First, we find idioms containing nonwords, for instance running amok. There are also idioms containing words in which the normal meaning of one of the words is completely irrelevant, so it might as well be a nonword, for instance sleeping in the buff (Wheel of Fortune corpus); buff normally denotes either a color or a polishing operation, so its use in this idiom is completely unmotivated.
Second, there are idioms containing words with an approximately correct meaning but of the wrong syntactic category. Four examples from the Wheel of Fortune corpus are in the know (verb in noun position), they've had their ups and downs (prepositions in noun position), wait and see attitude (verbs in adjective position), and down and dirty (preposition in adjective position, and the whole can appear prenominally).
Third, the most interesting cases are words in an idiom that are of the right syntactic category but have the wrong meaning. An important wrinkle not present in compounds appears in idioms because of their syntactic structure: the deviation in meaning can appear at either the word level, the phrasal level, or some odd combination thereof. Here are two examples from the Wheel of Fortune corpus:

(17) a. eat humble pie
b. not playing with a full deck

Eat humble pie, which means roughly 'humbly acknowledge a mistake', is not totally noncompositional: humble clearly plays a role in its meaning. Less obviously, eat can be read metaphorically as 'taking something back' or 'accepting'. The only part that seems completely unmotivated to me is pie. (David Perlmutter (personal communication) has pointed out that this sense of eat also occurs in the idioms eat crow and eat one's words.) Similarly, not playing with a full deck can be paraphrased roughly as 'not acting with a full set of brains', where playing means 'acting', full means 'full', and deck (of cards) means 'set of brains'. That is, this fixed expression is a minimetaphor in which each of the parts makes metaphorical sense.10
Thus the lexical listing of idioms like these must override in whole or in part the meanings of the constituent words and the way they are combined in the literal meaning. This is admittedly a more complex process than the rather simple overrides cited for compounds, possibly the only place where the theory of morphology may have to be seriously supplemented in order to account for idioms.
7.6 Syntactic Mobility of (Only) Some Idioms
Another well-known old fact about idioms is that they have strangely restricted properties with regard to movement. (Schenk (1995, 263) extends this observation to fixed expressions in general.) The classic case is the resistance of many idioms to passive: examples such as (18a-d) are impossible in an idiomatic reading (signaled by #).

(18) a. #The bucket was kicked by John.
b. #The towel was thrown in by Bill.
c. #The breeze was shot by the coal miners.
d. #The fat was chewed by the boys.11
Chomsky (1980) uses this fact to argue for deep structure lexical insertion. I confess to finding the argument obscure, but the idea seems to be that something specially marked as an idiom in syntax cannot undergo movement.12 Wasow, Nunberg, and Sag (1984), Ruwet (1991), Nunberg, Sag, and Wasow (1994), and Van Gestel (1995) point out that this is a counterproductive move, given that there are significant numbers of idioms that occur only in "transformed" versions (Ruwet and Van Gestel present examples from French and Dutch respectively).
(19) a. Passive
Xi is hoist by proi's own petard
X has it made
X is fit to be tied
b. Tough-movement
play hard to get
hard to take
a tough nut to crack
easy to come by
c. Wh-movement and/or question formation
What's up?
How do you do?
NPi is not what proi's cracked up to be
d. All kinds of things
far be it from NP to VP
Also recall the idiomatic interpretations of -n't and shall in inverted yes-no questions.

(20) a. Isn't John here?
(Compare to I wonder whether John isn't here)
b. Shall we go in?
(Compare to I wonder whether we shall go in)
If lexical insertion takes place at D-Structure, somehow these idioms must be specified to undergo certain movement rules, but only those! On the other hand, if lexical insertion/licensing takes place at S-Structure, then it is natural to expect that some idioms (if not many) will stipulate "transformed" surface forms with which they must unify. At the same time, the inability of kick the bucket, throw in the towel, and the like to undergo passive is perfectly natural if these too stipulate surface forms.
The problem posed by S-Structure lexical licensing is then why some idioms, such as those in (21), happen to have mobile chunks.

(21) a. The hatchet seems not to have been buried yet by those skaters.
b. The ice was finally broken by Harry's telling a dirty joke.
c. The line has to be drawn somewhere, and I think that it should be when people start claiming nouns are underlying verbs.
d. The cat was believed to have been let out of the bag by Harry.
A key to the answer is suggested by Wasow, Nunberg, and Sag (1984), Ruwet (1991), and Nunberg, Sag, and Wasow (1994); a similar proposal is made by Bresnan (1978). In an idiom such as kick the bucket, bucket has no independent meaning and therefore no θ-role. On the other hand, the idioms in (21) can be taken as having a sort of metaphorical semantic composition.

(22) a. bury the hatchet = 'reconcile a disagreement'
b. break the ice = 'break down a (fragile rigid) barrier to social interaction'
c. draw the line = 'make/enforce a distinction'
d. let the cat out of the bag = 'reveal a secret'
As a result, the LCSs of these idioms can be partitioned into chunks that correspond to "subidiomatic" readings of the syntactic idiom chunks: bury means 'reconcile' and the hatchet means 'disagreement', for example. This permits us to construct a lexical entry for bury the hatchet something like (23). (In the interests of clarity, I retain the phonological structure shared with the words bury, the, and hatchet.)
(23) aWd   bWd   cWd          aVx   y[NP bDet cN]
       |     |     |
     bury   the  hatchet
     x[RECONCILE ([ ]A, [DISAGREEMENT]y)]
Let's unpack this. The LCS is of course a raw approximation, containing only the critical parts of argument structure. The subscript x on the whole LCS says that this maps into the verb. The empty argument subscripted A will be filled by the conceptual structure of the external argument of the sentence. In an ordinary transitive verb, the second argument would also be subscripted A, so that it would link to the direct object. But in this case it is mapped into a stipulated constituent in the syntax by the subscript y, namely an NP consisting of Det + N. In turn, these syntactic pieces map into the phonology.
What is crucial here is that the syntactic structure does not stipulate a VP constituent: the V and the NP are not syntactically connected in any way. Therefore the NP is free to be anywhere in S-Structure, as long as it heads a chain whose lowest trace can be θ-marked as the second argument of the verb. That is, the hatchet behaves exactly like any other "underlying object" in syntax, receiving its θ-role via the principle RUGR of section 4.4.
Notice that the phonology does not stipulate adjacency of the three morphemes either. As a result, not only can the hatchet be displaced, but the and hatchet can be separated by modifiers such as proverbial and metaphorical, which do not interfere with unification in syntax and which can be semantically integrated with the LCS of the idiom.
This account with "disconnected" syntax depends on the possibility that each piece can be connected with a piece of the LCS. By contrast, in kick the bucket, there is no possibility of distinguishing the bucket as an argument in the LCS, so the lexical entry has to look like (24), with a fixed VP.

(24) [VP aVx [NP bDet cN]]   (a: kick, b: the, c: bucket)

In other words, the hatchet is linked to bury via its θ-role; but the bucket has to be linked to kick syntactically because it has no θ-role. Hence the hatchet is movable and the bucket is not.
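The contrast between (23) and (24) can be summed up as a property of the listing: a chunk is mobile just in case it is linked to its own piece of the LCS and so receives a θ-role; otherwise it is frozen inside a lexically stipulated VP. A hypothetical encoding of my own (names and fields invented):

IDIOMS = {
    "bury the hatchet": {   # (23): V and NP listed as disconnected pieces
        "lcs_chunks": {"bury": "RECONCILE", "the hatchet": "DISAGREEMENT"},
        "fixed_vp": False,
    },
    "kick the bucket": {    # (24): a fixed lexical VP, one unanalyzed LCS
        "lcs_chunks": {"kick the bucket": "DIE"},
        "fixed_vp": True,
    },
}

def chunk_is_mobile(idiom: str, chunk: str) -> bool:
    entry = IDIOMS[idiom]
    # Movable iff the chunk has its own LCS piece (hence a theta-role)
    # and the syntax does not stipulate a fixed VP around it.
    return not entry["fixed_vp"] and chunk in entry["lcs_chunks"]

print(chunk_is_mobile("bury the hatchet", "the hatchet"))  # True: passivizable
print(chunk_is_mobile("kick the bucket", "the bucket"))    # False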
On the other hand, since the hatchet does not contrast semantically with any other freely substitutable expression, it cannot be used in syntactic positions that entail contrast, for instance as topic: The hatchet, we wouldn't dare bury is only grammatical in the literal reading. Thus the fact that an idiom chunk can be moved by passive and raising does not entail that it can always be moved.
This approach is confirmed by an observation made by Abeillé (1995) and Schenk (1995). Suppose a passivizable idiom chunk contains a free position, for example pull X's leg. Although leg does not contrast with anything and thus cannot be focused or topicalized, X may, in which case it can carry leg along by pied-piping, yielding examples like MARY's leg, you'd better not pull; she has a lousy sense of humor. (This includes clefting: It was MARY's leg, not JOHN's, that we were trying to pull. So this analysis is not confined to movement.)
The story is not so simple, though. Paul Postal (personal communication) has pointed out that there are some idioms that have a plausible semantic decomposition but like kick the bucket do not readily undergo passive.

(25) raise hell = 'cause a serious disturbance'
*Hell was raised by Herodotus. (though ?A lot of hell will be raised by this proposal)

(26) give the lie to X = 'show X to be a falsehood'
*The lie was given to that claim by John. (though ?The lie was given to that claim by John's subsequent behavior)
For such idioms, it would be possible to say that despite their semantic decomposition, they are lexical VPs rather than a collection of syntactic fragments. That is, having a decomposition is a necessary but not a sufficient condition for mobility of idiom chunks. This concurs with Ruwet's (1991) conclusion. (However, see Abeillé 1995 for further complications that I do not pretend to understand.)
To sum up, the treatment of idioms benefits from the theory of lexical licensing developed in chapter 4. Multiword structures are easily inserted into complex syntactic structures if the process in question is unification rather than substitution. The fact that many idioms are rigid in syntactic structure, and that some of them have rigid "transformed" structures, follows from the fact that lexical licensing takes place at S-Structure rather than D-Structure. The fact that some idiom chunks are still movable follows from the possibility of assigning them independent bits of meaning in the idiom's LCS, so that they can be integrated with the verb of the idiom by normal θ-marking.13
Returning to our main issue: by adopting a theory in which lexical items larger than X0 can license syntactic structures, we can incorporate
cliches and idioms into the lexicon. Furthermore, many of the curious properties of idioms are altogether parallel to those already found within lexical word grammar, so they add no further complexity to grammatical theory. As for the remaining properties of idioms, they may have to be added to grammatical theory; but they had to be added somewhere in the theory of mind in any event. The point is that those generalizations that do exist can be captured; by contrast, a theory that relegates idioms to some unknown general-purpose component of mind incurs needless duplication.
7.7 Idioms That Are Specializations of Other Idiomatic Constructions
One interesting class of idioms in the Wheel of Fortune corpus are resultative idioms, listed in (27).

(27) a. cut my visit short
b. I cried my eyes out
c. knock yourself out ('work too hard')
d. you scared the daylights out of me
These are idiomatic specializations of the resultative construction, which has the syntactic structure (28a) and the interpretation (28b).

(28) a. [VP V NP PP/AP]
b. 'cause NP to go PP/to become AP, by V-ing (NP)'
To show what I mean by idiomatic specializations, I first must briefly discuss the nature of the resultative construction. (29a-d) are standard examples of this construction.

(29) a. The gardener watered the tulips flat. ('the gardener caused the tulips to become flat by watering them')
b. We cooked the meat black. ('we caused the meat to become black by cooking it')
c. The professor talked us into a stupor. ('the professor caused us to go into a stupor by talking')
d. He talked himself hoarse. ('he caused himself to become hoarse by talking') ("fake reflexive" case)
It has been argued (Jackendoff 1990, chapter 10; Goldberg 1992, 1995) that the resultative is a "constructional idiom," a match of syntactic and conceptual structure that is not determined by the head verb. Although the verb is the syntactic head of the VP, it is not the semantic head; rather, its meaning is embedded in a manner constituent, as indicated roughly in (28b). As a result, the argument structure of the VP is determined by the meaning of the constructional idiom, essentially a causative inchoative of the sort expressed overtly by verbs such as put and make.14
Under this approach to the resultative, the construction is listed in the lexicon just like an ordinary idiom, except that it happens to have no phonological structure: it is a structure of the form ⟨∅, SS, CS⟩, where SS happens to be a whole VP. (28) presents the matching, with (28b) as a very informal account of the conceptual structure. At the same time, since the syntactic structure and conceptual structure provide open places for the V, NP, and PP/AP, the construction is productive; hence it differs from ordinary idioms, whose internal elements are fixed.
To see this more clearly, consider a related constructional idiom that is more transparent: the "way-construction" exemplified in (30).

(30) a. Bill belched his way out of the restaurant.
b. We ate our way across the country.

Note that in these sentences the intransitive verb cannot license anything in the rest of the VP; and the NP in direct object position, X's way, is not exactly a normal direct object. Goldberg (1993, 1995) and I (Jackendoff 1990) argue that this construction too is an idiom, listed in the lexicon with the following structure:15
(31) PS: way (= Wda); SS: [VPx Vy [NP NP+poss Na] PPz]; CS: as glossed below
(The conceptual structure in (31) is coded in the notation of Jackendoff 1990; it says essentially 'Subject goes along Path designated by PP, by V-ing'. The fact that the subject functions as semantic subject of GO and of the main verb is encoded by the binding of the arguments designated by a, as mentioned in section 3.7. See also Goldberg's treatment for further complications in meaning and for discussion of Marantz's (1992) counterproposal.)
(31) licenses correspondences of syntactic structure and conceptual structure that do not follow canonical principles of argument structure mapping. As a result, the verb is not what licenses the argument structure of the rest of the VP; rather, the construction does. Furthermore, the odd noun way is the only phonological reflex of the construction. Again, because there are open places in syntactic structure and conceptual structure, the construction is productive, unlike ordinary idioms. Thus a construction like this offers yet another source of enriched semantic composition, in addition to the types discussed in chapter 3.
As with the resultative, there exist fixed expressions built out of the way-construction. For instance, the form wend occurs only in the expression wend one's way PP; the form worm appears as a verb only in the expression worm one's way PP. In addition, there is the expression sleep one's way to the top, 'reach the top of a social hierarchy by granting sexual favors', in which sleep and top must be taken in a special sense. If the lexicon allows listing of instances of a productive pattern with further specification and/or overrides (as argued for compounds in section 7.5), then it is natural to expect such specialized forms of the productive construction, containing cranberry morphs and/or special metaphorical meanings.
Returning to the idioms in (27), these bear the same relation to the resultative construction as wend one's way PP does to the way-construction. It is interesting that different specializations pertain to different arguments of the construction. In (27a), for instance, the idiom is cut NP short, where the free argument is the direct object; but in (27d) it is scare the daylights out of NP, where the free argument is the prepositional object. In (27b) the idiom is the whole VP cry NP's eyes out, where NP has to be a bound pronoun. It shares this bound pronoun with the way-construction and many other idioms such as gnash NP's teeth and stub NP's toe (in which, incidentally, the verbs are cranberry morphs).
How does the lexicon express the relation of a specialized fixed expression to a more general construction of which it is an instance? It seems to me that this relationship is actually not so different from that between subordinate and superordinate concepts such as robin and bird. In either case, both members of the relationship have to be listed in memory. Indeed, as is well known, it is not unusual for a subordinate concept to override features of its superordinate, for instance in stipulating that an ostrich, though a bird, is flightless. In other words, the general cognitive machinery necessary for working out hierarchical relationships of concepts seems not inappropriate for working out hierarchical relationships among lexical constructions and idioms built on them.16
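The parallel can be made concrete with a minimal sketch of default inheritance, again in Python; the nodes and features are invented for exposition. A subordinate entry inherits whatever it does not itself stipulate, and its stipulations override the superordinate's defaults:

```python
# A minimal sketch of default inheritance with overrides; names invented.

class Node:
    def __init__(self, name, parent=None, **features):
        self.name, self.parent, self.features = name, parent, features

    def get(self, feature):
        # Look up a feature locally; otherwise defer to the superordinate,
        # so that local stipulations override inherited defaults.
        if feature in self.features:
            return self.features[feature]
        return self.parent.get(feature) if self.parent else None

bird = Node("bird", can_fly=True)
ostrich = Node("ostrich", parent=bird, can_fly=False)          # override

way_cxn = Node("way-construction", verb="<open slot>")         # productive
wend = Node("wend one's way PP", parent=way_cxn, verb="wend")  # specialized

print(ostrich.get("can_fly"))  # False: the subordinate overrides the default
print(wend.get("verb"))        # 'wend': the specialization fixes the open slot
```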
7.8 Relation to Construction Grammar
Such an account of constructional idioms and their lexical specializations makes sense, of course, only in a theory of the lexicon that permits phrasal lexical items, that is, in a theory of lexical licensing rather than lexical insertion. This approach makes clear contact with work in Construction Grammar (Fillmore, Kay, and O'Connor 1988; Fillmore and Kay 1993; Goldberg 1995), whose premise is that a language typically contains a substantial number of syntactic constructions that (1) are not entirely derivable by principles of "core grammar" and (2) receive specialized interpretations. Typical examples for English are shown in (32).
(32) a. The more he likes her, the more she dislikes him.
b. Bill hates chemistry class, doesn't he?
c. One more beer and I'm leaving. (Culicover 1972)
d. Fred doesn't even like Schubert, let alone Mahler. (Fillmore, Kay, and O'Connor 1988)
e. Day by day things are getting worse. (Williams 1994)
Since these constructions, like the way-construction, are listed as triples of partly specified phonological, syntactic, and conceptual structures, the boundary between "lexicon" and "rule of grammar" begins to blur. For instance, Williams (1994) notes that day by day belongs to a semiproductive family of idioms of the form N1-P-N1: week after week, inch by inch, piece by piece, bit by bit, cheek to cheek, and hand over hand (plus the outlier hand over fist, with two different nouns). Or are they compounds? It's hard to tell, because they resist modification, other than perhaps inch by agonizing inch.
Another such family includes compound prepositions like in consideration of, with respect to, in touch with, under penalty of, in search of, in need of, in spite of, in synch with, in line with, and by dint of (with a cranberry morph). I call these expressions compounds because, as in other products of sub-X0 composition, the internal noun resists any determiner or other modification (*in the consideration of, etc.).17 However, a quite similar group does allow some syntactic flexibility: in case of war/in that case, for the sake of my sanity/for Bill's sake, on behalf of the organization/on Mike's behalf. Hence the latter group looks more like a syntactic construction. We thus have two very closely related constructions, one of which looks morphological, the other syntactic. This helps point up the necessity of treating idiosyncratic syntactic constructions in a way as close as possible to semiproductive morphology.
How far might one want to push this line of thought? Along with Fillmore and Kay (1993), Goldberg (1995), and in some sense Langacker (1987), one might even want to view the "core rules" of phrase structure for a language as maximally underspecified constructional idioms. If there is an issue with the Construction Grammar position here, it is the degree to which the "core rules" are inherently associated with semantics, in the way specialized constructions like (32) are.
To elaborate this issue a little, consider Goldberg's (1995) argument (related to that in Jackendoff 1990) that the double object construction in English always marks the indirect object as a beneficiary of some sort; she infers from this that the V-NP-NP construction carries an inherent meaning. However, a larger context perhaps leads to a different conclusion. Languages differ with respect to whether they allow ditransitive VPs: English does, French does not. The possibility of having such a configuration is a syntactic fact; the uses to which this configuration is put (i.e., the SS-CS correspondence rules that interpret ditransitive VPs) are semantic. And in fact, although English ditransitive VPs are used most prominently for Beneficiary + Theme constructions (so-called to-datives and for-datives), the same syntax is also recruited for Theme + Predicate NP, hence the ambiguity of Make me a milkshake (Poof! You're a milkshake!). And ditransitive syntax also appears with a number of verbs whose semantics is at the moment obscure, for instance I envy you your security, They denied Bill his pay, The book cost Harry $5, and The lollipop lasted me two hours. At the same time, there are verbs that fit the semantics of the dative but cannot use it, for example Tell/*Explain Bill the answer.
In addition, other languages permit other ranges of semantic roles in the ditransitive pattern. Here are some examples from Icelandic (33) and Korean (34) (thanks to Joan Maling and Soowon Kim respectively):
(33) a. Þeir leyndu Ólaf sannleikanum.
        they concealed [from] Olaf(ACC) the-truth(DAT)
     b. Sjórinn svipti hana manni sínum.
        the-sea deprived her(ACC) husband(DAT) her
        'The sea deprived her of her husband.'
        (Zaenen, Maling, and Thráinsson 1985)
(34) a. John-un kkoch-ul hwahwan-ul yekk-ess-ta.
        John-TOP flowers-ACC wreath-ACC tie-PAST-IND
        'John tied the flowers into a wreath.'
     b. John-un cangcak-ul motakpul-ul ciphi-ess-ta.
        John-TOP logs-ACC campfire-ACC burn-PAST-IND
        'John burned the logs down into a campfire.'
This distribution suggests that there is a purely syntactic fact about whether a particular language has ditransitive VPs. In addition, if it has them, the SS-CS correspondence rules of the language must specify for what semantic purposes (what three-argument verbs and what adjunct constructions) they are recruited. It is not that French can't say any of these things; it just has to use different syntactic devices. And although the grammar of English no doubt contains the form-meaning correspondence for datives described by Goldberg, the availability of that form as a target for correspondences seems an independent, purely syntactic fact, also part of the grammar of English.
A similar case, concerning the contrast between English and Yiddish topicalization, is discussed by Prince (1995). On her analysis, the broader uses of Yiddish topicalization are not a consequence of a movement rule that permits a wider range of constituents to be moved; instead, they are available because this syntax is linked with a larger range of discourse functions. Yiddish speakers learning English tend to get the syntax right-they do not apply verb-second, for instance. But they tend to use the English construction with Yiddish discourse functions, yielding "Yinglish" sentences such as A Chomsky, he's not. The point of her argument is that there is a certain amount of independence between the existence of a syntactic construction and what meaning(s) it can express.
If one wishes to adopt the stance of Construction Grammar, then, it seems one should maintain that among the most general constructions, the bread and butter of the syntax, are constructions that are purely syntactic, the counterpart of phrase structure rules. These then would be phrasal lexical items that are of the form (0, SS, 0). These could then be specialized into particular meaningful constructions, in the same way as meaningful constructions such as the resultative can be specialized into idioms.
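Schematically (an illustrative sketch only; the data layout is invented for exposition, not proposed here as a theory), this amounts to listing a bare syntactic frame separately from the correspondence rules that exploit it:

```python
# A rough rendering of the proposed division of labor; layout invented.

# A purely syntactic phrasal lexical item: a (0, SS, 0) triple.
ditransitive_frame = {"PS": None, "SS": "[VP V NP NP]", "CS": None}

# Separately listed SS-CS correspondence rules recruit the frame for
# particular meanings; a language lacking the frame (e.g., French) simply
# lists neither the frame nor any such rules.
english_ss_cs_rules = [
    ("[VP V NP NP]", "Beneficiary + Theme"),   # give Bill a book
    ("[VP V NP NP]", "Theme + Predicate NP"),  # make me a milkshake
]
```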
But in principle there should be no problem with a theory that blurs
these boundaries: a strict separation of lexicon and phrasal grammar, like
a strict separation of word lexicon and idiom lists, may prove to be yet another methodological prejudice. I find this an intriguing question for future research.
7.9 Summary
To summarize the main points of this chapter:
1. There are too many idioms and other fixed expressions for us to simply disregard them as phenomena "on the margin of language."
2. From the point of view of Representational Modularity, what counts as "part of language" is anything stored in memory in phonological and/or syntactic format.
3. The standard assumptions about lexical insertion make it difficult to formulate a general theory of idioms; but these assumptions are in any
event unsatisfactory on other grounds.
4. The alternative theory of lexical licensing conforms to Representational Modularity. It permits fixed expressions in the lexicon as a matter of course: like words, they are fixed matchings of (PS, SS, CS).
5. The theoretical machinery necessary for the noncompositional aspects of morphology (especially compounds) goes a long way toward providing an account of the noncompositionality of idioms as well, in particular allowing for the listing of partly redundant instances of a productive pattern, the further specialization of such instances, and the possibility of
overrides.
6. At least some of the syntactic mobility of idioms can be accounted for in terms of how they map their meanings onto syntax.
7. This approach to idioms extends readily to syntactic constructions with specialized meanings, and perhaps to more general phrase structure configurations as well.
In short, within a theory of lexical licensing, it does not necessarily cost
very much in the generality and constraint of grammatical theory to add a
theory of idioms, other fixed expressions, and constructions, properly enlarging the scope of linguistic knowledge.
Chapter 8
Epilogue: How Language Helps Us Think
8.1 Introduction
It may seem out of place to conclude a book about technical linguistics
with a chapter on how language helps us think. However, some interest­
ing answers to this question emerge from the architecture for language developed here and from Representational Modularity. Part of my goal in this book is to emphasize the continuity of technical linguistic theory with the other cognitive sciences. Hence I find it interesting to turn from the nitty-gritty of lexical licensing to one of the most general issues a theory of linguistics may confront. Many outside of linguistics have speculated on it, but without an informed view of language. By contrast, few inside of linguistics have tried. Emonds (1991) makes some relevant remarks (in reaction to which this chapter developed); Chomsky's (1995) minimal speculations were mentioned in chapter 1. Bickerton (1995) thinks through the question much more comprehensively, and I will draw on some of his ideas in the course of the discussion.
Part of the difficulty in attacking this question is that it seems altogether natural that language should help us think-so natural as almost to require no explanation. We differ from other animals in being smarter, in being able to think (or reason) better; and we differ from other animals in having language. So the connection between the two seems obvious. In this chapter, therefore, I must first show that the fact that language helps us think does require an explanation, before being able to give an idea of what the explanation might be like.
From the way I have phrased the question, it should be clear that I am emphatically not expecting an absolute connection between language and thought; otherwise I would have entitled the chapter "Does Language Enable Us to Think?" Rather, I will be promoting a somewhat more complex position.
1. Thought is a mental function completely separate from language, and it can go on in the absence of language.
2. On the other hand, language provides a scaffolding that makes possible certain varieties of reasoning more complex than are available to
nonlinguistic organisms.
Thus the question of how language helps us think may be focused further.
3. How much of our increased reasoning power is due simply to our big brains, and how much of it is specifically due to the presence of a language faculty and why?
For reasons discussed in sections 1.1.6 and 2.6, we continue to think of mental functions in computational terms. In particular, we have found it useful to think of brain processing in terms of the construction of representations within representation modules and the coordination of these representations by interface modules. Thus particular mental functions are to be localized in particular representation or interface modules. In addition, we have taken care to ask how the information that appears in working memory (in speech perception and/or production) is related to long-term memory knowledge of language (lexicon and grammar).
Seen from a larger perspective, then, this chapter is an exercise in
showing how to tease apart which linguistic phenomena belong to which
modules. For cognitive scientists interested in other faculties of mind, this
may be taken to provide a model for how mental faculties can be decon­
structed, a formal and functional approach that can complement inves­
tigation into brain localization.
8.2 Brain Phenomena Opaque to Awareness
In working out a computational theory of brain function, it immediately becomes necessary to claim that certain steps behind common sense are hidden from conscious experience. Such a claim supposes that we can distinguish two sorts of phenomena in the brain: those that are present to awareness-that show their face as something we experience-and those that are not (with perhaps some in-between cases as well). The immediate question that arises, then, is which information within the brain contributes directly to the character of conscious experience and which only indirectly.
We can immediately observe that experience is connected to working
memory and not long-term memory. We can be aware of a (long-term)
memory only if the memory is recovered into working memory. Hence,
for example, the lexicon and the rules of grammar are not accessible to
awareness. Only their consequences, namely linguistic expressions, are
consciously available.
In earlier work (Jackendoff 1987a; henceforth C&CM) I proposed the further hypothesis that the distinction between conscious and unconscious information can also be made partly in terms of the levels of representation available to the brain-that some levels much more than others can be seen mirrored directly in the way we experience the world. If this is the correct approach, it makes sense to ask which of the levels of representation are the ones most directly connected to awareness.
Let us start with a simple case, far from language. Notice how the images in your two eyes are subtly different. If you alternately close one eye and then the other, you see the image change. But in normal binocular vision, the two images are fused into a single image, which contains an element of depth not present in either eye separately. How does your brain do that? No matter how hard you think about it, you can't catch yourself in the act of fusing the two images into one. Phenomenologically, it just happens, as if by magic. Explaining exactly how the brain does this is a major preoccupation of vision research (e.g., Julesz 1971; Marr 1982; Crick 1994). But even when we figure out how the brain does it, we still won't be able to catch ourselves in the act! And this is true of peripheral information in general. We can't be aware of the frequency analysis our auditory system performs on an incoming sound wave; we just hear a sound (Bregman 1990). We can't be aware of the input from stretch receptors in the proprioceptive system; we just feel a weight (Lackner 1985).
In fact, it seems plausible that all peripheral sensory representations in the brain are totally inaccessible to awareness. A crude diagram like figure 8.1 schematizes the situation: what we might call the "outer ring" of representations is completely unconscious.
Once we have established that some representations are unconscious,
the obvious question that arises is exactly which are unconscious. Is it only the peripheral ones, or are there more? Section 8.4 will try to show that there are indeed more, and subsequent sections will work out interesting consequences for the relation of language and thought.
[Figure 8.1. An outer shell of sensory representations (eyes, ears, tongue, body) is unconscious.]
Note that I am not asking the questions everyone seems to ask, namely
"What Is Consciousness?" or "What Is Consciousness For?" (This in­
cludes, for instance, Hofstadter 1979, Searle 1992, Baars 1988, and Block 1995.) I suspect that many people who insist on this question (though perhaps not the authors just cited) really want to know something like "What makes human life meaningful?" And in many cases-my students have often been quite explicit about this-they are just hoping that somehow we'll be able to bring back some magic, some hope for a soul or the possibility of immortality. By contrast, I am asking a less cosmic and more structural question, simply, "Which representations are most directly reflected in consciousness and which ones are not, and what are the consequences for the nature of experience?" If you don't care for this question, I am the first to acknowledge that it's a matter of taste; but I hope to convince you that my question can still lead to a certain kind of progress.
8.3 Language Is Not Thought, and Vice Versa
Let us focus more closely on the relation of language and thought. We
very often experience our thought as "talking to ourselves"-we actually hear words, phrases, or sentences in our head-and it's very tempting therefore to characterize thought as some sort of inner speech. I want to argue, though, that thought cannot be simply bits of language in the head, that it is a different kind of brain phenomenon. These arguments are by no means new (see Dascal 1995 for some history of the dispute), but it's worth rehearsing them again in this context.
First, thinking is largely independent of what language one happens to think in. A French speaker or a Turkish speaker can have essentially the same thoughts as an English speaker can-they're just in French or Turkish.2
The point of translation between languages is to preserve the thought behind the expression. If different languages can express the same thought, then thoughts cannot be embalmed in the form of any single language: they must be neutral with respect to what language they are expressed in. For instance, the same thought can be expressed in English, where the verb precedes the direct object, and in Japanese, where the verb follows the direct object. Hence the form of the thought must not depend on word order. Language, by contrast, does depend on word order: you (or your brain) have to choose a word order in order to say a sentence-or even just to hear it in your head. Consider also bilinguals who can "think in two languages." We would like to be able to say their thoughts are essentially the same, no matter which language they are "thinking in." This is possible only if the form of thought is neither of these languages.
In chapter 2, following earlier work (Jackendoff 1983, 1987a, 1990), I took the position that the nonlinguistic level of conceptual structure is one of the forms in which thought is encoded, and in terms of which inference takes place.
In fact, we find dissociations of ability in linguistic expression (syntax and phonology) from reasoning, for example children with Williams syndrome (Bellugi, Wang, and Jernigan 1994). These individuals at first glance seem rather precocious, because they chatter away in animated fashion, using all kinds of advanced vocabulary. But their language skills turn out to be an isolated high point in an otherwise bleak intellectual landscape; they evidently cannot use their advanced language to reason very well.
Another such case is the linguistic idiot savant studied by Smith and
Tsimpli (1995). This is an individual retarded in nearly every respect except for the learning of languages, in which he displays remarkable ability, having acquired substantial fluency in Danish, Dutch, Finnish, French, German, Greek, Hindi, Italian, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Turkish, and Welsh. Interestingly, although his word-for-word translation and use of grammatical inflection is excellent, he is poor at making use of contextual relevance to refine his translations,
suggesting a dissociation between using language and understanding it in
any depth.
In these cases, then, linguistic expression is highly structured but is
not connected to commensurately complex meaning. Conversely, complex
thought can exist without linguistic expression. Think about people like Beethoven and Picasso, who obviously displayed a lot of intelligence and deep thought. But their thoughts were not expressible as bits of language-their intelligence was in the musical and visual domains respectively. (Music and visual art are indeed modes of communication, but not languages; see Lerdahl and Jackendoff 1983 for discussion.)
For a much more mundane example, think of the motor intelligence
displayed in the simple act of washing dishes. It is well beyond current
understanding in computer science to program a pair of robot eyes and
hands to wash dishes in the manner that we do, with the skill and flexibility that any of us can muster. I suspect it would be impossible to train an animal to carry out such a task, much less induce it to have the persistence to want to finish the job (this parallels in part Dennett's (1995) example of tending a fire). Yet very little of this skill is verbalizable.
It would seem to me, then, that humans display considerable intelligent behavior that is governed by representations outside of syntax and phonology. This may reveal the existence of further cognitive specializations in humans besides language, or it may be simply a consequence of the big brain affording more computational capacity in preexisting modules.
Which is closer to correct, it seems to me, is a matter for future research.
Turning to animals, there are ample arguments for reasoning in nonhuman primates, going back to Köhler (1927) and continuing in a rich tradition to the present. If apes and monkeys can think without language, at least to some degree, then we are forced to acknowledge the independence of thought from language. To make this point more vividly, it is of interest to consider a phenomenon called redirected aggression, described by Cheney and Seyfarth (1990). From extended observation of and experimentation with vervet monkeys in the wild, Cheney and Seyfarth show that if monkey X attacks monkey Y, there is a strong likelihood that monkey Y will shortly thereafter attack some other member of X's kin-group. Consider what reasoning must be attributed to Y to explain this behavior: Y must know (1) that X attacked Y, (2) that retaliation is a desirable response, (3) that this other monkey, call her Z, is a member of X's kin-group, (4) that kin-groups count as some sort of equivalence class for attack and retribution, and (5) that facts (1)-(4) give Y reason to attack Z. This seems to me like fairly complexly structured thought, and, other than the actual attack, the facts involved are related to perception only very abstractly. The level of conceptual structure has
to perception only very abstractly. The level of conceptual structure has
the appropriate degree of abstraction for expressing them. (Cheney and
Seyfarth mount extended arguments against regarding this behavior as
any sort of simple stimulus generalization; their explicit goal is to con­
vince animal behaviorists that reasoning is indeed taking place. )
The picture that emerges from these examples is that although language
expresses thought, thought itself is a separate brain phenomenon.
Bickerton (1995) claims, by contrast, that language itself is the form in which thought is encoded; Emonds (1991) makes a similar claim that it is language that permits "connected thought." However, neither of them addresses the question of what levels of linguistic representation support reasoning and how rules of inference can be defined over them. They seem to think that syntactic structure is an appropriate structure for thought. Yet syntactic structure does not carry such elementary semantic distinctions as the difference between dogs and cats (see chapter 4), so it clearly cannot support inference of any complexity.
Bickerton's and Emonds's claims rest in part on their assumption that θ-roles (the relations of who did what to whom) are syntactic notions: they argue that since reasoning involves computation of θ-roles, it must necessarily turn on syntactic structure. I have argued extensively (Jackendoff 1972, 1976, 1990, and many other places) that θ-roles are aspects of conceptual structure, not of syntax, so Bickerton's and Emonds's arguments are unfounded.
Bickerton also confuses the broad notion of syntax (i.e., the organization of any formal system) with the narrow notion of syntax (the organization of NPs and VPs). He says that since thought is (broadly) syntactic, it requires (narrow) syntax, which is simply false.
Finally, Bickerton calls the aspect of language used for reasoning, abstracted away from the particulars of the syntax and phonology of individual languages, "Language-with-a-capital-L." This seems to me to fall in with Chomsky's statement, quoted in chapter 1, to the effect that if only we had telepathy, language could be "perfect" and we could communicate pure thought. Along with Pinker (1992), I submit that the level of representation satisfying these desiderata is precisely conceptual structure, a level with which language communicates, but not itself a strictly linguistic level.5
8.4 Phonetic Form Is Conscious, Thought Is Not
Next I want to point up a difference between language and thought with respect to consciousness. Compare rhyming with entailment. When we observe that two words rhyme, say mouse and house, we can easily see why: the ends of the two words sound the same, -ouse. That is, the relevant parts of words in a rhyming relationship are immediately transparent to awareness. By contrast, consider an entailment relationship, for instance if Bill killed Harry, then Harry died. This entailment is intuitively trivial. But exactly what parts of the first sentence are responsible for entailing exactly what parts of the second sentence? One is tempted to say "That's just what killing is: making someone die." But this merely restates the problem: there's nothing about the conscious form of the word kill that makes it obvious that it has anything to do with the form of the word die.6
The relationship of entailment feels to us as obvious and mechanical as the relationship of rhyming, but at the same time we can't put our finger
on what parts of the words make the entailment mechanical. Entailment
involves an intuitive step, a step that is opaque to awareness.
In this light, we can regard the development of formal logic, from the Greeks to the present, as a series of attempts to make explicit the steps involved in reasoning, to lay bare the mechanical principles of thought. Such an enterprise has always been deemed necessary precisely because these principles are not evident in natural language. By contrast, until the advent of modern phonology, no one found it necessary to uncover the mechanical principles behind rhyming: they are self-evident.
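The contrast can be dramatized computationally. In the toy illustration below (mine, for exposition; the decomposition table is an invented stand-in for conceptual structure), rhyme is checked directly on the consciously available word form, while the kill/die entailment succeeds only by consulting a hidden level:

```python
# Toy contrast: rhyme is computed on the audible word form itself; the
# kill/die entailment goes through only via a hidden decomposition.

def rhymes(word1, word2, n=4):
    # Crude surface test: compare the final letters of the two forms.
    return word1[-n:] == word2[-n:]

LEXICAL_CS = {"kill": ("CAUSE", "DIE"), "die": ("DIE",)}  # toy decompositions

def entails_dying(verb):
    # Nothing in the visible strings "kill" and "die" licenses the inference;
    # it is carried entirely by the unconscious conceptual structure.
    return "DIE" in LEXICAL_CS.get(verb, ())

print(rhymes("mouse", "house"))  # True - checked on the audible form
print(entails_dying("kill"))     # True - but only via the hidden level
```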
Of course, a major difference between rhyming and entailment is that rhyming is a relation between the linguistic form of words-in particular their phonetic form-whereas entailment is a relation between the meanings of sentences, between the thoughts the sentences express.

[Figure 8.2. The forms governing rhymes are available to awareness; the forms governing entailment are not.]

In C&CM I
proposed that this difference is a key to understanding why the former is transparent to awareness and the latter is not. Figure 8.2 schematizes my analysis. The idea is that phonetic forms are available to awareness-we can consciously analyze them into their parts, at least to a degree. We can hear directly the analysis into syllables, and with training that everyone has undergone in learning to read, the analysis into individual phonetic segments. (On the other hand, we cannot hear directly the analysis into distinctive features; this is why beginning linguistics students have such difficulty memorizing the features.) Thus the structures involved in judging rhymes are consciously available.
By contrast, conceptual structures, although expressed by conscious linguistic forms, are not themselves available to consciousness. Thus in figure 8.2, phonetic form1 entails phonetic form2 only indirectly, via a principled relationship between the conceptual structures they express. Since the conceptual structures are not consciously available, entailment has an intuitive, quasi-magical gap in the way it is experienced.
More generally, I am inclined to believe that thought per se is never
conscious. This may sound radical, but consider: When we engage in
what we call conscious thinking, we are usually experiencing a talking
voice in the head, the so-called stream of consciousness, complete with segmental phonology, stress, and intonation. In other words, we are experiencing something that has all the hallmarks of phonetic form. For most of us, this voice never shuts up-we have to do Zen or something to make it quiet in there. But we know that phonetic form is not the form of thought; it is rather a consciously available expression of the thought. If
we can catch ourselves in the act of thinking, then, it is because the linguistic images in our heads spell out some of the steps.
Remember too that there are those times when an inspiration jumps
into awareness; it comes as if by magic. Of course, as good cognitive scientists, we're not allowed to say it's magic. Rather, we have to assume that the brain is going about its business of solving problems, but not making a lot of conscious noise about it; reasoning is taking place without being expressed as language. So when the final result pops out as a linguistic expression ("Hey! I've got it! If we just do such and such, everything will work fine!"), it comes as a surprise.
So far, then, I'm advocating that we become aware of thought taking
place-we catch ourselves in the act of thinking-only when it manifests
itself in linguistic form, in fact phonetic form.
This is an oversimplification, however, because there exist other conscious manifestations of thought, such as visual images. For instance, if I say Bill killed Harry, you may have a visual image of someone sticking a knife into someone else and of that second person bleeding and falling down and ceasing to breathe, and you say "Oh yes, Harry died!" So you might suspect that the connection between the two thoughts is made in the
visual image, a mental phenomenon that does appear in consciousness.
The problem with this solution was pointed out by Bishop Berkeley back in the 18th century; the modern version of the argument appears in Fodor 1975. An image is too specific. When I said Bill killed Harry, your image had to depict Bill stabbing or strangling or shooting or poisoning or hanging or electrocuting Harry. Any of these count as killing. And Harry had to fall down, or expire sitting down, or die hanging from a rope. Any of these count as dying. So how can any of them be the concept of killing or dying? That is, the thoughts expressed by the words kill and die, not to mention the connections between them, are too general, too abstract to be conveyed by a visual image.
A second problem, emphasized by Wittgenstein (1953), concerns identification. How do you know that those people in your visual image are Bill and Harry respectively? There's nothing in their appearance that gives them their names. (Even if they're wearing sweatshirts with Bill and Harry
emblazoned on them, that still doesn't do the trick!)
A third problem: What is the visual image that corresponds to a question like Who killed Roger Rabbit? This sentence clearly expresses a comprehensible thought. But there is nothing that can be put in a visual image to show that it corresponds to a question rather than a statement, say
Someone unknown killed Roger Rabbit. In fact, what could the image be
even for someone unknown? The situation is still worse when we try to
conjure up a useful image for virtue or social justice or seven hundred
thirty-two.
My view is that visual images, like linguistic images, are possible conscious manifestations of thought, but they are not thoughts either. Again, this is nothing new.
But now let me put this together with figure 8.1. Figure 8.1 distinguishes a sort of shell of sensory representations that are completely unconscious. In the case of vision such representations encode such things as fusion of the two retinal images, edge detection, and stabilization of the visual field in spite of eye movements. In the case of language they include at least the frequency analysis of the auditory signal and any further stages prior to the conversion into phonetic form.
I am now suggesting that there is also a central core of representations that is inaccessible to consciousness. In the linguistic faculty these include at least syntactic structure and conceptual structure; outside of language these include for example the multimodal spatial representations (section 2.6) that coordinate vision, touch, and action. Thus the representations that are conscious form a sort of intermediate ring between the sensorimotor periphery and the cognitive interior. Figure 8.3 is a crude picture of this. (Figure 8.2 can be regarded as a segment of figure 8.3.) Since we act on the world as well as perceive it, figure 8.3 includes not only sensory information coming into the brain but also motor instructions going out. Only the representations in the shaded ring are conscious. The outer peripheral ring is unconscious, as in figure 8.1, and the inner cognitive core is also unconscious.
Figure 8.3 schematizes a position I have called the Intermediate-Level Theory of Consciousness, worked out in a good deal more detail in C&CM. With respect to language and thought, the idea is that the form of our experience is driven by the form of language, especially by phonetic form. We experience language as organized sequences of sounds. On the other hand, the content of our experience, our understanding of the sounds, is encoded in different representations, in particular conceptual structure and spatial representation. The organization of this content is completely unconscious.
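The assignments argued for in figures 8.1-8.3 can be summarized compactly (an expository tabulation of the claims just made; the labels are informal):

```python
# Only the intermediate ring of representations is available to
# consciousness; the periphery and the central core are not.

CONSCIOUS_STATUS = {
    "auditory frequency analysis": "unconscious (outer periphery)",
    "retinal-image fusion":        "unconscious (outer periphery)",
    "phonetic form":               "conscious (intermediate ring)",
    "visual images":               "conscious (intermediate ring)",
    "syntactic structure":         "unconscious (central core)",
    "conceptual structure":        "unconscious (central core)",
    "spatial representations":     "unconscious (central core)",
}
```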
Thought often drives language: the presence of a conceptual structure in working memory often causes the brain to develop a corresponding linguistic structure, which we may either pronounce or just experience as linguistic imagery. Conversely, linguistic structures in working memory (most often created in response to an incoming speech signal) invariably drive the brain to try to create a corresponding conceptual structure (the meaning of the heard utterance). Consequently many of our thoughts have a conscious accompaniment: the linguistic structures that express them.

[Figure 8.3. The Intermediate-Level Theory: peripheral and central representations are inaccessible to consciousness.]
Thought can be driven by other modalities as well. For instance, consider figure 8.4. In response to viewing a tree in the world, the visual system constructs a representation of the form of the tree-and, through the appropriate interface modules, it also drives the conceptual system to retrieve the concept of a tree, a combination of its conceptual and spatial representations (C&CM, chapter 10; Jackendoff 1996a). That is, our understanding of what we see is a consequence not only of visual images but also of the conceptual organization connected to those images. As in the case of language, we have no direct awareness of our understanding of trees; we have only a perceptual awareness of the visual form of what we are seeing. The visual form is in the intermediate shell of brain processes in figure 8.3, but the understanding is in the unconscious central core.

[Figure 8.4. What is conscious in seeing a tree and talking about it.]

In turn, the unconscious central core links the visual perception with language. If the visual/spatial system drives the conceptual system to
develop a conceptual structure for the tree, then conceptual structure,
through its lexical and phrasal interface with syntax and phonology, can drive the language faculty to create an utterance such as "Hey, a tree!" Thus, in figure 8.4, the two conscious parts, the visual image and the word, are linked together by the unconscious concept of a tree.
Part of the idea behind figure 8.3 is that the same general architecture is preserved in other organisms, at least those with higher mammalian brain structure (though not necessarily reptiles or Martians). For instance, a monkey will have essentially the same linkages among representations as we do, lacking only the link provided by language. In fact, an organism might have other modalities of experience that humans don't have. Bats presumably have some kind of experience through their echolocation modality whose quality we can't imagine-hearing shape and distance and motion! This follows from the Intermediate-Level Theory, which says that somewhere between sensation and cognition lies a level of representations that is conscious. We can thus envision taking the basic model of figure 8.3 and adding and subtracting different modalities of experience to imagine the forms of experience for different organisms.
The overall spirit of this analysis, of course, derives from Representational Modularity, since it presumes that the representations underlying different mental functions can be formally segregated from each other and linked via interface modules. In figure 8.3 the different sectors of each ring stand very roughly for different representation modules.
A crucial part of the analysis comes from the nature of interface modules. We saw in chapter 2 that, almost as a point of logical necessity, we need modules that interface between faculties: a module that dumbly, obsessively converts conceptual structure into syntax and vice versa; another that dumbly, obsessively converts visual images into spatial representations and vice versa; and so forth. As we saw in chapters 2 and 3, part of the nature of interface modules is that they effect only a partial correspondence between representations; much information in each of the linked representations is invisible to the interface module. Moreover, the mapping performed by an interface module is often many-to-many, so that information content is by no means well preserved across the interface.
In particular, not all aspects of conceptual structure lend themselves to
visual imagery or linguistic form, so the interface modules just do the best
they can. The places where they fail are just those places where we have
found gaps between the conscious forms of linguistic and visual experi­
ence on one hand and the needs of thought and reasoning on the other.
8.5 The Significance of Consciousness Again
Let us continue to look at figure 8.3, and ask what it says about consciousness. Section 8.2 mentioned that many people want to ascribe to consciousness some cosmic significance-they want to say our consciousness is what makes us human, it's connected to our highest capacities, it's what separates us from the animals. If anything like figure 8.3 is correct, the significance of consciousness can't be anything like that. If there is anything that makes us human, that has enabled us to build great civilizations, it is our capacity for reasoning-which isn't conscious! Consciousness is not directing our behavior. Rather, it is just a consequence of some intermediate phenomena along the chain of connection between perception (the periphery) and thought (the core).
This goes strongly against intuition, of course. But it stands to reason that it should. The conscious phenomena are all we can know directly about our minds. As far as consciousness is concerned, the rest of the mind simply doesn't exist, except as a source of miracles that we get so accustomed to that we call them "common sense." Hence we naturally ascribe any intellectual powers we have to consciousness. I submit that, if we really look at the department of so-called miracles, we find that most of the interesting work of intelligence is being done in that department-not in the conscious department. (My impression is that this has certainly been one of the lessons learned from research in artificial intelligence.)
Notice that this conclusion does not demean our humanity in any way. We can still do whatever we can do. Maybe it does reduce the chance that it would be interesting to have a consciousness that continues beyond our death-an immortal soul: with neither sensory input nor conceptual representations, there certainly wouldn't be much to experience. But, after all, wishing for something doesn't make it true. We'll have to look for reasons other than the hope of immortality to find our lives meaningful.
This completes the first part of this chapter, in which I've tried to dissociate language from thought and to show that they are distinct brain phenomena. According to this picture, the mere fact of having language does not make thought possible, nor does language have a direct effect on thought processes. In the rest of the chapter I want to suggest three ways in which language has important indirect effects on thought processes.
8.6 First Way Language Helps Us Think: Linguistic Communication
The first way language helps us think is fairly obvious. By virtue of having language, it is possible to communicate thought in a way impossible for nonlinguistic organisms. Language permits us to have history and law and science and gossip. The range of things we can think about is not limited to what we can dream up ourselves-we have all of past and present culture to draw upon. That is, language permits a major enhancement in the range of things the thought processes can pertain to-the conceptual structures that can be accumulated in long-term memory-even if the construction of conceptual structure and the inferential processing operating on it may be exactly the same.
Without language, one may have abstract thoughts, but one has no way to communicate them (beyond a few stereotyped gestures such as head shaking for affirmation and the like). Without language, there is no way
to communicate the time reference of events: for instance, does a pan­
tomimed action describe a past action or a desired future action? Without
language, one may have thoughts of being uncertain whether such and
such is the case, but one cannot frame questions, which communicate
one's desire to be relieved of that uncertainty. And so on.
As a result of our having language, vastly more of our knowledge is
collective and cumulative than that of nonlinguistic organisms. To be sure, other higher mammals have societies and even perhaps protocultures (Wrangham et al. 1994). But the collectivity and cumulativity of human thought, via linguistic expression, almost certainly has to be a major factor in the vast expansion of knowledge and culture in human societies. Good ideas can be passed on much more efficiently.
8.7 Second Way Language Helps Us Think: Making Conceptual Structure Available for Attention
The enhancement of thought by virtue of linguistic communication might strike many readers as a sufficient connection between language and thought. But I want to dig a little deeper, and explore the relation between language, thought, and consciousness that emerged in the Intermediate-Level Theory sketched in figure 8.3.
8.7.1 Introduction
Imagine what it would be like to be an organism just like a human except lacking language. According to figure 8.3, you could see and hear and feel exactly the same sensations, perform the same actions, and in particular think exactly the same thoughts (have the same conceptual structures). There would be two differences. First, the one just discussed: you couldn't benefit from linguistic communication, nor could you make your knowledge available to others through the linguistic medium. So the range of things you could think about would be relatively impoverished.
But I want to concentrate on another difference, the phenomenological difference: you couldn't experience your thought as linguistic images. That is, a whole modality of experience would be simply absent-a modality that, as pointed out earlier, is very important to human experience.
What would the effect be? Let's return to the monkeys who practice redirected aggression. Recall what happens: monkey X attacks monkey Y, and then monkey Y attacks not monkey X, but monkey Z, who happens to be a member of X's kin-group. (It's not always that bad: sometimes Y will act nicely toward Z instead, as if to indirectly make up with X or appease X.) Recall that something like the following pieces of reasoning are logically necessary to account for this behavior:
(1) a. X attacked me.
    b. An attack on me makes me want to take retaliatory action (alternatively, to make up).
    c. The members of an attacker's kin-group are legitimate targets for retaliation (or making up).
    d. Z is a member of X's kin-group.
    e. Therefore, I will attack (or make up with) Z.
As we noticed earlier, factors (1b), (1c), and (1d) are abstract, that is, not connected directly to perception in any way-(1c) especially so. Now what seems so intuitively strange about this chain of reasoning is that we can't imagine a monkey saying things like (1a-e) to herself. And indeed she doesn't, because she doesn't have language. Yet thoughts very much like those expressed by (1a-e) must be chained together in the monkey's head in order to account for the behavior.
How do we escape this conundrum? Given the discussion up to this point, the answer is fairly clear: the monkey has the thoughts, but she doesn't hear the corresponding sentences in her head, because she has no linguistic medium in which to express them. Consequently, the monkey doesn't experience herself as reasoning through the chain in (1). All she experiences is the outcome of the reasoning, namely an urge to attack monkey Z, and perhaps an image of herself attacking monkey Z. The
monkey's experience might be like our sudden urges, for no apparent
reason, to go get a beer from the fridge. By contrast, humans do have the
linguistic medium, and so, when they have thoughts like those expressed
in (1), they may well experience the thoughts as a conscious plotting of
revenge.
The point is this: Only by having a linguistic modality is it possible to
experience the steps of any sort of abstract thought. For example, one
can't directly see that monkey Z is a member of X's kin-group-this is an
abstract predicate-argument relationship, encoded in conceptual struc­
ture. But this relationship can be made explicit in consciousness by using the linguistic structure Z is kin to X. Similarly, the notion of retaliation-performing some action for a particular reason-is unavailable to nonlinguistic consciousness. A reason as a reason cannot be represented in a visual image. But by using the word because to relate two propositions, the linguistic modality can make reasons as such available in consciousness. More generally, of all the representations available to consciousness, only phonetic form has constituent structure even remotely resembling the predicate-argument organization of conceptual structure. The correspondence between conceptual constituency and phonetic constituency is not perfect by any means (chapters 2 and 3), but it is a great deal more transparent than, say, that between conceptual constituency and visual images.8
Still, after reading the argument up to here, you may well say, "So what?" The fact that these thoughts appear in experience doesn't change them, and it doesn't change the steps of reasoning that can be performed on them. Language is just a vehicle for externalizing thoughts; it isn't the thoughts themselves. So having language doesn't enhance thought; it only enhances the experience of the thought. Maybe thinking is more fun if we can experience it, but this doesn't make the thought more powerful. And
that is about where I left matters in C&CM.
8.7.2 The Relation between Consciousness and Attention
However, I now think this is not the end of the story. I want to suggest that, by virtue of being available to consciousness, language (in particular phonetic form) allows us to pay attention to thought, yielding significant benefits. To see this, let us set language aside for a moment and think about the relation between consciousness and attention.
It is often pointed out that consciousness and attention are intimately linked. An often-cited example involves driving along, conversing with a passenger. On the standard account, you are said to be unconscious of the road and the vehicles around you-you navigate "without awareness"-until your attention is drawn by some complicated situation, perhaps a lot of traffic weaving through the intersection up ahead, and you have to suspend conversation momentarily while attending to the traffic. From anecdotal experiences like this, it is often concluded that consciousness is a sort of executive (or traffic cop) that resolves complicated situations in the processing of information, and that we are only conscious of things that are hard to process. (I understand Minsky (1986) as taking essentially this position, for instance.)
Notice that such a view goes well with the idea that consciousness is
a top-level mental faculty, deeply entwined with our higher intelligence,
free will, and so forth. In working out the Intermediate-Level Theory of
Consciousness (figure 8.3), I have challenged that overall prejudice, and so I am automatically suspicious of drawing such a connection between
consciousness and attention.
Let us therefore look a little more closely at the situation of driving and
conversing at the same time. Are you really unconscious of your driving?
You may not remember what you did afterward, and you are certainly
not paying serious attention to it, but at the same time you are not completely unaware of what you are doing, in the way, say, that you are unaware of your saccadic eye movements or of the flow of serotonin in your brain, or for that matter of what is going on in your environment when you are asleep. Rather, I think it is fair to say that when you are driving and conversing, you are at least vaguely aware of the flow of traffic. It's not as if you're blind. (Searle (1992), among others, suggests a similar reanalysis.)
Consider also another situation. Suppose you are just lying on a beach,
idly listening to the waves, wiggling your toes in the sand, watching people go by. There is no sense of anything being hard to process, it's just utter relaxation. But surely you are conscious of all of it. Analyses that view
consciousness as a high-level decision-making part of the mind tend not to
take such cases into account.
We certainly need to make a distinction between fully attentive aware­
ness and vague awareness. But both are states of consciousness-something is in the field of consciousness. The goings-on that you are only vaguely aware of may be less vivid or immediate, but they are still "there" for you. To account properly for such phenomenology, we must make our terminology a bit more precise than common language, because we often do say "I wasn't conscious of such and such" when we mean something like "Such and such didn't attract my attention."
My sense is that consciousness has nothing at all to do with processing difficulty or executive control of processing: that is the function of attention. Attention, not consciousness, is attracted to those parts of the perceptual field that potentially present processing difficulty: sudden movements, changes in sound, sudden body sensations, and so forth.
We are therefore led to ask another of those superficially obvious questions: What happens in the brain when one pays attention to something? Most research on attention seems to have focused on what it takes to attract attention, or on the maximal capacity of attention (see Allport 1989 for a survey). I am asking a different question: What good does attention do once it is focused on something?
A traditional approach to attention (e.g., Broadbent 1958) takes the position that the function of attention is to filter out incoming information that would "overwhelm" the processing capacity of the brain. On this view, the function of attention is to keep out everything not attended to. I would prefer a more positive characterization of the same intuition: the processing capacity of the brain for dealing with incoming signals is indeed limited, but resources can be distributed in different ways. If resources are distributed more or less evenly, the result is a more or less uniform degree of detail throughout the perceptual field. Alternatively, resources can be distributed unevenly, so as to enhance certain regions, but at the price of degrading others. I take attention to be this function of selective enhancement.
There seem to be at least three things that we can do with a percept when we pay attention to it, things that we cannot do with nonattended material. First, as just posited, focusing attention on something brings more processing resources to bear on it, so that it can be resolved faster and/or in greater detail. In turn, this extra detail is what makes the consciousness more vivid and immediate. Because the remaining phenomena are allotted fewer processing resources, they are not resolved in as much detail, and so they are vaguer in consciousness. On the other hand, if attention is not particularly focused, say when lying on the beach, the quality of consciousness is perhaps more uniform across the perceptual field.
A second thing that happens when we pay attention to an object is that we can "anchor" it: stabilize it in working memory while comparing it with other objects in the environment, shifting rapidly back and forth, or
while retrieving material from memory to compare with it. We can also anchor the percept while "zooming in" on details and attending to them. Another aspect of anchoring is the ability to track moving objects against a background, if we pay attention to them-or, more difficult, to track one moving object amidst a swarm of similar objects (Culham and Cavanagh 1994).
A third thing that happens when we pay attention to something is that we can individuate it and remember it as an entity in its own right. If we take in a particular visual figure as just a part of the texture of the wallpaper, for example, we will never remember it. But once we pay attention to it, we can go back and pick out that very one-or notice that it isn't there if it has been erased. Formally, the process of noticing an entity amounts to constructing a constituent in conceptual structure that encodes this entity. This constituent may or may not have further features; but it is enough to permit us to construct a corresponding linguistic referring expression, at the very least an indexical: That! (perhaps accompanied by a pointing gesture).
I have been saying sometimes that attention is drawn by an object in the world, sometimes that it is drawn by a percept in the head. Which is right? If we want to be precise, we have to say "a percept." Attention is a process going on in the brain-so it cannot be directed by things moving out there in the external world. Attention is not a little person in the brain watching the world and pointing a spotlight at interesting things out there so they can be seen better. Rather, attention has to be directed by the character of brain phenomena that have occurred in response to things in the external world that may be of potential interest to the organism-a crucial difference. It all has to happen inside the brain.
This now raises an interesting question, which I think has not been asked in exactly this way before: Of all the brain phenomena that may take place in response to, say, a sudden motion or noise in the world, which ones are capable of being focused on by attentional processes? If we consider the phenomenology of attention, an answer suggests itself: We can pay attention only to something of which we are conscious. We may not understand what we are paying attention to (in fact that may be the reason we are paying attention to it)-but we must certainly be aware of it. In other words, the representations that fall into the intermediate ring of figure 8.3 play some necessary role in the functioning of attention; perhaps we can think of these representations as being potential "handles" by which attention "grasps" and "holds onto" percepts.
This puts a new twist on the more or less standard story that consciousness drives attention. Instead of thinking of consciousness as the smart (or even miraculous) part of the mind that determines which percepts need attention, I am claiming that consciousness happens to provide the basis for attention to pick out what might be interesting and thereby put high-power processing to work on it. In turn, the high-power processing resulting from attention is what does the intelligent work; and at the same time, as a by-product, it enhances the resolution and vividness of the attended part of the conscious field.9
8.7.3 Language Provides a Way to Pay Attention to Thought
Returning to language, we will now be able to see the reason for this detour on consciousness and attention.
Remember two points: first, language provides a modality of consciousness that other animals lack; second, language is the only modality that can present to consciousness abstract parts of thought like predicate-argument structure, kinship relations, reasons, hypothetical situations, and the notion of inference. Thus only through language can such concepts form part of experience rather than just being the source of intuitive urges.
At the point where we left off in section 8.7.1, we could see that although it might be more fun to experience one's reasoning through language, such experience would not yet help reasoning in any way. Attention, though, adds a new factor: having linguistic expressions in consciousness allows us to pay attention to them. And now the extra processing power of attention can be brought into play.
Consider yet again the monkey who is calculating some redirected aggression. Let's make it instead a person, say Bill, who has a language faculty and with it linguistic awareness. Instead of Bill just experiencing an urge to attack whoever it is, say Joe, Bill's concepts may drive his language faculty to produce some utterance (he may actually say it, or he may just hear it in his head).
(2) I'm gonna KILL that guy!
First of all, notice that the language faculty forces Bill to package his thought in some particular way, in accordance with lexical items and meaningful constructions that happen to exist in English. The existence of prepackaged lexical items that are selected to express the thought can
itself refine thought: Bill's generalized sense of aggression gets molded into a threat to murder Joe, as opposed to, say, just insult him.
Now, if Bill doesn't pay attention to his utterance, it's just mumbling and has no further effect-it's just a simple statement of generalized aggression, a bit of texture in the auditory wallpaper. But suppose Bill does listen to himself-suppose he pays attention to his utterance. Then the sentence gets anchored, more computing power is devoted to it, and details can develop.
(3) KILL? Or just mangle, or just insult?
That guy? Or someone else, or his whole family?
Gonna? When? Tomorrow? Next week?
Why? What do I hope to accomplish? Is there a better way?
How? Should I shoot him? Stab him? Poison him? Club him?
What then? What will happen if I kill him?
Attention to some of these details may lead to elaboration of further
details.
(4) Club him? With what? A baseball bat? A fence post? . . .
And any promising options may be remembered. Notice again that this detail is all a consequence of attention being paid to the phonetic form. Without the phonetic form as a conscious manifestation of the thought, attention could not be applied, since attention requires some conscious manifestation as a "handle."
We should also notice hidden in these ruminations of Bill's the expression of some concepts that would not be available without language, for example the notion "next week." What does it take to think about weeks? A nonlinguistic organism can obviously detect patterns of light and darkness and respond to diurnal patterns. But it takes the lexical item day to abstract this pattern out as a sort of object, so that attention can be drawn to it as a constancy. The phonetic form of the lexical item is a perceptual object that anchors attention on the pattern and allows it to be stored as a repeatable and retrievable unit in memory.
What about a week-a unit of seven days? This is a completely nonperceptual unit. I don't think it could be conceived of at all without linguistic anchoring. Even if such a unit were potentially available as a concept, it couldn't be accessed without having language to hold it in attention, which enables us to stow such a unit away for future use. I think it fair to say that although nonlinguistic organisms may be able
to develop a concept of a day, they will never attain the concept of a
week.
Finally, the phonetic form next week provides a conscious handle on an actual point in time. This handle provides a constituent in conceptual structure for a token point in time that can later be reidentified. This suggests that it is the existence of time expressions in language that enables us to identify points in the past and future and keep track of them.
To sum up this section: Language is the only modality of consciousness in which the abstract and relational constituents of thought correspond even remotely to separable units. By being conscious, these phonetic constituents become available for attention. Attention in turn refines the associated conceptual structures, both by anchoring and drawing out details and by concretizing or reifying conceptual units that have no stable perceptual base.
8.8 The Third Way Language Helps Us Think: Valuation of Conscious Percepts
The third way language helps us think is related to the second. To understand it, we have to step back yet again and examine another property of consciously available percepts.
What is the difference between the appearance of something that looks familiar and that of something that doesn't look familiar? In general, nothing: as you get used to the appearance of some novel picture, let's say, the appearance doesn't change-it just somehow feels different than when it was novel. The same is true for the sound of a new tune as you gradually get to know it. Or, for the classic case, suppose you're a subject in a psychological experiment and you're asked which nonsense syllables are the same as ones you were given yesterday. What is the difference between the familiar ones and the novel ones? They all sound more or less like nonsense, except that some come with this feeling of familiarity and some don't.
I will call this feeling of familiarity or novelty attached to a percept a valuation of the percept. We can even have this sense of familiarity without clearly knowing what we are seeing-"I'm sure I know you from somewhere, but I'm damned if I can remember who you are or when we met." When we have a déjà vu experience, we somehow get a feeling of familiarity attached to an object or situation that we rationally know we
have never experienced before; that is, déjà vu is an error or illusion of valuation.
The familiar/novel distinction is not the only valuation that the brain applies to percepts. Another one is the distinction between percepts that are taken to be imaginary and those that are taken to be real. ("Is this a dagger which I see before me?") This valuation is a bit trickier to isolate, because things we judge to be real tend to be vivid and substantial in consciousness, whereas things we judge to be imaginary tend to be fuzzy and fleeting. But in certain limiting cases, it is possible to see that the
valuation can make an independent difference. Suppose you are trying to make your way through a thick fog, and you are just not sure what you're seeing. You detect a blurry motion: is it something really out there, or do you just think it is? Was that noise you heard real, or just your imagination? When for some reason perception is impeded, the very same percept may come to be judged either real or imaginary, and there is nothing clear about its appearance that helps us make the decision.
Another sort of limiting case is dreams, where things may seem quite real that upon awakening are judged imaginary. "Don't be frightened: it was only a dream!" we say. Like déjà vu, this is an error or illusion of valuation.
A related valuation concerns whether a percept is externally or internally initiated. Consider a visual image you may get as a result of my telling you "Imagine a pink elephant," and compare it with a similar image you might get as a result of drinking too much, where it "just pops into your head." The former image, you feel, is under your control; the latter is not. Yet both are in fact generated by your own brain activity, and you can't catch yourself in the act of making either image happen. There's just a mysterious, miraculous "act of will"-or its absence.
Or think of the sense of blinking your eyes voluntarily as opposed to doing it automatically. The movements themselves are essentially the same, and both ultimately require similar nerve activation of the muscles. But they feel different, namely in or out of your control.
Of course, the hallucinations of schizophrenics are errors in both of these latter valuations: they hear voices that they take to be real and externally generated ("God is really speaking to me," they say), whereas the voices are in fact imaginary and internally generated.
The general idea, then, is that our cognitive repertoire contains a family of valuations, each of which is a binary opposition that can determine part of the "feel" of conscious percepts. These three and a number of others
are discussed in C&CM, chapter 15 (where they were called "affects," a term I was never quite comfortable with). Valuations of percepts have not to my knowledge been singled out for attention elsewhere in the literature.10 What is curious about them is that they are not part of the form of consciousness; as stressed above, they are more like a feeling associated with the form.
But if we have language, we can give these feelings some form: we have words like familiar, novel, real, imaginary, self-controlled, hallucination that express valuations and therefore give us a conscious link to them. This conscious link permits us to attend to valuations and subject them to scrutiny: Is this percept really familiar, or is it a case of déjà vu? Is it real, or is it a dream? And so forth. A dog awakening from a dream may be disturbed about what happened to the rabbit it was chasing; but it cannot formulate the explanation "It was a dream." Rather, something else attracts its attention, and life goes on. But with language, we can fix on this valuation as an independent object in its own right and thereby explain the experience-as well as recognize a category of experiences called "dreams."
The plot thickens. Because phonetic forms are percepts too, they can themselves be subject to valuation. For instance, what is going on when we judge that some sentence is true? There is nothing about the literal sound of a true sentence that is different from the literal sound of a false sentence, yet we say "It sounds true to me." That is, the sense that a sentence is true or false is also-from a psychological point of view-a kind of valuation. It is altogether parallel to a judgment that a visual percept is something really out there. Similarly, the concept that we express by suppose that or if is a valuation that suspends judgment, parallel to evaluating some visual image as imaginary and internally produced.
Now let us combine this with the previous point. Like other valuations, the valuations of language can be expressed in language, with words like true, not, if, and so forth. Therefore, by virtue of their phonetic form, these valuations can be attended to as independent objects in their own right and focused on, and so we get full-blown recursion: a thought about the valuation of another thought, the larger thought having its own valuation. We can express thoughts like "Suppose I am incorrect about such and such . . . then such and such other belief of mine is also false." That is, it is precisely because thoughts map into phonetic forms that it is possible to reason about reasoning. There is no other modality in which valuations can be given palpable form so they can be attended to and thought about.
And certainly a crucial source of the power of our reasoning is its ability to examine itself.
I do not see any evidence that nonlinguistic organisms can engage in such metareasoning. Apes and dolphins can be very clever in solving certain kinds of problems, and they can be uncertain about how to solve a problem, but I do not think they can wonder why they are uncertain. They may be able to believe something,11 but they cannot wonder why they believe it and thereby be motivated to search for evidence. It takes language to do that.12
8.9 Summing Up
The first half of this chapter established that language is not itself the form of thought and that thought is totally unconscious. However, thought is given a conscious manifestation through the phonetic forms that it corresponds to. The second half of the chapter suggested three ways in which having language enhances the power of thought:
1. Because language allows thought to be communicated, it permits the accumulation of collective knowledge. Good ideas don't get lost. This conclusion is certainly nothing new.
2. Language is the only modality of consciousness that makes perceptible the relational (or predicational) form of thought and the abstract elements of thought. Because these elements are present as isolable entities in consciousness, they can serve as the focus of attention, which permits higher-power processing, anchoring, and, perhaps most important, retrievable storage of these otherwise nonperceptible elements.
3. Language is the only modality of consciousness that brings valuations
of percepts to awareness as independent elements, permitting them to be
focused on and questioned. Moreover, since phonetic forms and their
valuations are also percepts, having language makes it possible to construct thoughts about thought, otherwise unframable.
Although these conclusions may seem in the end intuitively obvious, I've tried to find my way more carefully to them, in the course of which I've challenged some fairly standard preconceptions about the nature of consciousness. The interest of the argument lies, I think, in the intricacies of the connections among language, thought, consciousness, and attention. In turn, untangling these intricacies depends heavily on the mental architecture posited by Representational Modularity.
8.10 The Illusion That Language Is Thought
One nice thing that emerges from the present analysis is an explanation for the commonsense identification of thought with language. As pointed out in section 8.5, all we can know directly of our own minds are those brain phenomena that are conscious; in the terms of section 8.7, these are the only ones we can pay attention to. Hence these are the phenomena to which we ascribe responsibility for our behavior. Since the phonetic forms accompanying thought are conscious and the thoughts themselves are not, it is altogether natural to think that the phonetic form is the thought. Consequently language is quite naturally taken to be the substrate for the act of reasoning. This illusion that language is thought has been the source of endless philosophical dispute (Dascal 1995). We now can see why the illusion is so intuitively persuasive.
Recognizing this illusion allows us to examine the dark side of our initial question: why language can be a less effective tool for reasoning than we are often prone to assume. There are at least five sorts of gaps where language does not adequately express the structure of thought. In each case, illusions develop in reasoning because language is all we have to pay attention to.
1. The smallest unit of thought that can be expressed as an independent percept is a word. Because a word is a constant percept in our experience, we treat the thought it expresses as a constant thought-even though in fact we bend and stretch the concepts expressed by words every which way, especially in the process of combining them into sentences by coercion and cocomposition (chapter 3). Just within the ambit of this chapter, consider how the word unconscious in common usage means anything from being out cold to being vaguely aware (but not noticing)-not to mention the technical use of the term I've applied here to particular brain processes. It takes careful analysis to notice the disparity among these usages, and when we're done we don't know whether the word expresses one bendable concept or a family of more rigid, related ones. Intuition is not much of a guide.
This issue arises not only with fancy words like unconscious, but even with simple, obvious words. For instance, considerable current research is devoted to studying the semantics of prepositions. It proves to be a difficult problem to decide whether the preposition in expresses the same concept in the coffee in the cup and the crack in the cup (I think so), and whether the preposition into expresses the same concept in jump into the
pool and crash into the wall (I think not). Whatever the correct answer turns out to be, the nature of the problem is clear: in both cases, the use of the identical word invites us to presume we are dealing with the identical concept. Yet closer examination brings to light the unreliability of such presumptions. (See Herskovits 1986, Vandeloise 1991, and Jackendoff 1996b for examples.)
2. The opposite side of this problem is the delegitimation of concepts for which no sufficiently precise word exists. A prime example arises with the concepts of reasoning and belief (see notes 7 and 11). If one insists that a belief is propositional, that reasoning involves relations among propositions, and that propositions are linguistic (thereby at least partly succumbing to the illusion that language is thought), then there is no term available in the language for how animals' minds organize their perception and memory and create novel behavior on the basis of this organization. One is not allowed to say they have beliefs and reasoning. The forced move is to attribute to them abilities for which there are words, for example "instinct" or "associative learning," often prefixed by "mere." The effect is to inhibit examination of what mental ability animals actually have, because there happens to be a gap in our vocabulary just where the interesting possibilities lie.
3. Not only do we fail to recognize gaps, we tend to treat all existing words as though they have references in the real world, along the lines of concrete words like dog and chair. This tendency means we're always reifying abstract terms like Truth and Language, and constructing theories of their Platonic existence-or having to spend a lot of effort arguing, through careful linguistic analysis, against their reification.
4. As pointed out by Lewis Carroll (1895) as well as by Wittgenstein (1953), we don't really know how we ultimately get from one step in reasoning to the next. How do we know that if A implies B and B implies C, then A implies C? And how do we know that any particular chain of reasoning is an example of this rule? At bottom, we always have to fall back on a certain feeling of conviction, which can't be justified through any more general laws. That is, sooner or later we hit a stage of pure valuation with no language to make it conscious. Yet we think we are reasoning completely "rationally" (i.e., explicitly). Worse, we often get this feeling of conviction when it's utterly unwarranted; we are prone to delude ourselves and yet feel perfectly justified. Just as in any other application of attention, the default situation is that we don't pay attention to our valuations unless we have to, unless we find ourselves in trouble.
5. As a refinement of the previous point, our sense of conviction is too often driven by our desires, by what we want to be true. We are less likely to question our convictions if they lead to desired conclusions. At the same time, such gaps in reasoning all seem perfectly rational, because they are supported-to a great enough extent-by language.
None of these gaps would be possible if language really were the form of thought. If it were, all reasoning would be completely up front, and there would be no room for weaseling around. Behind much of the development of formal logic lies the desire to provide a more satisfactory form of language, in which all terms are precise and context-free, and in which the steps of reasoning are entirely explicit-that is, in which the idealizations in points 1-5 are not illusory but true.
By contrast, in the present perspective, in which language is only an imperfect expression of thought and furthermore is the only form in which many important elements of thought are available for conscious attention, these illusions are just what we would expect. And they are in large part irremediable precisely because of the architecture of the system-because of the way the interaction of language, thought, consciousness, and attention happened to evolve in our species. At the same time, as flawed as the system is from such an ideal point of view, it's all we've got, and we might as well enjoy it. There is no question that it has given our species a tremendous boost in its ability to dominate the rest of the environment, for better or for worse.
Appendix
The Wheel of Fortune Corpus
Collected by Beth Jackendoff
Note: The classifications given here are only heuristic and are not intended to be of any particular theoretical significance.
Compounds
A-N Compounds
acoustic guitar
Arabian horse
Australian accent
black and white film
black eye
black hole
blue cheese
British accent
catalytic converters
cellular phones
circular staircase
Cornish game hen
early American furniture
electrical outlet
elementary school
favorite son
grizzly bear
Mexican peso
miniature golf
mobile army surgical hospital
modern art
natural childbirth
overdue library book
permanent wave
polar ice cap
private eye
public education
Scholastic Aptitude Test
substitute teacher
subterranean parking
white flag
N-N Compounds
Academy Award winners
airline pilot
apple dumpling
April shower
ballpoint pen
beef stew
bench press
birdhouse
boot camp
Broadway play
campfire ghost stories
cellophane wrapper
chain gang
charm bracelet
chicken noodle soup
coffee break
cold wave
college entrance examination
computer screen
costume designer
cover girl
death penalty
Dixieland jazz
Dolby noise reduction
dream sequence
exclamation point
executive secretary
field goal
goal post
high school sweetheart
high school graduation
hot water heater
jazz band
Jerry Lewis telethon
joke book
junk food
light bulb
magazine section
maximum security prison
meter maid
milk chocolate
mountain spring water
music video
New York ticker tape parade
ocean liner
ocean view
oil painting
opera singer
oxygen tent
pancakes
peanut butter
pearl necklace
piano bench
piggyback ride
pillbox hat
pocket calculator
puppet show
push-button telephone
rearview mirror
retirement home
rhubarb pie
roast beef sandwich
robber baron
roulette wheel
ruby slippers
salt substitute
sewage drain
shower cap
skin cancer
soccer ball
soda pop
strawberry margarita
teddy bear
travel agent
tulip bulb
TV dinner
Volkswagen bug
wedding anniversary
wedding vows
weenie roast
wheelbarrow race
wind chimes
wine cellar
youth hostel
Participial Compounds (V-ing-N, N-V-en-N, N-V-er)
baked beans
best-selling novel
blackjack dealer
chafing dish
chicken-fried steak
coloring book
galvanized steel
Gatorade thirst quencher
haunted house
heat-seeking missile
heavy-handed
help wanted ad
high school marching band
homogenized milk
money-saving coupon
one-armed bandit
piano player
pie-eating contest
poached egg
popcorn popper
reclining chair
road runner
scheduled departure time
stock car racing
sweepstakes winner
swimming pool
telephone answering machine
tongue twister
N's-N Compounds
cashier's check
catcher's mitt
lover's leap
pig's knuckles
Verbal Compounds
backorder
jump-start
lip-sync
Num-N-N Compounds
five-story building
twelve-round heavyweight bout
two-dollar bill
[A-N]-N Compounds
frequent-flyer program
front-row seats
high-rise building
long-haired cat
modern-day hero
Olympic-size swimming pool
open-heart surgery
round-trip airfare
third-string quarterback
white-water rapids
Odds-and-Ends Compounds
aloha shirt
bed-and-breakfast
blackmail scheme
copycat
country and western band
crew cut
deep-sea diving
honor bound
lost-and-found column
morning wake-up call
nine-to-five work day
pop quiz
starter kit
Idioms
Nonsyntactic Idioms
believe it or not
down and dirty
for example
in the know
johnny-on-the-spot
once upon a time
running rampant
sitting pretty
that'll be the day
they've had their ups and their downs
wait-and-see attitude
yummy yummy yummy
Resultative Idioms
butter him up
cut my visit short
I cried my eyes out
knock yourself out
you scared the daylights out of me
Other VP Idioms
bridge the gap
corner the market
don't lose your balance
eat humble pie
flying high
give it your best shot
go for it
going from bad to worse
hit the road
I blew it
I'm at the end of my rope
I'm at your mercy
it's as clear as day
it took my breath away
I've had it up to here
keep your shirt on
knocking on heaven's door
never throw in the towel
not playing with a full deck
sleeping in the buff
stand pat
you're whistling in the dark
NP Idioms
a breath of fresh air
bridge over troubled water
food for thought
forty winks
pie in the sky
point of view
red-blooded Americans
rite of passage
son of a gun
stick-in-the-mud
table of contents
the last straw
the life of the party
PP Idioms
all in a day's work
down in the dumps
in touch with one's feelings
off the top of my head
right on the money
S Idioms
hold everything
that's a laugh
that takes the cake
trick or treat
Other
clean as a whistle
fit as a fiddle
flip-flopping
no end in sight
would you like to take it for a spin
Names
Addis Ababa, Ethiopia
Alabama
Al Capone
Alec Guinness
Ali Baba
Athens, Greece
Barney Rubble
Beverly Hills
Bill Murray
Boise, Idaho
Boston, Massachusetts
Chicago, Illinois
Clint Eastwood
Count Dracula
Crocodile Dundee
David Lee Roth
Debby Thomas
Dick Van Patten
Dinah Shore
Donald Trump
George Lucas
George Orwell
Jesse Jackson
John F. Kennedy
Kenny G
Kurt Thomas
Leonard Nimoy
Maid Marian
Michael Dukakis
Mohammed Ali
Monte Carlo, Monaco
Narcissus
New Hampshire
Oklahoma
Palm Springs, California
Pat Sajak
Paul Bunyan
Peggy Lee
Rip Van Winkle
Robin Hood
Robin Williams
Santa Claus
Santiago, Chile
Sarajevo
Spike Lee
Sylvester Stallone
The Queen Mary
Vanna White
Walla Walla, Washington
Wally and Beaver Cleaver
Walt Disney
William Tell
Woodrow Wilson
Yugoslavia
Meaningful Names
American Heart Association
Battle of Britain
Boston Pops Orchestra
Brooklyn Bridge
Central Park
Cherokee Indians
Coney Island
Democratic Convention
Federal Express
Flying Wallendas
Georgetown University
Golden Gate Bridge
Granny Goose potato chips
Great Depression
Honest Abe
Hostess Twinkies
Industrial Revolution
International Red Cross
Ivory Coast
Ivy League
Jack Daniels whiskey
John Deere tractor
Marlboro man
Marx Brothers
Miami Dolphins
Mickey Mouse Club
Milky Way galaxy
National Organization for Women
New York Yankees
Palm Springs, California
Philippine Islands
Pittsburgh Pirates
Playboy bunny
Pop Warner Football League
Pythagorean theorem
Queen of Spades
Rocky Mountain National Park
San Francisco Bay area
Sergeant Joe Friday
Sierra Nevada mountains
Singer sewing machine
Southern Hemisphere
Star of David
Stetson cowboy hat
Strategic Defense Initiative
The Big Apple
The Blues Brothers
The Grateful Dead
The Iron Man Triathlon
The Roaring Twenties
The Spirit of Saint Louis
The Third Reich
theory of relativity
United States Capitol
United States Olympic Team
United States Senate
Waldorf salad
walkie talkie
Winchester rifle
World Hockey Association
Wrigley Field
Clichés
a day that will live in infamy
a toast to your health
all hands on deck
any friend of yours is a friend of mine
baby-blue eyes
bring home the gold
business suit and power tie
by a strange twist of fate
changing of the guards
conquer the world
cowboys and Indians
don't be so persnickety
don't cry over spilt milk
drama on the high seas
everything he touches turs to gold
fair weather
faster than a speeding bullet
follow these easy guidelines
gimme a break
good things come in small packages
haven't I seen you someplace before
he couldn't punch his way out of a
paper bag
herbs and spices
high as a kite
home sweet home
hot and humid weather
if the shoe fits wear it
I'll show you the door
in the middle of the night
I wouldn't be surprised
just another face in the crowd
just around the corner
just between you and me
just when you thought it was safe
leave a message at the tone
let's get out of here
life begins at forty
like taking candy from a baby
living breathing organism
look on the bright side
looks like something the cat dragged in
love conquers all
mad scientist's laboratory
Monday through Friday
muscularly built
name, rank, and serial number
no money down
open twenty-four hours
our military heroes
pay to the order of
people are flocking to see it
pick on someone your own size
poke in the ribs
poke your head out the window
prime rib and baked potato
quiet as a mouse
reckless driving
rosy cheeks
satisfaction guaranteed
see you later
shape up or ship out
so near yet so far
standing room only
standing the test of time
steak and eggs
taped in front of a live audience
that's the way the cookie crumbles
the best that money can buy
the catch of the day
the face of an angel
there's a lot more where that came from
the right to bear arms
they're popping up everywhere
time marches on
time to go on a diet
time will tell
too close for comfort
too much time on my hands
use only as directed
wait and see
we're doing everything humanly
possible
we've got to stick together
what's the big deal
will you please excuse me
yes or no
you can't judge a book by its cover
Titles
All You Need Is Love
April in Paris
City Lights
Consumer Reports
Friday the Thirteenth
Good Morning America
Hill Street Blues
Lady and the Tramp
Little House on the Prairie
Little Orphan Annie
Mary Tyler Moore Show
Material Girl
National Enquirer
Pirates of Penzance
Pumping Iron
Rock-a-Bye Baby
Rocky Mountain High
Spin the Bottle
Swing Low Sweet Chariot
The Adventures of Robin Hood
The Bionic Woman
The Goose That Laid the Golden Egg
The Little Drummer Boy
The Little Rascals
The Long and Winding Road
The People's Court
The Price Is Right
The Tortoise and the Hare
Vanity Fair
Watership Down
When Irish Eyes Are Smiling
White Christmas
You're a Good Man, Charlie Brown
Quotations
and the rockets' red glare
beam me up, Scotty
do you believe in miracles
it's Miller time
long long ago in a galaxy far far away
may the Force be with you
now I lay me down to sleep
smile, you're on Candid Camera
someone's in the kitchen with Dinah
there's no place like home
to fetch a pail of water
whistle while you work
you ain't seen nothin' yet
Pairs
flint and steel
Mork and Mindy
Popeye and Olive Oyl
Simon and Garfunkel
Sonny and Cher
Starsky and Hutch
Foreign Phrases
amour
au contraire
c'est la vie
crème de menthe
mademoiselle
persona non grata
tam-o'-shanter
terra firma
Notes
Chapter 1
1. An important boundary condition on the simplicity of the learning theory, one that is often neglected in the syntactic acquisition literature, is that we learn tens of thousands of words, very few of which submit to any sort of simple definition (Fodor et al. 1980). Word learning presents formidable theoretical problems in its own right (Macnamara 1982; Landau and Gleitman 1985; Pinker 1989; Bloom 1994; and many others); it appears that it requires a substantial built-in specialized learning procedure, including a rich conceptual system and considerable ability to infer the intentions of other speakers, as well as some prespecification of the constraints of grammar. So one might ask whether there is any point in simplifying the procedure for grammar acquisition any further than is independently necessary for word acquisition. I leave the issue wide open.
2. I thus regard the formal/computational approach as a perspective for understanding, rather than as some ultimate truth. Regarding it this way undercuts the criticisms of Searle and Edelman. Were one to take their arguments one step further, one might legitimately claim that indeed there aren't neurons in the head any more than there are computations: actually, there are only quarks and leptons, and consequently brain function must be explained only in terms of elementary particles. Dennett (1995) has called this absurd sort of argument "greedy reductionism": it demands too much of theoretical reduction and thereby prevents anyone from understanding larger scales of organization. I submit that neuroscience, like the computational theory of mind, is just another perspective, and that it is at least premature, if not altogether illegitimate, to expect the latter to be replaced entirely by the former.
3. A perhaps wild speculation: Chomsky's commitment to syntactocentrism may go still deeper, to his identification of the goals of generative grammar with those of "Cartesian linguistics" (Chomsky 1966). According to the Cartesians (on Chomsky's reading), humans are distinguished from animals by virtue of possessing a "creative principle" that enables them to produce behavior of infinite diversity; and this "creative principle" is most directly evinced by the discrete infinity of linguistic behavior. At the same time, the choice among this discrete infinity of behaviors is given over to the capacity of "free will," the character of which
Chomsky relegates to the category of humanly insoluble mysteries (while occasionally alluding to it as the basis of much of his political stance). For Descartes, I think, free will was taken to be a unitary capacity; hence it is eminently logical that the discrete infinity over which it chooses is localized at one point in the mind. Hence, by association, the creative aspect of language should be localized in a single source. I do not think Chomsky or anyone else has ever made this argument explicitly, but I think one may detect some version of it lurking in the background, tacitly linking a number of the important themes of Chomsky's thought.
In any event, if there was ever any credence to the argument from free will, it is dispelled by current views in cognitive science. There is no single source of control to be found in the brain, no "central executive"; rather, control is distributed widely, and action arises from the interaction of many semiautonomous centers. The notion that there is a unified source of free will is a cognitive illusion, what Dennett (1991) calls the "narrative center of gravity." This does not solve the mystery of free will, but it certainly encourages us to take quite a different starting point in the search space of possible explanations. And it undercuts any heuristic connection between free will and a single source of discrete infinity in language.
4. Unlike children, apes also require intensive training in order to learn signs, and they never acquire a vocabulary of more than a few hundred. Is what apes learn different from what children learn, or is it just that their method of learning is not as powerful? An interesting question, but not crucial for my present point.
5. See Wilkins and Wakefield 1995 for some discussion of the possible precursors of this stage.
6. If you find Pinker and Bloom's (1990) arguments for the selective advantage of syntax unpersuasive, compare language with music. Try to find an argument for the selective advantage of the grammatical organization of music (Lerdahl and Jackendoff 1983) that doesn't make Pinker and Bloom's arguments look brilliant by comparison. Personally, I find the reasons for evolution of a musical capacity a good deal more mysterious than those for language; if anything is a "spandrel" (to use the famous term of Gould and Lewontin 1979), it is music.
Chapter 2
1. Although I use the term representation out of respect for custom, I recognize that it may be misleading. Bluntly, I do not take it that mental representations "represent" anything, especially anything out in the world. That is, I reject the notion that intentionality in Searle's (1983) or Fodor's (1975, 1987) sense plays any role in mental representation. For me, what is crucial about a mental representation is only how it differs from other mental representations. That is, the notion makes sense only in the context of a structured space of distinctions, or "format of representation," available to the brain. Phonetic representation is one such space, encoding the possibilities for phonetic expression; motor representation is another. A phonetic representation has "meaning" only insofar as (1) it contrasts with other possible phonetic representations, (2) it can serve as input to phonetic-internal processes such as the computation of rhyme, (3) it can serve as
input to interface processes such as the production of speech instructions to the vocal tract. A similar conception applies to all other representations discussed in the present study-crucially, including conceptual representations. See Jackendoff 1992b, chapters 1, 2, and 8 for more extensive discussion.
2. The constraints of Optimality Theory, for instance, are ranked instances of "preferably does"-type rules.
Calling these principles "correspondence rules" may be misleading to those who think of a rule in terms of computing an output from an input. The permissive and default modalities of some of these rules may be especially jarring. Readers who prefer the term correspondence constraints should feel free to make the substitution.
3. Given that the distinctive feature set for signed languages is different from that for spoken languages, it is actually unclear whether a unified interface level for both spoken and signed languages is empirically possible-though if there are distinct interface levels, they are undoubtedly closely related. As long as we stay within a single modality of language, though, a single interface level for both articulation and perception seems plausible.
4. Even here, some qualification might be in order. It might be, for example, that the dropping of certain phonetic distinctions in rapid speech is best regarded as a matter not of PF but only of its motor realization-in which case full interpretation is a misnomer.
5. This is not quite accurate: intonational phrasing plays some role in such syntactic phenomena as Heavy NP Shift and Right Node Raising. Steedman (1990, 1991) develops an integrated theory of syntactic and intonational constituency, which works out some of these relationships. Concerned most heavily with the interpretation of topic and focus, Steedman concludes (if I may grossly oversimplify a complex exposition) that the surface structure of a sentence is essentially the set of its possible intonational phrasings. For reasons unclear to me, he finds this view simpler than one in which there is a single syntactic structure that is partially independent of intonational phrasing. One might wonder, though, whether Steedman's conclusion is a symptom of trying to conflate two partially related structures into one.
On the other hand, I agree with Steedman that topic and focus marking depend more directly on prosodic than syntactic constituency, and that this realization permits a simplification of some of the ugly conditions of focus assignment in Jackendoff 1972. And, along with most everyone else, I owe him a counterexplanation of intonation-dependent syntactic phenomena.
6. Kayne (1994) can be seen in the present perspective as proposing an alternative to (12b).
(12b') If syntactic constituent X1 corresponds to phonological constituent Y1, and syntactic constituent X2 corresponds to phonological constituent Y2, then, if and only if X1 asymmetrically c-commands X2, Y1 precedes Y2.
220
Notes to Pages 3 1 -36
In Kayne's theory, all syntactic properties follow from asymmetrical c-command, and linear order plays no syntactic role; we can therefore eliminate linear order from syntactic theory and regard it as a purely phonological relation. I will not comment here on the virtues or vices of Kayne's proposal; rather, I will simply continue to assume the traditional view that dominance and linear order are independent variables in syntactic structure.
7. Of course, a theory of conceptual structures has to notate units in a particular linear order, but this is just an artifact of putting structures on paper. By contrast, the linear order of phonological and syntactic units does model a psychologically real distinction (with caveats concerning syntax if one adopts a suggestion like that in note 6).
8. There may be a further type of representation on the way from syntax to conceptual structure, such as the language-specific "semantic structures" proposed independently by Pinker (1989) and Bierwisch and Lang (1989). For reasons discussed in Jackendoff 1983, chapter 6, I personally doubt such an extra level of representation is necessary; but if it is, then we need two sets of correspondence rules to get from syntax to conceptual structure. An extra layer of correspondence rules doesn't change anything essential in the picture I am painting here. If anything, it only makes things worse for a strictly derivational theory.
The basic observation used by Pinker and by Bierwisch and Lang to motivate this extra level is that not all aspects of meaning are germane to syntactic distinctions. For instance, no grammatical distinction depends on color; although many languages make a syntactic singular-plural distinction, no language makes a grammatical distinction between six and seven objects. They propose therefore that the "semantic level" include only those aspects of meaning that are relevant to syntax. My position is that this constraint is a consequence of the SS-CS correspondence rules: only certain aspects of conceptual structure are "visible" to the correspondence rules. Bouchard (1995) advocates a similar position, using the term G(rammar)-Semantics for the subset of conceptual structure accessible to the SS-CS correspondence rules.
9. The same is true for Lakoff's (1987) cases of Japanese and Dyirbal noun classes (the latter including his famous class of "women, fire, and dangerous things"). There are fortuitous semantic links among the members of these classes, but there is nothing semantically inevitable that makes them go together. One can change the semantic criteria for the classification of nouns without changing the syntactic classifier system.
10. Tenny (1987, 1992), however, makes a valiant and widely cited attempt to connect telicity with direct object position. Sections 3.2.1 and 3.8.2 discuss aspects of telicity further; in particular, note 15 to chapter 3 mentions some of the problems with Tenny's proposal. Her claim has been somewhat weakened in Tenny 1994.
11. The level of "argument structure" in Grimshaw 1990 and other works then emerges as an "interlevel" that helps compute the relation between syntactic structure and conceptual structure. If such a level is necessary, it appears to me
that it deals exclusively with the central grammatical relations subject, object, and indirect or second object. It does not seem to play a role in the linking of adjuncts and oblique arguments.
It is interesting in this light that whole theories of syntax have been developed whose main contribution is in the treatment of subjects, objects, and indirect objects, for example Relational Grammar and LFG. In particular, although the f-structure level of LFG includes all the constituents in a sentence, the main argumentation for f-structure and the main work it does in describing grammatical structure concerns the central grammatical relations exclusively. One might wonder, therefore, whether the function of f-structure might not be limited to regulating the relation between semantic arguments and the central grammatical relations. See Nikanne 1996 for discussion.
12. One can also see such a conception inside of the phonological component, where autosegmental theory views phonological structure as composed of multiple quasi-independent "tiers," each with its own autonomous structure, but each linked to the others by "association lines" established by correspondence rules. In other words, phonology is itself a microcosm of the larger organization seen here.
13. I say "relatively novel" because Multidimensional Categorial Grammar (e.g., Bach 1983; Oehrle 1988) recognizes the tripartite nature of derivations (though see note 20). HPSG likewise segments its structures into phonological, syntactic, and semantic attribute structures. Connoisseurs may notice as well a resemblance to the basic architecture underlying Stratificational Grammar (Lamb 1966).
One way the present conception differs from Categorial Grammar and HPSG, I think, is in the status of constituency across the three components. Because these other theories build up constituents "simultaneously" in the three components, there is a strong presumption that phonological and syntactic constituency coincide-which, as we have seen, is not correct. I am not familiar with proposals in these theories that account for the basic independence of phonological and syntactic constituency. On the other hand, one could imagine variants of the HPSG formalism that would build up constituency independently.
14. Various colleagues have offered interpretations of Fodor 1983 in which some further vaguely specified process accomplishes the conversion. I do not find any support for these interpretations in the text.
15. Because the mapping is not, strictly speaking, information preserving, I would prefer to abandon the term translation module used in Jackendoff 1987a.
More generally, let us return to the remarks in note 1. The caveats applied there to the term representation also apply here to information. "Mental information" does not "inform" anyone other than the modules of the brain with which it communicates via an interface. Moreover, it does not make sense to speak of, say, visual information simply being "sent" to the language system. One must not think of information passing through the interface as a fluid being pumped through "mental tubes" or as bits being sent down a wire. Rather, as I have stressed throughout this chapter, the correspondence rules perform complex negotiations between two partially incompatible spaces of distinctions, in which only certain parts of each are "visible" to the other.
16. It is of course possible for Fodorian modularity to incorporate the idea of interface mappings, solving the problem of communication among modules. However, since interface modules as conceived here are too small to be Fodorian modules (they are not input-output faculties), there are two possibilities: either the scale of modularity has to be reduced from faculties to representations, along lines proposed here; or else interfaces are simply an integrated part of larger modules, so they need not themselves be modular. I take this choice to be in part merely a rhetorical difference, but also in part an empirical one.
17. A less well known visual input to phonetics is the McGurk effect (McGurk and MacDonald 1976), in which lipreading contributes to phonetic perception. One can experience this effect by watching a videotape in which a person utters da, da, da, . . . on the sound track but utters ba, ba, ba, . . . on the synchronized video. It turns out that the video overrides the acoustics of the speech signal in creating judgments about what the utterance sounds like; the viewer has no conscious awareness of the disparity between lip position and the acoustic signal, and cannot help hearing the syllable ba. (Although Fodor (1983) does mention this phenomenon (footnote 13), the remark does not occur in the section on informational encapsulation, where it would offer a significant counterexample to his formulation.)
18. In turn, Landau and I draw on neuropsychological evidence that spatial representation has two subcomponents, the "what" and "where" systems, served by ventral (temporal) and dorsal (parietal) processing (Ungerleider and Mishkin 1982; Farah et al. 1988). This elaborates spatial representation in figure 2.3 into two submodules, further complicating the diagram but not changing its general character.
Incidentally, the organization sketched in figure 2.3 begins to break down Fodor's notion of a nonmodular central core. At least spatial representation and conceptual structure belong to the central core; they are both domains of "thought" rather than "perception." See chapter 8 and Jackendoff 1987a, chapter 12, for discussion.
19. The notion of "optimal fit," expressed in terms of a set of violable "preference rules," anticipates by about a decade the similar application of violable constraints in Optimality Theory (Prince and Smolensky 1993)-though the manner of interaction among constraints is slightly different. In turn, Lerdahl and I found antecedents for this approach in the work of the Gestalt psychologists (e.g., Koffka 1935).
20. Multidimensional Categorial Grammar (e.g., Bach 1983; Oehrle 1988) treats rules of grammar formally as functions that simultaneously compose phonological, syntactic, and semantic structures. In this theory, essentially every aspect of phonological, syntactic, and conceptual structure is potentially accessible to every other. Oehrle takes the empirical task of linguistic theory to be that of reining in this richness, of finding the subclass of such grammars that actually appear in natural language; he includes the standard GB architecture as a special case. The proposal sketched in figure 2.1 can be considered a different special case of Oehrle's architecture; under that construal, the argument here is that it is a more adequate way than the GB architecture to constrain multidimensional categorial
grammars. (However, see note 13 for a way in which Categorial Grammar might not be rich enough to account for language.)
On the other hand, suppose one tries to extend the multidimensional categorial approach across all the modules of the mind sketched in figures 2.2 and 2.3. The result is an n-tuple of structures, any one of which can in principle interact freely with any of the others. Such a conception would seem to abandon modularity altogether. By contrast, the present approach localizes the interaction between two kinds of representations in an interface module that knows nothing about any other representations or interfaces; so the overall conception is one of more limited communication among representations.
Chapter 3
1 . Within Chomskian generative grammar, this is a presumption behind Baker's
( 1 988) UTAH, for instance. Outside generative grammar, somethi ng ver like this
is presumed in Montague Grammar and Categorial Grammar; Dowty's ( 1 979)
minimal lexical decomposition of verbs is an unusual exception wi thin these
traditions.
2. Déjà vu. In Jackendoff 1972 I discussed two interpretations of Katz and Postal's (1964) hypothesis that "deep structure determines meaning," a virtual dogma of the mid-1960s incorporated into the Standard Theory (Chomsky 1965). On the weak reading, deep structure was to be construed as encoding whatever distinctions in syntax contribute to semantics. On the strong reading, it was to be construed as encoding all semantic distinctions, an interpretation that led inexorably to Generative Semantics. In Jackendoff 1972 I argued that only the weak reading was warranted by Katz and Postal's evidence (then went on to point out that it was still only a hypothesis, and to argue against even the weak form).
A similar ambiguity lies in the conception of LF: Does it provide whatever grammatical information is relevant to meaning, or does it provide all information relevant to meaning? Simple composition is compatible with either reading; enriched composition is compatible with only the weaker.
3. Marjolein Groefsema (personal communication) has pointed out, however, that the principle can emerge in other restricted contexts where standard portions exist, for example among employees of a garden supply shop: Would you bring up three potting soils from the cellar?
4. Paul Bloom (personal communication) has pointed out the relevance to this case of the conceptual and proprioceptive sense of having a vehicle or tool act as an extension of one's body. For instance, hitting a ball with a tennis racket is phenomenologically not so different from hitting it with one's hand. He suggests that the reference transfers in (14a,b) and (15) are semantically supported by this deeper phenomenon of body sense and that the constraints on this reference transfer may thereby be explained. I leave the exploration of this intriguing possibility for future research.
5. As Paul Bloom (personal communication) has pointed out, the principle is actually slightly more general than this, given that speakers who are blind can use this reference transfer equally well.
6. The proper statement of the rule, I imagine, will provide an appropriate semantic characterization of which NPs the rule can apply to. I have no such characterization at the moment, so an informal list will have to do.
7. Alternatively, given appropriate context, predicates other than those from the object's qualia structure can be read into the (b) frames. For example:
(i) My pet goat was going around eating all kinds of crazy things. After a while he began (on) the newspaper, and enjoyed it a lot.
Here the goat is obviously not reading the newspaper but eating it. Hence, in general, qualia structure is used in these situations not to provide determinate readings, but to provide default readings in the absence of strong countervailing context.
8. This subsection and the next (though not the proposed solutions) were inspired by Franks 1995.
9. As Peter Culicover (personal communication) points out, there are certain in-between cases such as gold tooth, where one is not so clear whether the object is to count as a tooth or not, presumably because it not only looks like a tooth but also serves the function of one.
10. This problem is already cited in Jackendoff 1972, 217, as a "long-standing puzzle." I have no memory of where it originally came from.
11. Chomsky (1971) is arguing against a proposal in which focused expressions are lowered from Deep Structure topic positions into their Surface Structure location. But the argument works just as well against a raising-to-LF account.
12. One might avoid this problem by maintaining, rather plausibly, that the reconstruction is just ruin his books. However, I find the fuller reconstruction a more natural interpretation, especially if wants to ruin his books by smearing in the first clause is sufficiently destressed. Moreover, the putative reconstruction of the second clause under this interpretation is ruin his books with glue. In this case any other tool of destruction ought to be acceptable as well. However, I find . . . but Bill wants to do it with a KNIFE quite odd as the final clause of (61). This suggests that by smearing is indeed retained in the reconstruction.
13. It is noteworthy that Fiengo and May (1994) use reconstruction of sloppy identity as a prime argument for the level of LF. However, all the cases of sloppy identity they examine use VP-ellipsis, which does not permit the Akmajian-type constructions (e.g. *John petted the dog, then Bill did to the cat), so they do not consider the sort of evidence presented here.
Among Fiengo and May's main cases are so-called antecedent-contained deletions such as (i), where on their analysis the reconstruction of the ellipted VP escapes infinite regress only because the quantified object moves out of the VP at LF.
(i) Dulles suspected everyone who Angleton did.
An ideal refutation of their position would show that an Akmajian-type sentence can appear in an antecedent-contained deletion. It is difficult to construct pragmatically plausible examples that combine antecedent-contained deletion with an
Akmajian-type construction; however, (ii) seems fairly close when read with proper intonation.
(ii) Bill painted red cheeks with a BRUSH on every doll for which he had not yet done so with a SPRAY gun.
If (ii) is acceptable, done so cannot be reconstructed syntactically, and therefore LF cannot solve antecedent-contained deletion in full generality.
14. The dependency is often stated the other way, from positions on the trajectory to time. But, as Verkuyl (1993) stresses, the cart may stop or even back up in the course of the event, thus occupying the same position at different times. Hence the dependency is better stated with time as the independent variable.
15. Tenny (1987) claims to identify measuring-out relationships with syntactic direct argument position (direct object + subject of unaccusatives). However, her analysis neglects a vast range of data (Dowty 1991; Jackendoff 1996e). She does not adequately account for verbs of motion, where the path rather than the direct argument does the measuring out, and she does not consider the verbs of extension and orientation, which clearly counterexemplify her hypothesis. Moreover, in the present context, simply placing a "measured-out" argument in a particular syntactic position has nothing to do with quantifier scope effects, as the larger semantic generalization is missed entirely.
16. The fact that a clause encodes a question is sometimes taken to involve a syntactic feature [+wh] on the clause, an option that goes back to Katz and Postal 1964. However, there is really no syntactic justification for this feature other than a theory-internal need to make the selection of indirect question complements emerge as a syntactic fact. See Grimshaw 1979 for discussion.
17. Van Valin (1994) and Legendre, Smolensky, and Wilson (1996) independently suggest that the distinction between "moved" and "in situ" wh can be characterized a different way. In order for a clause to be interpreted as a wh-question, its conceptual structure requires some operator that marks the clause as a question (the "scope" of the question), plus a variable in the sentence that carries the θ-role of the questioned entity. In a language that "moves" wh-phrases, the SS-CS correspondence rules stipulate that the wh-phrase appears in a position specified by the scope of the question operator and that a trace appears in the variable position. By contrast, in a language with "in situ" wh-phrases, the SS-CS rules stipulate that the wh-phrase appears in the syntactic position of the variable, and nothing (or a question marker, as in Korean) appears in the syntactic position of the operator. To the extent that the so-called constraints on movement are actually constraints on the relation between the operator and the variable, both kinds of language will exhibit the same constraints on the occurrence of wh-phrases. (Though see Kim 1991 for some differences.) Brody (1995) makes a similar proposal for eliminating LF movement of in situ wh-phrases, except that he localizes the wh-operator at LF, in my view an unnecessary addition to syntax.
18. For example, Kamp's Discourse Representation Theory (Kamp and Reyle 1993) and related approaches such as those of Heim (1982), Verkuyl (1993), and Chierchia (1988) in model-theoretic semantics, Kuno (1987) in functional grammar, and Fauconnier (1985) and Van Hoek (1992, 1995) in Cognitive Semantics, to name only a few. Within the general tradition of GB, one finds LF-less architectures such as those of Koster (1987) and Williams (1986), a monostratal syntax in which LF = PIL (the input to Spell-Out) such as that of Brody (1995), and proposals for quantifier interpretation such as that of Lappin (1991); also within the GB tradition, Kim (1991) shows that in situ wh in Korean behaves more like quantification than like fronted wh in English. Csuri (1996) and I (Jackendoff 1992b, 1996e) make specific proposals that incorporate aspects of anaphora and quantification into the theory of conceptual structure developed in Jackendoff 1983, 1990.
Chapter 4
1. It is not clear to me why each item should require a set of transformations rather than one, or in fact why the transformation should be localized in the lexical item rather than in a single rule for inserting all lexical items. The latter, I think, is the assumption that has percolated into general practice. The difference, however, plays no role in what is to follow.
2. Halle and Marantz (1993) actually place their rule of Vocabulary Insertion at a level of Morphological Structure intermediate between S-Structure and PF. Chomsky (1979 and elsewhere) acknowledges the possibility of late lexical insertion, though to my knowledge he never explores it.
3. Halle and Marantz (1993) advocate half of this position: they properly desire to eliminate phonological information from syntactic structure. Phonological information is introduced by their rule of Vocabulary Insertion at Morphological Structure, a post-S-Structure level on the way to PF. They propose (p. 121) that "at LF, DS [D-Structure], and SS [S-Structure] terminal nodes consist exclusively of morphosyntactic/semantic features and lack phonological features. . . . The semantic features and properties of terminal nodes created at DS will . . . be drawn from Universal Grammar and perhaps from language-particular semantic categories or concepts." In other words, they still endorse mixed syntactic and semantic representations, which I wish to rule out.
4. Incidentally, on this formalization, the feature TOKEN and the function INSTANCE OF may possibly be regarded as coercions, not contributed by the LCS of the or cat. In a predicative or generic context, the conceptual structure of the NP would lack these elements, expressing simply a Type-constituent. (See some discussion in Jackendoff 1983, chapter 5.)
5. It is possible to see here a formal counterpart for Levelt's (1989) distinction between lemmas and lexical forms in processing. For Levelt, a lemma is the part of a lexical listing that enables an LCS (in his terms, a lexical concept) to be mapped into a lexical category in syntactic structure. In these terms, the lemma for cat is the SS-CS correspondence in (5). Levelt's lexical form is the part of a lexical listing that enables a syntactic category to be realized phonologically; the lexeme cat is thus the PS-SS correspondence in (5). Levelt shows experimentally that these two aspects of lexical entries play different roles in speech production. The lexeme/lexical form distinction of Halle and Marantz (1993) appears to be parallel.
This distinction is important because the selection of the particular lexical form for a lemma depends on the outcome of syntactic inflectional processes such as tense attachment, case marking, and agreement. See chapter 6.
6. Interestingly, Generative Semantics (McCawley 1968b) also took the view that lexical items are inserted in the course of a derivation. And Chomsky (1993, 22) proposes that "[c]omputation [of phrase markers] proceeds in parallel, selecting from the lexicon freely at any point. . . . At any point, we may apply the operation Spell-Out, which switches to the PF component. . . . After Spell-Out, the computational process continues, with the sole constraint that it has no further access to the lexicon." Here phonological and semantic material are still passed inertly through the syntax, though not necessarily all the way from initial phrase markers; as suggested in chapter 2, the uncharacterized operation of Spell-Out might play the role assigned here to the PS-SS correspondence rules.
7. An equivalent in the theory outlined in Chomsky 1995 might be to index each element in the numeration of lexical items that leads to the construction of the syntactic tree through Merge.
8. Parallel arguments can be constructed for two-stage theories of lexical insertion in which lexical items are inserted without their phonology (or without their inflectional morphology) in D-Structure, and the grammar then "goes back to the lexicon" at a later stage to recover, for instance, irregularly inflected forms. As mentioned earlier, Halle and Marantz's (1993) theory is a recent example. To my knowledge, though, proponents of such theories have not raised the question of how the grammar can "go back to the lexicon" and know it is finding the same item.
9. I am grateful to Urpo Nikanne for pointing out to me this class and its significance. Paul Postal and Jim McCawley (personal communication) have pointed out that some of these items do have minimal combinatorial properties (e.g. hello there/*here, goodbye now/*then/*there, dammit all/*some). I am inclined to regard these not as productive syntactic combinations but as memorized fixed expressions (see chapter 7). In other cases (e.g. hurray for/*against linguistics, shit on/*with linguistics), there is perhaps some free syntactic combination, but there is certainly no evidence for a standard X-bar schema associated with hurray and shit, or even for a standard syntactic category (see Quang 1969).
10. I am grateful to an anonymous reader for pointing out this fact. Note, however, that hello or any other phrase is acceptable in (9b) if uttered with some distinctive intonation or tone of voice; the sentence then draws attention to the auditory rather than linguistic characteristics of the utterance.
11. As Martin Everaert (personal communication) has pointed out, this includes cases like His loud "yummy yummy yummy" always irritates me and I heard him ouch-ing his way through acupuncture. In the first case, the phrase is a quotation, and any part of speech can go in its place, even though the position is normally a noun position. In the second case, ouch-ing is the consequence of the productive morphological rule that produces such coinages as He Dershowitzed the interview (Clark and Clark 1979). This does not necessarily show that ouch is a noun; perhaps it simply does not conflict with the structural description of the rule. These are without doubt cases of enriched composition in semantics, and also perhaps in syntax.
12. There are also items that have phonological and syntactic structure but no semantics, such as expletive subjects and the do of English do-support. Empty categories that carry a θ-role, for instance PRO (if it exists), are items with syntax and semantics but no phonology.
13. I offer this argument with some trepidation. Children of course demonstrate some comprehension of syntactic structure long before they produce much more than single-word utterances. The argument to follow therefore probably should not be applied to speech production per se. Rather, it must be modulated in one or both of the following ways: (1) either it applies to the growth of language comprehension, rather than to production; or (2) if the gap between comprehension and production proves to follow from a lag in the ability to construct syntactic trees in production, it applies to the trees actually constructed during speech production, even while perceptive trees are richer.
14. Aoun et al. (1987) propose that binding theory makes certain distinctions based on whether anaphoric elements are phoneticized in PF. However, as section 3.7 has made clear, it is equally possible in the present theory of binding to make a distinction between binding with and without accompanying overt syntax and/or phonology. Another recent proposal for the PF component is that of Halle and Marantz (1993), where, within the PF component, syntactic rules manipulate inflectional aspects of S-Structures to form Morphological Structures prior to Vocabulary Insertion. I will not deal with their proposals here, while acknowledging that the present approach needs to offer an alternative; chapter 6 gets the discussion off the ground.
15. The f-structure of LFG is made not of primitives such as NPs and VPs, but of networks of grammatical functions. I think it is therefore more properly seen as an "interlevel" between syntax and semantics, not as a part of syntax proper. In this sense the syntax proper of LFG, the c-structure, is monostratal.
16. Curiously, Chomsky (1995) explicitly disclaims trying to account for some of these aspects, for example topic/focus structure, within his minimalist architecture.
17. Even so, Chomsky sometimes still does not quite abandon D-Structure interpretation of θ-roles. For instance, in Chomsky 1986, 67, he partly justifies D-Structure by saying, ". . . D-structures serve as an abstract representation of semantically relevant grammatical relations such as subject-verb, verb-object, and so on, one crucial element that enters into semantic interpretation of sentences . . . ." However, the immediate continuation is, "(recall that these relations are also indirectly expressed at S-structure, assuming traces)." In addition, he observes that "other features of semantic interpretation" are represented at LF, and by the next page asserts that "PF and LF constitute the 'interface' between language and other cognitive systems, yielding direct [sic] representations of sound on the one hand and meaning on the other . . . ." In other words, in the end it has been denied that D-Structure directly feeds semantic representation, and it is presupposed that there is a strictly derivational relationship from D-Structure to both sound and meaning.
18. Chomsky is alluding in essence to RUGR when he says (1993, 10), ". . . [when] H heads a chain (H, . . . , t), . . . this chain, not H in isolation, enters into head-α relations." RUGR is also essentially equivalent to Brody's (1995, 14) Generalized Projection Principle.
19. Chomsky himself (1993, 46 note 20) alludes to such a possibility of dispensing with movement altogether, replacing it with well-formedness conditions on chains. He argues against this possibility, on the grounds that Move α can be shown necessary if the relation between a moved element and its trace can be disrupted or obliterated by subsequent movement. The single example he presents involves the adjunction of a verb to one of its functional projections, which then goes on to adjoin to a functional projection above it. This analysis seems to me sufficiently theory-internal (and, even within the theory, sufficiently open to ingenious options to overcome loss of locality) that I need not answer it here. Brody (1995), however, does provide discussion.
RUGR as it stands does not, I think, adequately account for pleonastic NPs such as it and there, since they do not, strictly speaking, carry thematic or selectional roles. I leave the proper account open for the moment.
20. This section draws on the somewhat more elaborate discussion in Jackendoff 1987a, chapter 6.
21. This proposal does not preclude the possibility, suggested by Marcel Kinsbourne (personal communication), that in the actual brain the blackboards are of flexible size and can compete with each other for space (and therefore computational resources).
Another independent alternative is to think of the blackboards as active, self-organizing agents, that is, not to make a distinction between the medium for working memory storage of information and the integrative processor that structures the information. Ultimately I think this is probably the right way to think of it neurally. But since nothing I will have to say here hinges on the distinction, and since the separation of blackboard and processor is so much easier to deal with intuitively, I will stay with the more primitive metaphor.
22. Levelt (1989) presents strong psycholinguistic evidence that lexical activation in production occurs in two stages, the lemma, or access from meaning to morphosyntax, and the lexical form, or access to phonology. It is not clear to me how to interpret these results in the present approach, given that I have tried to avoid the notion of "going back to the lexicon later" in the formal grammar. What I think has to be sought is the proper trope on "lexical activation" in terms of the formal model and its implementation in a processor. See section 6.4 for formal justification of the separation of lemmas from lexical forms.
Chapter 5
1. In fact, Di Sciullo and Williams (1987) appear to presume a standard theory of lexical insertion (or, later, a standard theory of lexical insertion moved to S-Structure (p. 53)), since they speak of "items that are syntactic atoms (insertable in X0 slots in syntactic structures)" (p. 1). Although they speak of lexical VPs such as take NP to task as listemes (p. 5), they do not address how these come to be inserted into syntactic structures. From the present point of view, this omission is significant: the insertion of phrasal listemes ought certainly to be "of interest to the grammarian."
2. Notice that the constituency of the morphophonology in (2) is not congruent with the syllabification.
I am assuming that the bracketing of the morphosyntax in (2) mirrors the semantics: 'someone who does [atomic science]'. Alternatively, the morphosyntactic bracketing might parallel the phonology, in which case the mismatch lies between syntactic structure and conceptual structure.
3. Granted, Halle (1973) cites some odd situations in Russian in which certain case forms seem not to exist (see also Anderson 1982). I don't know what to say about them; but such cases are, I gather, quite rare.
4. What about cases in which a regular and an irregular form coexist (uneasily), for instance dived and dove? I think we have to assume that both are listed, so that they can compete with each other. See section 5.5. (This solution is advocated also by Pinker and Prince (1991).)
5. This probably entails some trick in the formalization of unification. Suppose unification of a string of phonological units (feet, segments) with a lexical item can be accomplished if linear order is preserved, but gaps are allowed just in case they are licensed by something else. Then automatic would license the appropriate segments of autofuckinmatic, and the remaining segments would be licensed by the expletive. On the other hand, the intrusive segments in, say, *autobargainmatic would not be licensed by anything, so the word would not be legal.
We will encounter the need for such a convention on unification again in chapter 6 in connection with templatic morphology, and in chapter 7 in connection with discontinuous idioms.
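For readers who find a procedural restatement helpful, the following toy Python sketch (my illustration only, not part of the formal theory; segments are crudely represented as letters, and licensing is reduced to membership in a set of listed forms) shows what order-preserving unification with licensed gaps amounts to:

    def gapped_unify(word, item, licensed):
        # Match the item's segments against the word in order; every
        # run of leftover segments (a gap) must itself be licensed by
        # some other listed form, e.g. an expletive.
        gaps, gap, j = [], "", 0
        for seg in word:
            if j < len(item) and seg == item[j]:
                if gap:
                    gaps.append(gap)
                    gap = ""
                j += 1
            else:
                gap += seg
        if gap:
            gaps.append(gap)
        if j < len(item):             # item's segments not all found in order
            return False
        return all(g in licensed for g in gaps)

    licensed = {"fuckin"}             # expletive listed in the lexicon
    gapped_unify("autofuckinmatic", "automatic", licensed)    # True
    gapped_unify("autobargainmatic", "automatic", licensed)   # False

The sketch matches greedily from left to right; a serious implementation would search over alternative alignments of item and word.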
6. Anderson (1992, 64-69) argues against listing morphemes this way, on the grounds that there are instances of subtractive morphophonology, where the derived form is smaller than the stem, and of morphologically conditioned metathesis, where no affixal addition is feasible. See the brief remarks in section 6.2.
7. On the other hand, as Robert Beard (personal communication) has reminded me, affixes do share certain special properties with closed-class words such as pronouns and auxiliary verbs. For instance, closed-class words can cliticize and undergo contraction, making them look phonologically more like affixes; both closed-class words such as it and affixes such as structural case can be semantically empty. Even if such behavior warrants its own special corner of the lexicon, as Beard (1988) advocates, the point relevant here is that this special corner contains both affixes and words.
8. One of the reasons I was unable to make this distinction in Jackendoff 1975, I think, is that the notion of morphological blocking was not well established at that time.
9. A plausible constraint, for the most part observed in practice, is that Xn constituents may not dominate phrasal constituents. However, Di Sciullo and Williams (1987) argue that French compounds like trompe-l'oeil (lit. 'deceive the eye'), arc-en-ciel ('arc in sky' (rainbow)), and hors-la-loi ('outside the law' (outlaw)) are N's that dominate phrasal constituents, so this constraint might have to be relaxed (or abandoned). English counterparts are words like mother-in-law, commander-in-chief. See however Anderson's (1992, 315-318) discussion of Di Sciullo and Williams's argument.
Another case, presented by Lieber (1992), concerns constructions like a Charles-and-Di syndrome, a let's-get-on-with-it attitude, the three-strikes-and-you're-out legislation. She takes these to be adjectival compounds. But she presents no evidence that they are A0s, and I suspect they are not, since they resist modification typical of adjectives, such as ?I've never seen a more let's-get-on-with-it attitude than yours, *John's attitude is very let's-get-on-with-it. Also, Bresnan and Mchombo (1995) observe that the phrases that can be so used are selected from fixed expressions of the language (i.e. listemes). For example, random productive substitutions cannot be made in these phrases while preserving acceptability: ?a Harry-and-Betty syndrome, ?a let's-get-some-vanilla-ice-cream attitude, ?the four-balls-and-you-walk opportunity. So Lieber's characterization of these cases may be too unconstrained.
There are, however, still worse cases, such as the I-don't-have-to-tell-you-anything kind of love, pointed out by Biljana Misic Ilic (personal communication). I know of no discussion of these in the literature.
10. I find it reasonable, as advocated by Di Sciullo and Williams (1987), following Aronoff (1983), that there may be something of a cline in semiproductive processes, shading toward productive ones.
11. This claim should not be construed as implying that there is a strict cutoff in frequency between stored forms and forms generated on-line. Rather, I would expect a cline in accessibility for stored forms, including the usual differential in accessibility between recognition (better) and recall (worse).
12. This seems an appropriate place to comment on Hale and Keyser's (1993) analysis of such denominal verbs. Hale and Keyser propose, for instance, to derive the verb shelve from a syntactic structure like (i), listed in the lexicon.
(i) [VP [V1 e] NP [VP [V2 e] [PP [P e] [N shelf]]]]
The two verbs V1 and V2 and the preposition are syntactically empty, and "therefore" interpreted (by unstated principles) as cause, be, and at/on respectively. Shelf undergoes head-to-head movement to the P node, which in turn raises to the V2 node, which in turn raises to the V1 node, yielding the surface form.
This approach is subject to many of the same objections that Chomsky (1970) raised to the Generative Semantics treatment of lexical semantics. Here are some of the problems.
a. Shelve means more than 'put on a shelf'. One can't shelve a single pot or dish, for example; and one can't shelve books just by tossing them randomly on a shelf. This is not predicted by the syntax, so there must be some aspects of the semantics of shelve that go beyond the expressive power of syntactic representation. (In many of these verbs, the extra semantic material involves a characteristic use or function, so it may be pulled out of the qualia structure of the root noun, yet another use of qualia structure in addition to those cited in sections 3.4 and 3.5.)
b. Hale and Keyser do not address how the phonological form is realized as shelve rather than shelf. This becomes more acute with verbs of removal, which presumably have similar derivations. For instance, how is phonological structure connected to syntax and semantics so as to get skin and weed but also uncover and unmask, declaw and defang, disrobe and disembowel?
c. Widen and thin are supposed to be derived from syntactic structures similar to (i), but with the APs wide and thin at the bottom instead of the PP e shelf (i.e. 'cause to become wide'). Grow has the same thematic roles as these verbs, as can be seen especially from its similarity to the deadjectival enlarge. But there is no adjective that can be used as the base for this structure, and the noun growth is more morphologically complex and more semantically specialized than the verb that might be supposed to be derived from it.
d. More acutely, at this grain of distinctions, kill has the same semantic structure as widen; that is, it means 'cause to become dead'. UTAH therefore requires that kill be derived from the same syntactic structure. In other words, we are directly back in the world of Generative Semantics (McCawley 1968b). Although I agree with the semantic insights that Generative Semantics sought to express, the possibility of expressing them by syntactic means has been largely discredited (e.g. Chomsky 1970; Bolinger 1971; Jackendoff 1972). Incidentally, Fodor's (1970) "Three Reasons for Not Deriving 'Kill' from 'Cause to Die,'" widely taken to discredit lexical decomposition, apply equally to lexically transparent causative relations such as transitive versus intransitive widen and break. See Jackendoff 1983, 124-126, and 1990, 150-151, for rebuttal of Fodor's arguments.
e. Hale and Keyser's proposal claims that the NP shelf satisfies the Location role in We shelved the books. However, We shelved the books on the top shelf has an overt Location, hence a double filling of the Location role. This of course violates UTAH, since it is impossible for two different NPs with the same θ-role to be in the same underlying syntactic position. In addition it violates the θ-criterion; it should be as bad as, say, *He opened the door with a key with a skeleton key. But it's perfect. See Jackendoff 1990, chapter 8, for an account of such sentences within a lexicalist theory of verbs like shelve.
13. I am actually unclear whether the syntax needs to specify that the verb dominates a noun. After all, it could just be a plain verb, for all the syntax cares. One piece of evidence that there is morphosyntactic structure comes from cases pointed out by Pinker and Prince (1988) such as flied out, where the verb is derived from the noun fly (ball), in turn derived from the verb fly. Something is necessary to suppress the appearance of the irregular past flew; Pinker and Prince claim it is the extra bracketing [V [N V]].
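To make the blocking effect concrete, here is a toy Python rendering (mine, for illustration only; the flag noun_derived simply stands in for the extra bracketing [V [N V]], and the regular rule is stated orthographically):

    IRREGULAR_PASTS = {"fly": "flew"}       # listed irregular forms

    def regular_past(verb):
        # toy orthographic regular rule: carry -> carried, walk -> walked
        if verb.endswith("y") and verb[-2] not in "aeiou":
            return verb[:-1] + "ied"
        return verb + "ed"

    def past(verb, noun_derived=False):
        # A verb bracketed [V [N V]] is cut off from its listed
        # irregular past, so the regular rule applies instead.
        if not noun_derived and verb in IRREGULAR_PASTS:
            return IRREGULAR_PASTS[verb]
        return regular_past(verb)

    past("fly")                       # 'flew'   (plain verb)
    past("fly", noun_derived=True)    # 'flied'  (as in 'flied out')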
14. Incidentally, these forms also provide decisive evidence (I think) against a view of lexical relations that Chomsky has occasionally entertained (e.g. 1970). His idea is that destroy and destruction, for instance, constitute a single lexical item, unspecified for which part of speech it is. If inserted into a V slot, it comes to be "spelled out" (by Lexical Assembly) phonologically as destroy; if inserted into an N slot, as destruction. Although this may possibly work for words as closely related as destroy and destruction, it cannot work for denominal verbs such as saddle and shelve. For one thing, it cannot account for the semantic peculiarities of these forms. More crucially, though, it cannot possibly account for the existence of more than one verb built from the same noun, as in the examples sock and roof here.
Chapter 6
1. There are questions about whether all the morphosyntactic structure in (1) is necessary. For example, if plurality appears in the affix, must we also encode it as a feature in the upper node? I am encoding it in both places as a way to represent the effect of "percolation" of plurality up to the maximal phrase node (NP or DP), so that it can be detected by rules of agreement both inside the maximal phrase (e.g. agreement with determiners and adjectival modifiers) and outside the maximal phrase (e.g. agreement with a verb of which this is the subject). Alternatively, other solutions for "percolation" might be envisioned, and I leave the issue open. In particular, it may not be strictly necessary to encode the result of percolation in the lexical entry itself. (See Lieber 1992 for discussion of percolation; parallel issues arise in HPSG.)
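As a purely illustrative gloss on what "percolation" amounts to, one might model it as copying a feature from daughter to mother when a node is built (a Python sketch of my own, not a serious proposal; the feature name and node labels are invented for the example):

    class Node:
        # Toy morphosyntactic node: a number feature on any daughter
        # percolates to the mother, so that agreement rules can read it
        # off the maximal phrase without inspecting the affix directly.
        def __init__(self, label, children=(), features=None):
            self.label = label
            self.children = list(children)
            self.features = dict(features or {})
            for child in self.children:
                if "number" in child.features:
                    self.features["number"] = child.features["number"]

    plural_affix = Node("Af", features={"number": "pl"})
    noun = Node("N", [Node("N"), plural_affix])
    np = Node("NP", [noun])
    np.features                       # {'number': 'pl'}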
Similarly, one might ask whether the extra node for the affix needs to be syntactically ordered with respect to the host. In cases where an affix is phonologically intercalated with the host, such as English expletive infixation, German past participle circumfixation (ge-kauf-t), and much Semitic morphology (see section 6.2), there is actually no syntactic evidence for ordering, and we may well consider the host and affix syntactically unordered. On the other hand, ordering must be stipulated if there are multiple affixes attached in a flat structure, as in (ia).
(i) a. [X Af1 Af2 X]   b. [X Af1 [X Af2 X]]
Whether there are such structures is an open question. In some cases, recursive binary branching like (ib) is motivated. However, a language with separate subject and object agreement markers on the verb (say Swahili) provides no (non-theory-internal) evidence for syntactic recursion; and the complex templates associated with Navajo verbs (Spencer 1991, 208-214) are a still more extreme case.
In any event, if morphosyntactic structure can be pared down, or if it needs to be further augmented, I see no basic problems in adapting the present proposals to such alternatives.
2. Note that I do not call it the "Lexical Interface Level," as one might first be tempted to do. Remember that the lexicon is part of the PS-SS and SS-CS correspondence components, not itself a level of structure with which phonology can interface.
3. SPE itself did not make use of the notion of syllable, of course. However, if it had, this is how it would have been done. The case of stress, which SPE did treat, is similar.
4. As suggested in note 1, the morphosyntax need establish no linear order between the verb stem and the tense.
5. It is unclear to me how the index associated with the affix in syntax would then correspond to an index in SIL. Perhaps the index could be associated with the entire phonological formative, so that, for instance, the phonological form /kæts/ as a whole would be coindexed in syntax with both the count noun and the plural affix. It would then parallel the treatment of /katab/ in hypothesis 2 of (5). This solution seems congenial to Anderson's general approach.
6. As suggested in section 4.2, Goldsmith's (1993) "harmonic phonology" is another variant consistent with the present approach. Goldsmith's "M-level," the level at which morphemes are phonologically specified, appears to be LPS; his "W-level," at which expressions are structured into well-formed syllables and words, appears to be SIL; his "P-level" is PF. Instead of being connected by a derivation in the manner of SPE, the levels are connected by correspondence rules of roughly the sort advocated here.
One proposal that I cannot locate in this spectrum is Kiparsky's (1982) Lexical Phonology, in which some regular phonological rules apply "inside the lexicon," before lexical insertion, and some apply "outside the lexicon," after words are inserted into phrasal structures. In the present architecture, forms are not "actively derived" in the lexicon; the only rules "in the lexicon" are semiproductive rules that relate fully listed forms.
7. I have been compulsively complete; one can imagine abbreviatory notations to compress (10). Also, in this case it is not clear to me if it matters whether the F superscript is placed on the syntactic or the phonological constituent.
8. The subscripts A in the LCS of (11) are the linking indices for argument structure; they stipulate that the Thing-argument and the Path-argument must be realized in syntax. The fact that the former is realized as subject and the latter as a PP follows from the principles of linking worked out in Jackendoff 1990, chapter 11. For a more detailed treatment of GO and EXT and their relationship, see Jackendoff 1996e. As usual, readers should feel free to substitute their own favorite formalism for mine.
9. I will assume that the allomorphy of /-d/, /-t/, /-əd/ is a function of regular phonological rules of voicing assimilation and vowel epenthesis. If not, especially in the case of /-əd/, then phonological allomorphy along the lines of section 6.3 must be invoked in this entry. (A similar assumption obtains in my treatment of the LPS of the plural.)
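The assumed rules can be stated as a toy procedure (my illustration only; stems are given as rough phonemic strings, '@' stands for schwa, and the voiceless consonant inventory is a deliberately crude assumption):

    def past_allomorph(stem):
        voiceless = set("ptkfTsS")    # crude voiceless consonant inventory
        last = stem[-1]
        if last in "td":
            return stem + "@d"        # vowel epenthesis: nid -> nid@d (needed)
        if last in voiceless:
            return stem + "t"         # voicing assimilation: wOk -> wOkt (walked)
        return stem + "d"             # elsewhere: h^g -> h^gd (hugged)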
10. This differs slightly from the treatment of do-support by Grimshaw (1997), who claims that do is a "minimal" light verb and therefore automatically fills the spot where a stranded Tense needs help. I am inclined to question her analysis, in that do both as a light verb and as part of an anaphor stands in only for action verbs, whereas do-support do applies to statives as well. On Grimshaw's theory, one might perhaps expect the minimal stative verb be to support Tense in questions with stative verbs. This suggests to me that do-support do should be listed as an allomorph of Tense, licensed in phonology just when Tense is not adjoined to a verb in syntax. However, it doesn't appear that anything much in either my analysis or Grimshaw's depends on the correct answer here.
11. Recall that the linking indices can be regarded alternatively as the endpoints of association lines. Hence reinvoking the indices A and Y is in effect connecting went to the meanings of both go and past. (For complete accuracy, then, the indices in each form in (10) should be different.)
12. Aronoff's (1994) term morphome in present terms designates a set of allomorphs that may be linked to more than one LSS, for instance the set /-d/, /-t/, /-əd/, which is used not just for regular pasts but also for regular past participles and passive participles. Another example, from Beard 1987, is the set of allomorphs in Russian that can express either diminutivization or feminization.
13. In (18) the morphosyntax has been conveniently arranged so that Tense and Agreement fall under a single constituent that can be linked with the phonology in the usual way. I recognize that this is not necessarily feasible in general, in which case more elaborate linking conventions will have to be devised. The tree in (i) represents another possibility for the morphosyntax, with two linking indices simultaneously linking to the single phonological constituent. Such a structure will make the statement of lexical licensing more complex as well. I forgo the details.
(i) [V, +tense [V, +tense [V]b [T pres]e ] [Agr 3sg]e ]
14. This form of course alternates with the special case of third person singular, (18). The two forms share the lemma for present tense, indexed by Z in both structures. Presumably the zero morph is the default case, and third person singular blocks it because of its greater specificity in morphosyntax. Such blocking does not fall under either of the conditions on lexical licensing that have been stated so far; it is presumably a condition parallel to allomorphic blocking, but stated over LSS instead of LPS.
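The blocking-by-specificity condition alluded to here can be sketched procedurally (again my illustration only; the feature notation is invented for the example): among the listed forms compatible with the morphosyntax, the one mentioning the most features wins.

    def realize(features, allomorphs):
        # Pick the compatible listed form with the most specific
        # feature specification: the specific entry blocks the default.
        compatible = [(spec, form) for spec, form in allomorphs
                      if all(features.get(k) == v for k, v in spec.items())]
        return max(compatible, key=lambda sf: len(sf[0]))[1]

    present = [({}, ""),                                # zero default
               ({"person": 3, "number": "sg"}, "s")]    # third singular
    realize({"person": 3, "number": "sg"}, present)     # 's' blocks zero
    realize({"person": 1, "number": "pl"}, present)     # ''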
15. As mentioned in section 3.3, I have shown elsewhere (Jackendoff 1992b) that there is no evidence for embedding in the syntactic structure associated with this construction, say as in (i). Hence the double indexation is the only possible solution.
(i) [NP e NPx]y
Chapter 7
1. This chapter is adapted from Jackendoff 1995 and appears here by permission of Lawrence Erlbaum Associates, Inc. Most of the examples come from the Wheel of Fortune corpus (see section 7.2 and the appendix), Weinreich 1969, Culicover 1967, or Wasow, Nunberg, and Sag 1984.
2. A number of her examples came from the Wheel of Fortune computer game; there were a few overlaps between these and the television show puzzles.
3. Another widely held view is that names are noncompositional, functioning merely as rigid designators. Brand names and organization names provide convincing evidence against this view, in that they have (semi)meaningful compositional structure. For instance, it would be rather surreal to name a linguistic society the Pittsburgh Pirates; and the point of giving musical groups names like The Grateful Dead or No Dogs Allowed is to invoke such surrealism. If names were truly nonmeaningful, this effect would be impossible.
4. An alternative account for sell X down the river, in the spirit of Chomsky 1993, might be a D-Structure with a Larsonian shell around the compound verb.
(i) [VP e X [V [V sell down the river]]]
Sell would undergo head-to-head movement (somehow escaping the next V node up, a nontrivial problem) to move to the left of its object. I leave the details of such an account to those who might find it attractive.
5. This hypothesis is worked out within Tree-Adjoining Grammar formalism by Abeillé (1995). A version of it also appears as the "en bloc" insertion of Van Gestel (1995), and it is certainly feasible within HPSG formalism.
6. This approach does not work for the few idioms that do not have identifiably normal syntactic structure. Examples are by and large, for the most part, to kingdom come, happy-go-lucky, trip the light fantastic, every which way, and easy does it, but they are not easy to come by and appear truly marginal. However, English also contains a few single words that have abnormal syntactic structure. Enough is a degree word that follows rather than precedes the phrase it modifies: so tall but tall enough. Galore is a quantifier that follows rather than precedes the noun it quantifies: many balloons but balloons galore. Else is an adjective (I suppose) that occurs only in the construction Q-one/body/place/where else and in or else. Ago is a temporal preposition that either follows instead of precedes its complement (in/after 5 years but 5 years ago) or else, unlike every other preposition, takes an obligatory measure phrase specifier. (The latter analysis is advocated by Williams (1994), but he does not remark on the rarity of obligatory specifiers of this sort.)
Any mechanism that deals with lexical licensing of these syntactically deviant words can likely be extended to license syntactically deviant idioms.
7. Recall that a similar assumption was made in the analysis of expletive infixation in chapter 5.
8. I am inclined to think that this class includes the expression pet fish, often cited (e.g. by Fodor 1996) as evidence against any sort of stereotypical meaning. The problem is that a stereotypical pet is a dog or a cat, not a fish; a stereotypical fish is a trout or a perch. Yet a stereotypical pet fish is a goldfish or a guppy, which is not stereotypical in either of the constituent categories. (Fodor presents no alternative account.) I think the answer for this case is that pet fish is a fixed expression (perhaps a collocation) for which one learns what the stereotypical members are. A freely produced expression of comparable structure might be pet marsupial, for which one has no particular special knowledge of stereotypes. Here, as in pet fish, the "pet" status is highly marked, since people don't usually keep marsupials as pets. For the "marsupial" part, we are likely to generate a stereotype of a kangaroo or opossum, in accordance with stereotypical marsupials. In this case, then, it looks like there is simple semantic composition of the sort Fodor argues does not exist.
9. The lexical rules for interpreting English compounds of course permit the possibility that snowman could mean 'man who removes snow', and one can imagine a context where such an interpretation would be plausible. But the fact that speakers of English would recognize such a usage as a neologism or a joke shows that the standard meaning must be lexically listed. As emphasized in chapter 5, this is what distinguishes a lexical rule like English N-N compounding from a truly productive rule: a lexical rule creates possibilities, but one must learn form by form which of these actually exist.
10. It is worth noting that a vast number of the metaphors cited by Lakoff and Johnson (1980) are listed fixed expressions of this character. This does not impugn the semantic analysis offered by Lakoff and Johnson, but it does raise the question of whether what they are talking about is metaphor in the literary sense. Maybe the more general term figurative use would be more appropriate. See Jackendoff and Aaron 1991 for discussion of other aspects of this use of the term metaphor.
11. Jim McCawley (personal communication) suggests that The breeze was being shot/The fat was being chewed by two of the tavern's regular customers is better than the examples in the text. Better, but still pretty degraded, I think.
12. Incidentally, it is not at all clear how this stipulation is to be reinterpreted within later versions of GB and especially within the Minimalist Program, where just about everything moves at some time or another during the derivation.
13. This is not to say that all issues in the syntactic variability of idioms can necessarily be resolved by these proposals. I have not investigated enough cases to make a strong claim. In particular, I have nothing to say about the "double" passives in take advantage of and make mention of, which are among Chomsky's most often cited examples of idioms (though see Nunberg, Sag, and Wasow 1994).
14. See Jackendoff 1990 and Carrier and Randall, in press, for arguments against other approaches to resultatives in the literature.
15. Poss appears as an element in the syntax. However, within the idiom itself it receives no realization in the phonology. Rather, its realization is determined by the standard PS-SS correspondence for poss, in particular that it combines with pronouns to produce possessive pronouns instead of the regular affix 's. This situation parallels the use of the lexical forms go and went in idioms (e.g. go/went for broke). See section 6.4.
16. Lakoff (1990) has argued for similar hierarchical relationships among more specific and more general metaphor schemas.
17. Although in frequent touch with occurs, ?in occasional touch with sounds odd, suggesting the former is an idiom on its own. Notice that there is a related "intransitive" family, for instance for instance, in fact, of course, in part, at least, and on time.
Chapter 8
1. This chapter is adapted from Jackendoff 1996d and appears by permission of the editor of Pragmatics and Cognition.
2. I say "essentially" here in order to hedge on possible "Whorfian" effects. There are undoubtedly expressive differences among languages in vocabulary, as well as in grammatically necessary elements such as tense-aspect systems and markers of social status (e.g. French tu vs. vous). Recent research reported by Levinson (1996) has further uncovered crosslinguistic differences in prepositional systems that affect the expression of spatial relations. But such differences must not be blown out of proportion; they are decidedly second- or third-order effects (Pullum 1991). They may create difficulties for literary translation, where style and associative richness are at stake, but no one seriously questions the possibility of effectively translating newspapers and the like.
3. At this level of generality, I concur with Fodor's (1975) Language of Thought Hypothesis. However, in other respects I disagree with Fodor; for extended discussion of Fodor's position, see Jackendoff 1983, section 7.5, Jackendoff 1990, sections 1.8 and 7.8, and Jackendoff 1992a, chapter 8. Fodor mentions my position only in Fodor 1996; his remarks address only an isolated corner of my theory of conceptual structure as if it were the entire theory.
4. Incidentally, I disagree with Bickerton's (1995) assessment that chimps lack "off-line" thinking (i.e. reasoning and planning not conditioned on the immediate environment). For instance, Köhler (1927) documents instances in which a chimp gives up trying to solve a problem, then some time later, while doing something else, displays what in a human we would call an "aha-experience," rushes back to the site of the problem, and immediately solves it.
5. Bickerton (1995) relies heavily on an assumption that a single cognitive innovation in modern humans has led to every advance over earlier hominids. Since language obviously had to have evolved at some point, he then argues that it is implausible that any other possible independent innovation can have taken place, for instance a capacity for "off-line thinking." Though parsimonious, this assumption is hardly necessary. There are enough other physical differences between people and apes that it is absurd to insist that language must be the only cognitive innovation.
6. In a language with a productive causative morpheme, of course, this particular case would not go through; but any other inferential relationship will do as well.
7. A terminological remark. Some readers may be uneasy calling what the monkey does "reasoning" or, as Emonds (1991) puts it, "connected thought," precisely because it has no linguistic accompaniment. My question then is what term you would prefer. As Cheney and Seyfarth (1990) argue at length, no notion of "mere" stimulus generalization will account for the behavior. Rather, whatever unconscious process is responsible must logically connect pieces of information that are parallel to those expressed in (1a-e). If you prefer to reserve the term "reasoning" for a mental process that necessarily involves linguistic accompaniment, then call what the monkeys do whatever else you like, say "unconscious knowledge manipulation." In that case, my claim is that "reasoning" in your sense amounts to unconscious knowledge manipulation whose steps are expressed linguistically.
8. This linguistic consciousness is the closest equivalent in the present theory to Bickerton's (1995) Consciousness-3, the ability to report experience. More specifically, it is the way in which we experience our reports of our experience.
9. Block (1995) makes a distinction between what he calls Phenomenal Consciousness (or P-Consciousness) and Access Consciousness (or A-Consciousness). In effect, I am arguing here that the effects Block ascribes to A-Consciousness should actually be attributed to attention. If I read his discussion correctly, what he has called P-Consciousness corresponds fairly well to what I have called simply consciousness.
10. Damasio's (1995) "somatic marker hypothesis" is a possible exception. Damasio argues (in present terms) that certain percepts and memories are flagged by features that mark them as especially desirable or harmful, and that these enhance attention and memory recovery. He moreover proposes a brain mechanism for this flagging. These markers appear to correspond to the valuation "affective (+ or -)" in C&CM (pp. 307-308). However, Damasio does not identify these markers as part of a larger heterogeneous set of valuations.
11. Similar remarks apply to the term "belief" here as to "reasoning" in note 7.
12. At one presentation of the material in this chapter, a member of the audience pointed out that my argument may pertain not only to animals but also to deaf individuals who have not been exposed to language. According to this conjecture, such individuals could certainly think (and, by virtue of their larger brains, better than animals). However, they would not possess the advantages in thought conferred by language. I think I have to admit this conclusion as a distinct possibility, even if politically incorrect. If ethically feasible, it deserves experimentation. If empirically correct, it is just one more reason to make sure that deaf individuals are exposed to sign language as early in life as possible.
References
Abeillé, Anne. 1988. Parsing French with Tree Adjoining Grammar: Some linguistic accounts. In Proceedings of the 12th International Conference on Computational Linguistics (COLING '88). Budapest.
Abeillé, Anne. 1995. The flexibility of French idioms: A representation with lexicalized tree adjoining grammar. In Everaert et al. 1995, 15-42.
Akmajian, Adrian. 1973. The role of focus in the interpretation of anaphoric expressions. In Stephen R. Anderson and Paul Kiparsky, eds., A festschrift for Morris Halle, 215-231. New York: Holt, Rinehart and Winston.
Allport, Alan. 1989. Visual attention. In Michael I. Posner, ed., The foundations of cognitive science, 631-682. Cambridge, Mass.: MIT Press.
Anderson, Stephen R. 1977. Comments on Wasow 1977. In Peter Culicover, Thomas Wasow, and Adrian Akmajian, eds., Formal syntax, 361-377. New York: Academic Press.
Anderson, Stephen R. 1982. Where's morphology? Linguistic Inquiry 13, 571-612.
Anderson, Stephen R. 1992. A-morphous morphology. New York: Cambridge University Press.
Anderson, Stephen R. 1995. How to put your clitics in their place. To appear in Linguistic Review.
Aoun, Joseph, Norbert Hornstein, David Lightfoot, and Amy Weinberg. 1987. Two types of locality. Linguistic Inquiry 18, 537-578.
Aronoff, Mark. 1976. Word formation in generative grammar. Cambridge, Mass.: MIT Press.
Aronoff, Mark. 1980. Contextuals. Language 56, 744-758.
Aronoff, Mark. 1983. Potential words, actual words, productivity, and frequency. In S. Hattori and K. Inoue, eds., Proceedings of the 13th International Congress of Linguists. Tokyo.
Aronoff, Mark. 1994. Morphology by itself: Stems and inflectional classes. Cambridge, Mass.: MIT Press.
Baars, Bernard J. 1988. A cognitive theory of consciousness. Cambridge: Cambridge University Press.
Baayen, Harald, Willem Levelt, and Alette Haveman. 1993. Research reported in Annual Report of the Max-Planck-Institut für Psycholinguistik, 6. Max-Planck-Institut, Nijmegen.
Baayen, Harald, Maarten van Casteren, and Ton Dijkstra. 1992. Research reported in Annual Report of the Max-Planck-Institut für Psycholinguistik, 28. Max-Planck-Institut, Nijmegen.
Bach, Emmon. 1983. On the relation between word-grammar and phrase-grammar. Natural Language & Linguistic Theory 1, 65-90.
Bach, Emmon. 1989. Informal lectures on formal semantics. Albany, N.Y.: SUNY Press.
Baker, Mark. 1988. Incorporation: A theory of grammatical function changing. Chicago: University of Chicago Press.
Beard, Robert. 1987. Morpheme order in a lexeme/morpheme-based morphology. Lingua 72, 1-44.
Beard, Robert. 1988. On the separation of derivation from morphology: Toward a lexeme/morpheme-based morphology. Quaderni di Semantica 9, 3-59.
Bellugi, Ursula, Paul P. Wang, and Terry L. Jernigan. 1994. Williams syndrome: An unusual neuropsychological profile. In Sarah H. Broman and Jordan Grafman, eds., Atypical cognitive deficits in developmental disorders: Implications for brain function. Hillsdale, N.J.: Erlbaum.
Berman, Steve, and Arild Hestvik. 1991. LF: A critical survey. Arbeitspapiere des Sonderforschungsbereichs 340: Sprachtheoretische Grundlagen für die Computerlinguistik, Bericht Nr. 14. Institut für maschinelle Sprachverarbeitung, Universität Stuttgart.
Berwick, Robert, and Amy Weinberg. 1984. The grammatical basis of linguistic performance. Cambridge, Mass.: MIT Press.
Besten, Hans den. 1977. Surface lexicalization and trace theory. In Henk van Riemsdijk, ed., Green ideas blown up. Publikaties van het Instituut voor Algemene Taalwetenschap, 13. University of Amsterdam.
Bickerton, Derek. 1990. Language and species. Chicago: University of Chicago Press.
Bickerton, Derek. 1995. Language and human behavior. Seattle, Wash.: University of Washington Press.
Bierwisch, Manfred, and Ewald Lang. 1989. Somewhat longer-much deeper-further and further: Epilogue to the Dimensional Adjective Project. In Manfred Bierwisch and Ewald Lang, eds., Dimensional adjectives: Grammatical structure and conceptual interpretation, 471-514. Berlin: Springer-Verlag.
Block, Ned. 1995. On a confusion about the function of consciousness. Behavioral and Brain Sciences 18, 227-287.
Bloom, Paul. 1994. Possible names: The role of syntax-semantics mappings in the acquisition of nominals. Lingua 92, 297-329.
Bloomfield, Leonard. 1933. Language. New York: Holt, Rinehart and Winston.
Boland, Julie E., and Anne Cutler. 1996. Interaction with autonomy: Multiple output models and the inadequacy of the Great Divide. Cognition 58, 309-320.
Bolinger, Dwight. 1971. Semantic overloading: A restudy of the verb remind. Language 47, 522-547.
Bouchard, Denis. 1995. The semantics of syntax. Chicago: University of Chicago Press.
Brame, Michael. 1978. Base-generated syntax. Seattle, Wash.: Noit Amrofer.
Bregman, Albert S. 1990. Auditory scene analysis. Cambridge, Mass.: MIT Press.
Bresnan, Joan. 1978. A realistic transformational grammar. In Morris Halle, Joan Bresnan, and George Miller, eds., Linguistic theory and psychological reality, 1-59. Cambridge, Mass.: MIT Press.
Bresnan, Joan. 1982. Control and complementation. In Joan Bresnan, ed., The mental representation of grammatical relations, 282-390. Cambridge, Mass.: MIT Press.
Bresnan, Joan, and Jonni Kanerva. 1989. Locative inversion in Chichewa: A case study of factorization in grammar. Linguistic Inquiry 20, 1-50.
Bresnan, Joan, and Ronald M. Kaplan. 1982. Introduction. In Joan Bresnan, ed., The mental representation of grammatical relations, xvii-lii. Cambridge, Mass.: MIT Press.
Bresnan, Joan, and Sam A. Mchombo. 1995. The Lexical Integrity Principle: Evidence from Bantu. Natural Language & Linguistic Theory 13, 181-254.
Briscoe, E., A. Copestake, and B. Boguraev. 1990. Enjoy the paper: Lexical semantics via lexicology. In Proceedings of the 13th International Conference on Computational Linguistics, 42-47. Helsinki.
Broadbent, Donald E. 1958. Perception and communication. Oxford: Pergamon.
Brody, Michael. 1995. Lexico-Logical Form: A radically minimalist theory. Cambridge, Mass.: MIT Press.
Bromberger, Sylvain, and Morris Halle. 1989. Why phonology is different. Linguistic Inquiry 20, 51-70.
Büring, Daniel. 1993. Interacting modules, word formation, and the lexicon in generative grammar. Theorie des Lexikons, Arbeiten des Sonderforschungsbereichs 282, No. 50. Institut für deutsche Sprache und Literatur, Köln.
Bybee, Joan, and Dan Slobin. 1982. Rules and schemas in the development and use of the English past tense. Language 58, 265-289.
Calvin, William H. 1990. The cerebral symphony: Seashore reflections on the structure of consciousness. New York: Bantam Books.
Carrier, Jill, and Janet Randall. In press. From conceptual structure to syntax: Projecting from resultatives. Dordrecht: Foris.
Carrier-Duncan, Jill. 1985. Linking of thematic roles in derivational word formation. Linguistic Inquiry 16, 1-34.
Carroll, Lewis. 1895. What the Tortoise said to Achilles. Mind 4, 278-280. Reprinted in Hofstadter 1979, 43-45.
Carter, Richard. 1976. Some linking regularities. In On linking: Papers by Richard Carter (edited by Beth Levin and Carol Tenny), 1-92. MITWPL, Department of Linguistics and Philosophy, MIT (1988).
Cattell, Ray. 1984. Composite predicates in English. New York: Academic Press.
Cavanagh, Patrick. 1991. What's up in top-down processing? In Andrei Gorea, ed., Representations of vision: Trends and tacit assumptions in vision research, 295-304. Cambridge: Cambridge University Press.
Cheney, Dorothy L., and Robert M. Seyfarth. 1990. How monkeys see the world. Chicago: University of Chicago Press.
Chialant, Doriana, and Alfonso Caramazza. 1995. Where is morphology and how is it processed? The case of written word recognition. In Feldman 1995, 55-76.
Chierchia, Gennaro. 1988. Aspects of a categorial theory of binding. In Richard T. Oehrle, Emmon Bach, and Deirdre Wheeler, eds., Categorial grammars and natural language structures, 125-152. Dordrecht: Reidel.
Chomsky, Noam. 1957. Syntactic structures. The Hague: Mouton.
Chomsky, Noam. 1959. Review of Skinner 1957. Language 35, 26-58.
Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, Mass.: MIT Press.
Chomsky, Noam. 1966. Cartesian linguistics. New York: Harper & Row.
Chomsky, Noam. 1970. Remarks on nominalization. In Richard Jacobs and Peter Rosenbaum, eds., Readings in English transformational grammar. Waltham, Mass.: Ginn.
Chomsky, Noam. 1971. Deep structure, surface structure and semantic interpretation. In Danny D. Steinberg and Leon A. Jakobovits, eds., Semantics. Cambridge: Cambridge University Press.
Chomsky, Noam. 1972. Some empirical issues in the theory of transformational grammar. In Stanley Peters, ed., Goals of linguistic theory. Englewood Cliffs, N.J.: Prentice-Hall.
Chomsky, Noam. 1973. For reasons of state. New York: Vintage Books.
Chomsky, Noam. 1975. Reflections on language. New York: Pantheon.
Chomsky, Noam. 1979. Language and responsibility. New York: Pantheon.
Chomsky, Noam. 1980. Rules and representations. New York: Columbia University Press.
Chomsky, Noam. 1981. Lectures on government and binding. Dordrecht: Foris.
Chomsky, Noam. 1986. Knowledge of language. New York: Praeger.
Chomsky, Noam. 1993. A minimalist program for linguistic theory. In Kenneth Hale and Samuel Jay Keyser, eds., The view from Building 20, 1-52. Cambridge, Mass.: MIT Press. Also in The Minimalist Program, 167-218. Cambridge, Mass.: MIT Press (1995).
Chomsky, Noam. 1995. Categories and transformations. In The Minimalist Program, 219-394. Cambridge, Mass.: MIT Press.
Chomsky, Noam, and Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Chomsky, Noam, and George Miller. 1963. Introduction to the formal analysis of natural languages. In R. Duncan Luce, Robert R. Bush, and Eugene Galanter, eds., Handbook of mathematical psychology, vol. II, 269-322. New York: Wiley.
Churchland, Patricia, and Terrence Sejnowski. 1992. The computational brain. Cambridge, Mass.: MIT Press.
Clark, Eve, and Herbert Clark. 1979. When nouns surface as verbs. Language 55, 767-811.
Corballis, Michael C. 1991. The lopsided ape: Evolution of the generative mind. New York: Oxford University Press.
Crick, Francis. 1994. The astonishing hypothesis: The scientific search for the soul. New York: Scribner.
Csuri, Piroska. 1996. Generalized referential dependencies. Doctoral dissertation, Brandeis University.
Culham, J. C., and P. Cavanagh. 1994. Motion capture of luminance stimuli by equiluminous color gratings and by attentive tracking. Vision Research 34, 2701-2706.
Culicover, Peter. 1967. The treatment of idioms in a transformational framework. Ms., IBM Boston Programming Center, Cambridge, Mass.
Culicover, Peter. 1972. OM-sentences: On the derivation of sentences with systematically unspecifiable interpretations. Foundations of Language 8, 199-236.
Culicover, Peter, and Ray Jackendoff. 1995. Something else for the binding theory. Linguistic Inquiry 26, 249-275.
Culicover, Peter, and Michael Rochemont. 1990. Extraposition and the Complement Principle. Linguistic Inquiry 21, 23-48.
Damasio, Antonio R. 1995. Descartes' error: Emotion, reason, and the human brain. New York: Putnam.
Dascal, Marcelo. 1995. The dispute on the primacy of thinking or speaking. In M. Dascal, D. Gerhardus, G. Meggle, and K. Lorenz, eds., Philosophy of language: An international handbook of contemporary research, vol. II. Berlin: De Gruyter.
Declerck, R. 1979. Aspect and the bounded/unbounded (telic/atelic) distinction. Linguistics 17, 761-794.
Dennett, Daniel C. 1991. Consciousness explained. New York: Little, Brown.
Dennett, Daniel C. 1995. Darwin's dangerous idea. New York: Simon & Schuster.
Di Sciullo, Anna-Maria, and Edwin Williams. 1987. On the definition of word. Cambridge, Mass.: MIT Press.
Dor, Daniel. 1996. Representations, attitudes, and factivity evaluations. Doctoral dissertation, Stanford University.
Dowty, David. 1979. Word meaning and Montague Grammar. Dordrecht: Reidel.
Dowty, David. 1991. Thematic proto-roles and argument selection. Language 67, 547-619.
Edelman, Gerald M. 1992. Bright air, brilliant fire. New York: Basic Books.
Emonds, Joseph E. 1970. Root and structure-preserving transformations. Doctoral dissertation, MIT.
Emonds, Joseph E. 1991. Subcategorization and syntax-based theta-role assignment. Natural Language & Linguistic Theory 9, 369-429.
Everaert, Martin. 1991. The lexical representation of idioms and the morphology-syntax interface. Ms., University of Utrecht.
Everaert, Martin, Erik-Jan van der Linden, Andre Schenk, and Rob Schreuder, eds. 1995. Idioms: Structural and psychological perspectives. Hillsdale, N.J.: Erlbaum.
Farah, M., K. Hammond, D. Levine, and R. Calvanio. 1988. Visual and spatial mental imagery: Dissociable systems of representation. Cognitive Psychology 20, 439-462.
Farkas, Donca. 1988. On obligatory control. Linguistics and Philosophy 11, 27-58.
Fauconnier, Gilles. 1985. Mental spaces: Aspects of meaning construction in natural language. Cambridge, Mass.: MIT Press.
Feldman, Laurie Beth, ed. 1995. Morphological aspects of language processing. Hillsdale, N.J.: Erlbaum.
Fiengo, Robert. 1980. Surface structure: The interface of autonomous components. Cambridge, Mass.: Harvard University Press.
Fiengo, Robert, and Robert May. 1994. Indices and identity. Cambridge, Mass.: MIT Press.
Fillmore, Charles, and Paul Kay. 1993. Construction Grammar coursebook. Copy Central, University of California, Berkeley.
Fillmore, Charles, Paul Kay, and Mary Catherine O'Connor. 1988. Regularity and idiomaticity in grammatical constructions: The case of let alone. Language 64, 501-538.
Fodor, Jerry A. 1970. Three reasons for not deriving "kill" from "cause to die." Linguistic Inquiry 1, 429-438.
Fodor, Jerry A. 1975. The language of thought. Cambridge, Mass.: Harvard University Press.
Fodor, Jerry A. 1983. Modularity of mind. Cambridge, Mass.: MIT Press.
Fodor, Jerry A. 1987. Psychosemantics. Cambridge, Mass.: MIT Press.
Fodor, Jerry A. 1996. Concepts: Where cognitive science went wrong. Ms., Rutgers University.
Fodor, Jerry A., Thomas Bever, and Merrill Garrett. 1974. The psychology of language. New York: McGraw-Hill.
Fodor, J. A., M. Garrett, E. Walker, and C. Parkes. 1980. Against definitions. Cognition 8, 263-367.
Foley, William A., and Robert D. Van Valin. 1984. Functional syntax and Universal Grammar. Cambridge: Cambridge University Press.
Frank, Robert, and Anthony Kroch. 1995. Generalized transformations and the theory of grammar. Studia Linguistica 49, 103-151.
Franks, Bradley. 1995. Sense generation: A "quasi-classical" approach to concepts and concept combination. Cognitive Science 19.
Fraser, Bruce. 1970. Idioms in a transformational grammar. Foundations of Language 6, 22-42.
Garrett, Merrill. 1975. The analysis of sentence production. In Gordon H. Bower, ed., Psychology of learning and motivation, vol. 9, 133-177. New York: Academic Press.
Gee, James, and François Grosjean. 1983. Performance structures: A psycholinguistic and linguistic appraisal. Cognitive Psychology 15, 411-458.
Goldberg, Adele. 1992. In support of a semantic account of resultatives. CSLI Technical Report No. 163, Center for the Study of Language and Information, Stanford University.
Goldberg, Adele. 1993. Making one's way through the data. In Alex Alsina, ed., Complex predicates. Stanford, Calif.: CSLI Publications.
Goldberg, Adele. 1995. Constructions: A Construction Grammar approach to argument structure. Chicago: University of Chicago Press.
Goldsmith, John. 1976. Autosegmental phonology. Doctoral dissertation, MIT. Distributed by Indiana University Linguistics Club, Bloomington.
Goldsmith, John. 1993. Harmonic phonology. In John Goldsmith, ed., The last phonological rule, 21-60. Chicago: University of Chicago Press.
Gould, Stephen Jay. 1980. Our greatest evolutionary step. In The panda's thumb, 125-133. New York: Norton.
Gould, Stephen Jay, and Richard Lewontin. 1979. The spandrels of San Marco and the Panglossian paradigm: A critique of the adaptationist programme. Proceedings of the Royal Society B205, 581-598.
Grimshaw, Jane. 1979. Complement selection and the lexicon. Linguistic Inquiry 10, 279-325.
Grimshaw, Jane. 1990. Argument structure. Cambridge, Mass.: MIT Press.
Grimshaw, Jane. 1997. Projection, heads, and optimality. Linguistic Inquiry 28.
Gross, Maurice. 1982. Une classification des phrases "figées" en français. Revue québécoise de linguistique 11, 151-185.
Hale, Kenneth, and Samuel Jay Keyser. 1993. On argument structure and the lexical expression of syntactic relations. In Kenneth Hale and Samuel Jay Keyser, eds., The view from Building 20, 53-109. Cambridge, Mass.: MIT Press.
Halle, Morris. 1973. Prolegomena to a theory of word-formation. Linguistic Inquiry 4, 3-16.
Halle, Morris, and Alec Marantz. 1993. Distributed Morphology and the pieces of inflection. In Kenneth Hale and Samuel Jay Keyser, eds., The view from Building 20, 111-176. Cambridge, Mass.: MIT Press.
Halpern, Aaron. 1995. On the placement and morphology of clitics. Stanford, Calif.: CSLI Publications.
Hankamer, Jorge. 1989. Morphological parsing and the lexicon. In William Marslen-Wilson, ed., Lexical representation and process, 392-408. Cambridge, Mass.: MIT Press.
Heim, Irene. 1982. The semantics of definite and indefinite noun phrases. Doctoral dissertation, University of Massachusetts, Amherst.
Henderson, Leslie. 1989. On mental representation of morphology and its diagnosis by measures of visual access speed. In William Marslen-Wilson, ed., Lexical representation and process, 357-391. Cambridge, Mass.: MIT Press.
Herskovits, Annette. 1986. Language and spatial cognition. Cambridge: Cambridge University Press.
Hinrichs, Erhard. 1985. A compositional semantics for Aktionsarten and NP reference in English. Doctoral dissertation, Ohio State University.
Hirst, Daniel. 1993. Detaching intonational phrases from syntactic structure. Linguistic Inquiry 24, 781-788.
Hofstadter, Douglas R. 1979. Gödel, Escher, Bach: An eternal golden braid. New York: Basic Books.
Huang, C.-T. James. 1982. Logical relations in Chinese and the theory of grammar. Doctoral dissertation, MIT.
Inkelas, Sharon, and Draga Zec, eds. 1990. The phonology-syntax connection. Chicago: University of Chicago Press.
Jackendoff, Ray. 1972. Semantic interpretation in generative grammar. Cambridge, Mass.: MIT Press.
Jackendoff, Ray. 1974. A Deep Structure projection rule. Linguistic Inquiry 5, 481-506.
Jackendoff, Ray. 1975. Morphological and semantic regularities in the lexicon. Language 51, 639-671.
Jackendoff, Ray. 1976. Toward an explanatory semantic representation. Linguistic Inquiry 7, 89-150.
Jackendoff, Ray. 1983. Semantics and cognition. Cambridge, Mass.: MIT Press.
Jackendoff, Ray. 1987a. Consciousness and the computational mind. Cambridge, Mass.: MIT Press.
Jackendoff, Ray. 1987b. On beyond zebra: The relation of linguistic and visual information. Cognition 26, 89-114.
Jackendoff, Ray. 1990. Semantic structures. Cambridge, Mass.: MIT Press.
Jackendoff, Ray. 1991. Parts and boundaries. Cognition 41, 9-45.
Jackendoff, Ray. 1992a. Languages of the mind. Cambridge, Mass.: MIT Press.
Jackendoff, Ray. 1992b. Mme. Tussaud meets the binding theory. Natural Language & Linguistic Theory 10, 1-31.
Jackendoff, Ray. 1993a. Patterns in the mind. London: Harvester Wheatsheaf; New York: Basic Books.
Jackendoff, Ray. 1993b. The role of conceptual structure in argument selection: A reply to Emonds. Natural Language & Linguistic Theory 11, 279-312.
Jackendoff, Ray. 1995. The boundaries of the lexicon. In Everaert et al. 1995, 133-166.
Jackendoff, Ray. 1996a. The architecture of the linguistic-spatial interface. In Paul Bloom, Mary A. Peterson, Lynn Nadel, and Merrill F. Garrett, eds., Language and space, 1-30. Cambridge, Mass.: MIT Press.
Jackendoff, Ray. 1996b. Conceptual semantics and cognitive semantics. Cognitive Linguistics 7, 93-129.
Jackendoff, Ray. 1996c. The conceptual structure of intending and volitional action. In Hector Campos and Paula Kempchinsky, eds., Evolution and revolution in linguistic theory: Studies in honor of Carlos P. Otero, 198-227. Washington, D.C.: Georgetown University Press.
Jackendoff, Ray. 1996d. How language helps us think. Pragmatics and Cognition 4, 1-24.
Jackendoff, Ray. 1996e. The proper treatment of measuring out, telicity, and perhaps even quantification in English. Natural Language & Linguistic Theory 14, 305-354.
Jackendoff, Ray, and David Aaron. 1991. Review of George Lakoff and Mark Turner, More than cool reason. Language 67, 320-338.
Jaeger, J., A. Lockwood, D. Kemmerer, R. Van Valin, B. Murphy, and H. Khalak. 1996. A positron emission tomographic study of regular and irregular verb morphology in English. Language 72.
Joshi, Aravind. 1987. An introduction to tree-adjoining grammars. In Alexis Manaster-Ramer, ed., Mathematics of language, 87-114. Amsterdam: John Benjamins.
Julesz, Bela. 1971. Foundations of cyclopean perception. Chicago: University of Chicago Press.
Kamp, Hans, and U. Reyle. 1993. From discourse to logic. Dordrecht: Kluwer.
Katz, Jerrold. 1966. The philosophy of language. New York: Harper & Row.
Katz, Jerrold, and Paul M. Postal. 1964. An integrated theory of linguistic descriptions. Cambridge, Mass.: MIT Press.
Kayne, Richard. 1994. The antisymmetry of syntax. Cambridge, Mass.: MIT Press.
Kim, Soowon. 1991. Chain scope and quantification structure. Doctoral dissertation, Brandeis University.
Kiparsky, Paul. 1982. From cyclic phonology to lexical phonology. In Harry van der Hulst and Neil Smith, eds., The structure of phonological representations. Dordrecht: Foris.
Koenig, Jean-Pierre, and Daniel Jurafsky. 1994. Type underspecification and on-line type construction in the lexicon. In Proceedings of the West Coast Conference on Formal Linguistics. Stanford, Calif.: CSLI Publications. Distributed by Cambridge University Press.
Koffka, Kurt. 1935. Principles of Gestalt psychology. New York: Harcourt, Brace & World.
Köhler, Wolfgang. 1927. The mentality of apes. London: Routledge & Kegan Paul.
Koster, Jan. 1978. Locality principles in syntax. Dordrecht: Foris.
Koster, Jan. 1987. Domains and dynasties. Dordrecht: Foris.
Krifka, Manfred. 1992. Thematic relations as links between nominal reference and temporal constitution. In Ivan Sag and Anna Szabolcsi, eds., Lexical matters, 29-54. Stanford, Calif.: CSLI Publications.
Kroch, Anthony. 1987. Unbounded dependencies and subjacency in a tree adjoining grammar. In Alexis Manaster-Ramer, ed., Mathematics of language, 143-172. Amsterdam: John Benjamins.
Kuno, Susumu. 1987. Functional syntax. Chicago: University of Chicago Press.
Lackner, James. 1985. Human sensory-motor adaptation to the terrestrial force environment. In David J. Ingle, Marc Jeannerod, and David N. Lee, eds., Brain mechanisms and spatial vision. Dordrecht: Martinus Nijhoff.
Lakoff, George. 1987. Women, fire, and dangerous things. Chicago: University of Chicago Press.
Lakoff, George. 1990. The invariance hypothesis: Is abstract reasoning based on image schemas? Cognitive Linguistics 1, 39-74.
Lakoff, George, and Mark Johnson. 1980. Metaphors we live by. Chicago: University of Chicago Press.
Lamb, Sidney. 1966. Outline of Stratificational Grammar. Washington, D.C.: Georgetown University Press.
Landau, Barbara, and Lila Gleitman. 1985. Language and experience: Evidence from the blind child. Cambridge, Mass.: Harvard University Press.
Landau, Barbara, and Ray Jackendoff. 1993. "What" and "where" in spatial language and spatial cognition. Behavioral and Brain Sciences 16, 217-238.
Langacker, Ronald. 1987. Foundations of cognitive grammar. Vol. 1. Stanford, Calif.: Stanford University Press.
Lappin, Shalom. 1991. Concepts of logical form in linguistics and philosophy. In Asa Kasher, ed., The Chomskyan turn, 300-333. Oxford: Blackwell.
Lees, Robert B. 1960. The grammar of English nominalizations. The Hague: Mouton.
Legendre, Geraldine P., Paul Smolensky, and Colin Wilson. 1996. When is less more? Faithfulness and minimal links in wh-chains. In Proceedings of the Workshop on Optimality in Syntax. Cambridge, Mass.: MIT Press and MIT Working Papers in Linguistics.
Lerdahl, Fred, and Ray Jackendoff. 1983. A generative theory of tonal music. Cambridge, Mass.: MIT Press.
Levelt, Willem. 1989. Speaking: From intention to articulation. Cambridge, Mass.: MIT Press.
Levelt, Willem, and Linda Wheeldon. 1994. Do speakers have access to a mental syllabary? Cognition 50, 239-269.
Levinson, Stephen. 1996. Frames of reference and Molyneux's problem: Cross-linguistic evidence. In Paul Bloom, Mary A. Peterson, Lynn Nadel, and Merrill F. Garrett, eds., Language and space. Cambridge, Mass.: MIT Press.
Lewis, David. 1972. General semantics. In Donald Davidson and Gilbert Harman, eds., Semantics of natural language, 169-218. Dordrecht: Reidel.
Lewis, Geoffrey. 1967. Turkish grammar. Oxford: Oxford University Press.
Liberman, Mark, and Alan Prince. 1977. On stress and linguistic rhythm. Linguistic Inquiry 8, 249-336.
Lieber, Rochelle. 1992. Deconstructing morphology: Word formation in syntactic theory. Chicago: University of Chicago Press.
Macnamara, John. 1978. How do we talk about what we see? Ms., McGill University.
Macnamara, John. 1982. Names for things. Cambridge, Mass.: MIT Press.
Marantz, Alec. 1984. On the nature of grammatical relations. Cambridge, Mass.: MIT Press.
Marantz, Alec. 1992. The way-construction and the semantics of direct arguments in English: A reply to Jackendoff. In Tim Stowell and Eric Wehrli, eds., Syntax and semantics. Vol. 26, Syntax and the lexicon, 179-188. San Diego, Calif.: Academic Press.
Marin, O., E. Saffran, and M. Schwartz. 1976. Dissociations of language in aphasia: Implications for normal function. In S. R. Harnad, H. S. Steklis, and J. Lancaster, eds., Origins and evolution of language and speech. Annals of the New York Academy of Sciences 280, 868-884.
Marr, David. 1982. Vision. San Francisco: Freeman.
Marslen-Wilson, William, Lorraine K. Tyler, Rachelle Waksler, and Lianne Older. 1994. Morphology and meaning in the English mental lexicon. Psychological Review 101, 3-33.
May, Robert. 1985. Logical Form: Its structure and derivation. Cambridge, Mass.: MIT Press.
McCarthy, John. 1982. Prosodic structure and expletive infixation. Language 58, 574-590.
McCarthy, John, and Alan Prince. 1986. Prosodic morphology. Ms., University of Massachusetts, Amherst, and Brandeis University.
McCawley, James D. 1968a. Concerning the base component of a transformational grammar. Foundations of Language 4, 243-269.
McCawley, James D. 1968b. Lexical insertion in a transformational grammar without Deep Structure. In B. Darden, C.-J. N. Bailey, and A. Davison, eds., Papers from the Fourth Regional Meeting, Chicago Linguistic Society. Chicago Linguistic Society, University of Chicago.
McGurk, H., and J. MacDonald. 1976. Hearing lips and seeing voices. Nature 264, 746-748.
Mel'cuk, Igor. 1995. Phrasemes in language and phraseology in linguistics. In Everaert et al. 1995, 167-232.
Minsky, Marvin. 1986. The society of mind. New York: Simon & Schuster.
Morgan, Jerry. 1971. On arguing about semantics. Papers in Linguistics 1, 49-70.
Newmeyer, Fritz. 1988. Minor movement rules. In Caroline Duncan-Rose and Theo Vennemann, eds., On language: Rhetorica, phonologica, syntactica: A festschrift for Robert P. Stockwell from his friends and colleagues, 402-412. London: Routledge.
Nikanne, Urpo. 1996. Lexical conceptual structure and argument linking. Ms., University of Oslo.
Nunberg, Geoffrey. 1979. The non-uniqueness of semantic solutions: Polysemy. Linguistics and Philosophy 3, 143-184.
Nunberg, Geoffrey, Ivan Sag, and Thomas Wasow. 1994. Idioms. Language 70, 491-538.
Oehrle, Richard T. 1988. Multi-dimensional compositional functions as a basis for grammatical analysis. In Richard T. Oehrle, Emmon Bach, and Deirdre Wheeler, eds., Categorial grammars and natural language structures, 349-390. Dordrecht: Reidel.
Ostler, Nick. 1979. Case linking: A theory of case and verb diathesis applied to Classical Sanskrit. Doctoral dissertation, MIT.
Otero, Carlos. 1976. The dictionary in a generative grammar. Ms., UCLA.
Otero, Carlos. 1983. Towards a model of paradigmatic grammar. Quaderni di Semantica 4, 134-144.
Partee, Barbara H. 1995. The development of formal semantics in linguistic theory. In Shalom Lappin, ed., The handbook of contemporary semantic theory, 11-38. Oxford: Blackwell.
Perlmutter, David, and Paul M. Postal. 1984. The 1-Advancement Exclusiveness Law. In David Perlmutter and Carol Rosen, eds., Studies in Relational Grammar 2, 81-125. Chicago: University of Chicago Press.
Perlmutter, David, and John Robert Ross. 1970. Relative clauses with split antecedents. Linguistic Inquiry 1, 350.
Pinker, Steven. 1989. Learnability and cognition: The acquisition of argument structure. Cambridge, Mass.: MIT Press.
Pinker, Steven. 1992. Review of Bickerton 1990. Language 68, 375-382.
Pinker, Steven, and Paul Bloom. 1990. Natural language and natural selection. Behavioral and Brain Sciences 13, 707-726.
Pinker, Steven, and Alan Prince. 1988. On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition 28, 73-193.
Pinker, Steven, and Alan Prince. 1991. Regular and irregular morphology and the psychological status of rules of grammar. In Laurel A. Sutton, Christopher Johnson, and Ruth Shields, eds., Proceedings of the Seventeenth Annual Meeting of the Berkeley Linguistics Society, 230-251. Berkeley Linguistics Society, University of California, Berkeley.
Pollard, Carl, and Ivan Sag. 1987. Information-based syntax and semantics. Stanford, Calif.: CSLI Publications.
Pollard, Carl, and Ivan Sag. 1994. Head-Driven Phrase Structure Grammar. Chicago: University of Chicago Press.
Prasada, S., S. Pinker, and W. Snyder. 1990. Some evidence that irregular forms are retrieved from memory but regular forms are rule-generated. Paper presented to the Annual Meeting of the Psychonomic Society, New Orleans.
Prince, Alan, and Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Ms., Rutgers University and University of Colorado.
Prince, Ellen F. 1995. On the limits of syntax, with reference to Topicalization and Left-Dislocation. To appear in P. Culicover and L. McNally, eds., The limits of syntax. San Diego, Calif.: Academic Press.
Pullum, Geoffrey K. 1991. The great Eskimo vocabulary hoax. Chicago: University of Chicago Press.
Pustejovsky, James. 1991a. The generative lexicon. Computational Linguistics 17, 409-441.
Pustejovsky, James. 1991b. The syntax of event structure. Cognition 41, 47-82.
Pustejovsky, James. 1995. The generative lexicon. Cambridge, Mass.: MIT Press.
Quang Phuc Dong. 1969. Phrases anglaises sans sujet grammatical apparent. Langages 14, 4-51.
Randall, Janet. 1988. Inheritance. In Wendy Wilkins, ed., Syntax and semantics. Vol. 21, Thematic relations, 129-146. San Diego, Calif.: Academic Press.
Rosen, Carol. 1984. The interface between semantic roles and initial grammatical relations. In David Perlmutter and Carol Rosen, eds., Studies in Relational Grammar 2, 38-77. Chicago: University of Chicago Press.
Rosenzweig, Mark R., and Arnold L. Leiman. 1989. Physiological psychology. 2nd ed. New York: Random House.
Ross, John Robert. 1967. Constraints on variables in syntax. Doctoral dissertation, MIT.
Rothstein, Susan. 1991. LF and the structure of the grammar: Comments. In Asa Kasher, ed., The Chomskyan turn, 385-395. Oxford: Blackwell.
Rumelhart, David, and James McClelland. 1986. On learning the past tenses of English verbs. In James McClelland, David Rumelhart, and the PDP Research Group, Parallel distributed processing: Explorations in the microstructure of cognition. Vol. 2, Psychological and biological models, 216-271. Cambridge, Mass.: MIT Press.
Rumelhart, David, James McClelland, and the PDP Research Group. 1986. Parallel distributed processing: Explorations in the microstructure of cognition. Vol. 1, Foundations. Cambridge, Mass.: MIT Press.
Ruwet, Nicolas. 1991. Syntax and human experience. Chicago: University of Chicago Press.
Sadock, Jerrold M. 1991. Autolexical Syntax. Chicago: University of Chicago Press.
Sag, Ivan, and Carl Pollard. 1991. An integrated theory of complement control. Language 67, 63-113.
Schenk, Andre. 1995. The syntactic behavior of idioms. In Everaert et al. 1995, 253-271.
Schreuder, Robert, and R. Harald Baayen. 1995. Modeling morphological processing. In Feldman 1995, 131-154.
Searle, John R. 1983. Intentionality. Cambridge: Cambridge University Press.
Searle, John R. 1992. The rediscovery of the mind. Cambridge, Mass.: MIT Press.
Selkirk, Elisabeth O. 1982. The syntax of words. Cambridge, Mass.: MIT Press.
Selkirk, Elisabeth O. 1984. Phonology and syntax: The relation between sound and structure. Cambridge, Mass.: MIT Press.
Shieber, Stuart. 1986. An introduction to unification-based approaches to grammar. Stanford, Calif.: CSLI Publications.
Shieber, Stuart, and Yves Schabes. 1991. Generation and synchronous tree-adjoining grammars. Journal of Computational Intelligence 7, 220-228.
Skinner, B. F. 1957. Verbal behavior. New York: Appleton-Century-Crofts.
Slobin, Dan. 1966. Grammatical transformations and sentence comprehension in childhood and adulthood. Journal of Verbal Learning and Verbal Behavior 5, 219-227.
Smith, Neil V., and Ianthi-Maria Tsimpli. 1995. The mind of a savant. Oxford: Blackwell.
Spencer, Andrew. 1991. Morphological theory. Oxford: Blackwell.
Steedman, Mark. 1990. Gapping as constituent coordination. Linguistics and Philosophy 13, 207-263.
Steedman, Mark. 1991. Structure and intonation. Language 67, 260-296.
Stemberger, Joseph Paul, and Brian MacWhinney. 1988. Are inflected forms stored in the lexicon? In Michael Hammond and Michael Noonan, eds., Theoretical morphology: Approaches in modern linguistics, 101-116. San Diego, Calif.: Academic Press.
Swinney, David. 1979. Lexical access during sentence comprehension: (Re)consideration of context effects. Journal of Verbal Learning and Verbal Behavior 18, 645-659.
Talmy, Leonard. 1978. The relation of grammar to cognition: A synopsis. In D. Waltz, ed., Theoretical issues in natural language processing 2, 14-24. New York: Association for Computing Machinery.
Tanenhaus, Michael, J. M. Leiman, and Mark Seidenberg. 1979. Evidence for multiple stages in the processing of ambiguous words in syntactic contexts. Journal of Verbal Learning and Verbal Behavior 18, 427-440.
Tenny, Carol. 1987. Grammaticalizing aspect and affectedness. Doctoral dissertation, MIT.
Tenny, Carol. 1992. The aspectual interface hypothesis. In Ivan Sag and Anna Szabolcsi, eds., Lexical matters, 1-28. Stanford, Calif.: CSLI Publications.
Tenny, Carol. 1994. Aspectual roles and the syntax-semantics interface. Dordrecht: Kluwer.
Terrace, Herbert. 1979. Nim. New York: Knopf.
Trueswell, John C., and Michael K. Tanenhaus. 1994. Toward a lexicalist framework for constraint-based syntactic-ambiguity resolution. In Charles Clifton, Lynn Frazier, and Keith Rayner, eds., Perspectives on sentence processing, 155-179. Hillsdale, N.J.: Erlbaum.
Ullman, M., and S. Pinker. 1991. Connectionism versus symbolic rules in language: A case study. Paper presented at the Spring Symposium Series of the American Association for Artificial Intelligence, Stanford.
Ungerleider, Leslie G., and Mortimer Mishkin. 1982. Two cortical visual systems. In David J. Ingle, Melvyn A. Goodale, and Richard J. W. Mansfield, eds., Analysis of visual behavior, 549-586. Cambridge, Mass.: MIT Press.
Vandeloise, Claude. 1991. Spatial prepositions: A case study from French. Chicago: University of Chicago Press.
Van Gestel, Frank. 1995. En bloc insertion. In Everaert et al. 1995, 75-96.
Van Hoek, Karen. 1992. Paths through conceptual structure: Constraints on pronominal anaphora. Doctoral dissertation, University of California, San Diego.
Van Hoek, Karen. 1995. Conceptual reference points: A Cognitive Grammar account of pronominal anaphora constraints. Language 71, 310-340.
Van Valin, Robert D. 1994. Extraction restrictions, competing theories, and the argument from the poverty of the stimulus. In Susan D. Lima, Roberta L. Corrigan, and Gregory K. Iverson, eds., The reality of linguistic rules, 243-259. Amsterdam: John Benjamins.
Verkuyl, H. J. 1972. On the compositional nature of the aspects. Dordrecht: Reidel.
Verkuyl, H. J. 1993. A theory of aspectuality: The interaction between temporal and atemporal structure. Cambridge: Cambridge University Press.
Wasow, Thomas. 1977. Transformations and the lexicon. In Peter Culicover, Thomas Wasow, and Adrian Akmajian, eds., Formal syntax, 327-360. New York: Academic Press.
Wasow, Thomas, Geoffrey Nunberg, and Ivan Sag. 1984. Idioms: An interim report. In S. Hattori and K. Inoue, eds., Proceedings of the 13th International Congress of Linguists. The Hague: CIPL.
Weinreich, Uriel. 1969. Problems in the analysis of idioms. In Jaan Puhvel, ed., Substance and structure of language, 23-81. Berkeley and Los Angeles: University of California Press. Reprinted in On semantics (edited by William Labov and Beatrice Weinreich), 208-264. Philadelphia: University of Pennsylvania Press (1980).
Wexler, Kenneth, and Peter Culicover. 1980. Formal principles of language acquisition. Cambridge, Mass.: MIT Press.
Wilkins, Wendy, and Jennie Wakefield. 1995. Brain evolution and neurolinguistic preconditions. Behavioral and Brain Sciences 18, 161-181.
Williams, Edwin. 1981. On the notions "lexically related" and "head of a word." Linguistic Inquiry 12, 245-274.
Williams, Edwin. 1985. PRO and subject of NP. Natural Language & Linguistic Theory 3, 297-315.
Williams, Edwin. 1986. A reassignment of the functions of LF. Linguistic Inquiry 17, 265-299.
Williams, Edwin. 1994. Remarks on lexical knowledge. In Lila Gleitman and Barbara Landau, eds., The acquisition of the lexicon, 7-34. Cambridge, Mass.: MIT Press.
Wittgenstein, Ludwig. 1953. Philosophical investigations. Oxford: Blackwell.
Wrangham, Richard W., W. C. McGrew, Frans B. M. de Waal, and Paul G. Heltne, eds. 1994. Chimpanzee cultures. Cambridge, Mass.: Harvard University Press.
Zaenen, Annie, Joan Maling, and Höskuldur Thráinsson. 1985. Case and grammatical functions: The Icelandic passive. Natural Language & Linguistic Theory 3, 441-484.
Zwicky, Arnold, and Geoffrey K. Pullum. 1983. Phonology in syntax: The Somali optional agreement rule. Natural Language & Linguistic Theory 1, 385-402.
Index
Aaron, David, 237
Abeille, Anne, 170, 236
Acquisition, 95, 128, 217, 218, 228. See also
Learnability
Adjective-noun modification, 62-66
Adverbs, -ly, 132
Affixal lexicon, 120, 230
Agreement, 147-148, 161, 224, 235
Akmajian, Adrian, 75-76, 224
Allomorphy, 139-147, 234, 235
Allport, Alan, 198
Anaphora. See Binding Theory
Anderson, Stephen R., 29, 36, 86, 112, 114,
130, 137, 230, 234
Animal behavior, 17. See also Bats;
Primates
Antecedent-contained deletion, 224-225
Aoun, Joseph, 228
Argument structure, 36, 101-103, 171-173, 220-221, 234
Argument structure alternations, 58-62
Aronoff, Mark, 14, 65, 109, 128, 141-142, 231, 235
Aspect. See Telicity
Attention, 196-202
Auditory perception, 42-43, 181
Auditory-phonetic interface, 21-25
Autolexical Syntax, 39, 111
Autonomy of syntax, 86, 91, 157
Baars, Bernard, 182
Baayen, Harald, 122-123
Bach, Emmon, 2, 4, 221, 222
Baker, Mark, 33, 223
Bats, 192
Beard, Robert, 137, 143, 230, 235
Bellugi, Ursula, 183
Berman, Steve, 77
Berwick, Robert, 8
Bever, Thomas, 7
Bickerton, Derek, 17-18, 95, 179, 185-186, 238, 239
Bierwisch, Manfred, 220
Binding Theory, 50, 54-55, 57, 67-78, 81,
101, 226, 228
Block, Ned, 182, 239
Blocking, morphological, 140-141, 143-147, 163, 230, 235
Bloom, Paul, 17, 217, 218, 223
Bloomfield, Leonard, 14
Boland, Julie E., 106
Bolinger, Dwight, 232
Bouchard, Denis, 103, 220
Bracketing mismatches, 26-27, 29, 36-38, 111-113, 221
Brame, Michael, 100
Bregman, Albert S., 181
Bresnan, Joan, 8, 36, 115, 163, 178, 231
Briscoe, E., 60
Broadbent, Donald E., 198
Brody, Michael, 100-101, 103, 225, 226, 229
Bromberger, Sylvain, 98
Büring, Daniel, 86
Bybee, Joan, 122
Calvin, William, 10
Caramazza, Alfonso, 122
Carrier(-Duncan), Jill, 36, 238
Carroll, Lewis, 207
Carter, Richard, 36
Cartesian linguistics, 18, 217-218
Categorial grammar, 12, 33, 100, 221, 222-223
Cattell, Ray, 70
Cavanagh, Patrick, 107, 199
Chains, 102, 229
Cheney, Dorothy, 185, 239
Chialant, Doriana, 122
Chierchia, Gennaro, 70, 225
Chomsky, Noam, 2-6, 10, 11-16, 21, 24-26, 28, 30, 32, 33, 38, 47-49, 75, 76, 83-86, 96, 98, 99, 101, 115, 116, 133, 154, 158, 159, 167, 179, 186, 217-218, 223, 224, 226, 227, 228, 229, 231, 232, 236, 237
Churchland, Patricia, 10
Clark, Eve and Herbert, 227
Cliches, 155, 156-157, 164, 165
Cliticization, 26
Coarse individuation of lexical items in
syntax, 34, 91, 93
Cocomposition, 49, 61, 206
Coercing function, 52, 58, 60, 61
Coercion, 49, 66, 72, 206
aspectual, 51-53
mass-count, 53-54, 223
Cognitive Grammar, 2, 225
Collocations, 156
Compounds, 110, 126-127, 154, 156, 164-165, 174, 177, 237
Computational system, 4, 19
Computational vs. neural models, 9-10, 217, 222
Conceptual Interface Level (CIL) of syntax,
32,47,81,91-103
Conceptual structure, 31-38, 43-44, 47-48, 70, 73-74, 77, 81, 185, 187, 189-191, 201-202, 219
Connectionism, 9-10, 121
Consciousness, 180-183, 186-193, 196-205
Constraints on movement, 76, 103, 225
Construction Grammar, 12, 90, 100, 133,
171-176
Control (of complement subject), 70-73
Corballis, Michael C., 10
Correspondence rules, 40, 42, 219. See also
Phonology-syntax (PS-SS) interface;
Syntax-conceptual (SS-CS) interface
general form, 23-24
lexical, 89
in processing, 103-107
Cranberry morph, 165, 173
Crick, Francis, 10, 11, 44, 181
Csuri, Piroska, 226
Culham, J. C., 199
Culicover, Peter C., 4, 37, 75, 174, 224, 236
Cutler, Anne, 106
Damasio, Antonio, 239
Dascal, Marcelo, 183, 206
Declerck, Renaat, 35
D(eep)-structure, 100-103, 150, 153, 167, 170, 223, 224, 227, 228
Den Besten, Hans, 86
Dennett, Daniel C., 19-20, 184, 217, 218
Denominal verbs, 115-116, 123-126, 231-232
Derivations, phonological, 98, 134, 137-138
Derivations vs. constraints, 12-13, 24, 35, 40, 44, 229
Dijkstra, Ton, 122
Di Sciullo, Anna-Maria, 86, 110, 115, 117, 157, 229, 230, 231
Discontinuous lexical entries, 159-160, 169
Discourse, 2
Discourse Representation Theory, 225
Discrete infinity of language, 3, 16-17, 28
Distributed Morphology, 13, 90, 99, 143,
226
Do-support, 144, 228, 235
Dor, Daniel, 58
Double object construction, 175-176
Dowty, David, 35, 36, 79, 223, 225
Dyirbal, 220
Edelman, Gerald, 9, 217
Ellipsis, 99. See also VP Ellipsis
Embedding, approximate SS-CS correspondence, 36-38
Emonds, Joseph E., 69, 179, 185, 239
Empty Category Principle, 103
Enriched composition, 49-51, 56, 58, 63, 67, 73, 137-138, 173, 223, 228
Everaert, Martin, 160, 227
Evolution, 10-11, 17-20, 180, 218, 238
Expletive infixation, 119, 132, 136, 233
Extended Standard Theory, 32, 85, 100, 101,
153
Farkas, Donca, 70
Fauconnier, Gilles, 54, 225
Feldman, Laurie Beth, 122
Fiengo, Robert, 75, 86, 224
Fillmore, Charles, 174-175
Fixed expressions, 155-158, 163, 227
Focus, 27, 76-77, 95, 119, 170, 219, 228
Fodor, Jerry A., 7, 41-42, 104, 106, 188, 217, 218, 221, 222, 232, 237, 238
Foley, William A., 36
Foreign phrases, 155
Formalization, degree of, 4
Formal semantics, 2, 32, 49, 225
Franks, Bradley, 224
Fraser, Bruce, 160
Free will, 217-218
French, 175, 231
Full Interpretation (FI), Principle of, 25, 98,
219
Functional categories, 28, 133. See also
Syntactic features
Functional grammar, 225
Index
Garrett, Merrill, 7, 107
Gee, James, 27
Generalized Phrase Structure Grammar
(GPSG), 12, 100
Gender, grammatical, 34, 220
Generative Semantics, 13, 33, 47, 223, 227, 231-232
German past participle circumfixation, 233
Gleitman, Lila, 44, 217
Goldberg, Adele, 171-172, 174-175
Goldsmith, John, 26, 98, 234
Gould, Stephen Jay, 11, 218
Government-Binding Theory, 33, 68, 81, 85, 87, 99, 102, 133, 153, 222, 226, 237
Grimshaw, Jane B., 36, 58-59, 220-221, 225, 235
Groefsema, Marjolein, 223
Grosjean, François, 27
Gross, Maurice, 157
Hale, Kenneth, 33, 231-232
Halle, Morris, 13, 26, 86, 90, 98, 99, 109, 116, 127, 133, 143, 144, 226, 227, 228, 230
Halpern, Aaron, 29
Hankamer, Jorge, 117
Haveman, Alette, 123
Head-Driven Phrase Structure Grammar (HPSG), 12, 37, 90, 100, 116, 132-133, 221, 233, 236
Heavy NP Shift, 219
Heim, Irene, 225
Henderson, Leslie, 123
Herskovits, Annette, 207
Hestvik, Arild, 77
Hinrichs, Erhard, 35, 51
Hirst, Daniel, 27
Hofstadter, Douglas, 182
Huang, C.-T. James, 81
Icelandic, 175
Idioms, 14, 110, 153-173
chunks, movement of, 166-171
Implicit arguments, 69
Impoverished vs. full entry theory, 116,
123-130
Inference, 31, 32-33, 78, 186-187, 207
Inkelas, Sharon, 27
Innateness, 6-7, 217
Intentionality, 218
Interfaces, 11, 89, 192
Interfaces, 21-24. See also Conceptual Interface Level of syntax; Phonetic-motor interface; Phonology-syntax (PS-SS) interface
basic components, 21-24
conceptual-intentional (C-I), 11, 31-36.
(see also Syntax-conceptual (SS-CS)
interface)
in processing, 103-107
Intermediate Level Theory of
Consciousness, 189-192, 195-197
Intonational structure, 26-27, 219
Jackendoff, Beth, 154
Jaeger, Jeri, 9, 122
Japanese, 220
Jernigan, Terry L., 183
Johnson, Mark, 237
Joshi, Aravind, 13, 100
Julesz, Bela, 181
Jurafsky, Daniel, 133
Kamp, Hans, 225
Kanerva, Jonni, 36
Kaplan, Ronald M., 8
Katz, Jerrold J., 63, 223, 225
Kay, Paul, 174-175
Kayne, Richard, 219-220
Keyser, Samuel Jay, 33, 231-232
Kim, Soowon, 225
Kinsbourne, Marcel, 229
Kiparsky, Paul, 234
Koenig, Jean-Pierre, 133
Koffka, Kurt, 222
Köhler, Wolfgang, 184, 238
Korean, 175, 225
Koster, Jan, 77, 86, 100, 103, 226
Krifka, Manfred, 35, 79
Kroch, Anthony, 13, 100
Kuno, Susumu, 68, 225
Kwakwala, 112-113
Lackner, James, 181
Lakoff, George, 2, 6, 54, 220, 237, 238
Lamb, Sidney, 221
Landau, Barbara, 43, 44, 217, 223
Lang, Ewald, 220
Langacker, Ronald, 157, 175
Language of thought, 183, 185, 238
interface with language, 41-42
Lappin, Shalom, 226
Larson, Richard, 236
Late lexical insertion, 13, 86-87, 226, 227
Learnability, 4-7, 34. See also Acquisition
Lees, Robert B., 117
Legendre, Geraldine, 225
Leiman, Arnold L., 11
Leiman, J. M., 104
Lemma, 145-146, 162-163, 226-227, 229, 235
Lerdahl, Fred, 10, 44-45, 184, 218, 222
Levelt, Willem, 106, 107, 108, 123, 145, 226, 229
Levinson, Stephen, 238
Lewis, David, 2
Lewis, Geoffrey, 139
Lewontin, Richard, 218
Lexeme. See Lexical form.
Lexical Assembly, 124-129, 233
Lexical Conceptual Structure (LCS), 48, 65, 68-70, 80-81, 89, 118, 162, 168-169
Lexical form (lexeme), 143, 145-146, 162-163, 226-227
Lexical-Functional Grammar (LFG), 8, 12, 37, 100, 116, 132-133, 221, 228
Lexical insertion, 13-14, 83-91, 100, 109, 116, 153, 158-160, 167, 177, 226, 229
Lexical Integrity Principle, 115, 231
Lexical items without syntax, 94-95, 119, 149, 227. See also Expletive infixation
Lexical licensing, 89-91, 94, 96-97, 131, 133, 139-141, 146, 154, 161-164, 167-170, 177
licensing vs. linking function, 149-150
Lexical Phonological Structure (LPS), 89,
133-141, 148
Lexical Phonology, 234
Lexical (redundancy) rules, 116-118, 124-128, 132-133. See also Semiproductive (morphological and lexical) regularities
Lexical Syntactic Structure (LSS), 89, 162, 168-169, 235
Lexicalist Hypothesis, 115
Lexico-Logical Form, 101
Lexicon, 4, 104, 107-108, 109-130, 200
Liberman, Mark, 26
Lieber, Rochelle, 112, 130, 143, 231, 233
Linear order in phonology and syntax, 28-29, 219-220
Listeme, 110, 119, 120, 131, 230
Logical Form (LF), 25, 32, 47-51, 69, 73, 76-82, 98, 99-103, 153, 223, 224, 226, 228
Long-term memory. See Memory
Macnamara, John, 43, 217
MacWhinney, Brian, 123
Marantz, Alec, 13, 36, 86, 90, 99, 143, 144, 163, 172, 226, 227, 228
Marin, O., 122
Marr, David, 44, 107, 181
Marslen-Wilson, William, 124
Mass-count distinction, 53-54, 86, 223
May, Robert, 47, 75, 224
McCarthy, John, 119, 120, 136
McCawley, James, 12, 13, 33, 54, 227, 232,
237
McClelland, James, 5, 121, 129
McGurk effect, 222
Mchombo, Sam A., 115, 231
Measuring out, 79-81, 225
Mel'cuk, Igor, 156
Memory, 8, 121, 156, 181, 231
Mental organs, 10, 19
Mentalist stance, 2-3, 8
Metaphor, 166, 168, 170, 173, 237
Miller, George A., 4
Minimalist Program, 11, 13, 16, 21, 40, 46, 85, 86, 100, 102, 133, 134, 153, 237
Minsky, Marvin, 19, 197
Mišić Ilić, Biljana, 231
Mishkin, Mortimer, 222
Modularity. See Representational
modularity
Montague Grammar, 33, 223
Morgan, Jerry, 54
Morpheme, 109, 132
Morphology, derivational, 113-115, 119,
120-121. See also Morphophonology;
Morphosyntax
Morphology, inflectional, 113-115, 118, 121,
132-133, 162, 227. See also Morphopho-
nology; Morphosyntax
Morphome, 235
Morphophonology, 110-113, 118, 133-139,
230
Morphosyntax, 110-113, 114, 118, 120, 143-174, 227, 230, 233
Motor control, 10, 21-23, 43, 108, 184. See also Phonetic-motor interface
Music, 10, 16, 44-45, 184, 218, 222
Names (proper), 154, 156-157
Narrow syntax, 25, 31, 185
Navajo, 117, 233
Neuroscience, 8-9, 10-11
Newmeyer, Fritz, 99
Nikanne, Urpo, 221, 227
Novelty, feeling of, 202
Nunberg, Geoffrey, 54, 167, 168, 236, 237
O'Connor, Mary Catherine, 174
Oehrle, Richard, 221, 222
Optimality Theory, 12, 97, 133, 136-138, 219, 222
Ostler, Nick, 36
Otero, Carlos, 86
Out-prefixation, 114
Paradox of Language Acquisition, 5, 11
Parallel algorithms, 15
Parallel architecture. See Tripartite
architecture
Parsing. See Processing
Partee, Barbara, 49
Past tense, English, 116, 120-123, 143-147
Perception of language. See Processing
Perfection of language, 19-20, 186
Perlmutter, David, 36, 37, 166
PET scan, 9, 122
PF component, 99
Phonetic form, 21-26, 97-98, 134, 136, 187, 196, 201-202, 204, 218, 226, 227, 228
Phonetic-motor interface, 11, 21-25, 108
Phonetic perception, 42-43
Phonological Interface Level (PIL) of syntax, 30, 91-100
Phonological underlying form, 98, 133-135,
137-138
Phonology-syntax (PS-SS) interface, 25-30, 39, 89, 91-100, 133-139, 143
Pinker, Steven, 5, 17, 18, 34, 121-122, 129, 186, 217, 218, 220, 230, 232
Plural
Dutch, 122-123
English, 115-116, 118, 131
Pollard, Carl, 70, 72, 129
Polysemy, 53, 54, 59-61, 63
Postal, Paul, 36, 170, 223, 225, 227
Pragmatics, 17, 31, 54-55, 64, 66, 137-138
Prasada, Sandeep, 122
Present tense, English, 147-148
Primates, 17, 184-185, 194, 195-196, 200, 205, 218, 238, 239
Prince, Alan, 5, 26, 97, 120, 121-122, 129, 133, 222, 230, 232
Prince, Ellen, 176
Principles-and-parameters approach, 6
Processing, 7-8, 40, 103-107, 121-123, 226, 229
Production of language. See Processing
Projection Principle, 102
Proprioception, 181, 223
Prosodic constituency, 27. See also
Intonational structure
Protolanguage, 17-18, 95
Pullum, Geoffrey, 27, 238
Pustejovsky, James, 49, 51, 60-62
Qualia structure, 61, 63, 65, 224, 232
Quang Phuc Dong, 227
Quantification, 47, 50, 75, 78-81, 101, 226
Quotations, 155, 156-157
Raising, 37
Randall, Janet, 130, 238
Reconstruction, 75-79, 224
Recovery of Underlying Grammatical Relations (RUGR), 102-103, 169, 229
Redirected aggression, 185, 195-196
Redundancy of language, 14-15, 16, 20, 153
Redundancy rules. See Lexical redundancy
rules
Reference transfer functions, 54-58, 66, 149
Relational Grammar, 36, 221
Relative clause extraposition, 37-38
Representational modularity, 41-46, 83, 87-88, 94, 103-106, 112-113, 157, 177, 180, 192, 205, 221, 222, 223
Resultative construction, 171-173, 238
Reyle, U., 225
Rhyming, 186-187, 218
Right Node Raising, 219
Rochemont, Michael, 37
Rosen, Carol, 36
Rosenzweig, Mark, 11
Ross, John Robert, 37, 38
Rothstein, Susan, 103
Rumelhart, David, 5, 121, 129
Russian, 141-142, 230, 235
Ruwet, Nicolas, 167, 168, 170
S-structure, 73, 78, 81, 87, 99-103, 133, 161, 170, 224, 226, 228, 229
Sadock, Jerrold, 39, 111, 112
Saffran, Eleanor, 122
Sag, Ivan, 70, 72, 86, 129, 167, 168, 236, 237
Schabes, Yves, 39, 96
Schenk, Andre, 156, 166, 170
Schreuder, Robert, 122
Schwartz, Myrna, 122
Searle, John, 9, 182, 197, 217, 218
Seidenberg, Mark, 104
Sejnowski, Terrence, 10
Selkirk, Elisabeth O., 27, 130
Semantic versus conceptual representation,
220
Semiproductive (morphological and lexical) regularities, 115-120, 123-130, 132, 147, 157, 164-165, 174, 231-233
Semitic morphology, 135-136, 233
Serbo-Croatian, 29
Seyfarth, Robert, 185, 239
Shieber, Stuart, 13, 39, 96
Signed languages, 22, 219
Simple (semantic) composition. See Syntactically transparent semantic composition
Skinner, B. F., 5
Slobin, Dan, 7, 122
Sloppy identity, 75-79, 224
Smith, Neil, 184
Smolensky, Paul, 97, 133, 222, 225
Snyder, W., 122
Somatic marker hypothesis, 239
SPE model, 133-138
Spell-Out, 30, 75, 86, 101, 134, 226, 227
Spencer, Andrew, 114, 139, 233
Standard Theory (Aspects), 8, 13, 85, 90, 100, 153, 158, 160, 223
Steedman, Mark, 219
Stemberger, Joseph, 123
Stereotypes, 237
Stratificational Grammar, 221
Stress, 97
Subcategorization, 102
Substitution, 13, 90, 97, 153. See also
Unification
Suppletion, morphological, 143-147
Swahili, 233
Swinney, David, 104
Syllabary, 108
Syllable structure, 26, 97, 230
Syntactic features, 86, 90, 91, 98
Syntactic Interface Level of conceptual
structure, 32
Syntactic Interface Level (SIL) of phonology, 30, 134-141, 146, 234
Syntactic word, 120
Syntactically transparent semantic composition, 48-50, 54, 55, 65, 74, 137-138, 223
Syntactocentrism, 15-19, 39, 46, 86, 217
Syntax-conceptual (SS-CS) correspondence
rules, 32, 89, 220, 225
Syntax-conceptual (SS-CS) interface, 32, 39,
47-82, 89, 138-139, 143, 220, 225. See
also Conceptual Interface Level of syntax;
Enriched composition
Talmy, Leonard, 51
Tanenhaus, Michael, 104, 106
Telicity, 35, 51-53, 79-81, 220, 225
Tenny, Carol, 35, 79, 220, 225
Terrace, Herbert, 17
Thematic (theta-) roles, 34, 68-73, 101-103, 168, 170, 185, 228, 232
Theme vowel, 148-149
Titles, 155, 156
Topic, 169, 176
Topicalization, 95
Trace, 101-103, 225, 228
Tree-Adjoining Grammar, 13, 39, 90, 100, 236
Tripartite architecture, 38-41, 88, 100, 131-
132, 149
Trueswell, John, 106
Tsimpli, Ianthi-Maria, 184
Turkish, 117, 118, 139-141
vowel harmony, 135
Ullman, M., 122
Ungerleider, Leslie, 222
Unification, 13, 90, 96-97, 139, 154, 161, 169, 230
"soft", 136-138
Universal Alignment Hypothesis, 36
Universal Grammar, 2-7, 11, 40
Universal Packager/Grinder, 53
UTAH, 33, 35, 36, 223, 232
Valuation of percepts, 202-205
Van Casteren, Maarten, 122
Vandeloise, Claude, 207
Van Gestel, Frank, 167, 236
Van Hoek, Karen, 68, 225
Van Valin, Robert D., 36, 225
Verb-particle construction, 158-160
Verkuyl, Henk, 35, 51, 79, 225
Virtual lexicon, 117
Vision, 10, 16, 43-44, 107, 181, 189, 190-191, 222
redundancy of, 20
Visual imagery, 188, 196, 203
Visual vocabulary, 107
VP Ellipsis, 75-79, 224
Wakefield, Jennie, 218
Wang, Paul P., 183
Wasow, Thomas, 116, 167, 168, 236, 237
Way-construction, 172-173
Weinberg, Amy, 8
Weinreich, Uriel, 157, 160, 236
Wexler, Kenneth, 4
Wh-constructions, 81-82, 225, 226
Wheeldon, Linda, 108
Whorfian effects, 238
Wilkins, Wendy, 218
Williams, Edwin, 70, 86, 110, 115, 117, 130, 157, 174, 226, 229, 231, 236
Williams syndrome, 183
Wittgenstein, Ludwig, 188, 207
Working memory. See Memory
World knowledge, 31, 61-62, 64
Wrangham, Richard, 194
X-Bar Theory, 28
Yiddish, 176
Zec, Draga, 27
Zero inflection, 147-150
Zwicky, Arnold, 27

Series Foreword are pleased to present this monograph as the twenty-eighth in the These monographs will present new and original research beyond the scope of the article. the Linguistic Inquiry Mono­ graph series is now available on the much wider scale. The editors wish to thank the readers for their support and welcome suggestions about future directions the series might take. and we hope they will benefit our field by bringing to it perspectives that will stimulate further research and insight. This change is due to the great interest engendered by the series and the needs of a growing readership. Samuel for Jay Keyser the Editorial Board . Originally published in limited edition. We series Linguistic Inquiry Monographs.


Preface

"But it comes time to stop and admire the view before pushing on . . ." With this remark, intended mainly as a rhetorical flourish, I ended Semantic Structures, my 1990 book on conceptual structure and its relation to syntax. The present study is in large part a consequence of taking my own advice: when I stepped back to admire the view, I sensed that not all was well across the broader landscape. This book is therefore an attempt to piece together various fragments of linguistic theory into a more cohesive whole, and to identify and reexamine a number of standard assumptions of linguistic theory that have contributed to the lack of fit.

The architecture developed here already appeared in its essentials in my 1987 book Consciousness and the Computational Mind, where I situated the language capacity in a more general theory of mental representations, connected this view with processing, and stated the version of the modularity hypothesis here called Representational Modularity. At that time I assumed that the architecture could be connected with just about any theory of syntax, and left it at that. However, two particular issues led to the realization that my assumption was too optimistic.

The first concerns the nature of lexical insertion. In 1991, Joe Emonds published a critique of my theory of conceptual structure, in which it seemed to me that much of the disagreement turned on differing presuppositions about how lexical items enter into syntactic structure. Consequently, in replying to Emonds, I found it necessary to unearth these presuppositions and decide what I thought lexical insertion is really like. Around the same time the very same presuppositions surfaced as crucial in a number of intense discussions with Daniel Büring and Katharina Hartmann. A fortuitous invitation to the 2nd Tilburg Idioms Conference gave me the opportunity to expand these ideas and work out their implications for idioms, coincidentally bringing me back to questions of

" Morphological and Semantic Regu­ larities in the Lexicon. for reasons of length and readability. hence the present monograph. I have not gone into a lot of technical detail. my in­ tent is to establish reasonable boundary conditions on the architecture and to work out enough consequences to see the fruitfulness of the approach. sometimes without contact with each other.xiv Preface lexical content that I had thought about in connection with my 1 975 paper on lexical redundancy rules. now appears in practically every approach to generative grammar outside Chomsky's immediate circle.1 970s and continues into the syntax-phonology relation and the syntax-semantics relation in a wide range of approaches. is not so much the technical devices as the attempt to take a larger perspective than usual on grammatical struc­ ture and to fit all of these innovations together. The idea adopted here of multiple. which I have adopted here. Bringing these concerns together with those of lexical insertion led to a 1 994 paper en­ titled " Lexical Insertion in a Post-Minimalist Theory of Grammar. from linguists and cognitive scientists of all stripes. picking and choosing variations that best suit the whole. My hope is to encourage those who know much more than I about millions of different details to explore the possibilities of this architecture for their own concerns. I found myself agreeing entirely with Chomsky's goals. . Rather. similarly." The second major impetus behind this book was the appearance of Chomsky's Minimalist Program. most non-Chomskian generative approaches have aban­ doned transformational derivations (an issue about which I am agnostic here). The idea that a grammar uses unification rather than substitution as its major operation. Many of the ideas in this study have been floating around in various subcommunities of linguistics. whose goal is to determine how much of the structure of language can be deduced from the need for language to obey boundary conditions on meaning and pronunciation. I think. What is original here. but differing radically in what I considered an appropriate exploration and realization. mostly enthusiastic. the paper grew beyond a size appropriate for a journal. In fact. As a consequence. and I received (what was for me at least) a flood of comments. coconstraining grammatical structures appeared first in autosegmental phonology of the mid." This paper was circulated widely.

Acknowledgments

I must express my gratitude to those people who offered comments and materials from which the present work developed, including Steve Anderson, Mark Aronoff, Manfred Bierwisch, Paul Bloom, Daniel Büring, Patrick Cavanagh, Peter Culicover, Marcelo Dascal, Hans den Besten, Dan Dennett, Martin Everaert, Bob Freidin, Adele Goldberg, Georgia Green, Jane Grimshaw, Ken Hale, Morris Halle, Judi Heller, Paul Kay, Paul Kiparsky, Ron Langacker, Ludmila Lescheva, Pim Levelt, Joan Maling, Alec Marantz, Fritz Newmeyer, Carlos Otero, Janet Randall, Jerry Sadock, Ivan Sag, Andre Schenk, Philippe Schlenker, Lindsay Whaley, Edwin Williams, Edgar Zurif, and various anonymous referees. Clarifying correspondence with Noam Chomsky has been extremely helpful on various basic questions. Pim Levelt encouraged me to mention psycholinguistic issues sufficiently prominently; Dan Dennett and Marcel Kinsbourne, to include neuropsychological and evolutionary viewpoints.

A seminar at Brandeis University in the fall of 1994 was of great help in solidifying the overall point of view and developing the material in chapters 5 and 6; its members included Richard Benton, Piroska Csuri, Henrietta Hung, Maria Piñango, and Ida Toivonen. Katharina Hartmann read an early draft of most of the book and suggested some important structural changes. Comments on the penultimate draft from Robert Beard, Daniel Büring, Yehuda Falk, Urpo Nikanne, Ida Toivonen, and Moira Yip were instrumental in tuning it up into its present form. Anne Mark's editing, as ever, smoothed out much stylistic lumpiness and lent the text a touch of class. Beth Jackendoff's energy and enthusiasm in collecting the Wheel of Fortune corpus was an important spur to the development of chapter 7.

My first book, Semantic Interpretation in Generative Grammar, acknowledged my debt to a number of individuals in reaction to whose

work my own had developed. I am pleased to be able to thank again two of the central figures of that long-ago Generative Semantics tradition, Jim McCawley and Paul Postal, for their many insightful comments on aspects of the work presented here. It is nice to know that collegiality can be cultivated despite major theoretical disagreement.

I also want to thank three people with whom I have been collaborating over the past few years, on projects that play important roles in the present study. Work with Barbara Landau has deeply influenced my views on the relationship between language and spatial cognition, and consequently my views on interfaces and Representational Modularity (chapter 2). Peter Culicover and I have been working together on arguments that binding is a relationship stated over conceptual structure, not syntax; I draw freely on this work in chapter 3. James Pustejovsky's work on the Generative Lexicon, as well as unpublished work we have done together (though not yet molded into a book), is a major component in the discussion of semantic composition (chapter 3) and lexical redundancy (chapter 5).

It is a pleasure to acknowledge financial as well as intellectual support. I am grateful to the John Simon Guggenheim Foundation for a fellowship in 1993-1994 during which several parts of this study were conceived. As usual, this research has also been supported in part by National Science Foundation grant IRI 92-13849 to Brandeis University.

Finally, as always, Amy and Beth are the foundation of my existence.

Chapter 1
Questions, Goals, Assumptions

Much research in linguistics during the past three decades has taken place in the context of goals, assumptions, and methodological biases laid down in the 1960s. Over time, certain of these have been changed within various traditions of research, but to my knowledge there has been little thorough examination of the context of the entire theory. The problem of keeping the larger context in mind has been exacerbated by the explosion of research, which, although it is a testament to the flourishing state of the field, makes it difficult for those in one branch or theoretical framework to relate to work in another.

The present study is an attempt to renovate the foundations of linguistic theory. One of my methodological goals in the present study is to keep the arguments as framework-free as possible-to see what conditions make the most sense no matter what machinery one chooses for writing grammars. My hope is that such an approach will put us in a better position to evaluate the degree to which different frameworks reflect similar concerns, and to see what is essential and what is accidental in each framework's way of going about formulating linguistic insights. In particular, I advocate that the theory be formulated in such a way as to promote the possibilities for integration.

This first chapter articulates some of the options available for pursuing linguistic investigation and integrating it with the other cognitive sciences. Chapter 2 lays out the basic architecture and how it relates to the architecture of the mind more generally. The remaining chapters work out some consequences of these boundary conditions for linguistic theory, often rather sketchily, but in enough detail to see which options for further elaboration are promising and which are not.

1.1 Universal Grammar

The arguments for Universal Grammar have by now become almost a mantra, a sort of preliminary ritual to be performed before plunging into the technical detail at hand. Yet these arguments are the reason for the existence of generative grammar, and more generally why linguistics concerns people who have no interest in the arcana of syllable weight or exceptional case marking. They are also the reason why linguistics belongs in the cognitive sciences, and therefore the reason why most of today's linguists are in the profession rather than in computer science, literature, or car repair. These arguments are due, of course, to Chomsky's work of the late 1950s and early 1960s. Far more than anyone else, Chomsky is responsible for articulating an overall vision of linguistic inquiry and its place in larger intellectual traditions. By taking Universal Grammar as my starting point, I intend to reaffirm that, whatever differences surface as we go on, the work presented here is down to its deepest core a part of the Chomskian tradition.

1.1.1 The Mentalist Stance
The basic stance of generative linguistics is that we are studying "the nature of language," not as some sort of abstract phenomenon or social artifact, but as the way a human being understands and uses language. In other words, we are interested ultimately in the manner in which language ability is embodied in the human brain. Chomsky makes this distinction nowadays by saying we are studying "internalized" language (I-language) rather than "externalized" language (E-language).

Generative grammar is not the only theory of language adopting this stance. The tradition of Cognitive Grammar adopts it as well; Lakoff (1990), for instance, calls it the "cognitive commitment." On the other hand, a great deal of work in formal semantics does not stem from this assumption; following Frege and Lewis (1972), it explicitly disavows psychological concerns. Bach (1989), for instance, asserts Chomsky's major insight to be that language is a formal system-disregarding what I take to be the still more basic insight that language is a psychological phenomenon.

What about the abstract and social aspects of language? One can maintain a mentalist stance without simply dismissing them, as Chomsky sometimes seems to. It might be that there are purely abstract properties that any system must have in order to serve the expressive

purposes that language serves, and there might be properties that language has because of the social context in which it is embedded. The mentalist stance would say, though, that we eventually need to investigate how such properties are spelled out in the brains of language users, so that people can use language.

Generative grammar for the most part has ignored such aspects of language, venturing into them only to the extent that they are useful tools for examining intrasentential phenomena such as anaphora, topic, and focus. It then becomes a matter of where you want to place your bets methodologically: life is short; you have to decide what to spend your time studying. The bet made by generative linguistics is that there are some important properties of human language that can be effectively studied without taking account of social factors.

Similar remarks pertain to those aspects of language that go beyond the scale of the single sentence to discourse and narrative. I am sure that the construction of discourse and narrative involves a cognitive competence that must interact to some degree with the competence for constructing and comprehending individual sentences. My assumption, perhaps unwarranted, is that the two competences can be treated as relatively independent. Again, it is a matter of where you choose to place your methodological bets.

Chomsky consistently speaks of I-language as the speaker's knowledge of language, or the speaker's linguistic competence, defending this terminology against various alternatives. I would rather not make a fuss over the terminology; ordinary language basically doesn't provide us with terms sufficiently differentiated for theoretical purposes. I'll use Chomsky's terms for convenience; where choice of terminology makes a difference, I'll try to be explicit.

1.1.2 The Notion of Mental Grammar
The phenomenon that motivated Chomsky's Syntactic Structures was the unlimited possibility of expression in human language-what Chomsky now calls the "discrete infinity" of language. In order for speakers of a language to create and understand sentences they have never heard before, there must be a way to combine some finite number of memorized units-the words or morphemes of the language-into phrases and sentences of arbitrary length. The only way this is possible is for the speaker's knowledge of the language to include a set of principles of combination that determine which combinations are well formed and what they mean. Such principles are a conceptually necessary part of a theory of language.

The finite set of memorized units is traditionally called the lexicon. The set of principles of combination has been traditionally called the grammar (or better, mental grammar) of the language; in recent work Chomsky has called this set the computational system. Alternatively, the term grammar has been applied more broadly to the entire I-language, including both lexicon and computational system. Given that many lexical items have internal morphological structure of interest, and that morphology has traditionally been called part of grammar, I will tend to use the term mental grammar in this broader sense. How the lexicon and its grammatical principles are related to the extralexical (or phrasal) grammatical principles will be one of the topics of the present study.

Now we come to Bach's point. The major technical innovation of early generative grammar was to state the combinatorial properties of language in terms of a formal system. This confers many advantages. At the deepest level, formalization permits one to use mathematical techniques to study the consequences of one's hypotheses, for example the expressive power (strong or weak generative capacity) of alternative hypothesized combinatorial systems (e.g., Chomsky 1957; Chomsky and Miller 1963) or the learnability of such systems (e.g., Wexler and Culicover 1980). At a more methodological level, formalization permits one to be more abstract, rigorous, and compact in stating and examining one's claims and assumptions. And a formalization uncovers consequences, good or bad, that one might not otherwise have noticed.

But formalization is not an unmitigated blessing. As Chomsky stressed in a much-quoted passage from the preface to Syntactic Structures, an excessive preoccupation with formal technology can overwhelm the search for genuine insight into language. Moreover, a theory's choice of formalism can set up sociological barriers to communication with researchers in other frameworks. For these reasons, I personally find the proper formalization of a theory a delicate balance between rigor and lucidity: enough to spell out carefully what the theory claims, but not too much to become forbidding. In my experience, it's one of those methodological and rhetorical matters that's strictly speaking not part of one's theory, but only part of how one states it.
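By way of illustration only (this sketch is mine, not drawn from the text; the rules and vocabulary are invented), a toy formal system shows how a finite lexicon plus recursive principles of combination yields "discrete infinity": finitely many rules, unboundedly many sentences.

```python
# A toy formal system: three rule schemata and a tiny lexicon suffice
# for infinitely many sentences, because VP can recursively embed S.
import random

rules = {
    "S":  [["NP", "VP"]],
    "NP": [["dogs"], ["cats"]],
    "VP": [["sleep"], ["think", "that", "S"]],   # recursion on S
}

def generate(symbol="S"):
    """Expand a symbol by a randomly chosen rule until only words remain."""
    if symbol not in rules:                      # a word: stop expanding
        return [symbol]
    expansion = random.choice(rules[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))   # e.g. "cats think that dogs sleep"
```

Nothing in the sketch is specific to syntax; any component stated this way characterizes an infinite set with finite means.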

1.1.3 Learnability and Universal Grammar
Chomsky's next question is this: if linguistic knowledge consists of a mental grammar, how does the mental grammar get into the speaker's mind? Clearly a certain amount of environmental input is necessary, since children acquire mental grammars appropriate to their linguistic communities. However, as linguistic theory has demonstrated, the combinatorial principles of mental grammar cannot be directly perceived in the environmental input: they must be generalizations constructed (unconsciously) in response to perception of the input. Therefore the language learner must come to the task of acquisition equipped with a capacity to construct I-linguistic generalizations on the basis of E-linguistic input.

How do we characterize this capacity? The standard lore outside the linguistics community has it that this capacity is simply general-purpose intelligence of a very simple sort. This lore is remarkably persuasive and persistent, and over and over it finds its way into psychological theories of language. Chomsky's (1959) response to Skinner (1957) and Pinker and Prince's (1988) to Rumelhart and McClelland (1986), nearly thirty years apart, are detailed attempts to dispel it. I like to put the problem as the "Paradox of Language Acquisition": If general-purpose intelligence were sufficient to extract the principles of mental grammar, linguists (or psychologists or computer scientists), at least some of whom have more than adequate general intelligence, would have discovered the principles long ago. The fact that we are all still searching and arguing, while every normal child manages to extract the principles unaided, suggests that the normal child is using something other than general-purpose intelligence. Following standard practice in linguistics, let's call this extra something Universal Grammar (UG).

What does UG consist of? Acquiring a mental grammar requires (1) a way to generate a search space of candidate mental grammars and (2) a way to determine which candidate best corresponds to the environmental input (i.e., a learning procedure). In principle, UG could involve either a specified, constrained search space or a specialized, enriched learning procedure, or both. Standard practice in linguistics has usually presumed that UG creates the search space: the complexities of language are such that general-purpose problem solving would seem not enough. Without considerable initial delimitation, it is unlikely that any general-purpose learning procedure, no matter how sophisticated, could reliably converge on such curious notions as long-distance reflexives, reduplicative infixation, and quantifier scope.
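The two components just distinguished can be caricatured computationally (a toy sketch of mine; the "grammars" are invented stand-ins, and nothing here models real acquisition): UG delimits a small space of candidate grammars, and the learning procedure selects the candidate that best fits the input.

```python
# Toy caricature of acquisition: constrained search space + selection.
# Candidates are two invented word-order 'parameters' for a grammar.
candidate_grammars = {
    "head-initial": lambda s: s[0] == "V",    # verb-object order
    "head-final":   lambda s: s[-1] == "V",   # object-verb order
}

# Environmental input: category strings the learner encounters.
input_corpus = [["V", "O"], ["V", "O"], ["V", "O", "O"]]

def learn(grammars, corpus):
    """Select the candidate grammar that fits the most input sentences."""
    return max(grammars,
               key=lambda g: sum(grammars[g](s) for s in corpus))

print(learn(candidate_grammars, input_corpus))   # 'head-initial'
```

With a space this small, a trivial learning procedure suffices; the linguist's bet is that the real search space, though vastly larger, is likewise delimited in advance.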

Even less is it likely that any learning procedure could converge on the correct choice among Government-Binding Theory (GB), Head-Driven Phrase Structure Grammar (HPSG), Lexical-Functional Grammar (LFG), Cognitive Grammar, Arc-Pair Grammar, and Tagmemics (not to mention the as yet undiscovered correct alternative).

The general emphasis in linguistic theory, then, especially since the advent of the principles-and-parameters approach (Chomsky 1980), has been on constraining the search space of possible grammars, so that the complexity of the learning procedure can be minimized. Although I am in sympathy with this general tack, it is certainly not the only possibility. There is a trade-off between these two factors in acquisition: the more constrained the search space is, the less strain it puts on the learning procedure; and the more powerful general-purpose learning proves to be, the fewer burdens are placed on the prespecification of a search space for I-language. (One of the threads of Lakoff's (1987) argument is that linguistic principles share more with general cognitive principles than generative grammarians have noticed. Though he may be right, I don't think all aspects of linguistic principles can be so reduced.)1

Standard practice is somewhat less explicit on whether, given the search space, the learning procedure for I-language is thought of as anything different from ordinary learning. This is not the place to go into the details. Suffice it to say that, in particular, it is unclear whether there is such a thing as "ordinary learning" that encompasses the learning of social mores, jokes, tennis serves, an appreciation of fine wines, the faces at a party, and tomorrow's luncheon date-not to mention language. Rather, the usual assumption among linguists is that the correct theory and the repertoire of choices it presents must be available to the child in advance.

1.1.4 Innateness
The next step in Chomsky's argument is to ask how the child gets UG. Since UG provides the basis for learning, it cannot itself be learned. It therefore must be present in the brain prior to language acquisition. The only way it can get into the brain, then, is by virtue of genetic inheritance. That is, UG is innate.

At a gross level, we can imagine the human genome specifying the structure of the brain in such a way that UG comes "preinstalled," the way an operating system comes preinstalled on a computer. But of course we don't "install software" in the brain: the brain just grows a certain way. So we might be slightly more sophisticated and say that the human genome specifies the growth of the brain in such a way that UG is an emergent functional property of the neural wiring.

In any event, the claim of innateness is what, more than anything else, brought linguistics in the 1960s to the attention of philosophers, psychologists, biologists-and the general public. And it is this claim that leads to the potential integra-

tion of linguistics with biology, and to the strongest implications of linguistics for fields such as education, medicine, and social policy. It is also the claim that has been the focus of the most violent dispute. If we wish to situate linguistics in a larger intellectual context, then, it behooves us to keep our claims about innateness in the forefront of discourse, to formulate them in such a way that they make contact with available evidence in other disciplines, and, reciprocally, to keep an eye on what other disciplines have to offer.

1.1.5 Relation of Grammar to Processing
Let us look more explicitly at how linguistic theory might be connected with nearby domains of inquiry. Let us begin with the relation of the mental grammar-something resident in the brain over the long term-to the processes of speech perception and production. There are at least three possible positions one could take.

The weakest and (to me) least interesting possibility is that there is no necessary relation at all between the combinatorial principles of mental grammar and the principles of processing. (One version of this is to say that what linguists write as mental grammars are simply one possible way of formally describing emergent regularities in linguistic behavior, and that the processor operates quite differently.) To adopt this as one's working hypothesis has the effect of insulating claims of linguistic theory from psycholinguistic disconfirmation. But why should one desire such insulation, if the ultimate goal is an integrated theory of brain and mental functions? To be sure, psycholinguistic results have sometimes been overinterpreted, for example when many took the demise of the derivational theory of complexity in sentence processing (Slobin 1966; Fodor, Bever, and Garrett 1974) to imply the demise of transformational grammar. But surely there are ways to avoid such conclusions other than denying any relation between competence and processing. The relation may have to be rethought, but it is worth pursuing.

A second position one could take is that the rules of mental grammar are explicitly stored in memory and that the language processor "consults" them or "invokes" them in the course of sentence processing. (One could think of the grammar as "declarative" in the computer science sense.) This does not necessarily mean that following rules of grammar need be anything like following rules of tennis, in particular because rules of grammar are neither consciously accessible nor consciously taught.

A third possibility is that the rules of mental grammar are "embodied" by the processor-that is, that they are themselves instructions for constructing and comprehending sentences. (This corresponds to the computer science notion of "procedural" memory.) Back in the 1960s, we were firmly taught not to think of rules of grammar this way. As the story went, speakers don't produce a sentence by first generating the symbol S, then elaborating it into NP+VP, and so on, until eventually they put in words and finally "semantically interpret" the resulting structure-so in the end they find out what they wanted to say. That is, the natural interpretation of the Standard Theory was not conducive to a procedural interpretation of the rules. However, other architectures and technologies might lend themselves better to an "embodied" interpretation (as argued by Bresnan and Kaplan (1982) for LFG, for instance). Berwick and Weinberg (1984), for example, develop an alternative interpretation of transformational grammar that they do embody in a parser.

This is not the place to decide between the two latter possibilities or to find others-or to argue that in the end they are indistinguishable psycholinguistically. But (like Bresnan and Kaplan) I want to be able to take psycholinguistic evidence seriously where available, and the next chapter will propose an architecture for grammar in which at least some components must have relatively direct processing counterparts.

One rather coarse psycholinguistic criterion will play an interesting role in what is to follow. In a theory of performance, it is crucial to distinguish those linguistic structures that are stored in long-term memory from those that are composed "on line" in working memory. For example, it is obvious that the word dog is stored and the sentence Your dog just threw up on my carpet is composed. On the other hand, the status of in-between cases such as dogs is less obvious. Chapters 5-7 will suggest that attention to this distinction has some useful impact on how the theory of competence is to be formulated.
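Before leaving the relation of grammar to processing, the computer-science contrast between the second and third positions can be made concrete (a toy sketch of mine, not a model from the psycholinguistic literature): the same grammatical fact can be stored as "declarative" data that a general processor consults, or "embodied" procedurally as instructions for building structure.

```python
# Declarative conception: rules are stored data; a separate,
# general-purpose processor 'consults' them while building structure.
RULES = {"S": ["NP", "VP"], "NP": ["dogs"], "VP": ["bark"]}

def expand(symbol):
    """A general processor that looks up whatever rules are stored."""
    kids = RULES.get(symbol)
    return symbol if kids is None else [symbol] + [expand(k) for k in kids]

# Procedural ('embodied') conception: each rule IS an instruction
# for constructing its own piece of the sentence.
def build_np():
    return ["NP", "dogs"]

def build_vp():
    return ["VP", "bark"]

def build_s():
    return ["S", build_np(), build_vp()]

assert expand("S") == build_s() == ["S", ["NP", "dogs"], ["VP", "bark"]]
```

The outputs are identical; the positions differ over where the grammatical knowledge resides-in the data or in the procedure.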

1.1.6 Relation of Grammar to Brain
Unless we choose to be Cartesian dualists, the mentalist stance entails that knowledge and use of language are somehow instantiated in the brain. The importance of this claim depends on one's stance on the relation of mental grammar to processing. But if one thinks that the rules of mental grammar have something to do with processing, then neuroscience immediately becomes relevant, since it is processing that the brain must accomplish. Conversely, if one considers studies of aphasia to be relevant to linguistic theory, then one is implicitly accepting that the grammar is neurally instantiated to some degree.2

But current neuroscience, exciting as it is, is far from being able to tackle the question of how mental grammar is neurally instantiated. For instance:

1. We know how certain neurotransmitters affect brain function in a global way, which may explain certain overall effects or biases such as Parkinson's disease or depression. But this doesn't inform us about the fine-scale articulation of brain function.

2. We know a lot about how individual neurons and some small systems of neurons work, but again little is known about how neurons encode words or even speech sounds-say, how the brain stores and retrieves individual words-though we know they must do it somehow.

3. Through animal research, brain-imaging techniques, and studies of brain damage, we know what many areas of the brain do as a whole. But with the exception of certain lower-level visual areas, we have little idea how those areas do what they do. (It is like knowing what parts of a television set carry out what functions, without knowing really how those parts work.)

Can we do better than the formal approach espoused by linguistic theory, and explain language really in terms of the neurons? Searle (1992) and Edelman (1992), among others, have strongly criticized formal/computational approaches to brain function, arguing that only in an account of the neurons is an acceptable theory to be found. I agree that eventually the neuronal basis of mental functioning is a necessary part of a complete theory. For the moment, though, it's highly unlikely that the details of neural instantiation of language will be worked out in our lifetime. Still, this doesn't mean we shouldn't continue to try to interpret evidence of brain localization from aphasia and more recently from PET scans (Jaeger et al. 1996) in terms of overall grammatical architecture. Moreover, the formal/computational approach is among the best tools we have for understanding the brain at the level of functioning relevant to language, and over the years it has proven a pragmatically useful perspective. And as more comes to be known about the nature of neural computation in general, there may well be conclusions to be drawn about what kinds of computations one might find natural in rules of grammar. For example, we should be able to take for granted by now that no theory

of processing that depends on strict digital von Neumann-style computation can be correct (Churchland and Sejnowski 1992; Crick 1994).

1.1.7 Evolutionary Issues
If language is a specialized mental capacity, it might in principle be different from every other cognitive capacity. Under such a conception, linguistic theory is isolated from the rest of cognitive psychology. I find a less exclusionary stance of more interest. Pushing Chomsky's conception of language as a "mental organ" a little further, think about how we study the physical organs. The heart, the kidney, the thumb, and the brain are distinct physical organs with their own particular structures and functions. At the same time, they are all built out of cells that have similar metabolic functions, they all develop subject to the constraints of genetics and embryology, and they have all evolved subject to the constraints of natural selection. Hence a unified biology studies them both for their particularities at the organ level and for their commonalities at the cellular, metabolic, and evolutionary level.

I think the same can be said for "mental organs." Language, vision, proprioception, and motor control can be differentiated by function and in many cases by brain area as well. At the same time, there are commonalities, if we only keep our eyes open for them. Many mental phenomena, such as the organization of the visual field, involve hierarchical part-whole relations (or constituent structure), so it should be no surprise that language does too. Similarly, temporal patterning involving some sort of metrical organization appears in music (Lerdahl and Jackendoff 1983) and probably motor control, as well as in language. This does not mean language is derived or "built metaphorically" from any of these other capacities (as one might infer from works such as Calvin 1990 and Corballis 1991, for instance). Rather, their commonalities may be distinct specializations of a common evolutionary ancestor, like fingers and toes; the claim that language is a specialized mental capacity does not preclude the possibility that it is in part a specialization of preexisting brain mechanisms. And of course the mental organs each have their own very particular properties: only language has phonological segments and

nouns, only vision has (literal) color, only music has regulated harmonic dissonance. But they all are instantiated in neurons of basically similar design (Crick 1994; Rosenzweig and Leiman 1989), their particular areas in the brain have to grow under similar embryological conditions, and they have to have evolved from less highly differentiated ancestral structures. It is therefore reasonable to look for reflections of such "cellular-level" constraints on learning and behavior in general. Moreover, the Paradox of Language Acquisition argues that the evolutionary innovation is more than a simple enlargement of the primate brain (as for instance Gould (1980) seems to believe). At the same time, to the extent that the hypothesis of UG bids us to seek a unified psychological and biological theory, it makes sense to reduce the distance in "design space" that evolution had to cover in order to get to UG from our nonlinguistic ancestors. Such a conclusion doesn't take away any of the uniqueness of language. Whatever is special about it, with all its grammatical, psychological, developmental, and neurological ramifications, remains special-just like bat sonar or the elephant's trunk (and the brain specializations necessary to use them!).

1.2 Necessities and Assumptions

So far I have just been setting the context for investigating the design of mental grammar. Now let us turn to particular elements of its organization. A useful starting point is Chomsky's initial exposition of the Minimalist Program (Chomsky 1993, 1995), an attempt to reconstruct linguistic theory around a bare minimum of assumptions. Chomsky identifies three components of grammar as "(virtual) conceptual necessities":

Conceptual necessity 1
The computational system must have an interface with the articulatory-perceptual system (A-P). That is, language must somehow come to be heard and spoken.

Conceptual necessity 2
The computational system must have an interface with the conceptual-intentional system (C-I). That is, language must somehow come to express thought.

Conceptual necessity 3
The computational system must have an interface with the lexicon. That is, words somehow have to be incorporated into sentences.

This much is unexceptionable. However, Chomsky also makes a number of assumptions that are not conceptually necessary, and most of the present study will be devoted to exploring the implications. Here are some that are particularly relevant for the purposes of the present study.

Assumption 1 (Derivations)
"The computational system takes representations of a given form and modifies them" (Chomsky 1993, 6). That is, the computational system performs derivations.

To be slightly more specific, a derivational approach constructs well-formed structures in a sequence of steps, of potentially indefinite length, where each step adds something to a previous structure, deletes something from it, or otherwise alters it (say by moving pieces around). Each step is discrete: it has an input, which is the output of the previous step, and an output, which becomes the input to the next step. Steps in the middle of a derivation may or may not be well formed on their own; it is the output that matters. By contrast, a constraint-based approach states a set of conditions that a well-formed structure must satisfy, without specifying any alterations performed on the structure to achieve that well-formedness, and without any necessary order in which the constraints apply. In such an approach, constraints determine the form of a single level or a small, theory-determined number of distinct levels.

Much earlier in the history of generative grammar, McCawley (1968a) proposed that phrase structure rules be thought of as constraints on well-formed trees rather than as a production system that derives trees. The constraint-based conception is pursued in more strictly constraint-based theories of grammar (e.g., Lexical-Functional Grammar (LFG), Generalized Phrase Structure Grammar (GPSG), Head-Driven Phrase Structure Grammar (HPSG), Construction Grammar, Categorial Grammar, Optimality Theory).

Derivations and constraints are of course not mutually exclusive. As was noted by Chomsky (1972), it is possible to construe a derivation simply as a series of constrained relations among a succession of phrase markers (e.g., leading from D-Structure to S-Structure). In particular, such important parts of grammar as binding theory are usually conceived of as constraints on some level of syntactic derivation-though they are clearly parts of the computational system (see chapter 3). What distinguishes a derivation (even in this latter construal) from the sort of alternative I have in mind is that a derivation involves a sequence of related phrase markers, rather than, for example, a set of multiple simultaneous constraints imposed on a single structure.
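The contrast can be caricatured computationally (a toy sketch of mine; the tree encoding and rule inventory are invented for illustration only): a derivation is an ordered sequence of steps, each consuming the previous output, while a constraint-based grammar is an unordered set of conditions that one candidate structure must satisfy simultaneously.

```python
# Trees are (category, children) pairs; leaves are word strings.

# Derivational style: ordered, discrete steps that alter a structure.
def expand_s():
    return ("S", [("NP", []), ("VP", [])])          # step 1: S -> NP VP

def insert_words(tree):
    cat, kids = tree                                 # step 2: fill leaves
    filled = {"NP": ["dogs"], "VP": [("V", ["bark"])]}
    return (cat, [(c, filled[c]) for c, _ in kids])

derived = insert_words(expand_s())                   # only the output matters

# Constraint-based style: unordered well-formedness conditions that a
# single candidate structure must satisfy all at once.
def rooted_in_s(tree):
    return tree[0] == "S"

def np_precedes_vp(tree):
    return [c for c, _ in tree[1]] == ["NP", "VP"]

constraints = [rooted_in_s, np_precedes_vp]          # no order implied
assert all(check(derived) for check in constraints)
```

In the derivational column the intermediate object after step 1 need not be well formed; in the constraint-based column there are no intermediate objects at all.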

. syntactic structures are built up and movement rules take place h(fore lexical items (or. Assumptions 13 Chomsky ception. 223-224) defends the derivational con­ (1995. as far as I can tell. This assumption takes two forms. 380. The next assumption concerns the nature of the lexical interface Assumption 3 ( Initial lexical insertion) The lexical interface is located at the initial point in the syntactic derivation. The Minimalist Program adds the operations "Select " and "Merge.. In this alternative. with no temporal interpretation implied. In the Minimalist Program. expressing postulated properties of the language facuIty of the brain." We will ret�rn to Chomsky's defense of derivations here and there in the next three chapters. the interface is mediated by a rule of lexical insertion. 1995. variety of Notably excluded from the repertoire of operations is any unification (Shieber 1986). From Aspects (Chomsky 1965) through GB." That is. Assumption 2 (Substitution) The fundamental operation of the computational system is substitution of one string or structural complex for another in a phrase marker. in which features contributed by different node. phon ological features of . 10. The next assumption is. ( Th i s approach resembles the conception of Tree-Adjoining Grammar (Joshi 1987. constituents can be combined (or superimposed) within a single t-based approaches Unification plays a major role in all the constrain mentioned above. " meant to take the place of phrase structure expansion in previous Chomskian theories. which projects lexical entries into X-bar phrase structures at the level of D(eep)-Structure. note 3) that "the ordering of operations is abstract. only implicit.Questions. Kroch 1987).) Excluded are a nu mbe r of other possibilities.. building up phrase structure recursively. Goals. derivational order has nothing to do with processing: "the terms output and input have a metaphorical flavor.. a. b." which crops up from time to time in the literature­ s ta rtin g with Generative Semantics (McCawley 1968b) and continuing throu gh Distributed M o rp h olog y (Halle and Marantz 1993) . lexical items are combined with each other by the operation Merge. i n some variants. though he is careful to point out (1993. One is the idea of "late lexical i nsertion.

The next assumption is, as far as I can tell, only implicit; it concerns the nature of the lexical interface.

Assumption 3 (Initial lexical insertion)
The lexical interface is located at the initial point in the syntactic derivation.

This assumption takes two forms.
a. From Aspects (Chomsky 1965) through GB, the interface is mediated by a rule of lexical insertion, which projects lexical entries into X-bar phrase structures at the level of D(eep)-Structure.
b. In the Minimalist Program, lexical items are combined with each other by the operation Merge, building up phrase structure recursively. (This approach resembles the conception of Tree-Adjoining Grammar (Joshi 1987; Kroch 1987).)

Excluded is the idea of "late lexical insertion," which crops up from time to time in the literature-starting with Generative Semantics (McCawley 1968b) and continuing through Distributed Morphology (Halle and Marantz 1993). In this alternative, syntactic structures are built up and movement rules take place before lexical items (or, in some variants, phonological features of lexical items) are inserted into them. We will explore a more radical alternative in chapter 4.

Another aspect of the standard conception of the lexical interface concerns what lexical insertion inserts.

Assumption 4 (X0 lexical insertion)
Lexical insertion/Merge enters words (the X0 categories N, V, A, P) into phrase structure; these categories must head their own X-bar projections.

Traditionally excluded is the possibility of inserting lexical entries that are larger than words, for instance lexical VPs. This leads to difficulties in the treatment of idioms such as kick the bucket, which superficially look like VPs. Also excluded is the possibility of inserting lexical entries smaller than words, for instance causative affixes or agreement markers. I will examine the consequences of assumption 4 in chapters 5-7 and formulate an alternative.

The next assumption is stated very explicitly by Chomsky (1993, 2):

Assumption 5 (Nonredundancy)
"A working hypothesis in generative grammar has been that . . . the language faculty is nonredundant, in that particular phenomena are not 'overdetermined' by principles of language."

Chomsky observes that this feature is "unexpected" in "complex biological systems," more like what one expects to find "in the study of the inorganic world." He immediately defends this distancing of generative grammar from interaction with related domains: "The approach has, nevertheless, proven to be a successful one, . . . suggesting that [it is] more than just an artifact reflecting a mode of inquiry." One might question, though, how successful the approach actually has proven when applied to the lexicon. As Aronoff (1994) points out, this conception of the lexicon goes back at least to Bloomfield (1933), who asserts, "The lexicon is really an appendix of the grammar, . . . a list of basic irregularities" (p. 274). Chomsky advocates much the same elsewhere (1995, 235-236): "I understand the lexicon in a rather traditional sense: as a list of 'exceptions,' whatever does not follow from general principles . . . Assume further that the lexicon provides an 'optimal coding' of such idiosyncrasies." For the word book, it seems the optimal coding should include a phonological matrix of the familiar kind expressing exactly what is not predictable.

However, consider how one might code the word bookend, so as to express "exactly what is not predictable" given the existence of the words book and end. (As a hint of what is to come: there is a difference between assuming that all irregularities are encoded in the lexicon and assuming that only irregularities are encoded in the lexicon, the latter apparently what Chomsky means by optimal coding.) Chapters 5-7 will evaluate this latter assumption, suggesting that a certain amount of redundancy is inevitable. Although perhaps formally not so elegant, a linguistic theory that incorporates redundancy may in fact prove empirically more adequate as a theory of competence, and it may also make better contact with a theory of processing, to my way of thinking a desideratum. As a side benefit, the presence of redundancy may bring language more in line with other psychological systems. For instance, depth perception is the product of a number of parallel systems such as eye convergence, stereopsis, occlusion, lens focus, and texture gradients. In many cases more than one system comes up with the same result, but in other cases (such as viewing two-dimensional pictures or random-dot stereograms) the specialized abilities of one system come to the fore. Why couldn't language be like that?

1.3 Syntactocentrism and Perfection

Next comes an important assumption that lies behind all versions of Chomskian generative grammar from the beginning, a view of the language faculty that I will dub syntactocentrism.

Assumption 6 (Syntactocentrism)
The fundamental generative component of the computational system is the syntactic component; the phonological and semantic components are "interpretive."

This assumption derives some of its intuitive appeal from the conception of a grammar as an algorithm that generates grammatical sentences. Particularly in the early days of generative grammar, serial (von Neumann-style) algorithms were the common currency, and it made sense to generate sentences recursively in a sequence of ordered steps (assumption 1). Nowadays, there is better understanding of the possibilities of parallel algorithms-and of their plausibility as models of mental function. Hence it is important to divorce syntactocentrism from considerations of efficient computation, especially if one wishes to integrate linguistics with the other cognitive sciences.

Chomsky, of course, has always been careful to characterize a generative grammar not as a method to construct sentences but as a formal way to describe the infinite set of possible sentences. Still, I suspect that his syntactocentrism in the early days was partly an artifact of the formal techniques then available.

Another possible reason for syntactocentrism is the desire for nonredundancy (assumption 5). According to the syntactocentric view, the discrete infinity of language, which Chomsky takes to be one of its essential and unique characteristics, arises from exactly one component of the grammar: the recursive phrase structure rules (or in the Minimalist Program, the application of Select and Merge). Whatever recursive properties phonology and semantics have, they are a reflection of interpreting the underlying recursion in syntactic phrases.

But is a unique source of discrete infinity necessary (not to mention conceptually necessary)? As an alternative to syntactocentrism, chapter 2 will develop an architecture for language in which there are three independent sources of discrete infinity: phonology, syntax, and semantics. Each of these components is a generative grammar in the formal sense, each with hierarchical structure, and the structural description of a sentence arises from establishing a correspondence among structures from each of the three components. Thus the grammar as a whole is to be thought of as a parallel algorithm.
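As a schematic gloss on this parallel conception (my own toy illustration; the structures and links are invented and are not the formalism developed in chapter 2), a structural description pairs three independently generated structures by correspondence links, rather than deriving any one of them from another.

```python
# Toy gloss: three independent structures for "dogs bark," plus links.
# None of the three is derived from another; each comes from its own
# generative system, and the links establish the correspondence.

phonology = ["dogz", "bark"]                      # e.g. phonological words
syntax    = {"S": ["NP", "VP"]}                    # phrasal structure
semantics = {"event": "BARK", "agent": "DOGS"}     # conceptual structure

# Correspondence links: (phonological unit, syntactic node, semantic role)
links = [
    (phonology[0], "NP", "agent"),    # dogz ~ subject NP ~ the agent
    (phonology[1], "VP", "event"),    # bark ~ VP ~ the event
]

# A structural description is the triple plus its links; well-formedness
# requires each link to pick out a real constituent in each structure.
assert all(syn in syntax["S"] and role in semantics
           for _, syn, role in links)
```

Each structure is checked by its own grammar; the links add a fourth, relational dimension of well-formedness-the "parallel algorithm" of the text.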

Is there anything in principle objectionable about such a conception? Recalling Chomsky's injunction against construing the grammar as the way we actually produce sentences, we should have no formal problem with it. There is nothing especially new in this conception-it is just a different way of characterizing an infinite set-but it is worth making it explicit. Does it introduce unwanted redundancy? I will argue that the three generative components describe orthogonal dimensions of a sentence, so that it is not in principle redundant. Still, redundancies are in practice present. Is that bad? This remains to be seen. From the point of view of psychology and neuroscience, redundancy is expected (and probably necessary for learnability of the grammar); so are multiple sources of infinite variability. One can understand an unlimited number of hierarchically organized visual scenes and conjure up an unlimited number of visual images; one can appreciate an unlimited number of hierarchically organized tunes; one can plan and carry out an action in an unlimited number of hierarchically organized ways. Each of these, then, requires a generative capacity in a nonlinguistic modality. So language is hardly the only source of generativity in the brain. Why, then, is there any prima facie advantage in claiming that language is itself generated by a single source?3

If we wish to explain eventually how language could have arisen through natural selection, it is also of interest to consider an evolutionary perspective on syntactocentrism. An obvious question in the evolution of language is, What are the cognitive antecedents of the modern language capacity? Bickerton (1990) offers an intriguing and (to me at least) plausible scenario. He asks, How much could be communicated by organisms that could perceive articulated sounds and establish sound-to-meaning associations (i.e., words), but that had no (or very rudimentary) capacity for syntax? His answer is, Quite a lot. Bickerton proposes, therefore, that along the evolutionary pathway to language lay a stage he calls protolanguage, which had meaningful words but lacked (most) syntax. Bickerton claims that the ability for protolanguage can still manifest itself in humans when full language is unavailable, as we can see from the speakers of pidgin languages, from children at the two-word stage, and from agrammatic aphasics. (See Bickerton 1990 for the reasons behind this claim.) Moreover, the apes that have been taught a form of sign language do manage, if the claims are to be believed, to communicate fairly successfully, despite (according to Terrace 1979) lacking any appreciable syntactic organization.4

Let us suppose something like Bickerton's story: that earlier hominids had a capacity to take vocal productions as conventional symbols for something else-a rudimentary sound-meaning mapping. That is, protolanguage is essentially words plus pragmatics.5 Now of course, animal behavior is understood to be far more complex and structured than used to be believed, especially in higher mammals; its variety, for example in the cases mentioned above, calls for a generative capacity that can interpret and act upon an unlimited number of different situations. But simply stringing together a sequence of conventionalized symbols leaves it up to the hearer to construct, on grounds of pragmatic plausibility, how the speaker intends the concepts expressed to be related to one another. As Pinker and Bloom (1990) argue, one can then see the selective advantage in adding grammatical devices that make overt the intended relationships among words-things like word order and case marking. Having syntax takes much of the burden of composing sentence meanings off of pure pragmatics.

. then. so far as we know. Not that the evolution of syntax is inevitable. possession of language in the human sense being the most distinctive index of these qualities" Homo sapiens.18 Chapter 1 If one accepts some version of this scenario (and I don't see any alter­ natives other than "unsolved/unsolvable mystery" waiting in the wings). cannot usually be as rigorous as syntactic arguments: one cannot. 6 Chomsky. but more as a facilitator or a refinement than as a creative force. for instance the existence of adverbs or the particular properties of reftexives-any more than the claim that the human hand evolved forces us to explain the selective advantage of exactly five fingers on each hand. as in the syntactocentric view.) On this view.000 years ago or less. Chomsky might o f course b e right: especially i n the principled absence of fossil vowels and verbs. Syntax. It does indeed lie "between" sound and meaning. one of his more extreme statements being. for example. and thereupon modifying onc's syntactic t heory. Evolutionary arguments. 200. It does not force us to explain all properties of syntax in evolutionary terms. (See 1 990 for the reasons behind this claim. they have only to be better than what was around before in order to confer a selective advantage. has expressed skepticism (or at least agnosticism) about arguments from natural selection. unique to man. The solutions that natural selection finds are often matters of historical contingency. the design of language may well be an accretion of lucky accidents. as is well known. then syntax is the last part of the language faculty to have evolved. imagine trusting an evolu­ tionary argument that adverbs couldn't evolve. does not embody the primary generative power of language. either: Bickerton thinks that hominids had protolanguage for a million years or more. In this sense. I am the first to acknowledge that such an argument is only sugges­ tive-though I think it does have the right sort of ftavor. it serves as a scaffolding to help relate strings of linearly ordered words (one kind of discrete infinity) more reliably to the multidimensional and rela­ tional modality of thought (another kind of discrete infinity).' specific to the human species] by speculating that rather sudden and dramatic mutations might have led to qualities of intelligence that are. but that lan­ guage appeared only with Bickerton a skeptical appraisal. we have no way of knowing exactly what happened to language a million years ago. "We might reinterpret [the Cartesians' 'creative principle. the function of syntax is to regiment protolanguage. 396). also see Pinker 1 992 for ( 1 97 3 . espe ciall y those concerning language.

Q uestions. Marvin Minsky [offered a] "third ' i n teresting' pos si b ility : psycho l ogy could turn out to be like engineering" . in Chomsky's mind: psychology could turn out to be "like physics"-its regularities explainable as the con­ seq uences of a few dee p. not dismissive. These require­ ments for language to accommodate to the sensory and motor apparatus] might turn out to be critical factors in determining the inner nature of CHL [the compu­ tational system] in some deep sense. Chomsky speculates that language might be the only such " perfect" mental organ: "The language faculty might be unique among cognitive systems. . we wouldn't need lan­ guage. Assumptions 19 However. My own inclination would be to say that if we could communicate by telepathy. If syntactic results can be rigorously reinterpreted so as to harmonize with psychological and evolutionary plausibility. inexorable laws-or psychology could turn out to be utterly lacking in laws. but hints come from this passage (Chomsky 1 995. . . . . 3 87) recounting of a debate at Tufts University in March 1 978. There were only two interesting possibilities. elegant. It is not entirely clear what is intended by the term perfect. syntactocentrism has successfully guided research for a long time­ but it is still just an assumption that itself was partly a product of histor­ ical accident. or they might turn out to be "extraneous" to it. ga dge t or a dign i ty of the m i nd to be a Chomsky . Moreover. . This view. . 22 1 ) . To be or s ure. I personally find this passage puzzling. in that it satisfies mini­ malist assumptions . What might Chomsky have in mind? An anecdote that I find revealing comes from Dennett's (1 995. would have no truck with engineering. of course. .i n which case the only way to study or expound psychology would be the novel ist's way . language could be perfect if only we didn't have to talk. comports well with his skepticism about the evolutionary perspective. In Chomsky's recent work (especially Chomsky 1 995). a new theme has surfaced that bears mention in this context. [T]he computational system CHL [could be] bio­ logically isolated" (1 995. . . or even in the organic world. . A ssumption 7 (Per ection) f Language approximates a "perfect" system. . at least for the purposes of communication . . . . there is no linguistic argument f syntactocentrism. It w as somehow beneath the collecti o n of gadgets . Goals. I n other words. there would be n o need for a phono­ l ogical component. The latter possibility is not to be discounted. 22 1 ) : I f hu mans could communicate by telepathy. inducing departures from "perfection" that are satisfied in an optimal way. I think we should be delighted.

as well as overlapping systems for marking topic such as stress. Conversely. and that get along well enough with each other and the preexisting systems. But nonredundant perfection? I doubt it. Much of what is to follow here depends on laying these biases bare and questioning them . and agreement." away from Chomsky's sense of "perfection" toward something more psychologically and biologically realistic. this might be about the best we could come up with. in an effort to uncover some of the biases that form the background for empir­ ical analysis. But if we had to make it over from a mammalian back. Nevertheless.f Good Tricks (to use Dennett's term) that make language more or less good enough. If we were designing a back for an upright walker on opti­ mal physical principles. In particular. we still have to satisfy the demands of learnability. rather. the same system of case mor­ phology is often used in a single language for structural case (signaling grammatical function). they are just about what one should expect. as Chomsky and his colleagues often have done. intonation. I find it important to work through them. language has overlapping systems for regulating O-roles. The result is not "perfection. many systems of lan­ guage serve multiple purposes. It is just that we may have to reorient our sense of what "feels like a right answer. If we were designing reproduction from scratch. The very same Romance reflexive morphology can signal the binding of an argument. It may then turn out that what looked like "imperfections" in language-such as the examples just mentioned-are not reason to look for more "perfect" abstract analyses. . I would expect the design of language to involve a lot o. so susceptible to strain and pain. an impersonal subject. Admitting that language isn't "perfect" is not license to give up attempts at explanation.20 Chapter 1 But of course it is characteristic of evolution to invent or discover "gadgets" -kludgy ways to make old machinery over to suit new pur­ poses. semantic case (argument or adjunct semantic function) and quirky case (who knows what function). would we come up with sex as we practice it? Similarly. case. or nothing in particular (with inherently reflexive verbs). and position. For instance. case." as witness the fallibility of the human back. These issues may seem quite distant from day-to-day linguistic re­ search. Just as the visual system has overlapping systems for depth perception. This is not to say we shouldn't aim for rigor and elegance in linguistic analysis. the elimination of an argument (decausativization). among them word order. we would likely not come up with the human design.

Chapter 2
Interfaces; Representational Modularity

Let us look more closely at the first two "conceptual necessities" of the previous chapter: the interfaces between language and the "articulatory-perceptual" system on one hand and between language and the "conceptual-intentional" system on the other. A curious gap in Chomsky's exposition of the Minimalist Program is the absence of any independent characterization of these interfaces, which now become all-important in constraining the theory of syntax. This chapter will dig the foundations a little deeper and ask what these interfaces must be like. We will begin with the better understood of the two interfaces, that of language with the "articulatory-perceptual" system. We will then briefly consider the nature of the more controversial "conceptual-intentional" interface, continuing with it in more detail in chapter 3. Chapter 4 will take up Chomsky's third interface, between the combinatorial system and the lexicon. The present chapter will conclude by discussing a more general conception of mental organization, Representational Modularity, in terms of which these interfaces emerge as natural components.

2.1 The "Articulatory-Perceptual" Interfaces

Consider speech production. In order to produce speech, the brain must, on the basis of linguistic representations, develop a set of motor instructions to the vocal tract. These instructions are not part of linguistic representations. Rather, the usual claim is that phonetic representations are the "end" of linguistic representation, the strictly linguistic representations that most closely mirror articulation, and that there is a nontrivial process of conversion from phonetic form to motor instructions.1 Similarly for speech perception: the process of phonetic perception is a nontrivial conversion from some sort of frequency analysis to phonetic form.

(If it were trivial, Haskins Laboratories would have been out of business years ago!) Moreover, the signals coming from the auditory system are not, strictly speaking, compatible with those going to the motor system. If an evil neurosurgeon somehow wired your auditory nerve up directly to the motor strip, you would surely not end up helplessly shadowing all speech you heard. (Who knows what you would do?) It is only at the abstract level of phonetic representation, distinct from both auditory and motor signals, that the two can be brought into correspondence to constitute a unified "articulatory-perceptual" representation.

In fact, phonetic representation (spoken or signed) is not in one-to-one correspondence with either perceptual or motor representations. Consider the relation of auditory representation to phonetic representation. Certain auditory distinctions correspond to phonetic distinctions, but others, such as the speaker's voice quality, tone of voice, and speed of articulation, do not. Likewise, certain phonetic distinctions correspond to auditory distinctions, but others, for instance analysis into distinctive features, division into discrete phonetic segments, and the systematic presence of word boundaries, do not: the auditory signal contains no such boundaries. In particular, the auditory system per se is not a direct source of linguistic representations. Similarly for speech production: phonetic representation does not uniquely encode motor movements. For example, the motor movements involved in speech can be modulated by concurrent nonlinguistic tasks such as holding a pipe in one's mouth, without changing the intended phonetics.

We must also not forget the counterpart of phonetic representation for signed languages: it must represent the details of articulation and perception in a fashion that is neutral between gestural articulation and visual perception. It must therefore likewise be an abstraction from both.

I spoke above of a "conversion" of phonetic representations into motor instructions during speech production. Given just the elementary observations so far, we see that such a conversion cannot be conceived of as a derivation in the standard sense of generative grammar, that is, a series of operations on a formal structure that converts it into another formal structure built out of similar elements. It is not like, for instance, moving syntactic elements around in a tree, or like inserting or changing phonological features. Rather, phonetic representations are formally composed out of one alphabet of elements and motor instructions out of another. Thus the conversion from one to the other is best viewed formally as a set of principles of the general form (1).

(1) General form of phonetic-to-motor correspondence rules
Configuration X in phonetic representation corresponds to configuration Y in motor instructions.

Two important points. First, the principles (1) applying to a phonetic representation underdetermine the motor instructions: recall the example of speaking with a pipe in one's mouth. Moreover, many aspects of coarticulation between speech sounds are none of the business of phonetic representation; they are determined by constraints within motor instructions themselves. Second, some of the principles (1) may be "permissive" in force. For example, the presence of breath intake in motor output is influenced by, but not determined by, word boundaries in phonetic representations. That is, a pause for breath (a motor function, not a linguistic function) is much more likely to occur at a word boundary than within a word or especially within a syllable, but a pause is hardly necessary at any particular word boundary. To accommodate such circumstances, the general form (1) must be amplified to (2).2

(2) General form of phonetic-to-motor correspondence rules
Configuration X in phonetic representation {must/may/preferably does} correspond to configuration Y in motor instructions.

Here the choice of modality in the rule specifies whether it is a determinative rule (must), a "permissive" rule (may), or a default rule (preferably does). In other words, these principles have two structural descriptions, one in each of the relevant domains.

Thus the phonetic-motor interface formally requires three components:

(3) a. A set of phonetic representations available for motor interpretation (the motor interface level of phonology/phonetics)
b. A set of motor representations into which phonetic representations are mapped (the phonetic interface level of motor representation)
c. A set of phonetic-motor correspondence rules of the general form (2), which create the correlation between the levels (3a) and (3b)

Parallel examples can easily be constructed for the relation of auditory perception to phonetic representation and for perception and production of signed languages. This prescription can be generalized to the structure of any interface between two distinct forms of representation.
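To make the logic of (2) concrete, here is a minimal sketch in Python. It is purely illustrative: the class names, the Modality encoding, and the breath-intake example rule are my own glosses, not anything proposed in the text.

```python
from dataclasses import dataclass
from enum import Enum

class Modality(Enum):
    MUST = "must"                    # determinative rule
    MAY = "may"                      # "permissive" rule
    PREFERABLY = "preferably does"   # default rule

@dataclass
class CorrespondenceRule:
    """A rule of form (2): two structural descriptions, one per domain."""
    phonetic_config: str   # description over phonetic representation
    motor_config: str      # description over motor instructions
    modality: Modality

# The breath-intake example from the text: a word boundary in phonetic
# representation is a preferred, but not obligatory, site for a pause.
breath_rule = CorrespondenceRule(
    phonetic_config="word boundary",
    motor_config="breath intake",
    modality=Modality.PREFERABLY,
)

print(f"{breath_rule.phonetic_config!r} {breath_rule.modality.value} "
      f"correspond to {breath_rule.motor_config!r}")
```

The essential point the sketch preserves is that a rule pairs two structural descriptions, one in each domain, and that its force (determinative, permissive, or default) is a property of the rule itself.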

(4) Components of an interface between system A and system B (an A-B interface)
a. A set of representations in system A to which the interface has access (the system B interface level(s) of system A, or BILA)
b. A set of representations in system B to which the interface has access (the system A interface level(s) of system B, or AILB)
c. A set of A-to-B correspondence rules, of the general form illustrated in (5), which create the correlation between BILA and AILB

(5) General form of correspondence rules
Configuration X in BILA {must/may/preferably does} correspond to configuration Y in AILB.

Crucially, the correspondence rules do not perform derivations in the standard sense of mapping a structure within a given format into another structure within the same format: they map between one format of representation and another. Thus they are not "translations" or one-to-one mappings from A to B. Rather, a given representation in BILA constrains, but does not uniquely determine, the choice of representation in AILB.

In fact, it is clear that there cannot be a single interface of language with the articulatory-perceptual system. The principles placing language in correspondence with articulation are by necessity quite different from those placing language in correspondence with auditory perception. Thus Chomsky's assertion of the "virtual conceptual necessity" of an articulatory-perceptual interface must be broken into two parts: language must have an articulatory interface and a perceptual interface. However, it is a conceptual possibility that one component of these two interfaces is shared, that is, that the articulatory interface level of language is the same as the perceptual interface level. The broadly accepted assumption, of course, is that there is such a shared level: phonetic representation. This is the force of Chomsky's assertion that there is "an" interface. But the existence of such a shared representation is (at least in part) an empirical matter.3

Furthermore, when Chomsky (1993, 2) speaks of the "interface level A-P" as "providing the instructions for the articulatory-perceptual . . . system[s]," this is an oversimplification. Phonetic representation is only indirectly a set of instructions to the motor system, and it is not at all a set of instructions to the auditory system, but rather an indirect consequence of input to the auditory system.
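Schema (4) can likewise be packaged as a small data structure. The sketch below is hypothetical (the class, the field names, and the set-valued reading of "constrains but does not uniquely determine" are mine); it shows why an interface defines a relation between levels rather than a function.

```python
from dataclasses import dataclass

@dataclass
class ABInterface:
    """The three components of an A-B interface, per (4)."""
    bil_a: set    # representations in A visible to the interface (BIL_A)
    ail_b: set    # representations in B visible to the interface (AIL_B)
    rules: list   # A-to-B correspondence rules of the form (5)

    def candidates(self, rep_a):
        """A representation in BIL_A constrains, but does not uniquely
        determine, the representation in AIL_B: applying the rules
        yields a set of candidates, not a single translation."""
        assert rep_a in self.bil_a
        out = set()
        for rule in self.rules:
            out.update(rule(rep_a) & self.ail_b)
        return out

# Toy example: one phonetic word boundary is compatible with two motor
# realizations, only one of which includes a breath pause.
iface = ABInterface(
    bil_a={"word-boundary"},
    ail_b={"pause+breath", "no-pause"},
    rules=[lambda r: {"pause+breath", "no-pause"}
           if r == "word-boundary" else set()],
)
print(iface.candidates("word-boundary"))  # -> both candidates survive
```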

One further point bears mention. Chomsky asserts the importance of a principle he calls Full Interpretation: we "might [say] that there is a principle of full interpretation (FI) that requires that every element of PF and LF . . . must receive an appropriate interpretation. . . . None can simply be disregarded" (1986, 98). There is an important ambiguity in this statement. On the one hand, it is surely correct that every element of PF can be related to auditory and motor representations, and must be so related. The word book, for example, has the phonetic representation [buk]. It could not be represented [fburk], where we simply disregard [f] and [r]; that would be possible only if there were particular rules or general principles deleting these elements. We clearly do not want a theory of PF in which the correspondence rules can arbitrarily disregard particular segments.4

On the other hand, the statement might be read as requiring that, at the level of PF, each phonetic element must be licensed by some physical interpretation. This cannot be quite right, though: as noted earlier, word boundaries are not consistently interpretable in auditory or motor terms, nor do phonetic features necessarily correspond perfectly consistently to motor representation (much less to "physical interpretation"). That is, there may be aspects of PF that are present for the purpose of level-internal coherence but that are invisible to the correspondence rules; nevertheless they are present in PF so that the phonetic string can be related to the lexicon and ultimately to syntax and meaning. Thus the principle FI needs a certain amount of modulation to be accurate with respect to its role in PF. This would not be so important, were it not for the fact that Chomsky uses FI at the PF interface to motivate an analogous principle at the LF interface, with more crucial consequences, to which we return in chapter 4.

2.2 The Phonology-Syntax Interface

In this section we will look more closely within the "computational system." First, a terminological point. The term syntax can be used generically to denote the structural principles of any formal system, from human language as a whole to computer languages, logical languages, and music. Alternatively, it can be used more narrowly to denote the structural principles that determine the organization of NPs, VPs, and so on. In the present study I will use the term only in the latter, narrow sense, often calling it (narrow) syntax as a reminder; I will use the term formal system or generative system for the general sense.

With this clarification of the terminology, let us ask: Is PF part of (narrow) syntax? The general consensus seems to be that it is not.

The basic intuition is that an element of PF, a phonological entity, and an element of (narrow) syntax should not belong to the same component of grammar. PF is of course a level of phonological structure. Now it used to be thought (Chomsky 1965, 143; Chomsky and Halle 1968) that phonological structure is simply a low (or late) level of syntactic structure, derived by erasing syntactic boundaries, so that all that is visible to phonological rules is the string of words. Hence phonological structure (and therefore PF) could be thought of as derived essentially by a continuation of (narrow) syntactic derivation. But the revolution in phonological theory starting in the mid-1970s (e.g. Goldsmith 1976; Liberman and Prince 1977) has altered that view irrevocably. On current views, the units of phonological structure are such entities as segments, syllables, and intonational phrases, which are as far as I know nearly universally accepted among phonologists, and which do not correspond one to one with the standard units of syntax. For instance, syllabification and foot structure (phonological entities) often cut across morpheme boundaries (lexical/syntactic entities).

(6) a. Phonological: [or+ga+ni] [za+tion]
b. Morphological: [[[organ]iz]ation]

English articles form a phonological unit with the next word (i.e., they cliticize), whether or not they form a syntactic constituent with it.

(7) a. Phonological: [abig] [house], [avery] [big] [house]
b. Syntactic: [[a] [[big] [house]]], [[a] [[[very] big] [house]]]

And intonational phrasing cuts across syntactic phrase boundaries.

(8) a. Phonological: [this is the cat] [that ate the rat] [that ate the cheese]
b. Syntactic: [this is [the cat [that [ate [the rat [that [ate [the cheese]]]]]]]]

Consequently, the constituent structure of PS cannot be produced by simply erasing syntactic boundaries. Nor can it be produced by any standardly sanctioned syntactic operation. To make this point clearer, consider the unit [abig] in (7a). There seems little sense in trying to identify its syntactic category, in trying to decide, for example, whether it is a determiner, an adjective, or a noun, since as a unit it has no identifiable syntactic properties: the determiner can't lower to the adjective position, and the adjective can't undergo head-to-head movement to the determiner (or if it can, the adverb in a very big house can't). Rather, [abig] is just a phonological word, a nonsyntactic unit.
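The point that phonological bracketing is not syntactic bracketing minus boundaries can be checked mechanically. The sketch below is my own illustration (the nested-list encoding of (7) and the helper name are assumptions): it collects the multi-word constituents of each analysis and shows that they differ.

```python
def spans(tree):
    """Return the set of multi-word constituents (as word tuples) in a
    nested-list bracketing."""
    result = set()

    def walk(node):
        if isinstance(node, str):
            return (node,)
        words = tuple(w for child in node for w in walk(child))
        result.add(words)
        return words

    walk(tree)
    return result

# (7b) syntactic bracketing: [[a] [[big] [house]]]
syntactic = ["a", ["big", "house"]]
# (7a) phonological bracketing: [abig] [house]
phonological = [["a", "big"], "house"]

# The two analyses share the whole phrase but disagree on subconstituents,
# so no boundary-erasing operation can turn one into the other.
print(spans(syntactic))     # {('big', 'house'), ('a', 'big', 'house')}
print(spans(phonological))  # {('a', 'big'), ('a', 'big', 'house')}
```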

Similarly, intonational phrases such as those in (8a) cannot always be identified with syntactic units such as NP, VP, or CP. For example, the italicized intonational phrases in (9) are not complete syntactic constituents (capitals indicate focal stress).

(9) What did Bill think of those DOGS? [Well], [he very much LIKED] [the collie and the SPANiel], [but he didn't much CARE] [for the boxer and the GREYhound].

Consider also the sentence in (10), marked with two different intonational phrasings (both of which are attested on television).

(10) a. [Sesame Street] [is a production] [of the Children's Television Workshop].
b. [Sesame Street is a production of] [the Children's Television Workshop].

Not only are there constituents here that are not syntactic constituents; there is no detectable semantic difference between the two versions. Still, intonational phrasing is not entirely arbitrary with respect to syntax: an intonation break cannot be felicitously added after Children's in (10b), for instance. The upshot is that intonational structure is constrained by syntactic structure but not derived from it. Rather, some of its aspects are characterized by autonomous phonological principles whose structural descriptions make no reference to syntax, for instance a general preference for the intonational phrases of a sentence to be approximately equal in length (see, for instance, Gee and Grosjean 1983; Selkirk 1984; Jackendoff 1987a, appendix A; Hirst 1993). Although it is true that intonational phrasing may affect the interpretation of focus (which is often taken to be syntactic), such phrasing, as we have just seen, is only partially determined by syntactic structure and may cut across syntactic constituency.

More generally, there appears to be some consensus that phonological rules cannot refer directly to syntactic categories or syntactic constituency; rather, they refer to prosodic constituency. (See, among others, Zwicky and Pullum 1983 and the papers in Inkelas and Zec 1990.) Conversely, syntactic rules do not refer to phonological domains or to the phonological content of words.5

Stepping back from the details, what have we got? It appears that phonological structure and syntactic structure are independent formal systems, each of which provides an exhaustive analysis of sentences in its own terms.

1. The generative system for phonological structure (PS) contains such primitives as phonological distinctive features, the notions of syllable, word, and phonological and intonational phrase, and the notions of stress, tone, and intonation contour. The principles of phonological combination include rules of syllable structure, vowel harmony, stress assignment, and so forth.

2. The generative system for syntactic structure (SS) contains such primitives as the syntactic categories N, V, A, P (or their feature decompositions) and functional categories (or features) such as Number, Gender, Person, Case, and Tense. The principles of syntactic combination include the principles of phrase structure (X-bar theory or Bare Phrase Structure or some equivalent), the principles of agreement and case marking, the principles of long-distance dependencies, and so forth.

Each of these requires a generative grammar, a source of "discrete infinity" in precisely Chomsky's sense, and neither can be reduced to the other. Given the coexistence of these two independent analyses of sentences, it is a conceptual necessity that the grammar contain a set of correspondence rules that mediate between syntactic and phonological units. Such rules must be of the general form given in (11), pairing a structural description in syntactic terms with one in phonological terms.

(11) General form of phonological-syntactic correspondence rules (PS-SS rules)
Syntactic structure X {must/may/preferably does} correspond to phonological structure Y.

Two simple examples of such rules are given in (12).

(12) a. A syntactic X0 constituent preferably corresponds to a phonological word.
b. If syntactic constituent X1 corresponds to phonological constituent Y1, and syntactic constituent X2 corresponds to phonological constituent Y2, then the linear order of X1 and X2 preferably corresponds to the linear order of Y1 and Y2.

These rules contain the modality "preferably" because they are subject to exception.
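Before turning to the exceptions, it may help to see the default rule (12b) operationally. The following checker is my own illustration (the encoding of constituents as linear positions is an assumption, not the book's formalism): it flags order reversals without treating them as fatal, which is exactly what the modality "preferably" demands.

```python
def order_preserved(pairs):
    """Check rule (12b): for corresponded constituents, syntactic linear
    order should (preferably) match phonological linear order.

    `pairs` is a list of (syntactic_position, phonological_position)
    tuples for constituents already matched by other PS-SS rules.
    Returns the pairs of constituents that violate the default.
    """
    violations = []
    for i in range(len(pairs)):
        for j in range(i + 1, len(pairs)):
            (s1, p1), (s2, p2) = pairs[i], pairs[j]
            # Order is preserved iff the two comparisons agree in direction.
            if (s1 < s2) != (p1 < p2):
                violations.append((pairs[i], pairs[j]))
    return violations

# Faithful mapping: no violations.
print(order_preserved([(1, 1), (2, 2), (3, 3)]))   # -> []

# Serbo-Croatian-style clitic: syntactically initial (position 1) but
# phonologically placed after the first phonological word (position 2).
print(order_preserved([(1, 2), (2, 1), (3, 3)]))   # -> one violation
```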

(12a), which asserts the (approximate) correspondence of syntactic to phonological words, has exceptions in both directions. Clitics such as a in (7) are syntactic words that are not phonological words and so must adjoin to adjacent phonological words. Conversely, compounds such as throttle handle are syntactic words, X0 categories that act as single syntactic units but that consist of more than one phonological word.

(12b), which asserts the correspondence of linear order in phonological and syntactic structure, is normally taken to be an absolute rather than a default principle; that is why earlier versions of syntax were able to assume that phonological structure results from syntactic bracket erasure, preserving order. However, there prove to be certain advantages in allowing (12b) to have exceptions forced by other principles. I have in mind clitic placement in those dialects of Serbo-Croatian where the clitic cluster can follow the first phonological constituent, as in (13) (from Halpern 1995, 4).

(13) Taj je covek svirao klavir.
that AUX man played piano

Here the auxiliary appears in PF between the determiner and the head noun of the subject, syntactically an absurd position. Halpern (1995) and Anderson (1995) suggest that such clitics are syntactically initial but require phonological adjunction to a host on their left; the resulting placement in PF is the optimal compromise possible between their syntactic position and rule (12b).

This overall conception of the phonology-syntax interface leads to the following methodology: whenever we find a phonological phenomenon in which syntax is implicated (or vice versa), a PS-SS correspondence rule of the form (11) must be invoked. Such considerations are essentially what we see going on in discussions of the phonology-syntax interface such as those cited above, though they are rarely couched in the present terms.

In principle, PS-SS correspondence rules might relate anything phonological to anything syntactic, but in fact they do not. For example, syntactic rules never depend on whether a word has two versus three syllables (as stress rules do), and phonological rules never depend on whether one phrase is c-commanded by another (as syntactic rules do). That is, many aspects of phonological structure are invisible to syntax and vice versa. UG must delimit the PS-SS correspondence rules, then, in addition to the rules of phonology proper and the rules of syntax proper. The empirical problem, of course, is to find the most constrained possible set of PS-SS rules that both properly describes the facts and is learnable.

Returning to the initial question of this section, we conclude that PF is part of phonological structure, and therefore it is not a level of (narrow) syntactic structure. (Narrow) syntax, then, has no direct interface with the articulatory-perceptual system; rather, it interfaces with phonological structure, which in turn interfaces with articulation and perception.

In trying to formulate a theory of (narrow) syntax (i.e., the organization of NPs and VPs), we may ask what point in the derivation of syntactic structure is accessible to the PS-SS correspondence rules. I will use the term Phonological Interface Level of syntax (PILss) for this level of syntax. (This presumes there is only a single syntactic level that interfaces with phonology; leaving the question open for now, chapter 4 will show why it is unlikely that there is more than one.) In other words, on the phonological side there has to be a particular level of phonological structure that is available to the PS-SS correspondence rules. Let us call this the Syntactic Interface Level of phonology (SILps). The full phonology-syntax interface, then, consists of PILss, SILps, and the PS-SS correspondence rules.

In Chomsky's exposition of the Minimalist Program, the closest counterpart of this interface is the operation Spell-Out, the analogue of the PS-SS correspondence rules in the present proposal. The idea is that "before" Spell-Out (i.e., on the (narrow) syntactic side of a derivation, or what Chomsky calls the overt syntax), phonological features are unavailable to rules of grammar. Spell-Out strips off the features of phonological relevance and delivers them to "the module Morphology," which "constructs wordlike units that are then subjected to further phonological processes . . . , and which eliminates features no longer relevant to the computation" (Chomsky 1995, 229). The mapping to PF accomplishes the transition; "in particular," it "eliminates formal [i.e., syntactic] and semantic features" (Chomsky 1995, 230). That is, phonological features are accessible only to phonology-morphology, which in turn cannot access syntactic and semantic features.

From the present perspective, the main trouble with Spell-Out, beyond its vagueness, is that it fails to address the independent generative capacity of phonological structure: the principles of syllable and foot structure, the existence and correlation of multiple tiers, and the recursion of phonological constituency that is in part orthogonal to syntactic constituency, as seen in (6)-(10). When the phonological component is invested with an independent generative capacity, this problem drops away immediately.

2.3 The "Conceptual-Intentional" Interface

Next let us consider the "conceptual-intentional" system. I take it that Chomsky intends by this term a system of mental representations, not part of language per se, in terms of which reasoning, planning, and the forming of intentions takes place. Let us provisionally call this system of representations conceptual structure (CS); further differentiation will develop in section 2.6.

I will suppose, following earlier work (Jackendoff 1983, 1990), that its units are such entities as conceptualized physical objects, events, properties, times, and quantities, and that it is also responsible for the understanding of sentences in context, incorporating pragmatic considerations and "encyclopedic" or "world" knowledge. These entities (or their counterparts in other semantic theories) are always assumed to interact in a formal system that mirrors in certain respects the hierarchical structure of (narrow) syntax. In particular, unlike syntactic and phonological structures, conceptual structures are (assumed to be) purely relational, in the sense that linear order plays no role: where (narrow) syntax has structural relations such as head-to-complement, head-to-specifier, and head-to-adjunct, conceptual structure has structural relations such as predicate-to-argument, category-to-modifier, and quantifier-to-bound variable. Thus, although conceptual structure undoubtedly constitutes a syntax in the generic sense, it is not syntax in the narrow sense: whatever we know about this system, we know it is not built out of nouns and verbs and adjectives; its units are not NPs, VPs, etc., and its principles of combination are not those used to combine NPs, VPs, etc.7

Conceptual structure must provide a formal basis for rules of inference and the interaction of language with world knowledge. I think it safe to say that anyone who has thought seriously about the problem realizes that these functions cannot be defined using structures built out of standard syntactic units. For example, one of the most famous of Chomsky's observations is that sentence (14) is perfect in terms of (narrow) syntax and that its unacceptability lies in the domain of conceptualization.

(14) Colorless green ideas sleep furiously.

Similarly, the validity of the inferences expressed in (15) and the invalidity of those in (16), which are parallel, is not a consequence of their syntactic structures; it must be a consequence of their meaning.

(15) a. Fritz is a cat.
Therefore, Fritz is an animal.
b. Bill forced John to go.
Therefore, John went.

(16) a. Fritz is a doll.
#Therefore, Fritz is an animal.
b. Bill encouraged John to go.
#Therefore, John went.

If conceptual structure, the form of mental representation in which contextualized interpretations of sentences are couched, is not made out of syntactic units, it is a conceptual necessity that the theory of language include an interface between syntax and the conceptual-intentional system: the SS-CS correspondence rules. These rules mediate between syntactic units and conceptual units; their general form can be sketched as in (17).8

(17) General form of SS-CS correspondence rules
Syntactic structure X {must/may/preferably does} correspond to conceptual structure Y.

The theory of the SS-CS interface must also designate one or more levels of syntactic structure as the Conceptual Interface Level(s) or CILss, and one or more levels of conceptual structure as the Syntactic Interface Level(s) or SILcs. Early versions of the Extended Standard Theory such as Chomsky 1972 and Jackendoff 1972 proposed multiple CILs in the syntax. However, a simpler hypothesis is that there is a single point in the derivation of syntactic structure that serves as the interface level with conceptual structure. This is essentially Chomsky's (1993) assumption, and I will adopt it here, subject to disconfirming evidence.

Chomsky identifies the level of language that interfaces with the conceptual-intentional system as Logical Form (LF). Thus, in our more detailed exposition of the interface, LF plays the role of the CILss. In his earlier work at least, Chomsky explicitly dissociates LF from inference, making clear that it is a level of (narrow) syntax and not conceptual structure itself: "Determination of the elements of LF is an empirical matter not to be settled by a priori reasoning or some extraneous concern, for example codification of inference" (Chomsky 1980, 143; an almost identical passage occurs in Chomsky 1986, 205). In part, Chomsky is trying to distance himself here from certain traditional concerns of logic and formal semantics, an impulse I share (Jackendoff 1983, chapters 3, 4, for instance). Still, there has to be some level of mental representation at which inference is codified.

Contra one way of reading this quotation from Chomsky, I take this to be conceptual structure. (Incidentally, it is clear that speaker intuitions about inference are as "empirical" as intuitions about grammaticality; consequently, theories of a level that codifies inference are potentially as empirically based as theories of a level that codifies grammaticality.)

I agree with Chomsky that, although conceptual structure is what language expresses, it is not strictly speaking a part of the language faculty. Rather, I take conceptual structure to be a central cognitive level of representation, interacting richly with other central cognitive capacities (of which more in section 2.6 and chapter 8). Language is not necessary for the use of conceptual structure: it is possible to imagine nonlinguistic organisms such as primates and babies using conceptual structures as part of their encoding of their understanding of the world. Conceptual structure is language independent and can be expressed in a variety of ways, partly depending on the syntax of the language in question. On the other hand, the SS-CS correspondence rules are part of language: if there were no language, such rules would have no point; it would be like ending a bridge in midair over a chasm.

There has been constant pressure within linguistic theory to minimize the complexity of the SS-CS interface. Generative Semantics sought to encode aspects of lexical meaning in terms of syntactic combination (deriving kill from the same underlying syntactic structure as cause to become not alive, to cite the most famous example (McCawley 1968b)). Montague Grammar (and more generally, Categorial Grammar) seeks to establish a one-to-one relation between principles of syntactic and semantic combination. More recently, GB often appeals to Baker's (1988) Uniformity of Theta Assignment Hypothesis (UTAH), which proposes a one-to-one relation between syntactic configurations and thematic roles (where a thematic role is a conceptual notion: an Agent, for instance, is a character playing a role in an event), and the field is witnessing a revival of the syntacticization of lexical semantics (e.g. Hale and Keyser 1993).

In my opinion the evidence is mounting that the SS-CS interface has many of the same "sloppy" characteristics as the PS-SS interface, a topic to which we return in chapter 5. First of all, it is widely accepted that syntactic categories do not correspond one to one with conceptual categories. All physical object concepts are expressed by nouns, but not all nouns express physical object concepts (consider earthquake, concert, place, laughter, justice). All verbs express event or state concepts, but not all event or state concepts are expressed by verbs (earthquake and concert again).

Prepositional phrases can express places (in the cup), times (in an hour), or properties (in luck). Adverbs can express manners (quickly), attitudes (fortunately), or modalities (probably). Thus the mapping from conceptual category to syntactic category is many-to-many, though with interesting skewings that probably enhance learnability. (18) schematizes some of the possibilities (see Jackendoff 1983, chapters 3 and 4, for justification of this ontology in conceptual structure).

(18) N: Object (dog), Situation (concert), Place (region), Time (Tuesday)
V: Situation (Events and States)
A: Property
P: Place (in the house), Time (on Tuesday), Property (in luck)
Adv: Manner (quickly), Attitude (fortunately), Modality (probably)

Second, the syntactic reflexes of lexical meaning differences are relatively coarse. As far as syntax can see, in the familiar European languages cat and dog are indistinguishable singular count nouns, and eat and drink are indistinguishable verbs that take an optional object. Much of the conceptual material bundled up inside a lexical item, other than argument structure, is invisible to syntax, just as phonological features are. (See Pinker 1989 for much data and elaboration.)

Third, some syntactic distinctions are related only sporadically to conceptual distinctions. Consider grammatical gender. Grammatical gender classes are a hodgepodge of semantic classes, phonological classes, and brute force memorization. But from the point of view of syntax, we want to be able to say that, no matter for what crazy reason a noun happens to be feminine gender, it triggers the very same agreement phenomena in its modifiers and predicates.9

Fourth, a wide range of conceptual distinctions can be expressed in terms of apparently identical syntactic structure. For instance, the syntactic position of direct object can express the thematic roles Theme, Goal, Source, Beneficiary, or Experiencer, depending on the verb (Jackendoff 1990, chapter 11); some of these NPs double as Patient as well.

(19) a. Emily threw the ball. (object = Theme/Patient)
b. Joe entered the room. (object = Goal)
c. Emma emptied the sink. (object = Source/Patient)
d. George helped the boys. (object = Beneficiary)
e. The story annoyed Harry. (object = Experiencer)
f. The audience applauded the clown. (object = ??)

To claim dogmatically that these surface direct objects must all have different underlying syntactic relations to the verb, as required by UTAH, necessarily results in increasing unnaturalness of underlying structures and derivations. We return to this issue in the next section.

Fifth, a wide range of syntactic distinctions can signal the very same conceptual relation. For instance, as is well known (Verkuyl 1972; Dowty 1979; Declerck 1979; Hinrichs 1985; Krifka 1992; Tenny 1994; Jackendoff 1991, 1996e), the telic/atelic distinction (also called the temporally delimited/nondelimited distinction) can be syntactically differentiated through choice of verb (20a), choice of preposition (20b), choice of adverbial (20c), choice of determiner in subject (20d), object (20e), or prepositional object (20f).

(20) a. John destroyed the cart (in/*for an hour). (telic)
John pushed the cart (for/*in an hour). (atelic)
b. John ran to the station (in/*for an hour). (telic)
John ran toward the station (for/*in an hour). (atelic)
c. The light flashed once (in/*for an hour). (telic)
The light flashed constantly (for/*in an hour). (atelic)
d. Four people died (in/*for two days). (telic)
People died (for/*in two days). (atelic)
e. John ate lots of peanuts (in/*for an hour). (telic)
John ate peanuts (for/*in an hour). (atelic)
f. John crashed into three walls (in/*for an hour). (telic)
John crashed into walls (for/*in an hour). (atelic)

So far as I know, there have been no successful attempts to find a uniform underlying syntactic form that yields this bewildering variety of surface forms; rather, it is nearly always assumed that this phenomenon involves a somewhat complex mapping between syntactic expression and semantic interpretation.10

In short, the mapping between syntactic structure and conceptual structure is a many-to-many relation between structures made up of different sets of primitive elements. Therefore it cannot be characterized derivationally, that is, by a multistep mapping, each step of which converts one syntactic structure into another. As in the case of the PS-SS interface, the empirical problem is to juggle the content of syntactic structure, conceptual structure, and the correspondence rules between them, in such a way as to properly describe the interaction of syntax and semantics and at the same time make the entire system learnable.
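As a concrete illustration of this many-to-many character, the category schema in (18) above can be tabulated and inverted. The encoding below is mine (a plain dictionary, not a claim about the actual correspondence rules); it simply verifies that both directions of the mapping are one-to-many.

```python
# Syntactic category -> conceptual categories it can express, per (18).
expresses = {
    "N":   {"Object", "Situation", "Place", "Time"},
    "V":   {"Situation"},
    "A":   {"Property"},
    "P":   {"Place", "Time", "Property"},
    "Adv": {"Manner", "Attitude", "Modality"},
}

# Invert the table: conceptual category -> syntactic categories.
expressed_by = {}
for syn, concepts in expresses.items():
    for c in concepts:
        expressed_by.setdefault(c, set()).add(syn)

# Both directions are one-to-many, so the whole mapping is many-to-many.
print(max(len(v) for v in expresses.values()) > 1)      # True: N covers four
print(max(len(v) for v in expressed_by.values()) > 1)   # True: Time is N or P
```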

2.4 Embedding Mismatches between Syntactic Structure and Conceptual Structure

Let me illustrate the way expressive power can be juggled between syntax and the SS-CS interface. Perhaps the most basic principle of the SS-CS correspondence is the approximate preservation of embedding relations among maximal syntactic phrases. This may be stated as (21).

(21) If syntactic maximal phrase X1 corresponds to conceptual constituent Z1, and syntactic maximal phrase X2 corresponds to conceptual constituent Z2, then, iff X1 contains X2, Z1 preferably contains Z2.

A subcase of (21) is the standard assumption that semantic arguments of a proposition normally appear as syntactic arguments (complements or specifier) of the clause expressing that proposition. This leaves open which semantic argument corresponds to which syntactic position. Do Agents always appear in the same syntactic position? Do Patients? Do Goals? UTAH and the Universal Alignment Hypothesis of Relational Grammar (Rosen 1984; Perlmutter and Postal 1984) propose that this mapping is biunique, permitting the SS-CS connection to be absolutely straightforward.11 On this view, the consequence is that the sentences in (19) must be syntactically differentiated at some underlying level in order to account for their semantic differences. Their superficial syntactic uniformity, then, requires complexity in the syntactic derivation. On the other hand, various less restrictive views of argument linking, including those of Carter (1976), Anderson (1977), Ostler (1979), Marantz (1984), Foley and Van Valin (1984), Carrier-Duncan (1985), Bresnan and Kanerva (1989), Grimshaw (1990), Jackendoff (1990, chapter 11), and Dowty (1991), appeal to some sort of hierarchy of semantic argument types that is matched with a syntactic hierarchy such as subject-direct object-indirect object. On this view, the matching of hierarchies is a function of the SS-CS rules; there need be no syntactic distinction among the sentences in (19), despite the semantic differences. Hence syntax is kept simple and the complexity is localized in the correspondence rules.
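As a way of seeing what the default rule (21) does and does not enforce, here is a hypothetical checker; all names and the toy raising configuration, anticipating (22) below, are mine. It compares syntactic and conceptual containment for corresponded constituents and reports where they diverge.

```python
def embedding_mismatches(corr, syn_contains, cs_contains):
    """Default rule (21), schematically: for corresponded maximal
    phrases, syntactic containment and conceptual containment should
    (preferably) coincide. Returns the pairs where they do not."""
    mismatches = []
    phrases = list(corr)
    for x1 in phrases:
        for x2 in phrases:
            if x1 == x2:
                continue
            if syn_contains(x1, x2) != cs_contains(corr[x1], corr[x2]):
                mismatches.append((x1, x2))
    return mismatches

# Toy raising configuration: the NP "Bill" sits in the main clause in
# syntax, while its conceptual correlate BILL sits inside the embedded
# proposition. Containment relations are given transitively closed.
corr = {"S_main": "Prop_seem", "NP_Bill": "BILL", "S_embedded": "Prop_nice"}
syn = {("S_main", "NP_Bill"), ("S_main", "S_embedded")}
cs = {("Prop_seem", "Prop_nice"), ("Prop_seem", "BILL"),
      ("Prop_nice", "BILL")}
print(embedding_mismatches(corr,
                           lambda a, b: (a, b) in syn,
                           lambda a, b: (a, b) in cs))
# -> [('S_embedded', 'NP_Bill')]: the embedded clause does not contain
#    Bill syntactically, although the embedded proposition contains BILL
#    conceptually; this is the mismatch that either a raising rule or an
#    overriding correspondence principle must encode.
```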

The mapping of argument structure is not the only place where there is trade-off between complexity in the syntax and complexity in the correspondence rules. Traditional generative grammar has assumed that (21) is not a default rule but a rigid categorical rule (i.e., preferably is omitted); much of the tradition developed in response to apparent counterexamples. Let us briefly consider a few progressively more problematic examples.

First, a standard argument, part of the canon, is that (22a) must be derived from an underlying syntactic structure something like (22b), so that Bill can appear within the subordinate clause in consonance with its role as an argument of the subordinate proposition.

(22) a. [Bill seems [to be a nice fellow]].
b. [e seems [Bill to be a nice fellow]]

The traditional solution indeed simplifies the SS-CS interface, but at the cost of adding complexity to syntax (namely the application of a raising rule). The alternative instead simplifies the syntax by "base-generating" (22a); the cost is an additional principle in the correspondence rules. However, (21) as stated is also consistent with the sort of analysis adopted within the LFG and HPSG traditions: Bill simply appears syntactically outside the clause expressing the embedded proposition, and it is associated with the embedded proposition by a principle that overrules (21). At the moment I am not advocating one of these views over the other: the point is only that they are both reasonable alternatives.

Similarly, traditional generative grammar has held that the final relative clause in (23a) is extraposed from an underlying structure like (23b).

(23) a. A man walked in who was from Philadelphia.
b. [A man who was from Philadelphia] walked in.

Alternatively, (23a) could be "base-generated," and the relative clause could be associated with the subject by a further correspondence rule that overrides the default case (21), in consonance with its being a modifier of man. (See Culicover and Rochemont 1990 for one version of this.) That the latter view has some merit is suggested by the following example, of a type pointed out as long ago as Perlmutter and Ross 1970:

(24) A man walked in and a woman walked out who happened to look sort of alike.

Here the relative clause cannot plausibly be hosted in underlying structure by either of the noun phrases to which it pertains (a man and a woman), because it contains a predicate that necessarily applies to both jointly.

Earlier work in transformational grammar (e.g. Ross 1967) did not hesitate to derive sentences like those in (25) by lowering the italicized phrase into its surface position from the position dictated by its sense.

(25) a. An occasional sailor walked by. (≈ Occasionally a sailor walked by.)
b. Norbert is, I think, a genius. (≈ I think Norbert is a genius.)

More recent traditions, in which lowering is anathema, have been largely silent on such examples. Again, it is possible to imagine a correspondence rule that accounts for the semantics without any syntactic movement. (22a) and (23a) are cases where a syntactic constituent is "higher" than it should be for its semantic role; (25a,b) are cases where the italicized syntactic constituent is "lower" than it should be. The point in each case is that some mismatch between meaning and surface syntax exists, and that the content of this mismatch must be encoded somewhere in the grammar, either in syntactic movement or in the SS-CS correspondence. Many more such situations will emerge in chapter 3. This is not the point at which these alternatives can be settled; I present them only as illustrations of how the balance of expressiveness can be shifted between the syntax proper and the correspondence rules.

2.5 The Tripartite Parallel Architecture

Let us now integrate the four previous sections. Given the distinctness of auditory, motor, syntactic, phonological, and conceptual information, the traditional hypothesis of the autonomy of syntax amounts to the claim that syntactic rules have no access to nonsyntactic features except via an interface. We can expand this claim, and see phonology and conceptual structure as equally autonomous generative systems. We can regard a full grammatical derivation, then, as three independent and parallel derivations, one in each component, with the derivations imposing mutual constraints through the interfaces. The grammatical structure of a sentence can be regarded as a triple, (PS, SS, CS). Following Chomsky (1993), we can think of a (narrow) syntactic derivation as "converging" if it can be mapped through the interfaces into a well-formed phonological structure and conceptual structure; it "crashes" if no such mapping can be achieved.13
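The triple and the convergence criterion can be summarized schematically. The sketch below is my own rendering; the type signatures and function names are illustrative, not the author's or Chomsky's formalism.

```python
from typing import Callable, NamedTuple, Optional

class GrammaticalStructure(NamedTuple):
    """The triple (PS, SS, CS): one structure per parallel component."""
    ps: object   # phonological structure
    ss: object   # syntactic structure
    cs: object   # conceptual structure

def converges(ss: object,
              to_ps: Callable[[object], Optional[object]],
              to_cs: Callable[[object], Optional[object]],
              ps_ok: Callable[[object], bool],
              cs_ok: Callable[[object], bool]) -> bool:
    """A (narrow) syntactic derivation converges if the interfaces can
    map it onto a well-formed PS and a well-formed CS; it crashes if no
    such mapping can be achieved."""
    ps, cs = to_ps(ss), to_cs(ss)
    return (ps is not None and ps_ok(ps)
            and cs is not None and cs_ok(cs))
```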

As promised in chapter 1, this conception abandons Chomsky's syntactocentric assumption (assumption 6), in that phonology and semantics are treated as generative completely on a par with syntax. A similar conception of parallel, mutually constraining derivations is found in Autolexical Syntax (Sadock 1991), though it deals primarily only with the phonology-syntax interface. Shieber and Schabes (1991) likewise set up parallel derivations in a Tree-Adjoining Grammar formalism, but they deal only with the syntax-semantics interface.12 What I think is relatively novel here is looking at both interfaces at once, observing that (1) they have similar structure, and (2) as will be seen in chapter 4, we must also invoke what is in effect a direct interface from phonology to semantics.

Figure 2.1 sketches the layout of such a grammar. In addition, phonological structure is connected through other sets of correspondence rules with auditory and motor information.

[Figure 2.1 The tripartite parallel architecture. Phonological formation rules generate phonological structures, syntactic formation rules generate syntactic structures, and conceptual formation rules generate conceptual structures; PS-SS correspondence rules link the first two levels, and SS-CS correspondence rules link the last two.]

The syntactocentric architecture can be seen as a special case of figure 2.1, one in which phonological and conceptual structures have no properties of their own, in which all their properties are derivable from syntax, and the correspondence rules are the only source of properties of phonological and conceptual structure. On such a view, the phonological and conceptual formation rules in figure 2.1 are in effect vacuous. However, we have seen in sections 2.2 and 2.3 that phonological and conceptual structures do have properties of their own, so we must abandon the syntactocentric picture.

For fun, suppose instead that syntactic structures had no properties of their own, and that all their properties were derivable from phonology and conceptual structure.

Then the syntactic formation rules in figure 2.1 would be in effect vacuous, and we would have a maximally minimalist syntax, the sort of syntax that the Minimalist Program claims to envision. However, as we have seen, there are certainly properties of syntactic structure that cannot be reduced to phonology or semantics: syntactic categories such as N and V, language-specific word order, case marking, and agreement properties, the presence or absence in a language of wh-movement and other long-distance dependencies, and so on. So syntactic formation rules are empirically necessary as well, and the parallel tripartite model indeed seems justified.

We have also abandoned the assumption (assumption 1) that the relation among phonology, syntax, and semantics can be characterized as derivational. There may well be derivations in the formation rules for each of these components, but between the components lie correspondence rules, which are of a different formal nature.

The notion of correspondence rules has led some colleagues to object to the tripartite model: "Correspondence rules are too unconstrained; they can do anything." What we have seen here, though, is that correspondence rules are conceptually necessary in order to mediate between phonology, syntax, and meaning. It is an unwarranted assumption that they are to be minimized and that all expressive power lies in the generative components. Rather, as stressed in chapter 1, the issue that constantly arises is one of balance of power among components. Since the correspondence rules are part of the grammar of the language, they must be acquired by the child, and therefore they fall under all the arguments for UG; correspondence rules, like syntactic and phonological rules, must be constrained so as to be learnable. Thus their presence in the architecture does not change the basic nature of the theoretical enterprise. (For those who wish to see correspondence rules in action, much of Jackendoff 1990 is an extended exposition of the SS-CS correspondence rules under my theory of conceptual structure.)

Other colleagues have found it difficult to envision how a tripartite parallel model could serve in a processing theory. But the same can be said of the syntactocentric model, which is recognized not to be a model of processing: imagine starting to think of what to say by generating a syntactic structure. So this should be no reason to favor the syntactocentric model. In section 4.5, after I have built the lexicon into the architecture, I will sketch how the tripartite architecture might serve in processing.

2.6 Representational Modularity

This way of looking at the organization of grammar fits naturally into a larger hypothesis of the architecture of mind that might be called Representational Modularity (Jackendoff 1987a, chapter 12; 1992a, chapter 1). The overall idea is that the mind/brain encodes information in some finite number of distinct representational formats or "languages of the mind." Each of these "languages" is a formal system with its own proprietary set of primitives and principles of combination. For each of these formats, there is a module of mind/brain responsible for it.

For example, phonological structure and syntactic structure are distinct representational formats, with distinct and only partly commensurate primitives and principles of combination. Representational Modularity therefore posits that the architecture of the mind/brain devotes separate modules to these two encodings. Each of these modules is domain specific (phonology and syntax respectively), and (with certain caveats to follow shortly) each is informationally encapsulated in Fodor's (1983) sense. Representational modules differ from Fodorian modules in that they are individuated by the representations they process rather than by their function as faculties for input or output; they are at the scale of individual levels of representation, rather than being an entire faculty such as language perception. The generative grammar for each "language of the mind," then, is a formal description of the repertoire of structures available to the corresponding representational module.

A conceptual difficulty with Fodor's account of modularity is that it fails to address how modules communicate with each other and how they communicate with Fodor's central, nonmodular cognitive core. In particular, Fodor claims that the language perception module derives "shallow representations", some form of syntactic structure, and that the central faculty of "belief fixation" operates in terms of the "language of thought," a nonlinguistic encoding. But Fodor does not say how "shallow representations" are converted to the "language of thought," as they must be if linguistic communication is to affect belief fixation.

In effect, the language module is so domain specific and informationally encapsulated that nothing can get out of it to serve cognitive purposes.14

The theory of Representational Modularity addresses this difficulty by positing, in addition to the representation modules proposed above, a system of interface modules. An interface module communicates between two levels of encoding, say L1 and L2, by carrying out a partial translation of information in L1 form into information in L2 form (or, better, by imposing a partial homomorphism between L1 and L2 information).15 In the formal grammar, the correspondence rule component that links L1 and L2 can be taken as a formal description of the repertoire of partial translations accomplished by the L1-L2 interface module; the tripartite architecture in figure 2.1 represents each module as a rule component.

An interface module, like a Fodorian module, is domain specific: the phonology-to-syntax interface module, for instance, knows only about phonology and syntax, not about visual perception or general-purpose audition. Such a module is also informationally encapsulated: the phonology-to-syntax module dumbly takes whatever phonological inputs are available in the phonology representation module, maps the appropriate parts of them into syntactic structures, and delivers them to the syntax representation module, with no help or interference from, say, beliefs about the social context. More generally, interface modules communicate information from one form of representation only to the next form up- or downstream. In short, the communication among languages of the mind, like the languages themselves, is mediated by modular processes. A faculty such as the language faculty can then be built up from the interaction of a number of representational modules and interface modules.16 The language faculty is embedded in a larger system with the same overall architecture.

Consider the phonological end of language. Phonetic information is not the only information derived from auditory perception; rather, the speech signal seems to be delivered simultaneously to four or more independent interfaces: in addition to phonetic perception (the audition-to-phonology interface), it is used for voice recognition, affect (tone-of-voice) perception, and general-purpose auditory perception (birds, bells, thunder, etc.). Each of these processes converges on a different representational "language", a different space of distinctions, and each is subject to independent dissociation in cases of brain damage.
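The division of labor just described can be sketched as follows. This rendering (the class name, the string formats, and the four-way fan-out) is mine, intended only to mirror the informal description above.

```python
class InterfaceModule:
    """An interface module: a partial translation between two fixed
    representational formats."""

    def __init__(self, source_format, target_format, translate):
        self.source_format = source_format    # domain specificity:
        self.target_format = target_format    # one fixed pair of formats
        self._translate = translate           # partial map between them

    def process(self, representation):
        # Informational encapsulation: the module sees only its input
        # representation, never beliefs, context, or other modules' data.
        return self._translate(representation)

# The speech signal fans out to several independent interfaces at once.
auditory_interfaces = [
    InterfaceModule("auditory", "phonology", lambda s: f"phonetic({s})"),
    InterfaceModule("auditory", "voice-id", lambda s: f"speaker({s})"),
    InterfaceModule("auditory", "affect", lambda s: f"tone({s})"),
    InterfaceModule("auditory", "general-audition", lambda s: f"sound({s})"),
]
for m in auditory_interfaces:
    print(m.target_format, "<-", m.process("signal"))
```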

There is some evidence for the same sorts of multiple interactions at the I:ognitive end of language as well . and phonology aspect) takes inputs not only from the auditory interface.I n terfaces. 1 996a. Figure 2. parallel to ! he i n terfaces in terior to the language faculty. and in actions like grasping a pipe in the mouth. in facial expres­ sion. A ud i tory Input � � 43 Vocal tract i llsl ructions � / Syntactic structure (sign language) Visual system Nonl inguistic motor control ( chewing. vocal tract instructions . con­ versely. Landau and Jackendoff 1 993). certain types of visual/spatial information (such as details of shape) cannot b e represented propositionally/linguistically. I . conceptual structure must be linked to all the ot ha sensory modal i ties of which we can speak .) Some Figure 2. posed by Macnamara ( 1 978).2 modules that interact with auditory input. 1 7 Moreover. since vocal tract musculature is also involved in eating. smiling.2 sketches t hese interactions. Barbara Landau and I have elaborated Macnamara's position Uackendoff 1 9 87b.or the same reason. including at least those involved in reading and sign language. each arrow stands for an interface module. Consider the question of how we talk a bout what we see. Macnamara argues that t hc only possible answer is that the brain maps information from a visual/ spatial format into the propositional format here called conceptual struc­ t I I rc . but also from a n umber of different interfaces with the visual system. phonological structure is not the only faculty feeding motor control of the vocal tract. Consequently visual/spatial representation must be encoded in one or more modules d i sti nct from conceptual structure. In lackendoff 1 983 I h y pot hesize that concept ual s t r uct u re is the l i n k among multimodal . We observe that n:rta i n types of conceptual information (such as the type-token distinc­ t ion and type taxonomies) cannot be represented visually/spatially. Representational Modularity General-purpose au d ition Voice recognition Affect recognition Phonological structure (readin g) +---> . and the connection from vision to l i l l l g u a ge must be mediated by one or more interface modules that estab­ h�h a pa rtia l relation between visual and conceptual fonnats. etc.

However (in contradiction to Jackendoff 1983), there is reason to believe that some multimodal information is integrated elsewhere in the system. For instance, the visual system is supposed to derive information about object shape and location from visual inputs. Landau and Gleitman (1985) make the obvious point that the haptic system (sense of touch) also can derive shape and location information, and we can be surprised if vision and touch fail to provide us with the expected correlation. Also, body senses such as proprioception can give us information about the location of body parts, and an integrated picture of how one's body is located in the environment is used to guide actions such as reaching and navigation. Landau and I therefore suggest that the upper end of the visual system is not strictly visual, but is rather a multimodal spatial representation that integrates all information about shape and location (Landau and Jackendoff 1993).18 Figure 2.3 shows all this organization.

[Figure 2.3. Some modules that interact with conceptual structure. Auditory information, visual representation, haptic representation, proprioception, and smell connect to spatial representation and conceptual structure; conceptual structure in turn connects to syntactic structure, emotion, and action.]

Within the visual system itself, there is abundant evidence for similar organization, deriving more from neuropsychology than from a theory of visual information structure (though Marr (1982) offers a distinguished example of the latter). Numerous areas in the brain are now known to encode different sorts of visual information (such as shape, color, and location), and they communicate with each other through distinct pathways. For the most part there is no straight "derivation" from one visual level to another; rather, the information encoded by any one area is influenced by its interfaces with numerous others (see Crick 1994 for an accessible survey).

The idea behind the tripartite architecture emerged in my own thinking from research on musical grammar (Lerdahl and Jackendoff 1983; Jackendoff 1987a, chapter 11). In trying to write a grammar that accounted for the musical intuitions of an educated listener, Lerdahl and I found it impossible to write an algorithm parallel to a transformational grammar that would generate all and only the possible pieces of tonal music with correct structures.

Rather, musical intuition involves a number of independent structures, each of which has its own primitives and principles of combination. Consequently, Lerdahl and I found it necessary to posit separate generative systems for these structures (as well as for two others) and to express the structural coherence of music in terms of the optimal fit of these structures to the musical surface (or sequence of notes) and to each other. For example, the rhythmic organization of music consists of a counterpoint between two independent structures, grouping structure and metrical structure. In the optimal case, the two structures are in phase with each other, so that grouping boundaries coincide with strong metrical beats. But music contains many situations in which the two structures are out of phase, the simplest of which is an ordinary upbeat.19 Figure 2.4 sketches the organization of the musical grammar in terms of representation and interface modules.

[Figure 2.4. Modules of musical grammar. The auditory signal maps to the musical surface, which in turn is related to grouping structure, metrical structure, time-span reduction, and prolongational reduction.]

Following the approach of Representational Modularity, one can envision a theory of "gross mental anatomy" that enumerates the various "languages of the mind" and specifies which ones interface most directly with which others. A "fine-scale mental anatomy" then details the internal workings of individual "languages" and interfaces. Linguistic theory, for example, is a "fine-scale anatomy" of the language representation modules and the interface modules that connect them with each other and with other representations. Combining figures 2.1-2.4 results in a rough sketch of part of such a theory. Ideally, one would hope that "mental anatomy" and brain anatomy will someday prove to be in an interesting relationship.

Representational Modularity is by no means a "virtual conceptual necessity." It is a hypothesis about the overall architecture of the mind, to be verified in terms of not only the language faculty but other faculties as well.20 I therefore do not wish to claim for it any degree of inevitability.

Nevertheless it appears to be a plausible way of looking at how the mind is put together, with preliminary support from many different quarters.

Returning to the issue with which this chapter began: by asking how interfaces work, we have arrived at an alternative architecture for grammar, the tripartite parallel model. This model proves to integrate nicely into a view of the architecture of the mind at large; I take this to be a virtue. By contrast, the syntactocentric architecture of the Minimalist Program is envisioned within a view of language that leaves tacit the nature of interfaces and isolates language from the rest of the mind: recall Chomsky's remark that language might well be unique among biological systems.

Chapter 3

More on the Syntax-Semantics Interface

3.1 Enriched Composition

Let us next focus further on the Conceptual Interface Level (CIL) of (narrow) syntax. As observed in chapter 2, GB identifies this level as Logical Form (LF); it is worth taking a moment to examine the basic conception of LF. Chomsky (1979, 145) says, "I use the expression logical form really in a manner different from its standard sense, in order to contrast it with semantic representation[,] ... to designate a level of linguistic representation incorporating all semantic properties that are strictly determined by linguistic rules." Chomsky emphasizes here that LF, unlike the underlying structures of Generative Semantics, is not supposed to encode meaning per se: other aspects of meaning are left to cognitive, nonlinguistic systems (such as conceptual structure). According to one of the earliest works making detailed use of LF, LF is supposed to encode "whatever properties of syntactic form are relevant to semantic interpretation, those aspects of semantic structure that are expressed syntactically. Succinctly, the contribution of grammar to meaning" (May 1985, 2). This definition is essentially compatible with the definition of CIL: it is the level of syntax that encodes whatever syntactic distinctions are available to (or conversely, expressed by) conceptual distinctions. For example, although LF does encode quantifier scope, it does not encode the lexical meaning differences among quantifiers such as all and some, or three and four; these differences appear only in conceptual structure. That is, LF does not purport to encode those aspects of meaning internal to lexical items.

In later statements Chomsky is less guarded: "PF and LF constitute the 'interface' between language and other cognitive systems, yielding direct representations of sound on the one hand and meaning on the other ..."

(Chomsky 1986, 68). Here LF seems to be identified directly with conceptual structure, the locus of semantic properties. However, looking back to the earlier quotation, we find some signs of the same elision in the phrase "a level of linguistic representation incorporating ... semantic properties." If semantic properties are incorporated in a linguistic level, then in terms of what primitives are the semantic properties of LF to be encoded? Given that LF is supposed to be a syntactic level, the assumption has to be that some semantic properties are encoded in syntactic terms, in terms of configurations of NPs and VPs. But if conceptual structure is the locus of semantic properties, this assumption cannot be strictly true, given that semantic properties do not involve configurations of NPs and VPs at all. Rather, they involve whatever units conceptual structures are built out of, and the differences between syntactic elements and truly conceptual elements (discussed here in section 2.3) have been elided.

So in what sense can we understand Chomsky's statement? It appears that Chomsky, along with many others in the field, is working under a standard (and usually unspoken) hypothesis that I will call syntactically transparent semantic composition (or simple composition for short).1

(1) Syntactically transparent semantic composition
 a. All elements of content in the meaning of a sentence are found in the lexical conceptual structures (LCSs) of the lexical items composing the sentence.
 b. The way the LCSs are combined is a function only of the way the lexical items are combined in syntactic structure (including argument structure).
  i. The internal structure of individual LCSs plays no role in determining how the LCSs are combined.
  ii. Pragmatics plays no role in determining how LCSs are combined.

Under this set of assumptions, composition is guided entirely by syntactic structure: LCSs themselves have no effect on how they compose with each other. Hence one can conceive of syntactic structure as directly mirrored by a coarse semantic structure, one that idealizes away from the internal structure of LCSs. From the point of view of this coarse semantic structure, an LCS can be regarded as an opaque monad. One effect is that the SS-CS interface can be maximally simple, since there are no interactions in conceptual structure among syntactic, lexical, and pragmatic effects.
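For concreteness, hypothesis (1) can be pictured as a toy interpreter in which each LCS is an opaque token and combination is read off the syntactic tree alone. The sketch below is merely illustrative; the tree encoding and the lexicon are invented, and it is offered only so that the enriched alternative introduced later in this chapter can be compared against it.

    # A toy rendering of hypothesis (1): meaning assembled from opaque LCS
    # tokens purely by syntactic configuration. All names are invented.

    LEXICON = {"Bill": "BILL", "slept": "SLEEP"}   # word -> opaque LCS token

    def simple_compose(tree):
        """tree is a word or a (head, complement) pair.

        Note what is absent: no inspection of the internal structure of any
        LCS (condition (1b-i)) and no appeal to pragmatics (condition (1b-ii)).
        """
        if isinstance(tree, str):
            return LEXICON[tree]
        head, complement = tree
        return (simple_compose(head), simple_compose(complement))  # F(X) only

    print(simple_compose(("slept", "Bill")))   # ('SLEEP', 'BILL')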

Under such a conception, we can speak elliptically of syntax "incorporating semantic properties," meaning that syntax precisely mirrors coarse semantic configurations. I suggest that Chomsky's characterization of LF should be thought of as elliptical in precisely this way.

Note, incidentally, that most work in formal semantics shares Chomsky's assumptions on this issue. In particular, the standard treatment of compositionality requires a disambiguated syntax. Moreover, because lexical items are for the most part regarded as semantically undecomposable entities, there can be no interaction between their internal structure and phrasal composition; hence no aspects of a sentence's interpretation can arise from outside the sentence itself. (See Partee 1995 for a summary of these issues in formal semantics.)

The hypothesis of syntactically transparent semantic composition has the virtue of theoretical elegance and constraint. Its effect is to enable researchers to isolate the language capacity, including its contribution to semantics, from the rest of the mind, as befits a modular conception. It can therefore be seen as a potentially positive contribution to psychological theory. However, it is only a hypothesis; whatever its a priori attractiveness, it cannot be sustained. This chapter will suggest that, rather, a more appropriate hypothesis treats simple composition as a default in a wider range of options that I will call enriched composition. (Much of this hypothesis and many of the examples to follow are derived from Pustejovsky 1991a, 1995.)

(2) Enriched composition
 a. The conceptual structure of a sentence may contain, in addition to the conceptual content of its LCSs, other material that is not expressed lexically, but that must be present in conceptual structure either (i) in order to achieve well-formedness in the composition of the LCSs into conceptual structure (coercion, to use Pustejovsky's term) or (ii) in order to satisfy the pragmatics of the discourse or extralinguistic context.
 b. The way the LCSs are combined into conceptual structure is determined in part by the syntactic arrangement of the lexical items and in part by the internal structure of the LCSs themselves (Pustejovsky's cocomposition). In particular, the internal structure of LCSs is not opaque to the principles that compose LCSs into the meaning of the sentence.

According to (2), composition proceeds through an interaction between syntactic structure and the meanings of the words themselves, drawing on the internal structure of LCSs and even on semantic material not present in LCSs at all. Under such a conception, the SS-CS interface is more complex: the effect of syntactic structure on conceptual structure interleaves intimately with the effects of word meanings and pragmatics. In turn, it is impossible to isolate a coarse semantic structure that idealizes away from the details of LCSs and that thus mirrors precisely the contribution of syntax to meaning. One therefore cannot construe LF as a syntactic level that essentially encodes semantic distinctions directly, as the standard conception of LF does.2

A theory of enriched composition represents a dropping of constraints long entrenched in linguistic theory, and syntactically transparent composition is clearly simpler. However, a more constrained theory is only as good as the empirical evidence for it. In sections 3.2-3.6 we will briefly look at several cases of enriched composition. Any one of them alone might be dismissed as a "problem case" to be put off for later. However, the number and variety of different cases that can be attested suggest a larger pattern inconsistent with syntactically transparent composition; generalizations will be lost if we insist on it. If elevated to the level of dogma (or reduced to the level of presupposition), then it is not being treated scientifically, so that no empirical evidence can be brought to bear on it. I am proposing here to treat it as a hypothesis like any other, and to see what other, possibly more valuable, generalizations emerge when it is dropped.

After looking at these cases, we will see how enriched composition bears on problems of binding and quantification. One of the important functions of LF (in fact the original reason it was introduced) is to encode quantifier scope relations. In addition, anaphoric binding in general also falls under the theory of LF, since the theory of quantifier scope includes the binding of variables by quantifiers. In sections 3.7 and 3.8 we will see that certain instances of anaphora and quantification depend on enriched composition. If this is so, they can only be most generally encoded at conceptual structure, a nonsyntactic level; and if the full range of relevant facts is to be encoded in conceptual structure, it misses a generalization to encode a subset of them in syntax as well. This means that the phenomena of binding and quantification that LF is supposed to account for cannot be described in their full generality by means of a syntactic level of representation. The conclusion will be that it serves no purpose to complicate (narrow) syntax by including a syntactic level of LF distinct from S-Structure.

3.2 Aspectual Coercions

3.2.1 Verbal Coercions

A case of enriched composition that has been widely studied (see, e.g., Talmy 1978; Hinrichs 1985; Pustejovsky 1991b; Jackendoff 1991; Verkuyl 1993) concerns sentences like those in (3).

(3) a. The light flashed until dawn.
 b. Bill kept crossing the street.

(3a) carries a sense of repeated flashing not to be found in any of its lexical items. In the analysis of (3a) offered by most of the authors cited above, until semantically sets a temporal bound on an ongoing process; the sentence asserts that this process has continued over some period of time. However, flash does not itself denote an ongoing process. Therefore the sentence is interpreted so as to involve a sequence of flashes, which forms a suitable process for until to bound. The semantic content "sequence" is an aspectual coercion, added by a general principle that is capable of creating aspectual compatibility among the parts of a sentence. (One might propose that flash is vague with respect to whether or not it denotes a single flash. I would deny this, on the grounds that The light flashed does not seem vague on this point.)

(3b) presents a parallel story. The verb keep requires its argument to be an ongoing process, but cross the street is not an ongoing process, because it has a specified endpoint. There are two ways it can be reinterpreted as a process. The first is to construe it as repeated action, like (3a). The other is to conceptually "zoom in" on the action, so that the endpoint disappears from view: one sees only the ongoing process of Bill moving in the direction of the other side of the street. Such a "zooming in" yields an ongoing process, which appears in the second reading of (3b). Thus (3b) is ambiguous with respect to whether Bill crossed the street repeatedly or continued in his effort to cross once. This ambiguity does not appear to arise through any lexical ambiguity in the words of the sentence: cross is not ambiguous between crossing repeatedly and crossing partially. Likewise, it seems odd to localize the ambiguity in keep or -ing, since Bill kept sleeping is not ambiguous. Rather, these readings arise through the interactions of the lexical meanings with each other.

The two coercions just illustrated can be stated roughly as in (4).

(4) a. Repetition
  Interpret VP as [REPETITION OF VP].
 b. Accomplishment-to-process
  Interpret VP as [PROCESSUAL SUBEVENT OF VP].

The idea behind (4) is as follows: when the reading of VP is composed with that of its context, the interpretation of VP alone is not inserted into the appropriate argument position of the context, as would occur in simple composition. Rather, VP is treated as the argument of the special function REPETITION OF X or PROCESSUAL SUBEVENT OF X, and the result is inserted into the argument position of the context. It is the latter function, for example, that makes the accomplishment cross the street aspectually compatible with keep.

Such aspectual coercions do not depend on particular lexical items. For instance, similar repetition with until appears with any point-action verb phrase, such as slap Bill, clap, or cross the border; and it is produced not just with until, but by other "bounding" phrases as well, for instance in an hour. The choice between repetition and partial completion occurs with any gerundive complement to keep that expresses an accomplishment, such as walk to school (but not walk toward school) or eat an apple (but not plain eat); and this choice appears not just in complements of keep but also with stop, continue, and for an hour. As these examples show, the need for aspectual coercion can be determined only after the composition of the VP has taken place. It depends, for example, on whether there is no object, an unbounded object (eat custard), or a delimited object or completed path (to) that renders the entire VP a completed action, or alternatively an incomplete path (toward) that renders the VP an atelic action. Hence the aspectual ambiguity after keep cannot be localized in lexical polysemy.

Aspectual coercions therefore are most generally treated as the introduction of an extra specialized function, interpolated in the course of semantic composition in order to ensure semantic well-formedness, along the lines of (4). More schematically, simple composition would produce a function-argument structure F(X), where F is the function expressed by the syntactic head and X is the argument expressed by the syntactic complement. However, in cases such as (3), X does not serve as a suitable argument for F. Hence the process of composition interpolates a "coercing function" G to create instead the structure F(G(X)), where X is a suitable argument for G, and G(X) is a suitable argument for F.
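The schema F(G(X)) lends itself to an almost direct restatement as a type-driven repair step. The following sketch is an illustration only, under the assumption of a crude three-way aspectual typology; neither the type labels nor the rule inventory is meant as a serious proposal.

    # Minimal sketch of aspectual coercion as interpolation of G in F(G(X)).
    # The aspectual type labels and the rule inventory are invented.

    ASPECT = {"sleep": "PROCESS",
              "cross the street": "ACCOMPLISHMENT",
              "flash": "POINT-EVENT"}

    # The coercing functions of (4), each with the VP types it can repair:
    COERCIONS = [("REPETITION OF", {"POINT-EVENT", "ACCOMPLISHMENT"}),
                 ("PROCESSUAL SUBEVENT OF", {"ACCOMPLISHMENT"})]

    def compose_keep(vp):
        """Compose 'keep', which selects a PROCESS, with its VP complement."""
        if ASPECT[vp] == "PROCESS":
            return ["KEEP(" + vp + ")"]              # simple composition: F(X)
        readings = []
        for g, repairable in COERCIONS:              # enriched: F(G(X))
            if ASPECT[vp] in repairable:
                readings.append("KEEP(" + g + " " + vp + ")")
        return readings

    print(compose_keep("sleep"))             # one reading: unambiguous
    print(compose_keep("cross the street"))  # two readings, as in (3b)

Note that in the sketch keep contributes the same function in both outputs for (3b): the ambiguity falls out of the inventory of coercing functions, not of any lexical polysemy.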

3.2.2 Mass-Count Coercions

Many sources point out a formal parallelism between the process-accomplishment distinction in sentences and the mass-count distinction in noun phrases. Given such parallelisms, one might expect a counterpart of aspectual coercion in NPs, and indeed many such cases are well known, for instance (5).

(5) a. I'll have a coffee/three coffees, please.
 b. We're having rabbit for dinner.

(5a) is a case where count syntax is attached to a normally mass noun, and the interpretation is inevitably 'portion(s) of coffee'. (5b) is a case where an animal name is used with mass syntax and understood to mean 'meat of animal'.

Although these coercions are often taken to be perfectly general principles (under the rubrics "Universal Packager" and "Universal Grinder" respectively), I suspect that they are somewhat specialized. For instance, the (5a) frame applies generally to food and drink; the coerced reading actually means something roughly as specific as 'portion of food or drink suitable for one person'. But it is hard to imagine applying it to specified portions of cement (*He poured three cements today) or to the amount of water it takes to fill a radiator (*It takes more than three waters to fill that heating system).3 Similarly, the coercion in (5b) can apply to novel animals (We're having eland for dinner), but it cannot easily apply to denote the substance out of which, say, vitamin pills are made (*There's some vitamin pill on the counter for you). So these appear to be specialized coercions: although these cases include as part of their meaning a mass-to-count coercing function (5a) or a count-to-mass coercing function (5b), further content is present in the coercing function as well. That is, one's knowledge of English goes beyond this to stipulate rather narrow selectional restrictions on when such coercion is possible and what it might mean. At the same time, the fact that the phenomenon is general within these rather narrow categories (for instance eland) suggests that lexical polysemy is not the right solution. And of course, the idea of a sequence of events can be conveyed in noun phrases by pluralizing a noun that expresses an event, as in many flashes of light.
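The suspicion that the Packager and Grinder are specialized rather than universal can be made concrete by attaching a licensing condition to each coercing function. In the following sketch the semantic classes are invented placeholders for whatever the correct classes turn out to be; the point is only that each coercion carries its own selectional restriction.

    # Sketch: mass-count coercions as specialized functions with selectional
    # restrictions. The semantic classes here are invented placeholders.

    SEMANTIC_CLASS = {"coffee": "drink", "cement": "substance",
                      "rabbit": "animal", "eland": "animal"}

    def package(noun):
        """(5a): 'a coffee' -> portion of food or drink for one person."""
        if SEMANTIC_CLASS[noun] in {"food", "drink"}:
            return "PORTION-FOR-ONE-PERSON(" + noun + ")"
        return None        # *three cements: the coercion is unavailable

    def grind(noun):
        """(5b): 'We're having rabbit' -> meat of the animal."""
        if SEMANTIC_CLASS[noun] == "animal":
            return "MEAT-OF(" + noun + ")"
        return None        # *some vitamin pill (as a substance)

    print(package("coffee"), package("cement"))  # coercion applies; fails
    print(grind("eland"))                        # novel animals are fine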

The general conclusion, therefore, is that the correspondence rules of the SS-CS interface permit conceptual structure to contain these specialized bits of content that are unexpressed in syntax. Like the verbal coercions in section 3.2.1, such a possibility violates condition (1a), the claim that semantic content is exhausted by the LCSs of the lexical items in the sentence. Rather, this condition must be replaced with condition (2a), the premise of enriched composition.

3.3 Reference Transfer Functions

A superficially different case of enriched interpretation turns out to share certain formal properties with the cases just discussed: the case of reference transfer, first made famous in the "counterpart" theory of McCawley and Lakoff (see Morgan 1971), then explored more systematically by others (see, e.g., Nunberg 1979; Fauconnier 1985; Jackendoff 1992b). The classic case is Nunberg's, in which one waitress says to another something like (6).

(6) The ham sandwich in the corner wants some more coffee.

The subject of (6) refers to a customer who has ordered or who is eating a ham sandwich. It would be absurd to claim that the lexical entry of ham sandwich is polysemous between a ham sandwich and a customer. The alternative is that the interpretation of (6) in context must involve a principle that Nunberg calls reference transfer, which allows one to interpret ham sandwich (the "source reading") to mean something roughly like 'person contextually associated with ham sandwich' (the "shifted reading").

An advocate of syntactically transparent composition will take one of two tacks on reference transfer: either (1) it is a rule of pragmatics and takes place "after" semantic composition, or (2) the subject of (6) contains some phonologically null syntactic elements that have the interpretation 'person contextually associated with'. In Jackendoff 1992b I argue that neither of these accounts will suffice.

Consider first the possibility that a reference transfer is "mere pragmatics." A principle similar to that involved in (6) is involved in (7). Imagine that Richard Nixon has attended a performance of the opera Nixon in China, in which the character of Nixon was sung by James Maddalena.

(7) Nixon was horrified to watch himself sing a foolish aria to Chou En-lai.

Here himself has a shifted reading: it refers not to Nixon himself but to the character onstage. Thus, under the standard assumption that binding is a grammatical principle, the reference transfer cannot be relegated to "late principles of pragmatics"; it must be fully integrated into the composition of the sentence. That is, one cannot "do semantic composition first and pragmatics later." Note, by the way, that I agree that reference transfer is a matter of pragmatics, in that it is most commonly evoked when context demands a nonsimple reference (though see section 3.5.3). I am arguing, though, that it is nevertheless part of the principles of semantic composition for a sentence.

Moreover, looking ahead to the discussion of binding in section 3.7, a reference transfer makes Nixon and the anaphor himself noncoreferential. However, such noncoreference is not always possible. For instance, the shifted reading cannot serve as antecedent for anaphora that refers to the real Nixon.

(8) *After singing his aria to Chou En-lai, Nixon was horrified to see himself get up and leave the opera house.

No matter how one accounts for the contrast between (7) and (8), it cannot be determined until "after" the shifted reading is differentiated from the original reading. That is, the contrast between (7) and (8) poses an insuperable problem for a syntactic account of binding.

On the other hand, one might claim that reference transfer has a syntactic reflex, so that binding theory could be preserved in syntax. Such an account would have the virtue of making the distinction between (7) and (8) present in syntax as well as semantics, thereby preserving syntactically transparent composition. For instance, ham sandwich in (6) and himself in (7) might have the syntactic structures (9a) and (9b) respectively under one possible theory, and (10a) and (10b) under another.

(9) a. [person with a [ham sandwich]]
 b. [person portraying [himself]]

(10) a. [NP e [ham sandwich]]
 b. [NP e [himself]]

The trouble with the structures in (9) is that they are too explicit: one would need principles to delete specified nouns and relationships, principles of a sort that for 25 years have been agreed not to exist in generative grammar (not to mention that (9b) on the face of it has the wrong reading). So (9) can be rejected immediately.

What of the second possibility, in which a null element in syntax is to be interpreted by general convention as having the requisite reading? The difficulty is that even a general convention relies on enriched composition: e has no interpretation of its own, so a special rule of composition must supply one. Furthermore, the reference transfer responsible for (7) is not perfectly general. As shown in Jackendoff 1992b, reference transfers are not homogeneous and general; rather, each occurs only with a semantically restricted class, so several special rules are necessary. Let us briefly review three cases here.

3.3.1 Pictures, Statues, and Actors

The use of Nixon to denote an actor portraying Nixon, or to denote the character in the opera representing Nixon, falls under a more general principle that applies also to pictures and statues.

(11) Look! There's King Ogpu hanging on the wall/standing on top of the parliament building.

But this principle does not apply to stories about people.

(12) Harry gave us a description/an account of Ringo. ≠ *Harry gave us Ringo.

Moreover, the nouns picture and sketch can be used to denote either a physical object containing a visual representation or a brief description. Only on the former reading can they be the target of a reference transfer.

(13) a. Bill was doing charcoal sketches of the Beatles, and to my delight he gave me Ringo.
 b. *Bill was giving us quick (verbal) sketches of the new employees, and to my surprise he gave us Harry.

3.3.2 Cars and Other Vehicles

Another frequently cited reference transfer involves using one's name to denote one's car, as in (14a). (14b) shows that reflexivization is available for such shifted expressions; (14c-e) show that the shift is available only in describing an event during the time one is in control of the car.

(14) a. A truck hit Bill in the fender when he was momentarily distracted by a motorcycle.

e.3.e. * Bm repainted himself [i. there is no possibility of using a shifted pronoun or reflexive whose antecedent is the original referent. ( 1 6) a. c. ( 1 7) The ham sandwich/The big hat/The foreign accent over there in the corner wants some more coffee.The Syntax-Semantics Interface 57 Bill squeezed himself [i. 4 ( 1 5) Ringo suffered smashed spokes/handlebars/*hooves in the accident. (i. .3 b. (where it's Ringo's bike/horse) Unlike the shifted referent in the statue/actor/picture case (l 6a). Moreover. Hey. that's Ringo parked over there [pointing to Ringo's car]! Isn't *he/??it beautifully painted! Ham Sandwiches The ham sandwich cases require as source of the transfer some salient property. the shifted referent in the vehicle case cannot comfortably be the target of a discourse pronoun (1 6b). ( 1 8) # The blonde lady/The little dog over there in the corner wants a hamburger. to the person who ordered it] .e. B ut it cannot be something that independently would do as the subject of the sentence. this shift applies to cars and perhaps bicycles. * I gave the ham sandwich to itself [i. ? A truck hit Ringo in the fender while he was recording Abbey Road.e. but definitely not to horses. Nor l J n l ike noun can a pronoun or reflexive referring to the source have the shifted as antecedent. * The ham sandwich pleased itself. ( 1 9) a. (OK only if he drove to the studio!) d. the man with the blonde lady/little dog) in the previous two cases. 3. * A truck hit Bill in the fender two days after he died. Hey. b. that's Ringo hanging over there [pointing to portrait of Ringo]! Isn't he/it beautifully painted! b.e. his car]. the car he was driving] between two buses.

Nor can a pronoun or reflexive referring to the source have the shifted noun as antecedent.

(20) *The ham sandwich [i.e. the person who ordered it] put it(self) in his pocket.

Evidently, each of these reference transfers has its own peculiar properties. In short, a speaker of the language must have acquired something in order to use them properly and to have the intuitions evoked above. My sense therefore is that a speaker's knowledge includes a principle of enriched composition for each of these transfers. The three cases sketched here have roughly the forms in (21).

(21) a. Interpret an NP as [VISUAL REPRESENTATION OF NP].
 b. Interpret an NP as [VEHICLE CONTROLLED BY NP].
 c. Interpret an NP as [PERSON CONTEXTUALLY ASSOCIATED WITH NP].

Notice the similarity of these principles to those in (4): they involve pasting a coercing function around an NP's simple compositional reading X to form an enriched reading G(X), along the lines of the specialized mass/count coercions in section 3.2.2.5 Again, material appears in conceptual structure that is expressed by no lexical item, in violation of the condition of exhaustive lexical content.
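The family of functions in (21), each with its own peculiar properties, can be pictured as a small registry of coercions, each guarded by its own licensing condition. The conditions below are crude stand-ins (the real conditions are semantic and pragmatic), and the whole sketch is illustrative rather than a claim about the actual principles.

    # Sketch: the reference transfers of (21) as a registry of coercing
    # functions, each with its own licensing condition. All names invented.

    def visual_representation(np, context):
        # (21a): pictures, statues, performances -- not verbal descriptions
        if context.get("depiction") in {"picture", "statue", "performance"}:
            return "VISUAL-REPRESENTATION-OF(" + np + ")"
        return None

    def vehicle_controlled_by(np, context):
        # (21b): available only while the referent controls the vehicle
        if context.get("in_control_of_vehicle"):
            return "VEHICLE-CONTROLLED-BY(" + np + ")"
        return None

    def person_associated_with(np, context):
        # (21c): source must be salient, and not itself a possible subject
        if context.get("salient") and not context.get("source_is_person"):
            return "PERSON-CONTEXTUALLY-ASSOCIATED-WITH(" + np + ")"
        return None

    print(vehicle_controlled_by("Bill", {"in_control_of_vehicle": True}))
    print(vehicle_controlled_by("Bill", {"in_control_of_vehicle": False}))
    # the second call returns None, mirroring the deviance of (14d)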

3.4 Argument Structure Alternations

Let us next consider two cases of argument structure alternations in which a theory of enriched composition offers the potential of better capturing generalizations.

3.4.1 Interpolated Function Specified by General Principle

Consider the argument structure alternations in (22) and (23) ((22) from Grimshaw 1979; (23) from Jackendoff 1996c; see also Dor 1996). The subscripts on the verbs discriminate between argument structure/subcategorization frames.

(22) a. Bill asked_a who came/what happened.
 b. Bill asked_b the time/her name.

(23) a. Bill intended_a to come/to bring a cake.
 b. Bill intended_b that Sue come/that Sue bring a cake.

A paraphrase reveals the semantic relationship between the frames.

(24) a. Bill asked_a what the time was. = Bill asked_b the time.
 b. Bill intended_a to bring about that Sue come (or to have Sue come). = Bill intended_b that Sue come.

It would be possible to treat these alternations as lexical polysemy. However, the character of the alternation suggests a more general principle of interpretation that supplies the interpolated content italicized in (24).

First consider ask. Grimshaw (1979) observes that whenever a verb takes an indirect question complement in alternation with a direct object (e.g. ask, tell, know), it also permits certain direct objects to be interpreted in the manner shown in (24a), as part of an elliptical indirect question what NP is/was. The NPs that can be so interpreted are limited to a class including such phrases as the time, the outcome, NP's name, NP's height, and so on. Most NPs are not subject to such an interpretation: ask a question ≠ ask what a question is; tell the story ≠ tell what the story is; know his wife ≠ know who his wife is. As Grimshaw observes, this distribution suggests that the (b)-type interpretation is not a lexical property of ask, but rather follows from the fact that ask takes an indirect question complement alternating with a direct object, plus a general principle of interpretation, which applies to all verbs like ask, tell, and know.

A similar result obtains with intend, which alternates between a controlled VP complement denoting a volitional action (23a) and a that-subjunctive complement (23b). This interpretation does not appear with verbs such as prefer and desire, which satisfy the same syntactic conditions as intend but whose complements need not denote volitional actions. Verbs with the same syntactic alternation and the same interpretation of the complement as a volitional action show the same relationship between the readings.

(25) Bill agreed/arranged that Sue bring a cake. = Bill agreed/arranged to have/bring about that Sue bring a cake.

I wish to argue, therefore, that ask and intend (as well as the families of verbs to which they belong) are not lexically polysemous. Rather, the readings in the (b) frames are in part the product of special SS-CS correspondence rules, stated informally in (26).

(26) a. Just in case a verb takes an NP syntactically and an indirect question semantically, interpret an NP such as the time, the outcome, NP's name, ... as [Indirect question WHAT NP BE].6
 b. Just in case a verb takes a that-subjunctive syntactically and a voluntary action semantically, interpret that [S ... subjunctive ...] as [Voluntary action BRING ABOUT THAT S].

(26a) allows the NP to be interpreted as an indirect question and thereby to satisfy the verb's indirect question argument.

Similarly, (26b) allows the that-subjunctive to satisfy the verb's voluntary action argument. Notice again that the effect of these special rules is to paste a coercing function around the reading of the complement, in this case to make it conceptually compatible with the argument structure of the verb (i.e. to make the resulting composition well formed). This solution reduces lexical polysemy: the (b) frames are no longer lexically specialized readings. The price is the specialized correspondence rules, rules that go beyond syntactically transparent composition to allow implicit interpolated functions. Again we are faced with a violation of the condition of exhaustive lexical content, (1a).

3.4.2 Interpolated Function Specified by Qualia of Complement

Another type of argument structure alternation, discussed by Pustejovsky (1991a, 1995) and Briscoe et al. (1990), appears with verbs such as begin and enjoy. As in the previous cases, it is possible to find a paraphrase relationship between the two frames.

(27) a. Mary began_a to drink the beer. = Mary began_b the beer.
 b. Mary began_a to read the novel. = Mary began_b the novel.

(28) a. Mary enjoyed_a drinking the beer. = Mary enjoyed_b the beer.
 b. Mary enjoyed_a reading the novel. = Mary enjoyed_b the novel.

However, there is a crucial difference. In the previous cases, the appropriate material to interpolate in the (b) frame was a constant, independent of the choice of complement. In these cases, the choice of appropriate material to interpolate varies wildly with the choice of complement.

(29) a. Mary began_a to drink/?to bottle/*to read/*to appreciate the beer. = Mary began_b the beer.
 b. Mary began_a to read/to write/*to drink/*to appreciate the novel. = Mary began_b the novel.
 c. Mary enjoyed_a drinking/??bottling/*reading/*appreciating the beer. = Mary enjoyed_b the beer.
 d. Mary enjoyed_a reading/writing/*drinking/*appreciating the novel. = Mary enjoyed_b the novel.

A general solution to these alternations, then, cannot invoke lexical polysemy: each verb would have to have indefinitely many different readings.

Rather, the generalization is as follows: these verbs select semantically for an activity. When their complement directly expresses an activity (i.e. the (a) frame, or an NP such as the dance), simple composition suffices. However, if the complement does not express an activity (e.g. the novel), then two steps take place. First, a rule not unlike those in (4), (21), and (26) interpolates a general function that permits the NP to satisfy the activity variable of the verb.

(30) Interpret NP as [Activity F (NP)]. (i.e. an unspecified activity involving NP, "doing something with NP")

However, (30) leaves open what the activity in question is. Pustejovsky (1991a, 1995) claims that the second step of interpretation specifies the activity by examining the internal semantic structure of the complement. He posits that part of the LCS of a noun naming an object is a complex called its qualia structure, a repertoire of specifications including the object's appearance, how it comes into being, how it is used, and others. For example, the qualia of book include its physical form, including its parts (pages, covers, spine); its role as an information-bearing object; something about its (normally) being printed; and something specifying that its proper function is for people to read it. Within its role as an information-bearing object, there will be further specification of how the information comes into being, including the role of author as creator. Hence the desired activity predicates can be found in the qualia structures of the direct objects: the acceptable interpolated predicates in (29) stipulate either a characteristic activity one performs with an object or the activity one performs in creating these objects.

This is a case of what Pustejovsky calls cocomposition. Only the general kind of function (here, activity) is determined by the verb begin or enjoy itself. Rather, the conceptual function of which the direct object of begin or enjoy is an argument is determined by looking inside the semantic content of the object.7 This approach systematically eliminates another class of lexical polysemies, this time a class that would be impossible to specify completely. To be sure, this requires violating condition (1b), the idea that the content of an LCS plays no role in how it combines with other items; rather, we must accept condition (2b).

It might be claimed that qualia structure is really "world knowledge" and does not belong in linguistic theory. To be sure, the fact that books are read is part of one's "world knowledge." But it is also surely part of one's knowledge of English that one can use the sentence Mary began the novel to express the proposition that Mary began to read the novel.
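Cocomposition can be sketched in the same style as the earlier coercions: the verb fixes only that an activity is wanted, and the candidate activities are read off the complement's qualia structure. The toy qualia entries below stand in for Pustejovsky's TELIC (proper function) and AGENTIVE (mode of creation) roles; nothing here should be taken as his actual formalism.

    # Sketch of cocomposition: 'begin NP' consults NP's qualia structure to
    # specify the coerced activity. The entries are invented stand-ins for
    # Pustejovsky's TELIC and AGENTIVE qualia roles.

    QUALIA = {"novel": {"telic": "read", "agentive": "write"},
              "beer":  {"telic": "drink", "agentive": "brew"}}

    def begin(np):
        """Enriched readings of 'begin NP': rule (30) plus qualia lookup."""
        q = QUALIA[np]
        # (30) interpolates [Activity F(NP)]; qualia supply candidate F's.
        return ["BEGIN(" + f + "(" + np + "))"
                for f in (q["telic"], q["agentive"])]

    print(begin("novel"))   # ['BEGIN(read(novel))', 'BEGIN(write(novel))']
    print(begin("beer"))    # ['BEGIN(drink(beer))', 'BEGIN(brew(beer))']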

To me, examples like this show that world knowledge necessarily interacts with the SS-CS correspondence rules to some degree: the correspondence rules permit certain aspects of a proposition to be systematically elided if reconstructible from the context, which includes world knowledge encoded within an LCS.

3.5 Adjective-Noun Modification

3.5.1 Adjectives That Invoke the Qualia Structure of the Noun

The modification of nouns by adjectives is usually considered to be formally a predicate conjunction; a red house, for example, is something that is a house and that is red. Pustejovsky (1991a, 1995) demonstrates a wide range of cases (some previously well known) where such an analysis fails. (31)-(33) present some examples.

(31) a. a sad woman = a woman experiencing sadness; ?someone who is a woman and who is sad
 b. a sad event = an event that makes its participants experience sadness; ?something that is an event and that is sad
 c. a sad movie = a movie that makes its viewers experience sadness; ?something that is a movie and that is sad

(32) a. a fast typist = someone who types fast; ≠ someone who is a typist and who is fast
 b. a fast car = a car that travels fast; ?something that is a car and that is fast
 c. a fast road = a road on which one travels fast; ≠ something that is a road and that is fast

(33) a. a good knife = a knife that is good for cutting; ≠ something that is a knife and that is good
 b. a good road = a road that is good for traveling on; ≠ something that is a road and that is good
 c. a good typist = someone who is good at typing; ≠ someone who is a typist and who is good

In each case, the (likely) meaning of the adjective-noun combination is richer than a simple conjunction of the adjective and noun as predicates. It would miss the point to respond by proposing that the adjectives are multiply polysemous, for then they would need different readings for practically every noun they modify.

it must pertain to the emotional experience of characters found in the qualia structure of event and movie. There. R ather. as reflected in the paraphrases in (32). without drawing the general conclusion about adjective mod­ i tication pursued here. suggests that q u a l ia s tructure has an independently motivated existence. As me n t ion ed above. in the absence of a specified function (e. For example. for then they would need different readings for practically every noun they modify. The difficulty with this (as pointed out by K a tz ( 1 966» is that an adjectiv e such as good would need as many dif­ feren t read i n gs as there are functions for o bj ects. not a process) or a road (an object. If it modifies a noun that denotes a process. Similarly. it cannot pertain to a typist (a person.4.) This use of qualia structure differs from that in section 3 . The qualia structure of typist specifies most prominently that such a per­ son types. sad pertains to a person's emotional experience. hence this is the most prominent activity to which fast can pertain. the qualia structure of road specifies that it is an object over which one travels. The other alternative would be simply to deny the phe nomenon. It therefore can modify woman by simple composition. one would be to claim that each of t hese adjectives i s polysemous . A better solution is to make the ad­ j ectives closer to monosemous. the default function is chosen from the specification of proper function in the qualia structure of the noun. thus traveling becomes the activity to which fast pertains. for instance characteristic activities. Good evaluates an object in its capacity to serve some function. for i n stance. to claim that a f st road simply a is 'a road t ha t is fast'. a nd to further clai m that the speed is attributed .g. The fact that similar elements of the noun ��merge in both cases. The principles of composition must therefore find a process for fast to modify. an element of qualia structure is instead used as host for the con tent of the adjective. This is a good knif for throw­ e ing). then it can be composed with it by simple composition. but to adopt a richer notion of adjective­ noun modification. One can imag ine two altern atives to admitting enriched composition in t hcse examples . not a process). one enjoys reading the book.The Syntax-Semantics Interface 63 multiply polysemous. Similarly. a characteristic activity was copied from qualia structure to serve as a coe rcion function in argument structure. But as it stands. but not event or movie. In this case. such as waltz. suppose fast simply pertains to the rate of a process (relative to some standard for that process). (Katz (1 966) analyzes (33) s i m i l arly. and they will look for such a process within the qualia structure of the host noun.2.

I take the latter position to be an assertion that pragmatics is unconstrained and autonomous from linguistic semantics. Yet the fact that similar bits of "world knowledge" play various roles in understanding various constructions suggests that something systematic and constrained is going on. I wish the theory of conceptual structure and of conceptual composition to be able to characterize this systematicity instead of ignoring it.

3.5.2 Adjectives That Semantically Subordinate the Nouns

As seen in (34), a well-known class of adjectives fails altogether to make sense in paraphrases of the sort in (31)-(33).8

(34) a. a fake gun ≠ something that is a gun and that is fake
 b. an alleged killer ≠ someone who is a killer and who is alleged
 c. a toy horse ≠ something that is a horse and that is a toy

More appropriate paraphrases look like those in (35).

(35) a. a fake gun = something that is intended to make people think it is a gun
 b. an alleged killer = someone who is alleged to be a killer
 c. a toy horse = something whose function in play is to simulate a horse

The details of the paraphrases in (35) are not crucial. What is important is that the noun does not appear in the usual frame 'something that is an N and ...', but rather inside a clause whose content is determined by the adjective (e.g. fake N = 'something that is intended to make people think it is N'). Thus these adjectives compose with nouns in an unusual way: syntactically they are adjuncts, but semantically they are heads, and the N functions as an argument.

In other words, we have two quite different principles of composition for the very same syntactic configuration. Someone who advocates (or, more often, presupposes) simple composition will conclude that two distinct syntactic structures give rise to these interpretations. In particular, one might want to claim that in the latter form of composition, the adjective is actually the syntactic head as well. However, I see no evidence internal to syntax to substantiate such a claim: it is driven only by the assumption that transparent composition of the purest sort is the only possible realization of the syntax-semantics relationship. In the present approach the choice of means of combination

is driven by the LCS of the adjective: if it is a normal attributive adjective, it will integrate with the qualia structure of the noun; if it is a fake-type adjective, it will take the noun as an argument. Such LCS-driven choice is in line with hypothesis (2b).

3.5.3 Modification That Coerces the Reading of the Head Noun

Consider modification by the adjective wooden. If applied to some physical object X, it undergoes simple composition to create a reading 'X made of wood' (wooden spoon = 'spoon made of wood'). However, suppose we have a combination like wooden turtle. Turtles are not made of wood; they are made of flesh and blood. So simple composition of this phrase yields an anomaly. However, wooden turtle turns out to be acceptable, because we understand turtle to denote not a real turtle but a statue or replica of a turtle. Notice that wooden itself does not provide the information that the object is a statue or replica, although it forces this reading; otherwise wooden spoon would have the same interpretation. And *wooden chalk is bad presumably because we don't normally conjure up the idea of models of pieces of chalk. (Perhaps in the context of a dollhouse it would be appropriate.)

Interestingly, the adjective good is polysemous (as are other evaluative adjectives such as excellent, terrific, and lousy). One reading, shown in (33), was already discussed in section 3.5.1. However, another reading emerges in sentences like This stump makes a good chair. That is, makes a good N means something like 'adequately fulfills the normal function of an N (even though it isn't one)'. The presupposition is that the stump is not a chair, but it is being used as one. (The verb makes, as pointed out by Aronoff (1980), is not necessary: This stump is a good chair is perfectly acceptable as well.) Notice that the criteria placed on a good knife that is a knife (composition as in (33)) are considerably more stringent than those on a screwdriver that makes a good knife; and a knife can't make a good knife. This shows that the two readings are distinct, though related. The qualities that the stump has that make it a good chair depend on the proper function of chair; hence, as in the other reading of good, the proper function of the noun must be pulled out of its qualia structure. This form of composition thus parallels the fake-type adjectives.9 Notice that this type of composition also invokes the qualia structure of the noun.

The possibility of understanding turtle to mean 'replica of a turtle' is now not just a matter of "convention" or pragmatics; rather, it is provided explicitly by the reference transfer function (21a). Moreover, the extra material provided by (21a) provides the host for modification by the adjective, so it cannot be "added later," "after" simple composition takes place. (21a) was proposed as a principle to make linguistic expression compatible with contextual understanding, including nonlinguistic context; here it is invoked to provide a well-formed interpretation of an otherwise anomalous phrase. This coercion of the reading of the head noun is yet another example where a theory of enriched interpretation is necessary.

3.6 Summary

We have surveyed a considerable number of situations that appear to call for enriched principles of composition, where not all semantic information in an interpretation comes from the LCSs of the constituent words. Others will emerge in sections 7.7 and 7.8. Across these situations a number of commonalities emerge, summarized in table 3.1.

Table 3.1
Summary of instances of enriched composition

[Table 3.1 marks, for each case (aspectual coercions; mass-count coercions; pictures; cars; ham sandwiches; ask-/intend-type alternations; begin-/enjoy-type alternations; fast; fake; wooden), which of four properties apply: a = not all semantic information comes from lexical entries; b = composition guided by internal structure of LCSs; c = composition involves adding an interpolated coercion function; d = pragmatics invoked in composition.]

Questions raised by these cases include the following.

1. What is the overall typology of rules of enriched composition? Can they insert arbitrary material in arbitrary arrangements, or are there constraints? Do the similarities among many of these cases suggest the flavor of tentative constraints?

2. Are these rules subject to linguistic variation?

3. In the discussion above, I have frequently used paraphrase relations to argue for enriched composition. How valid is this methodology in general? To what extent can a paraphrase employing simple composition reliably reveal material concealed by enriched composition?

4. In designing a logical language, it would be bizarre to sprinkle it so liberally with special-purpose devices. What is it about the design of natural language that allows us to leave so many idiosyncratic bits of structure unexpressed, using, as it were, conventionalized abbreviations?

Keeping in mind the pervasive nature of enriched composition and its implications for the complexity of the SS-CS interface, let us now turn to the two major phenomena for which LF is supposed to be responsible: binding and quantification.

3.7 Anaphora

This section will briefly consider several cases where the criteria for binding cannot be expressed in syntactic structure, either because of the internal structure of LCSs, or because of material supplied by principles of enriched composition, or because of other semantic distinctions that cannot be reduced to syntactic distinctions. Rather, I will claim, they can be expressed in conceptual structure. Thus binding is fundamentally a relation between conceptual constituents, not between syntactic constituents.

What does this claim mean? The idea is that the traditional notation for binding, (36a), is not the correct formalization. Rather, (36a) should be regarded as an abbreviation for the three-part relation (36b).

(36) a. NP_i binds [NP anaphor]_i

 b. (SS)   NP_a              [NP anaphor]_b
             |                     |
       corresponds to        corresponds to
             |                     |
     (CS)  X^α       binds       [Y α]

That is, the antecedent NP expresses a conceptual structure X, the anaphor NP expresses a conceptual structure Y, and the actual binding relation obtains between X and Y, as shown in (36b). The binding is notated by the Greek letters: a superscripted Greek letter indicates a binder, and the corresponding Greek letter within brackets indicates a bindee. (The argument does not depend on formalism.)

Crucially, (36b) presents the possibility of anaphoric relations that are not expressed directly in the syntax. Under this conception of binding, what should the counterpart of GB binding theory look like? If the relation between syntactic anaphors and their antecedents is mediated by both syntax and conceptual structure, then, when the anaphor and the antecedent are syntactically explicit (as in standard binding examples), we might expect a "mixed" binding theory that involves conditions over both structures. On the other hand, when anaphor or antecedent is not syntactically explicit, we might expect only conceptual structure conditions to apply. (It is conceivable that all apparent syntactic conditions on binding can be replaced by conceptual structure conditions, as advocated by Kuno (1987) and Van Hoek (1995), among others, but I will not explore this stronger hypothesis here.)

In order to show that (36b) rather than (36a) is the proper way to conceive of binding, it is necessary to find cases in which syntactic mechanisms are insufficient to characterize the relation between anaphors and their antecedents. Here are some.

3.7.1 Binding Inside Lexical Items

What distinguishes the verb buy from the verb obtain? Although there is no noticeable syntactic difference, X buys Y from Z carries an inference not shared by X obtains Y from Z, namely that X pays money to Z. This difference has to come from the LCS of the verb: whereas the LCS of both verbs contains the information in (37a), buy also contains (37b), plus further information that does not concern us here. (I characterize this information totally informally.)

(37) a. Y changes possession from Z to X
 b. money changes possession from X to Z

Thus the LCS of buy has to assign two distinct semantic roles (θ-roles) to the subject and two to the object of from.

but accounting syntactically for the absence of from proves tricky. as in (38). Y was bought from Z.cmantic general ization about bound anaphora. (I omit the larger relation that unifies these subevents into a transaction rather than j ust two random transfers of possession. (38) a. and whoever turns over money receives Y-regardless of whether t hese characters are mentioned in syntax. even when the relevant argument is not expressed syntactically. the syntactic level at which binding rela­ t ions are supposed to be made explicit.The Syntax-Semantics Interface 69 Notice that these a-roles are present. and that this person gave money to Z.) (39) Y changes possession from za to X l! money changes possession from � to a The Greek letters in the second line function as variables. LF thus misses an important . The prob­ lem is that the two occurrences of the variable x are inside the LCS of buy a nd never appear as separate syntactic constituents. Sim­ ilarly. Thus there is no way t o express their binding at LF. Consequently. b. One could potentially say that (38a) has a null pp whose obj ect receives these a-roles. Bill bought a book from each studen t has the reading 'For each student x. implicitly. and that this person received money from X. In Jackendoff 1 990 I proposed that the LCS of buy encodes the subevents in the way shown in (39). but it is now widely accepted that passives do not have underlying syn­ tactic subjects. In (38a) we still understand that there is someone (or something such as a vending machine) ftom whom Y was obtained. one could potentially say that (3 8b) has a null underlying subject. whoever turns over Y receives money. at the . bound to the superscripts in the first line. On the other hand. For example. a book changed possession from x to Bill and money changed possession from Bill to x ' . X bought Y. So there are technical difficulties associated with claiming that such "understood" or "implicit" a-roles are expressed by null argu­ ments in syntax (see Jackendoff 1 993b for more detailed discussion of Emonds's ( 1 99 1 ) version of this claim). This leaves us with the problem of how implicit characters are under­ stood as having multiple a-roles. in (38b) we still understand that there is someone who obtained Y. Such binding of variables in LCS has precisely the effects desired for standard bound anaphora.

On the other hand, at the level of conceptual structure, these bound variables are explicit, and their relation looks just like the conceptual structure associated with bound anaphora: a necessary coreference between two characters with different θ-roles, obviously a case that falls within the general account of binding. Hence conceptual structure, the lower level of (36b), is the proper level at which to encode the properties of bound anaphora most generally.

3.7.2 Control Determined by LCS
One of the earliest cited types of evidence that enriched composition is involved in binding concerns the control of complement subjects. In Jackendoff 1972, 1974 I presented evidence that many cases of complement subject control are governed by the thematic roles of the verb that takes the complement, where thematic roles are argued to be structural positions in the verb's LCS. Similar arguments have since been adduced by Cattell (1984), Williams (1985), Chierchia (1988), Farkas (1988), and Sag and Pollard (1991). The original arguments turn on examples like these (where I use traditional subscripting to indicate coreference):

(40) a. Johnj got [PROj to leave].
     b. Bill got Johnj [PROj to leave].

(41) a. Johnj promised [PROj to leave].
     b. Johnj promised Bill [PROj to leave].

When get switches from intransitive to transitive, control switches from subject to object; but when promise switches from intransitive to transitive, control does not shift. In Jackendoff 1972 I observed that the switch in control with get parallels the switch in the Recipient role in (42), whereas the lack of switch with promise parallels that with a verb such as sell.

(42) a. John got a book. (John is Recipient)
     b. Bill got John a book. (John is Recipient)

(43) a. John sold a book. (John is Source)
     b. John sold Bill a book. (John is Source)

The proposal therefore is that the controller of get is identified with its Goal role, which switches position depending on transitivity; the controller of promise is identified with its Source role, which does not switch position.

(The subject is also Agent, of course; I choose Source as the determining role for reasons to emerge in a moment.)

Verbs that behave like promise are rare in English, including vow and a couple of others. However, in nominalized form there are some more that have not been previously pointed out in the literature. Other than promise and vow, these nominals do not correspond to verbs that have a transitive syntactic frame, so they have not been recognized as posing the same problem.

(44) a. Johnj's promise/vow/offer/guarantee/obligation to Bill [PROj to leave]
     b. Johnj's agreement/contract with Bill [PROj to leave]

In contrast, note that in the syntactically identical (45), the controller is the thematic Goal, Bill.

(45) John's order/instructions/encouragement/reminder to Billj [PROj to leave]

The need for thematic control is made more evident by the use of these nominals with light verbs.10 Here control switches if either the light verb or the nominal is changed; the principle involved matches thematic roles between light verb and nominal.

(46) a. Johnj gave Bill a promise [PROj to leave].
     b. John got from Billj a promise [PROj to leave].
     c. John gave Billj an order [PROj to leave].
     d. Johnj got from Bill an order [PROj to leave].

In (46a) the subject of give is Source; therefore the Source of the promise is the subject of give. Since the Source of promise controls the complement subject, the controller is John. If the light verb is changed to get, as in (46b), whose Source is in the object of from, control changes accordingly. If instead the nominal is changed to order, whose Goal is controller, then control switches over to the Goal of the verb: the indirect object of give (46c) and the subject of get (46d).

There are exceptions to the thematic generalization with promise, the best-known of which is (47).

(47) Billj was promised [PROj to be allowed to leave].

Here control goes with the Goal of promise. However, this case takes us only deeper into enriched composition, since it occurs only with a passive complement in which the controller receives permission or the like.

(48) Billj was promised [PROj to be permitted to leave /
       *to permit Harry to leave /
       *to get permission to leave /
       *to leave the room /
       *to be hit on the head].

In other words, control cannot be determined without first composing the relevant conceptual structure of the complement.

More complex cases along similar lines arise with verbs such as ask and beg.

(49) a. Bill asked/begged Harryj [PROj to leave].
     b. Billj asked/begged Harry [PROj to be permitted to leave].

One might first guess that the difference in control has to do simply with the change from active to passive in the subordinate clause. But the facts are more complex. Here are two further examples with passive subordinate clauses, syntactically parallel to (49b):

(50) a. Bill asked Harryj [PROj to be examined by the doctor].
     b. *Billi asked Harryj [PROi/j to be forced to leave].

In (50a) PRO proves to be Harry, not Bill; (50b) proves to be ungrammatical.

It so happens that all the verbs and nominals cited in this subsection have a semantic requirement that their complements denote a volitional action, much like the intend-type verbs cited in section 3.4. It is therefore interesting to speculate that part of the solution to the switches of control in (47)-(50) is that a rule of coercion is operating in the interpretation of these clauses, parallel to rule (26b). For example, (47) has an approximate paraphrase, 'Someone promised Bill to bring it about that Bill would be allowed to leave', and a similar paraphrase is possible for (49b). These being the two cases in which control is not as it should be, it may be that a more general solution is revealed more clearly once the presence of the coerced material is acknowledged. (This is essentially the spirit of Sag and Pollard's (1991) solution.)

It appears, then, that the interpretation of PRO is a complex function of the semantics of the main verb and the subordinate clause, which cannot be localized in a single lexical item or structural position. In short, control theory, which is part of binding theory, cannot be stated without access to the interior structure of lexical entries and to the composed conceptual structure of subordinate clauses, possibly even including material contributed by coercion.

3.7.3 Ringo Sentences
Section 3.3 has already alluded to a class of examples involving the interaction of binding with reference transfer. Suppose I am walking through the wax museum with Ringo Starr, and we come to the statues of the Beatles. Suppose that, to my surprise,

(51) Ringo starts undressing himself.

This has a possible reading in which Ringo is undressing the statue. In other words, under special circumstances, binding theory can apply even to noncoreferential NPs. However, there is an interesting constraint on this use. Suppose Ringo stumbles and falls on one of the statues; in fact

(52) Ringo falls on himself.

The intended reading of (52) is perhaps a little hard to get, and some speakers reject it altogether. But suppose instead that I stumble and bump against a couple of the statues, knocking them over, so that John falls on the floor and

(53) *Ringo falls on himself.

This way of arranging binding is totally impossible, far worse than (52). But the two are syntactically and even phonologically identical. How do we account for the difference?

In Jackendoff 1992b I construct an extended argument to show that there is no reasonable way to create a syntactic difference between (52) and (53), either at S-Structure or at LF. In particular, severe syntactic difficulties arise if one supposes that the interpretation of Ringo that denotes his statue has the syntactic structure [NP statue of [Ringo]] or even [NP e [Ringo]]. Under the assumption of syntactic binding theory (at S-Structure or LF) and the hypothesis of syntactically transparent semantic composition, this is impossible. Rather, the difference relevant to binding appears only at the level of conceptual structure, where the reference transfer for naming statues has inserted the semantic material that differentiates them. A proper solution requires a binding theory of the general form (36b), where LCSs are available to binding conditions, and a theory of enriched composition, where not all semantic information is reflected in syntactic structure.

In other words, binding depends on enriched composition.

As it happens, the difference in conceptual structure between (52) and (53) proves to be identical to that between (54a) and (54b).

(54) a. Ringo fell on the statue of himself.
     b. *The statue of Ringo fell on himself.

(55a,b) show the two forms.

(55) a. [FALL ([RINGO]^α, [ON [VISUAL-REPRESENTATION [α]]])]
     b. [FALL ([VISUAL-REPRESENTATION [RINGO]^α], [ON [α]])]

The syntactic difference between (52)-(53) and (54) is that in the latter case the function VISUAL-REPRESENTATION is expressed by the word statue, whereas in the former case it arises through a reference transfer coercion. Given that both pairs have (in relevant respects) the same conceptual structure, the conceptual structure binding condition that differentiates (52) and (53) (one version of which is proposed in Jackendoff 1992b) can also predict the difference between these. However, in standard accounts, (54a) and (54b) are differentiated by syntactic binding conditions. Clearly a generalization is being missed: two sets of conditions are accounting for the same phenomenon. Because the syntactic conditions cannot generalize to (52)-(53), they are the ones that must be eliminated. In short, certain aspects of the binding conditions must be stated over conceptual structure (the lower relation in (36b)), not syntactic structure. That is, LF, the level over which binding conditions are to be stated, is not doing the work it is supposed to.

To the extent that there are syntactic conditions on anaphora, they are not altogether excluded; it is just that they are part of the interface component rather than the syntax. In the proposed approach they are stated as part of the correspondence rules that map anaphoric morphemes into conceptual structure variables. In other words, syntactic conditions on binding appear in rules of the form (56).

(56) If NP1 maps into conceptual structure constituent X^α, then [NP2, +anaphor] can map into conceptual structure constituent [α] (i.e., NP1 can bind NP2) only if the following conditions obtain:
     a. Conditions on syntactic relation of NP1 and NP2: ...
     b. Conditions on structural relation of X^α and α in conceptual structure: ...
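Rule (56) can be thought of procedurally as a licensing check with two independent gates, one syntactic and one conceptual, both belonging to the interface. The sketch below is purely illustrative; the data structures and the condition functions are invented placeholders, not the book's actual binding conditions:

    # Sketch of a correspondence rule in the spirit of (56): an anaphoric NP
    # may map onto the bound CS variable only if both the syntactic and the
    # conceptual-structure conditions hold. Neither gate requires a
    # syntactic level like LF.
    def may_bind(np1, np2, syn_conditions, cs_conditions, cs):
        if "+anaphor" not in np2["features"]:
            return False
        # (56a): conditions on the syntactic relation of NP1 and NP2
        if not all(cond(np1, np2) for cond in syn_conditions):
            return False
        # (56b): conditions on the structural relation of X^a and a in CS
        binder, bindee = cs[np1["index"]], cs[np2["index"]]
        return all(cond(binder, bindee) for cond in cs_conditions)

    # Trivial placeholder conditions, for demonstration only
    c_command = lambda a, b: True
    cs_locality = lambda x, y: True
    np1 = {"features": [], "index": 0}
    np2 = {"features": ["+anaphor"], "index": 1}
    print(may_bind(np1, np2, [c_command], [cs_locality],
                   cs=["RINGO", "bound-var"]))  # True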

" The question is at what level of representation this binding is repre­ sented. John k issed M a ry . then Susan did the same thing w i th the beer. Consider ( 5 7). M a ry put the food in the fridge. c . is to be produced by a deletion (If unstressed take his kids to dinner in the PF component (i. b. Chomsky's proposal does not address how to deal w i t h the replacement of take his kids to dinner by do so (or other VP a naphors like do it and do likewise). These are all subject to the well-known strict versus sloppy identity a mbiguity. Fiengo and M ay 1 994) some process of "reconstruction" is suggested. the interpretation of the null VP must contain some variable or pointer that can be bound either to John or to Bill. R ather. b. Bill didn't. a type discussed by Akmajian ( 1 973). " Con­ sider sentences such as (58a-c). since the relevant pronouns are not present in the overt utterance. and then it happened to Sue too. in the (ourse of Spell-Out). whereby (57a). presented more fully in Culicover and Jackendoff 1 995. but this phrase is present throughout the rest of the deri vation.The Syntax-Semantics Interface 75 :\. this variable behaves like a bound variable. nobody else did. but Sam did it to the cat. the null VP in (57a). d.c) and do so in (57b. However. It is not at the phonological interface level.7. Thus the invocation of the PF com­ ponent here is ill advised. That much is "conceptually necessary. can be interpreted as either ' Bill d idn't take John's kids to dinner' or 'Bill didn't take his own kids to din­ ner ' . in that it is bound in the standard way by the quantifier nobody else in ( 57c.d) are replaced by take his kids to dinner at LF. for instance. nobody else would ever do so. Chomsky ( 1 993) suggests instead that there is no reconstruction. Bill would never do so. The problem with the reconstruction solution has to do with how the g rammar in general determines the antecedent of "reconstruction. for example. and we can discard it immediately. ( 5 7) John took his kids to dinner. In order to account for this difference. (:)X) a.e. J ohn patted the dog. . Moreover.d).4 Bound Pronouns in Sloppy Identity The next case. but a. concerns the binding of pronouns in "sloppy identity" contexts.g. Then his is bound by normal mechanisms of binding. whereby the n ull VP in (57a. c. Usually (e.

There is no ready syntactic expression of the ellipted material. Akmajian shows that the interpretation of such examples depends on extracting contrasting foci (and paired contrasting foci such as John/the dog and Sam/the cat in (58a)). (59) presents some possibilities, all miserable failures.

(59) a. *John patted the dog, but Sam patted (the dog/it) to the cat.
     b. Mary put the food in the fridge, then Susan put the food/it in the fridge with the beer. (wrong meaning)
     c. *John kissed Mary, and then John kissed (Mary/her) happened to Sue too.

Can this ellipted material be reconstructed at LF? One might suggest, for instance, that focused expressions are lifted out of their S-Structure positions to A'-positions at LF, parallel to the overt movement of topicalized phrases, then reconstructing the ellipted material from the presupposition of the first clause (in (58a), x patted y). Chomsky (1971) anticipates the fundamental difficulty for such an approach: many more kinds of phrases can be focused than can be lifted out of syntactic expressions. (Most of the examples in (60) are adapted from Chomsky 1971; capitals indicate focusing stress.)

(60) a. Was he warned to look out for a RED-shirted ex-con?
        *[redi [was he warned to look out for a ti-shirted ex-con]]
     b. Is John certain to WIN the election?
        *[wini [is John certain to ti the election]]
     c. Does Bill eat PORK and shellfish?
        *[porki [does Bill eat ti and shellfish]]
     d. John is neither EASY to please nor EAGER to please.
        *[easyi eagerj [John is neither ti to please nor tj to please]]
        (Note: The conjoined VPs here present unusually severe problems of extraction and derived structure.)
     e. John is more concerned with AFfirmation than with CONfirmation.
        *[affirmationi [confirmationj [John is more concerned with ti than with tj]]]

Suppose one tried to maintain that LF movement is involved in (60a-e). One would have to claim that constraints on extraction are much looser in the derivation from S-Structure to LF than in the derivation from D-Structure to S-Structure. But that would undermine the claim that LF movement is also an instance of Move α, since it is only through common constraints that the two can be identified.

The following example puts all these pieces together and highlights the conclusion:

(61) JOHN wants to ruin his books by smearing PAINT on them, but BILL wants to do it with GLUE.

Bill's desire is subject to a strict/sloppy ambiguity: whose books does he have in mind? In order to recover the relevant pronoun, do it must be reconstructed. Its reconstruction is something like ruin his books by smearing y on them, where y is a variable in the presupposition, filled in the second clause by the focus glue. However, there is no way to produce ruin his books by smearing y on them in the logical form of the first clause, because paint cannot be syntactically lifted out of that environment, as we can see from (62).12

(62) *It is PAINTi that John is ruining his books by smearing ti on them.

To sum up the argument, then:
1. The strict/sloppy identity ambiguity depends on ambiguity of binding; therefore such ambiguity within ellipsis, as in (57), must be resolved at a level where the ellipted material is explicit.
2. Because of examples like (58a-c), ellipsis in general must be reconstructed at a level of representation where focus and presupposition can be explicitly distinguished.
3. Because of examples like (60a-e), either the differentiation of focus and presupposition is present in conceptual structure but not in LF, or else LF loses all its original motivation, in which case the village has been destroyed in order to save it.
4. Therefore the strict/sloppy identity ambiguity cannot be stated in syntax, but only in conceptual structure.

In short, focus and presupposition in general cannot be distinguished in LF, but only in conceptual structure. In turn, the motivation for LF as a syntactic level in the first place was the extension of Move α to covert cases. Therefore LF, which is supposed to be the locus of binding relations, cannot do the job it is supposed to. Rather, the phenomena in question really belong in conceptual structure, a nonsyntactic representation. (See Koster 1987 and Berman and Hestvik 1991 for parallel arguments for other LF phenomena.)

We thus have four distinct cases where aspects of binding theory must apply to "hidden" elements. In each case these hidden elements can be expressed only in conceptual structure; they cannot be expressed at a (narrow) syntactic level of LF, no matter how different from S-Structure. To push the point home, we can construct a parallel to (61) that uses a quantifier.

(63) JOHN is ruining his books by smearing PAINT on them, but everyone ELSE is doing it with GLUE.

We conclude that LF cannot be justified on the basis of its supposed ability to encode syntactic properties relevant to anaphora that are not already present in S-Structure. Such properties turn out not to be syntactic at all. Rather, the relation "x binds y" is properly stated over conceptual structure, where the true generalizations can be captured.

3.8 Quantification
The other main function attributed to LF is expressing the structure of quantification. The theory of quantification has to account for (at least) two things: how quantifiers bind anaphoric expressions, and how they take scope over other quantifiers. The evidence of section 3.7 shows that the binding of anaphoric expressions by quantifiers cannot in general be expressed in syntax. For instance, (57c) and (57d) contain a quantifier binding a covert anaphor within an ellipted VP, and we have just seen that such anaphors cannot in general be reconstructed in syntactic structure; they appear explicitly only in conceptual structure. In general, then, the binding of variables by quantifiers can be expressed only at conceptual structure. Of course, the structure of quantification is involved in inference (the rules of syllogism, for instance), so it is necessarily represented at conceptual structure, the representation over which rules of inference are formally defined. The question, then, is whether scope of quantification needs to be represented structurally in syntax as well.

3.8.1 Quantification into Ellipted Contexts
What about the relative scope of multiple quantifiers? It is easy enough to extend the previous argument to examples with multiple quantifiers.

(64) a. Some quantifiers in English always take scope over other quantifiers, but none in Turkish do that.

     b. Some boys decided to ruin all their books by smearing PAINT on them, because none of them had succeeded in doing it with GLUE.

In order for (64a) to be interpreted as it is supposed to be, do that must be reconstructed as always take scope over other quantifiers, and none must take scope over always. But we have just seen that reconstruction of do that is accomplished in general only at conceptual structure, not at a putative syntactic level of LF. Therefore LF cannot express all quantifier scope relations. The somewhat baroque but perfectly understandable (64b), which parallels (61) and (63), amplifies this point.14

3.8.2 Quantificational Relations That Depend on Internal Structure of LCS
A more subtle argument concerns the semantic nature of "measuring out" in examples like (65) (Tenny 1987, 1992; Dowty 1991; Krifka 1992; Verkuyl 1972, 1993; Jackendoff 1996e).

(65) Bill pushed the cart down Scrack St. to Fred's house.

In (65) the position of the cart is said to "measure out" the event. If the cart's path has a definite spatial endpoint (to Fred's house in (65)), then the event has a definite temporal endpoint; that is, it is telic and passes the standard tests for telicity.

(66) Bill pushed the cart down Scrack St. to Fred's house in/*for an hour.

If, on the other hand, the cart's path has no definite endpoint (delete to Fred's house from (65)), the event has no definite temporal endpoint and passes the standard tests for atelicity.

(67) Bill pushed the cart down Scrack St. for/*in an hour.

These facts about telicity have typically been the main focus of discussions of measuring out. But they follow from a more general relationship observed by most of the authors cited above: for each point in time that the cart is moving down Scrack St., there is a corresponding position on its trajectory. That is, verbs of motion impose a quantification-like relationship between the temporal course of the event and the path of motion: X measures out Y can be analyzed semantically as 'For each part of X there is a corresponding part of Y'. Exactly which constituents of a sentence stand in a measuring-out relationship depends on the verb.
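The "for each part" analysis can be stated as a part-to-part correspondence. The formulation below is my own shorthand for the idea common to the authors cited, not a formula from the book; h is an assumed order-preserving mapping:

    % X measures out Y: an order-preserving correspondence h maps parts
    % of X to parts of Y; when Y has a definite endpoint, the event
    % inherits one (telicity), and when Y is unbounded it does not.
    \mathrm{MeasureOut}(X, Y) \;\equiv\;
      \forall p \in \mathrm{parts}(X)\;\; \exists q \in \mathrm{parts}(Y):\;
      q = h(p), \quad h \text{ monotone}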

1. With verbs of motion, as we have just seen, the temporal course of the event measures out the path, and the endpoints of the path correspond to the endpoints of the event.
2. With verbs of performance, the temporal course of the event measures out the object being performed. For instance, in Sue sang an aria, for each point in time during the event, there is a corresponding part of the aria that Sue was singing, and the beginning and the end of the aria correspond to the beginning and the end of the event respectively.
3. With verbs of extension, a path measures out an object. For instance, in Highway 95 extends down the East Coast from Maine to Florida, for each location on the East Coast, there is a corresponding part of Highway 95; the endpoints of the path, Maine and Florida, contain the endpoints of the road. In this case time is not involved at all. A parallel two-dimensional case is verbs of covering: in Mud covered the floor, there is a bit of mud for each point of floor. (Of the works cited above, these cases are discussed only in Jackendoff 1996e.)
4. Finally, there are verbs involving objects, paths, and times in which no measuring out appears at all. For instance, in Bill slowly pointed the gun toward Harry, the gun changes position over time, but its successive positions do not correspond to successive locations along a path toward Harry. Hence toward Harry with a verb of motion induces atelicity: Bill pushed the cart toward the house for/*in ten seconds. But Bill slowly pointed the gun toward Harry may be telic: its endpoint is when the gun is properly oriented. (Again, these cases are discussed only in Jackendoff 1996e.)

In other words, the quantificational relationship involved in measuring out has to be expressed as part of the LCS of the verb: the verb has to say, as part of its meaning, which arguments (plus time) measure out which others. By hypothesis, LF does not express material interior to the LCS of lexical items; measuring-out relationships are interior to LCS and are therefore not "aspects of semantic structure that are expressed syntactically," to repeat May's (1985) terminology. But this means that there are quantifier scope relationships that are imposed by the LCS of a verb. Hence LF does not express all quantifier scope relationships.15

An advocate of LF might respond that measuring-out relationships are not standard quantification and so should not be expressed in LF. But such a reply would be too glib.

One can only understand which entities are measured out in a sentence by knowing (1) which semantic arguments are marked by the verb as measured out, (2) in what syntactic positions these arguments can be expressed, and (3) what syntactic constituents occupy these positions. That is, the combinatorial system of syntax is deeply implicated in expressing the combinatorial semantics of measuring out. This argument thus parallels the argument for buy with anaphora (section 3.7.1). It shows that differences in quantifier scope, if properly generalized, are also imposed by the LCS of lexical items and therefore cannot be encoded in the most general fashion at any (narrow) syntactic level of LF. Combining this with the argument from ellipsis, we conclude that LF cannot perform the function it was invented to perform, and therefore there is ultimately no justification for such a level.

3.9 Remarks
The present chapter has examined the claim that the Conceptual Interface Level in syntax is LF, a level distinct from S-Structure and invisible to phonology, which syntactically encodes binding and quantification relationships. This claim has been defused in two ways (to some degree at least). First, by looking at nonstandard examples of binding and quantification, I have shown that no syntactic level can do justice to the full generality of these phenomena. Rather, responsibility for binding and quantification lies with conceptual structure and the correspondence rules linking it with syntactic structure. Second, by showing that the relation of syntax to conceptual structure is not as straightforward as standardly presumed, I have shown that it is not so plausible to think of any level of syntax as directly "encoding" semantic phenomena. There therefore appears no advantage in positing an LF component distinct from S-Structure that is "closer" to semantics in its hierarchical organization: it simply cannot get syntax "close" enough to semantics to take over the relevant work.

This is not to say that I have a solution for all the many phenomena for which LF has been invoked in the literature; these clearly remain to be dealt with. However, these phenomena have been discussed almost exclusively in the context of GB, a theory that lacks articulated accounts of conceptual structure and of the interface between syntax and conceptual structure. Without a clear alternative semantic account of binding and quantification, it is natural for an advocate of LF to feel justified in treating them as syntactic. But consider, for instance, the treatment of languages with in situ wh (e.g., Huang 1982), which has been taken as a

strong argument for LF. The ability of verbs to select for indirect questions does not depend on whether the wh-word is at the front of the clause: it depends on the clause's being interpretable as a question, a fact about conceptual structure. Hence there is no need for syntactic LF movement to capture the semantic properties of these languages.16 In turn, the ability of a wh-word to take scope over a clause, rendering the clause a question, is indeed parallel to the ability of a quantifier to take scope over a clause. But if quantifier scope is a property of conceptual structure rather than syntax, then there is no problem with treating wh-scope similarly.17

The challenge that such putative LF phenomena pose to the present approach, then, is how to formulate a rigorous theory of conceptual structure that captures the same insights. But there are many different traditions in semantics that can be drawn upon in approaching this task.18

Chapter 4
The Lexical Interface

From a formal point of view, the hypothesis of Representational Modularity claims that the informational architecture of the mind strictly segregates phonological, syntactic, and conceptual representations from each other. Each lives in its own module; there can be no "mixed" representations that are partly phonological and partly syntactic, or partly syntactic and partly semantic. Rather, all coordination among these representations is encoded in correspondence rules. This chapter explores the consequences of this position for the lexical interface, the means by which lexical entries find their way into sentences. It also proposes a simple formalization of how the tripartite structures are coordinated. In the process, important conclusions will be derived about the PS-SS and SS-CS interfaces as well.

4.1 Lexical Insertion versus Lexical Licensing

A perhaps startling consequence of Representational Modularity is that there can be no such thing as a rule of lexical insertion. In order to understand this statement, let us ask: What is lexical insertion supposed to be?

Originally, in Chomsky 1957, lexical items were introduced into syntactic trees by phrase structure rules like (1) that expanded terminal symbols into words.

(1) N → dog, cat, banana, ...

In Chomsky 1965, for reasons that need not concern us here, this mechanism was replaced with a lexicon and a (set of) rule(s) of lexical insertion. Here is how the process is described in Chomsky 1971, 184:

Let us assume ... that the grammar contains a lexicon, which we take to be a class of lexical entries each of which specifies the grammatical (i.e.,

semantic, phonological, and syntactic) properties of some lexical item. ... We may think of each lexical entry as incorporating a set of transformations1 that insert the item in question (that is, the complex of features that constitutes it) in phrase-markers. Thus2 a lexical transformation associated with the lexical item I maps a phrase-marker P containing a substructure Q into a phrase-marker P' formed by replacing Q by I.

This leads to the familiar conception of the layout of grammar, in which lexical insertion feeds underlying syntactic form. Unpacking this, suppose the lexical entry for cat is (3a) and we have the phrase marker (3b) (Chomsky's P). Lexical insertion replaces the terminal symbol N in (3b) (Chomsky's substructure Q) with (3a), producing the phrase marker (3c) (Chomsky's P').

(3) a. /kæt/; [+N, -V, count, singular]; [Thing CAT; TYPE OF ANIMAL, etc.]
    b. [NP Det N]
    c. [NP Det [N /kæt/; +N, -V, count, singular; [Thing CAT; TYPE OF ANIMAL, etc.]]]

Let us see how the architecture of Chomsky's theories of syntax develops over the years, outlined in figure 4.1. (I will drop the subscripts on PILss and CILss, because we are dealing with syntactic structure throughout.) Throughout the years, the constant among all these changes is the position of the lexical interface: the lexicon has always been viewed as feeding the initial point of the syntactic derivation.
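Read procedurally, the quoted "lexical transformation" is just a substitution on phrase markers: find a bare terminal matching the entry's category (Q) and replace it with the entire feature complex (I). The toy rendering below is my own illustration, with invented data structures:

    # Toy model of Aspects-style lexical insertion: every bare terminal of
    # the entry's category is replaced by the whole lexical entry, so the
    # phonology and semantics ride along inside the syntactic tree.
    def lexical_insertion(phrase_marker, entry):
        """phrase_marker: nested lists, e.g. ["NP", ["Det", "the"], ["N"]]."""
        if phrase_marker == [entry["category"]]:      # a bare terminal Q
            return [entry["category"], entry]          # replaced by I
        if isinstance(phrase_marker, list):
            return [lexical_insertion(n, entry) if isinstance(n, list) else n
                    for n in phrase_marker]
        return phrase_marker

    cat = {"category": "N", "phon": "/kaet/",
           "syn": ["+N", "-V", "count", "sing"],
           "sem": "[Thing CAT; TYPE OF ANIMAL]"}
    print(lexical_insertion(["NP", ["Det", "the"], ["N"]], cat))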

[Figure 4.1 The architecture of some of Chomsky's theories. (a) Standard Theory (Chomsky 1965): the lexicon feeds Deep Structure (= CIL), which maps to CS; Surface Structure (= PIL) maps to PS. (b) Revised Extended Standard Theory (Chomsky 1975): the lexicon feeds Deep Structure; Surface Structure (= PIL = CIL) maps to both PS and CS. (c) Government-Binding Theory (Chomsky 1981): the lexicon feeds D-Structure; from S-Structure the derivation branches to PF (= PIL? = PS?) and, via LF movement, to LF (= CIL), which maps to CS. (d) Minimalist Program (Chomsky 1993): generalized transformations build structure from the lexicon; Spell-Out branches to PF (= PIL? = PS?) and LF (= CIL), which maps to CS.]

The operation of lexical insertion is supposed to insert lexical items in their entirety into syntactic phrase structures. What is a lexical item? As seen in the above quotation, a lexical item is regarded as a triple of phonological, syntactic, and semantic features, that is, as a structure (PS, SS, CS), listed in long-term memory. This view of lexical items raises an uncomfortable situation for any of the layouts of grammar sketched in figure 4.1, one that could have been noticed early on. It means that the phonological and conceptual structures of lexical items are dragged through a syntactic derivation, inertly; they become available only once the derivation crosses the appropriate interface into phonetic or semantic form.

(Alternatively, in the Minimalist Program, a lexical item in its entirety is said to project initial syntactic structures (through the operation of Merge); the operation Spell-Out is what makes lexical phonology available to further rules. This phonological information has to be transmitted through the derivation from the point of lexical insertion; otherwise Spell-Out could not tell, for instance, whether to spell out a particular noun in the (narrow) syntactic structure as /kæt/ or /bæt/. Thus the Minimalist Program too drags the lexical phonology invisibly through the syntax.)

Such a conception of lexical insertion, however, reduces the much-vaunted autonomy of syntax essentially to the stipulation that although lexical phonological and semantic information is present in syntactic trees, syntactic rules can't see it. (As Ivan Sag (personal communication) has suggested, it is as if the syntax has to lug around two locked suitcases, one on each shoulder, only to turn them over to other components to be opened.) Some colleagues have correctly observed that there is nothing formally wrong with the traditional view; no empirical difficulties follow. Although I grant their point, I suspect this defense comes from the habits of the syntactocentric view. To bring its artificiality more into focus, let us turn the tables and imagine that lexical items complete with syntax and semantics were inserted into phonological structure. One could make this work formally too, but would one want to?

Quite a number of people (e.g., Otero 1976; Den Besten 1976; Fiengo 1980, 1983; Koster 1987; Di Sciullo and Williams 1987; Anderson 1992; Halle and Marantz 1993; Büring 1993) have noticed the oddity of traditional lexical insertion and have proposed to remove phonological information (and in some versions, semantic information as well) from initial syntactic structures. On this view, the syntactic derivation carries around only the lexical information that syntactic rules can access: syntactic features such as category, person, number (i.e., the φ-features of Chomsky 1981), the mass/count distinction, case-marking properties, and syntactic subcategorization. (Alternatively, as mentioned in chapter 2, people sometimes advocate a strategy of "going back to the lexicon" at Spell-Out to retrieve phonological information. For present purposes this amounts to the same thing as traditional lexical insertion. See section 4.2 for discussion.) Di Sciullo and Williams (1987, 54) characterize this approach as follows:

[W]hat is syntax about? We could say "sentences" in the usual sense, but another answer could be "sentence forms," where syntactic forms are "wordless" phrase markers. "Sentence," then, as ordinarily understood, is not in the purview of syntax, although the syntactic forms that syntax is about certainly contribute to the description of "sentences."

The distinctive characteristic of all these approaches is that they move lexical insertion to some later point in the syntactic derivation. Grafted onto an otherwise unchanged GB theory, they might look in skeleton something like figure 4.2.

[Figure 4.2 The architecture of Government-Binding Theory but with late lexical insertion: the lexicon feeds S-Structure rather than D-Structure; S-Structure branches to PF and to LF (= CIL), which maps to CS.]

Although late lexical insertion keeps phonological and semantic features out of the derivation from D-Structure to S-Structure, it does not keep them out of syntax entirely: they are still present inertly in S-Structure, visible only after the derivation has passed through the relevant interface. Of course, there comes a point in the derivation where the grammar has to know, both phonologically and semantically, whether some noun in the syntactic structure is cat or bat, whose strictly syntactic features are identical. It therefore eventually becomes necessary to perform some version of lexical insertion.

The problem this raises for any standard version of lexical insertion is that a lexical item is by its very nature a "mixed" representation: a (PS, SS, CS) triple. However, according to the hypothesis of Representational Modularity, phonological, syntactic, and conceptual representations should be strictly segregated, but coordinated through correspondence rules that constitute the interfaces; such "mixed" representations should be impossible. Therefore a lexical item cannot be inserted at any stage of a syntactic derivation without producing an offending mixed representation.

To put this more graphically, the traditional notation for syntactic trees (4a) is necessarily a mixed representation, violating Representational Modularity: the cat, at the bottom, is phonological, not syntactic, information. (4a) is traditionally taken as an abbreviation for (4b), where the informal notation the cat has been replaced by an explicit encoding of the full lexical entries for these items.

(4) a. [NP [Det the] [N cat]]
    b. [NP [Det ðə; +Det; [DEFINITE]] [N /kæt/; +N, -V, count, singular; [Thing CAT; TYPE OF ANIMAL, etc.]]]

In a conception of grammar that adopts Representational Modularity as an organizing principle, things work out differently: a representation like (4b) is entirely ill formed. Rather, we must replace it with a triple of structures along the lines shown in (5), each of which contains only features from its proper vocabulary. (I utilize conceptual structure formalism in the style of Jackendoff 1990; readers should feel free to substitute their own favorite.)

(5) PS: [Phrase-c [Cl-a ðə] [Word-b kæt]]
    SS: [NP-c [Det-a] [N-b, count, singular]]
    CS: [Thing-c [DEFINITE]-a [TYPE: CAT]-b]

These three structures do not just exist next to one another; rather, they are explicitly linked by the subscripted indices. The word the4 corresponds to the clitic, the Det, and the definiteness feature; the word cat corresponds to the N and the Type-constituent of conceptual structure; and the whole phrase corresponds to the NP and the whole Thing-constituent.

If (5) is the proper way of representing the cat, there cannot be a rule that inserts all aspects of a lexical item in syntactic structure. Rather, a lexical item is to be regarded as a correspondence rule, and the lexicon as a whole is to be regarded as part of the PS-SS and SS-CS interface modules. On this view, the word cat is represented formally something like (6).

(6) PS: [Word-b kæt]   SS: [N-b, count, sing]   CS: [Thing TYPE: CAT]-b

The proper way to regard (6) is as a small-scale correspondence rule. It lists a small chunk of phonology (the lexical phonological structure, or LPS), a small chunk of syntax (the lexical syntactic structure, or LSS), and a small chunk of semantics (the lexical conceptual structure, or LCS), and it shows how to line these chunks up when they are independently generated in the parallel phonological, syntactic, and conceptual derivations.

Lexical correspondence rules are of course not all there is to establishing a correspondence. The PS-SS rules (12a,b) and the SS-CS rule (21) in chapter 2 are schemata for more general larger-scale rules. For instance, the index c in (5) comes from a larger-scale correspondence rule that coordinates maximal syntactic phrases with conceptual constituents. The rules of Argument Fusion and Restrictive Modification in Jackendoff 1990 (chapter 2) and the principles of argument structure linking (chapter 11) are further examples of phrasal SS-CS rules; the principles of coercion and cocomposition in chapter 3 are more specific phrasal SS-CS rules.

In short, the formal role of lexical items is not that they are "inserted" into syntactic derivations but rather that they license the correspondence of certain (near-)terminal symbols of syntactic structure with phonological and conceptual structures.
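In this conception, an entry like (6) can be modeled as a triple of fragments sharing a linking index, and "licensing" becomes a matching check over independently built structures. The sketch below is mine, under obvious simplifications; the tuples are stand-ins for real PS/SS/CS fragments:

    # A lexical item as a small-scale correspondence rule: three chunks,
    # one per level of representation. Nothing is ever inserted; licensing
    # just verifies that co-indexed chunks across the three independently
    # generated structures are paired by some entry in the lexicon.
    CAT = {
        "LPS": ("Word", "kaet"),                      # chunk of phonology
        "LSS": ("N", ("count", "sing")),              # chunk of syntax
        "LCS": ("Thing", "TYPE: CAT"),                # chunk of semantics
    }

    def licenses(lexicon, ps_chunk, ss_chunk, cs_chunk):
        return any(e["LPS"] == ps_chunk and e["LSS"] == ss_chunk
                   and e["LCS"] == cs_chunk for e in lexicon)

    lexicon = [CAT]
    print(licenses(lexicon, ("Word", "kaet"),
                   ("N", ("count", "sing")),
                   ("Thing", "TYPE: CAT")))   # True: the triple is licensed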

Something much like it is found in the formalisms of HPSG. only the satisfaction of constraints. is meant to replace assumption 3 of chapter 1 . it should be recalled that no argument has been given for lexical insertion except that it worked in the Aspects framework. (On the other hand. and a lexical one. Notice that. in particular reinterpreting their claim of late lexical insertion as a claim that the licensing of the PS-SS corre­ spondence involves the syntactic level of surface structure. However. This means there is no "ordering" of lexical insertion in the syntactic derivation. in this case rN. " nothing requires it to be a separate interface! Why should one consider replacing lexical insertion with lexical licens­ ing? For one thing. Interestingly. 6 On pre­ liminary examination. the idea is that phonological realization of lexical items best awaits the insertion of syntactic inflection.2. a semantic. it appears that Halle and Marantz's theory can be recast in terms of lexical licensing. count sing] and per­ haps its linking subscripts. In turn. Now . what is "carried through" the syntactic derivation. except that it applies as part of the mapping between S-Structure and PF in figure 4. but only its syntactic features. Tree-Adjoining Grammar. it is just a 30-year-old tradition. there are not three interfaces-a phonological. and Construction Grammar.90 Chapter 4 the correspondence of certain (near-)terminal symbols of syntactic struc­ ture with phonological and conceptual structures.) Lexical licensing eliminates an assumption that is tacit in the way con­ ceptual necessity 3 of chapter 1 is stated: the idea that the computational system has a distinct interface with the lexicon. and lexical licensing would not have made formal sense. the view that syntax carries only syntactic fea­ tures is also proposed by Halle and Marantz ( 1 993). At the time it was developed. 5 There is no operation of insertion. visible to syntactic rules. which might be called lexical licensing. the lexical interface is part of the other two. is not the whole lexical item. On the present view. What it for­ mally means to say that lexical items are "inserted" at some level of S­ Structure is that the licensing of syntactic terminal symbols by lexical items is stated over this level of syntax. Rather. symbol rewriting was the basic operation of formal grammars (assumption 2 of chapter I ). their approach has a rule called Vocabulary Insertion that has the same effect as lexical insertion. although a lexical interface is a "conceptual necessity. This approach. it is not clear to me how semantic interpretation takes place in Halle and Marantz's approach.

that constraint satisfaction is widely acknowledged as a fundamental operation in grammatical theory, lexical licensing looks much more natural. So it is possible to leave syntactic theory otherwise unaltered, at least for a first approximation, as long as we recognize that the line connecting N and cat in (4a) is not a domination relation but really an abbreviation for the linking relations shown in (6). (In particular, for many purposes we can continue drawing trees the traditional way.)

At the same time, I see lexical licensing as embodying a more completely worked out version of the autonomy thesis, in that it allows us to strictly segregate information according to levels of representation. Since syntax can see only syntactic features, all the words in (7a) are syntactically identical, as are those in (7b-d).

(7) a. dog, cat, armadillo
    b. walk, swim, fly
    c. slippery, gigantic, handsome
    d. on, in, near

4.2 PIL = CIL
Next I want to ask what levels of syntax should be identified as the Phonological Interface Level (PIL) and the Conceptual Interface Level (CIL). But I am going to approach the question in a roundabout way, with a different question: Should PIL and CIL be different levels of syntax, as is the case in all the diagrams above except (b) of figure 4.1? I raise the issue this way for an important reason. Since syntax can see only syntactic features, the information that a particular word is cat rather than dog has to be communicated somehow between phonology and conceptual structure, so that one can say what one means: in effect going directly from finely individuated lexical phonological structure /kæt/ to finely individuated lexical conceptual structure [CAT], via the PS-SS and SS-CS interfaces. Lexical items are very finely individuated in phonology and in semantics, but not in syntax. This is accomplished most simply if these interfaces involve the very same level of syntax, that is, if PIL = CIL. Then it is possible to check a lexical correspondence among all three representations simultaneously.

---. at the same time making sure the syntactic structure is a singular count noun. This solution works. .3 sketches the problem. this lexical i ndex play s no role in the lex­ icon either. : -.- . Consequently. ..ss-cs � [CAT] correspondence : count : correspondence . when we get to its counterpart NU.. The first way involves giving each lexical item a unique syntactic index that accompanies it through the derivation. reference back to the lexicon enables NS5 to map to [CAT] in conceptual structure. but the price is the introduction of this lexical index that serves to finely individ­ uate items in syntax-a lexical index that syntactic rules themselve s can­ not refer to. .92 Chapter 4 = (a) PIL CIL -: synt3ctic ! : derivation : Ikret/ f--. connect N* through the syntactic derivation to its counterpart N** in CIL.PS-SS -+ sing J. Figure 4. some extra means has to be introduced to track lexical items through the syntactic derivation... In other words.. if PIL ::/= CIL. at CIL. we lose track of the fact that N* is connected to /kret/. Since lexical items are already adequately individuated by their phonology and semantics.3 Tracking a lexical item through the syntax (a) if PIL = CIL and (b) if PIL "" CIL structure [CAT). In turn.-.: correspondence I : �y���ti� --. What if PIL is not the same level as CIL? Then it is necessary to con­ nect /kret/ to some noun N* in PIL. The problem is that once we are in syntax. we have no way of knowing that it should be connected to [CAT] rather than [ARMADILLO). Its only function is to make sure lexical identity is preserved ... Let me outline the two ways I can think of to track lexical items through the syntax. then connect NU to [CAT). Sup­ pose the lexical item cat has the index 85.PS-SS � W _ count -) W$ � SS-CS � [CAT] ' correspondence : . : (PIL) smg · (CIL) : Ikretl f--.-. Then /kret/ maps not just to N at PIL. (b) PIL " CIL rules ----------rules : [N ] : i (PILlCIL) i .-1 derivation [ ] rules _ _ _ __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ___ __ rules Figure 4. but to NS5 .

Another way to track lexical items through the derivation is to give an index not to the individual lexical items, but to the lexical nodes in the syntactic tree.7 For example, the N node of PIL matched with /kæt/ might have the index 68; in turn, this index would be identifiable at CIL. Accordingly, a well-formedness condition must be imposed on the (PS, SS, CS) triple for a sentence, something like (8).

(8) If the phonological structure of a lexical item W maps to node n at PIL, then the conceptual structure of W must map to node n at CIL.

Again, this would work, but it too invokes a mechanism whose only purpose is to track otherwise unindividuated lexical items through the derivation from PIL to CIL.

One might suggest that the lexical item's linking index (e.g., the subscripted b in (6)) might serve the purpose of a lexical syntactic index. However, it does not uniquely identify cat; rather, the linking index's only proper role is as an interface function: it serves to link syntax to phonology at PIL and to semantics at CIL. If we were also to use it to keep track of lexical items through a syntactic derivation, this would in effect be admitting that all the steps of the derivation between PIL and CIL are also accessible to phonology and semantics: each of these steps can see which phonology and semantics go together. That is, this would indirectly sneak the unwanted richness of mixed representations back into syntax. Accordingly, we should try to do without it.8

In short, part of the reason for removing lexical phonology and semantics from the syntax is to eliminate all unmotivated syntactic individuation of lexical items. These mechanisms to track items through the derivation are basically tricks to subvert the problems created by such removal; in line with minimalist assumptions (to use Chomsky's turn of phrase), we should try to do without them. Accordingly, I will try to work out a theory in which phonology and semantics interface with syntax at the very same level. This way, the PS-SS and SS-CS interfaces can be checked at the same time, and the fine individuation of lexical items can be regarded as passing directly from phonology to semantics without any syntactic intervention.

I can think of three reasons other than simplicity why this is an attractive theory.
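Condition (8) amounts to a simple consistency check over the triple. A sketch, again with invented data structures:

    # Sketch of well-formedness condition (8): if a word's phonology maps
    # to node n at PIL, its conceptual structure must map to the same node
    # at CIL; otherwise the tracking has failed.
    def satisfies_condition_8(pil_map, cil_map):
        """pil_map: word -> syntactic node index at PIL
           cil_map: word -> syntactic node index at CIL"""
        return all(cil_map.get(w) == n for w, n in pil_map.items())

    pil = {"cat": 68}
    print(satisfies_condition_8(pil, {"cat": 68}))  # True
    print(satisfies_condition_8(pil, {"cat": 23}))  # False: identity lost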

First, there is a class of lexical items that have no syntactic structure, for instance fiddle-de-dee, e-i-e-i-o, tra-la-la, and ink-a-dink-a-doo. Some of these have only phonology. Others have both phonology and semantics (or pragmatics), for instance hello, ouch, yippee, wow, and dammit.9 (A more complex case, arguably of the same sort, is expletive infixation, which also has phonology and semantics/pragmatics but no detectable syntax. See chapter 5.)

A syntactician might dismiss these items as "outside language," since they do not participate in the "combinatorial system." In fact, these items are made of standard phonological units; they observe normal syllable structure constraints and stress rules; and they undeniably carry some sort of meaning, albeit nonpropositional.10 This suggests that they are regarded as linguistic items; thus the most appropriate place in the mind to house them would seem to be the natural language lexicon, especially under the assumption of Representational Modularity. They occur in quotational contexts like (9a), where virtually anything is possible, since such contexts place no syntactic constraints on the string in question. Note, however, that they do not occur in an environment such as (9b), reserved (in nonteenage dialects) for nonlinguistic expressions. That is, these items occur inside language, but not outside language.

(9) a. "Hello," he said.
    b. Then John went, "[belching noise]"/*"Hello."

Now if the use of lexical items involved inserting them in an X0 position in syntactic structure, then dragging them through a derivation until they could be phonologically and semantically interpreted, we could not use these particular items at all: they could never be inserted, except possibly under a totally vacuous syntactic node Exclamation0 or the like.11

Suppose instead that we adopt lexical licensing theory and treat these peculiar words as defective lexical items: tra-la-la is of the form (PS, 0, 0), and hello is of the form (PS, 0, CS). They are indeed outside the combinatorial system of (narrow) syntax. They therefore can be unified with contexts in which no syntax is required, for example as exclamations and in syntactic environments such as the phrase yummy yummy yummy or "Ouch," he said.

Next suppose that the PS-SS interface and the SS-CS interface were to access different levels of syntax, as in (b) of figure 4.3. In this case there would be no way to map the phonology of hello into its semantics, since the intermediate levels of syntactic derivation would be absent. By contrast, if PIL = CIL, the phonology of hello can be mapped into its semantics directly, simply

bypassing syntax altogether.

A second attraction is that a stage at which one can use words meaningfully but cannot construct syntactic structures with which to combine words meaningfully is directly characterizable: a stock of items mapping phonology to semantics without syntax. The subsequent emergence of syntax then abandons nothing, if possible, but simply adds new structure; that is, it can be seen just as the gradual growth of a new component of structure within an already existing framework. Again, this is most naturally stated if PIL = CIL. On the other hand, in a theory where PIL ≠ CIL, such a stage has no natural characterization.

A third argument that PIL = CIL involves topic and focus. Stress and intonation mark topic and focus in phonology; syntactic devices such as topicalization and clefting also mark topic and focus, and these redundantly require the appropriate stress and intonation. That is, the syntax of topic and focus must be correlated with both phonology and semantics, and the most elegant solution is to state both correlations over a single syntactic level. More precisely, the argument comes from the fact that topic and focus in conceptual structure are of