



Draft of November 29, 2017


1 Implications

2 Sets, Relations, and Functions
2.1 Sets
2.2 Ordered pairs and relations
2.2.1 Ordered pairs
2.2.2 Relations
2.3 Functions
2.4 Negative polarity items

3 First order logic
3.1 Atomic formulas
3.1.1 Individual constants
3.1.2 Predicate symbols and atomic formulas
3.1.3 Function symbols
3.1.4 Boolean connectives
3.1.5 Conditionals
3.2 Equivalence and tautology
3.3 Summary: L0
3.3.1 Syntax of L0
3.3.2 Semantics of L0
3.4 Quantification
3.4.1 Syntax of L1
3.4.2 Semantics of L1

4 Typed lambda calculus
4.1 Lambda abstraction
4.2 Summary
4.2.1 Syntax of Lλ
4.2.2 Semantics

5 Translating to lambda calculus
5.1 Fun with Functional Application
5.1.1 Homer loves Maggie
5.1.2 Homer is lazy
5.1.3 Homer is with Maggie
5.1.4 Homer is proud of Maggie
5.1.5 Homer is a drunkard
5.1.6 Toy fragment
5.2 Predicate Modification
5.2.1 Homer is a lazy drunkard
5.3 Quantifiers
5.3.1 Quantifiers: not type e
5.3.2 Solution: Quantifiers
5.4 The definite article

6 Variables in Natural Language
6.1 Relative clauses
6.2 Quantifiers in object position
6.2.1 Quantifier Raising
6.2.2 A type-shifting approach
6.2.3 Putative arguments for the movement
6.3 Possessives
6.4 Pronouns

7 Dynamic semantics
7.1 Pronouns with indefinite antecedents
7.2 File change semantics
7.3 Discourse Representation Theory

8 Presupposition
8.1 Definedness conditions
8.2 Projection problem
8.3 Presupposition in Dynamic Semantics

9 Coordination and Plurals
9.1 Coordination
9.2 Mereology
9.3 The plural
9.4 Cumulative readings
9.5 Formal mereology
9.6 A formal fragment
9.6.1 Logic syntax
9.6.2 Logic semantics
9.6.3 English syntax
9.6.4 Translations

10 Event semantics
10.1 Why event semantics
10.1.1 The Neo-Davidsonian turn
10.2 Quantification in event semantics
10.2.1 The first strategy: Verbs as events
10.3 Quantification in event semantics
10.3.1 A formal fragment
10.3.2 The second strategy: Verbs as event quantifiers
10.4 Conjunction in event semantics
10.5 Negation in event semantics
10.5.1 A formal fragment

11 Intensional semantics
11.1 Opacity
11.2 Modal Logic
11.2.1 Priorean tense logic
11.2.2 Alethic logic
11.3 Intensional Logic
11.3.1 Introducing Intensional Logic
11.3.2 Formal fragment
11.4 Fregean sense and hyperintensionality
11.5 Explicit quantification over worlds
11.5.1 Modal auxiliaries
11.6 Logic of Indexicals

12 Tense

1 ∣ Implications

Semantics and pragmatics are two subfields of linguistics that deal
with meaning. What is meaning? What is it to understand? For
instance, does Google understand language? Many might argue
that it does, in some sense. One could point to the fact that one
can type in, “350 USD in SEK” and get back for example “2232.23
Swedish Krona” as the first result. This is a step toward under-
standing. But it can be argued that in general, Google does not
understand language because it does not do INFERENCES. For ex-
ample, there is a webpage that says:

(1) Beth speaks English, French and Spanish.

If we were to ask Google, “Does Beth speak German?”, Google would
not know the answer. It wouldn’t even know the answer if we
asked, “Does Beth speak a European language?”
A hallmark of a system or agent that understands language or
grasps meaning is that it can do these kinds of inferences. An-
other way to put this is that a good theory of meaning should
be able to explain when one sentence IMPLIES another sentence.
The implication relations that hold between sentences are per-
haps the main kind of facts that semantics and pragmatics try to
explain. (Semantics and pragmatics can also be used to explain why certain sentences sound odd to native speakers, i.e., acceptability judgments, and facts about frequency in natural language corpora.)
What exactly does ‘imply’ mean? In other words, what are implications?¹ Implication can be defined either as a relation between two ideas (or propositions) or as a relation between two sentences. (Other options are also possible.) We will define it as a relation between two sentences here. The definition should be weak enough that it will cover all of the different types of implications that we will discuss. It should also take into account the fact that whether or not one sentence implies another can depend on the context in which it is spoken. Let us therefore use the following definition:

(2) A sentence φ IMPLIES another sentence ψ in context c if and only if: By expressing φ in context c, a speaker signals that ψ is true.²

To apply the definition, we plug sentences in for these variables. For example: In ordinary contexts, (1) implies Beth does not speak German, because by expressing Beth speaks English, French and Spanish, the author of the website signals that Beth does not speak German is true.

¹Terminological note: An IMPLICATION (or IMPLICATION RELATION) is a relation that holds between a premise and an implied conclusion. Strictly speaking, the noun inference describes an act of inferring a conclusion from a premise, but inference can also be used to mean implication. The verb infer is totally different from the verb imply: the subject of infer is the person doing the inference, but the subject of imply is the premise. An intelligent person infers a conclusion from a premise, but a premise implies a conclusion.

²We use φ ‘phi’ and ψ ‘psi’ as placeholders for sentences, following the logical tradition of using Greek letters for variables that range over expressions in the language that is being theorized about. In general, the term OBJECT LANGUAGE is used to refer to the language that we are talking about; the language in which we theorize about the object language is called the META-LANGUAGE. So φ and ψ in this case range over sentences of the object language. Since the Greek letters φ and ψ are part of the meta-language, we can call them META-LANGUAGE VARIABLES (even though they range over expressions of the object language). When we get to formal logic, we will encounter variables that are part of the object language, i.e., OBJECT LANGUAGE VARIABLES.
There are several different types of implications. For example, (1) logically implies that Beth speaks a European language: in conjunction with the fact that French is a European language, (1) guarantees its truth, at least against the background of very basic assumptions. This is an example of an ENTAILMENT. We define entailment as follows:

(3) A sentence φ ENTAILS a sentence ψ if and only if: Whenever φ is true, ψ is true too.

(Entailment is not usually thought of as a context-sensitive relation, although there is a notion of CONTEXTUAL ENTAILMENT, which holds between two sentences if the first, combined with the information that is already given in the context, entails the second. Moreover, the semantic content of a sentence may depend on features of the context in which it is uttered, such as who is speaking, and when. A more careful definition would take this into account.)

In the following pairs, the (a) sentence entails the (b) sentence:

(4) a. There are three pens on the table.
    b. More than two pens are on the table.

(5) a. Jane graduated from MIT in 2008.
    b. Jane graduated from MIT.

(6) a. Mary invited Fred and Jack.
    b. Mary invited Fred.

(7) a. All cars are blue.
    b. All sportscars are blue.

Each of these examples satisfies the definition of entailment. Whenever There are three pens on the table is true, More than two pens are on the table is true too. And All cars are blue entails All sportscars are blue, given that all sportscars are cars, because in every imaginable world or situation where the former is true, the latter is true too.
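The definition of entailment above can be mirrored computationally. In the following sketch (an illustration only, not part of the formal theory; the situations and sentences are invented for the example), each sentence is modeled as the set of conceivable situations in which it is true, and entailment holds when every situation that makes the first sentence true also makes the second true:

```python
# A toy model: each "sentence" is the set of situations in which it is true.
# Situations s1..s4 vary in how many pens are on the table (1, 2, 3, 4).
situations = {"s1": 1, "s2": 2, "s3": 3, "s4": 4}

three_pens = {s for s, n in situations.items() if n == 3}
more_than_two = {s for s, n in situations.items() if n > 2}

def entails(phi, psi):
    """phi entails psi iff psi is true in every situation where phi is true."""
    return phi <= psi  # subset: every phi-situation is a psi-situation

print(entails(three_pens, more_than_two))  # True: (4a) entails (4b)
print(entails(more_than_two, three_pens))  # False: a 4-pen situation is a counterexample
```

Note that the check never asks whether either sentence is actually true; it only compares the classes of situations in which each would be true.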
Note that there are several other ways that entailment can be defined (Chierchia & McConnell-Ginet, 2000):

(8) Alternative definitions of entailment from φ to ψ
    a. The information that ψ conveys is contained in the information that φ conveys.
    b. A situation describable by φ must also be a situation describable by ψ.
    c. φ and it is not that ψ is contradictory (can’t be true in any situation).

For example, the information that More than two pens are on the table conveys is contained in the information that There are three pens on the table conveys; a situation describable by There are three pens on the table must also be a situation describable by More than two pens are on the table; and There are three pens on the table and it is not true that more than two pens are on the table cannot be true in any situation.

Example (1) also implies that Beth does not speak German. This implication derives from the assumption that the languages listed make up an exhaustive list of the languages that Beth speaks. If she did speak German, the speaker would be “lying by omission”. But would you say that the sentence is false if she did? Presumably not. So this implication is not an entailment. It is an example of a CONVERSATIONAL IMPLICATURE. Conversational implicatures are inferences that the hearer can derive using the assumption that the speaker is adhering to the norms of conversation (Grice, 1975).

If I said Beth no longer speaks French, I would signal a PRESUPPOSITION that she spoke French at some time in the past. No longer is among the set of words and phrases that can be used to signal presuppositions. To presuppose something is to take it for granted, to treat it as uncontroversial and known to everyone participating in the conversation. Presupposition can be seen as a type of entailment: In every situation where Beth no longer speaks French is true, Beth spoke French at some time in the past is also true.
But presuppositions differ from ordinary entailments, as you can see from what happens when they are negated or put in the form of a question. For example, suppose someone were to ask, Does Beth no longer speak French? This would imply that Beth spoke French at some time in the past. The same pattern does not arise with ordinary entailment: merely asking Does Beth speak French? does not imply that Beth speaks a European language.

Semantics is sometimes said to be the study of what linguistic expressions mean, while pragmatics is the study of what speakers mean by them. The term ‘pragmatics’ can also be applied to the study of any interaction between meaning and context, broadly construed; the non-entailment implications that a sentence has can also be considered part of its meaning. There is no sharp dividing line between semantics and pragmatics, but it is fair to say that ordinary entailments lie in the domain of semantics proper, while implicatures lie in the domain of pragmatics proper, and the study of presupposition lies squarely in their intersection. Since this is a book about semantics, we will focus primarily on ordinary entailments, but presuppositions will be addressed as well.

Heim & Kratzer (1998) begin their book with the following bold sentence: “To know the meaning of a sentence is to know its truth conditions.” The TRUTH CONDITIONS of a sentence are the conditions under which the sentence is true. Many find this view objectionable, and it would not be prudent to claim that the meaning of a sentence in natural language consists entirely in its truth conditions, although this view is sometimes expressed; arguably, meaning is not just truth conditions. But truth conditions are a way of capturing one important aspect of meaning, namely entailment. If we know the conditions under which two given sentences are true, then we can determine, given the definition of entailment above, whether one entails the other. We don’t need to know whether either sentence is in fact true; we only need to be able to discriminate between situations in which the sentence is true and situations in which the sentence is false.

There are other aspects of meaning that truth conditions leave out as well. For example, 2 + 2 = 4 is true under just the same circumstances as Dirichlet’s Theorem is true (namely always, since they are mathematical truths and therefore do not vary in their truth value from situation to situation). Yet it is possible to know one without knowing the other. This phenomenon goes by the name of HYPERINTENSIONALITY, and is related to Frege’s (1892) famous distinction between ‘sense’ and ‘reference’. Such phenomena show that there is more to meaning than truth conditions.

The strategy we follow has its roots in the study of logic, which has an extremely long recorded history. Notable developments include the work of Aristotle, whose SYLLOGISMS were studied for centuries; Aristotle’s interest in the form of valid arguments made him the first formal logician. A syllogism can be defined as a form of argument consisting of one or more PREMISES and a CONCLUSION, where the conclusion necessarily follows from the premises. The most famous example is:

(9) All men are mortal. (Premise 1)
    Socrates is a man. (Premise 2)
    Therefore, Socrates is mortal. (Conclusion)

An argument in which the conclusion follows from the premises is called a VALID argument. In other words, a valid argument is one such that the premises, taken together, entail the conclusion. Note that the premises of an argument need not be true in order for the argument to be valid; an argument is valid as long as the conclusion is entailed by the premises. A SOUND argument is one whose premises are true. An argument may be both valid and sound (ideally), valid but not sound, or neither (unfortunately). Aristotelian logic and developments thereof enjoyed a rich and intensive period of study during the Middle Ages.
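For argument forms of propositional logic, validity can be checked mechanically: a form is valid if and only if no assignment of truth values to its atomic sentences makes all the premises true and the conclusion false. A minimal illustrative sketch (not part of the book's formal apparatus; the helper `valid` and the encoding of "if...then" as material implication are choices made for this example):

```python
from itertools import product

def valid(premises, conclusion, n_atoms):
    """A form is valid iff every truth-value assignment that makes all
    premises true also makes the conclusion true."""
    for vals in product([True, False], repeat=n_atoms):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False  # found a counterexample assignment
    return True

# Modus ponens: "if p then q; p; therefore q" -- valid.
print(valid([lambda p, q: (not p) or q, lambda p, q: p],
            lambda p, q: q, 2))   # True

# Affirming the consequent: "if p then q; q; therefore p" -- invalid.
print(valid([lambda p, q: (not p) or q, lambda p, q: q],
            lambda p, q: p, 2))   # False
```

Soundness, by contrast, cannot be checked this way: whether the premises are actually true is a question about the world, not about the form of the argument.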
But it died out at the hands of the Renaissance humanists in the 14th century. Interest in logic was revived in the 19th century by the likes of Gottlob Frege and Bertrand Russell, who wanted to set mathematics on a firm logical footing. Frege and Russell developed artificial languages, which, unlike natural languages, are completely unambiguous. In the preface to his Begriffsschrift, Frege wrote:

    If the task of philosophy is to break the domination of words over the human mind [...], then my concept notation, being developed for these purposes, can be a useful instrument for philosophers [...]. I believe the cause of logic has been advanced already by the invention of this concept notation.

As Frege’s primary interest was mathematical, his artificial notation was not specifically intended as a way to explicate the meanings of sentences in natural language. Indeed, up until the 1960’s there was a broad consensus among philosophers of language from otherwise diverse camps that there was an intractable divide between formal logic and natural language. Noam Chomsky, despite his application of formal language theory to natural language syntax, was also deeply skeptical of the relevance of formal logic to linguistic theory. But in the 1970’s Richard Montague (among others) dared to bridge this divide, beginning one of his most famous articles with the bold claim: “I reject the contention that an important theoretical difference exists between formal and natural languages.” Montague defined a ‘fragment’ of English just as formally and precisely as a formal logic, laying the groundwork for a new approach. The potential of this approach might never have been explored nearly as fully as it has been if it hadn’t been for Barbara Partee, a linguist who took an interest in his approach, advocated for it, and extended its frontier. Thanks to her efforts, and those of her students, formal semantics has become a core area of generative linguistics.
This book will present a modern version of Montague’s approach. To do this, we will use several ingredients.

First, we need to make certain assumptions about the nature of what language describes – what makes up reality and dreams and lies. For this, semanticists typically borrow ideas and notation from SET THEORY, which will be discussed in the next chapter. Set theory will thus form part of our meta-language, and our models will be characterized using ideas from set theory. Using this precise meta-language, we can start describing the circumstances with respect to which a given sentence is true. For example, we could imagine a circumstance where Beth speaks only Turkish, and under that circumstance, sentence (1) is false. These circumstances can be called CIRCUMSTANCES OF EVALUATION, and are often conceived of as POSSIBLE WORLDS or SITUATIONS. A possible world is just a way that the world might be; a situation typically carves out a particular part of a possible world. The class of circumstances under which a sentence is true can be identified with the truth conditions of a sentence: the circumstances where a sentence is true are those under which the truth conditions are satisfied. A MODEL for first-order logic can be used to represent a possible world.

The second ingredient we will use is an unambiguous formal language that has a clearly defined relationship to these models. As the formal language serves as an intermediary between the natural language and the truth conditions, representing the latter precisely, we call it a (LOGICAL) REPRESENTATION LANGUAGE. This formal language is not the meta-language (although it is not uncommon to hear semanticists and philosophers speak this way), nor should it be called ‘Logical Form’. We reserve the term ‘Logical Form’ for an abstract level of syntactic representation for natural language; we will get to that later.

Thirdly, we define rules connecting expressions of natural language to this formal language. Once the natural language is connected to the formal language, and the formal language is connected to the models, we can determine when one natural language sentence entails another, and what the truth conditions of natural language sentences are.
Crucially, the system that we build will be COMPOSITIONAL. This means that the meaning of a larger expression can be determined from the meanings of its parts. Our rules for connecting expressions of natural language to the formal language must therefore include not only a way of assigning meanings to simple words and other atomic lexical units (a LEXICON), but also a way of combining these meanings together to assign meanings to complex units. The way we do this is using COMPOSITION RULES.

One of the deeper questions that research in semantics engages with is how many composition rules are necessary. According to what Heim & Kratzer (1998) call FREGE’S CONJECTURE, we need only one rule: Functional Application. Advocates of Construction Grammar emphasize phenomena that seem to point to a need for many different composition rules, corresponding to different constructions. In this book, we will present the moderately slim theory of composition given by Heim & Kratzer (1998), consisting of only five very general-purpose composition rules:

• Functional Application
• Predicate Modification
• Predicate Abstraction
• Lexical Terminals
• Pronouns and Traces

The first three are for putting together complex expressions; the last two are for interpreting simple, atomic units. Note that insofar as we are first defining a formal language and translating expressions of natural language into it, the system presented here is similar to the one in Richard Montague’s famous paper, ‘The Proper Treatment of Quantification in Ordinary English,’ affectionately known as ‘PTQ’.
Another way to go about things would be to skip the formal language and give the interpretations of English expressions directly using our meta-language, as Montague did in his paper ‘English as a Formal Language’. That style is known as DIRECT INTERPRETATION; the method adopted here, which routes through a formal representation language, is known as INDIRECT INTERPRETATION. Using indirect interpretation is one respect in which this textbook differs from that of Heim & Kratzer (1998). Indirect interpretation offers a number of practical technical advantages, and meshes well with the Lambda Calculator, pedagogical software that can be used in conjunction with this book.

The system presented here is quite simple. One reason to learn it is that it can be used as a lingua franca with other linguists, since it is quite standard. There are a number of interesting ways in which it falls short at explaining natural language phenomena. Don’t be discouraged by this. Consider this the basic starter kit for a theory of semantics. If you understand the foundations well (and you have a good supplier of bells and whistles), you will be able to modify it to suit your purposes. Another reason not to despair too quickly is that after the theory is modified and extended to capture more and more data, it may look very little like how it did originally.

Exercise 1. Consider the following exchange that took place when American movie producer Samuel Bronston was being questioned under oath during a bankruptcy hearing:

Q: Do you have any bank accounts in Swiss banks, Mr. Bronston?
A: No, sir.
Q: Have you ever?
A: The company had an account there for about six months, in Zurich.

It was later revealed that Bronston had had a Swiss bank account for about 5 years.
Clearly, Bronston’s answer implied, in some sense of imply, that he had not personally ever had a Swiss bank account. Is this implication an entailment? Explain your answer.

Exercise 2. One of the hallmarks of presuppositions is that they persist under negation and in questions. For example, whether I say Mary has stopped smoking, Mary hasn’t stopped smoking, or Has Mary stopped smoking?, I imply that Mary has smoked in the past. This is not true of entailments. For example, an entailment of Mary owns a golden retriever is Mary owns a dog, and this does not persist under negation or in questions; consider Mary does not own a golden retriever and Does Mary own a golden retriever? Use this PROJECTION TEST to determine whether the following implications are entailments or presuppositions. Explain how the test supports your conclusion.

(a) The flying saucer came again.
    Implied: The flying saucer has come sometime in the past.
(b) The flying saucer came yesterday.
    Implied: The flying saucer has come sometime in the past.

Exercise 3. Two sentences have the same truth conditions if they are true under exactly the same circumstances, i.e., they are logically equivalent. Do the following pairs of sentences have the same truth conditions? Explain your reasoning.

(a) I know a bachelor.
    I know an unmarried male.
(b) John likes Mary but he fears her.
    John likes Mary and he fears her.
(c) Of all the students, only Mary got an A.
    All of the students except Mary didn’t get an A.
Exercise 4. Which of the following arguments are valid? Which are sound?

(a) Every Spaniard is female. George Bush is a Spaniard. Therefore, George Bush is female.
(b) 2 + 2 = 4. Therefore, Paris is the capital of France.
(c) Every Swede is a socialist. Bernie Sanders is a socialist. Therefore, Bernie Sanders is a Swede.
(d) If 2 + 2 = 5 then the moon is made of green cheese. 2 + 2 = 5. Therefore, the moon is made of green cheese.
(e) North Korea resembles China. Therefore, China resembles North Korea.

Exercise 5. Can a valid argument have false premises and a false conclusion? False premises and a true conclusion? True premises and a false conclusion? True premises and a true conclusion? If you answer yes to any of these, give an example of such an argument. If your answer is no, explain why.

Exercise 6. Explain the difference between meta-language and object language.
Exercise 7. What are the three ingredients of the formal theory to be presented in this book? Use as few words as possible in your answer.


2 ∣ Sets, Relations, and Functions

Recall that one of the ingredients of our formal theory is a way of modelling the realities, dreams, and lies that language depicts. To achieve this, we need to introduce certain tools for describing the circumstances under which a given sentence is true or false. In this chapter, we will introduce the most fundamental of these tools: sets, ordered pairs and relations, and functions.

2.1 Sets

A SET is an abstract collection of distinct objects which are called the MEMBERS or ELEMENTS of that set. Here is an example of a set:

{2, 7, 10}

This set contains three elements: the number 2, the number 7, and the number 10. Notice that the members of the list are separated by commas and enclosed by curly braces. To denote the fact that 2 is a member of this set, we can write:

2 ∈ {2, 7, 10}

To denote the fact that 3 is not a member of this set, we can write:

3 ∈/ {2, 7, 10}

This statement can be read, ‘3 is not an element of the set containing 2, 7 and 10.’
Note that the elements of a set are not ordered. Thus this set:

{2, 3, 4, 5}

is exactly the same set as this set:

{5, 4, 3, 2}

Note also that listing an element multiple times does not change the membership of the set. Thus:

{3, 3}

is exactly the same set as this set:

{3}

Sets can be very big or very small. Here is another example of a set:

{2, 4, 6, 8, ...}

The ellipsis notation (...) signals that the list of elements continues according to the pattern. So this set is infinite; it contains all positive even numbers. But a set need not have multiple members; it can have just one element:

{3}

This set contains just the number 3. If a set has only one member, it is called a SINGLETON. A set can even be empty. The set with no elements at all is called the EMPTY SET, written either like this: {} or like this: ∅.

The CARDINALITY of a set is the number of elements it contains. The cardinality of the empty set, of course, is 0. Cardinality is
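The properties just described — unordered elements, irrelevance of repetition, membership, and cardinality — are mirrored directly by the set type of many programming languages. A quick illustration in Python (an aside for readers who find executable examples helpful, not part of the text's formal apparatus):

```python
a = {2, 3, 4}
b = {4, 3, 2, 2}       # order and repetition are irrelevant to membership

print(a == b)          # True: same members, hence the same set
print(2 in a)          # True:  2 ∈ {2, 3, 4}
print(5 not in a)      # True:  5 ∉ {2, 3, 4}
print(len(a))          # 3, the cardinality |{2, 3, 4}|
print(len(set()))      # 0, the cardinality of the empty set
```

Note that `set()` (not `{}`, which builds an empty dictionary) denotes the empty set in Python.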
denoted with vertical bars surrounding the set: If A is a set, then ∣A∣ denotes the cardinality of A. So, for example:

∣{5, 6, 7}∣ = 3

This formula can be read, ‘The cardinality of the set containing 5, 6, and 7 is 3.’

The members of a set can be all sorts of things. In the kind of set theory that linguists typically use, elements may be either concrete (like the beige 1992 Toyota Corolla I sold in 2008, you, or your computer) or abstract (like the number 2, the English phoneme /p/, or the set of all Swedish soccer players). A set can, for example, contain another set as an element. The following set:

{2, {3, 4, 5}}

has two elements, not four. One of the elements is the number 2. The other element is a three-membered set. A set could also, of course, contain the empty set as an element, as the following set does:

{∅, 2}

Exercise 1. How many elements do the following sets contain?

(a) {2, {3, 4, 5, 6}}
(b) ∅
(c) {∅}
(d) {∅, {4, 5}}
(e) {∅, {∅}}

Partee et al. (1990) also point out:
A set may be a legitimate object even when our knowledge of its membership is uncertain or incomplete. The set of Roman emperors is well-defined even though its membership is not widely known, and similarly the set of all former first-grade teachers is perfectly determined, although it may be hard to find out who belongs to it. For a set to be well-defined it must be clear in principle what makes an object qualify as a member of it.

When we can’t list all of the members of a set, we can use PREDICATE NOTATION to describe the set of things meeting a certain condition. To do that, we place a variable – a name that serves as a placeholder – on the left-hand side of a vertical bar, and put a description containing the variable on the right-hand side. For example, to describe the set of numbers below zero, we can introduce the variable n (for ‘number’) and use the condition n < 0 that any number n must meet in order to be counted as part of the set. The result looks like this:

{n ∣ n < 0}

This expression can be read, ‘the set of numbers n such that n is less than 0’. So the vertical bar is pronounced ‘such that’ in this context.

In general, a set A is a SUBSET of a set B if and only if every member of A is a member of B. Put more formally:

A ⊆ B iff for all x: if x ∈ A then x ∈ B

Note that subsethood is different from elementhood: the set {2, 3} is not an element of the set {2, 3, 4}, but rather a subset of the set {2, 3, 4}.

Subsets are useful in natural language semantics. The word every can be thought of as a relation between two sets X and Y which holds if X is a subset of Y, i.e., if every member of X is a member of Y. The sentence every musician snores, for instance, expresses that every member of the set of musicians is a member of the set of people who snore.
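Predicate notation and the subset test both translate naturally into Python: set comprehensions mirror {n ∣ n < 0}, and the `<=` operator tests ⊆. A sketch (illustrative only; the finite domain and the lexical sets are invented for the example, since a computer cannot enumerate the infinite set {n ∣ n < 0} itself):

```python
domain = range(-5, 6)                      # a small finite stand-in domain
negatives = {n for n in domain if n < 0}   # mirrors {n | n < 0}, restricted to the domain
print(negatives)                           # {-5, -4, -3, -2, -1}

def every(X, Y):
    """'Every X is a Y' holds iff X is a subset of Y."""
    return X <= Y   # X ⊆ Y

musicians = {"ann", "ben"}
snorers = {"ann", "ben", "carl"}
print(every(musicians, snorers))   # True:  every musician snores
print(every(snorers, musicians))   # False: carl snores but is no musician
```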
This type of scenario can be depicted as follows, with the dashed circle for the snorers and the plain circle for the musicians.

Here are some true statements:

{a, b} ⊆ {a, b, c}
{b, c} ⊆ {a, b, c}
{a} ⊆ {a, b, c}
{a, b, c} ⊆ {a, b, c}

Things get slightly trickier to think about when the elements of the sets involved are themselves sets. The set {a, {b}} has exactly two elements, namely: a and {b}. It is tempting to think that {a, {b}} contains b, but this is not correct. The set {b} is not the same thing as b. One is a set and the other might not be. The reason {a, {b}} is not a subset of {a, b, c} is that the former has a member that is not a member of the latter, namely {b}. So:

{a, {b}} ⊆/ {a, b, c}

(The slash across the ⊆ symbol negates it, so ⊆/ can be read ‘is not a subset of’.) The following is a true statement, though:

{a, {b}} ⊆ {a, {b}, c}

Every element of {a, {b}} is an element of {a, {b}, c}, as we can see by observing that the following two statements hold:

a ∈ {a, {b}, c}
{b} ∈ {a, {b}, c}

Here is another true statement:

{b, {b}} ⊆ {a, b, {b}, c}
Note also that by this definition, every set is a subset of itself, even though normally we think of two sets of different sizes when we think of the subset relation. So:

{a, b, c} ⊆ {a, b, c}

To avoid confusion, it helps to distinguish between subsets and proper subsets. A is a PROPER SUBSET of B, written A ⊂ B, if and only if A is a subset of B and A is not equal to B:

A ⊂ B iff (i) for all x: if x ∈ A then x ∈ B and (ii) A ≠ B

So {a, b, c} ⊆ {a, b, c}, but it is not the case that {a, b, c} ⊂ {a, b, c}.

The reverse of subset is superset. A is a SUPERSET of B, written A ⊇ B, if and only if every member of B is a member of A:

A ⊇ B iff for all x: if x ∈ B then x ∈ A

And as you might expect, A is a PROPER SUPERSET of B, written A ⊃ B, if and only if A is a superset of B and A is not equal to B:

A ⊃ B iff (i) for all x: if x ∈ B then x ∈ A and (ii) A ≠ B

Note that the empty set is a subset (not an element!) of every set; in particular:

∅ ⊆ {a, b, c}

Since the empty set doesn't have any members, it never contains anything that is not part of another set, so the definition of subset is always trivially satisfied. So whenever anybody asks you, "Is the empty set a subset of...?", you can answer "yes" without even hearing the rest of the sentence. (If they ask you whether the empty set is an element of some other set, then you'll have to look among the elements of the set in order to decide.)

The INTERSECTION of A and B, written A ∩ B, is the set of all entities x such that x is a member of A and x is a member of B.
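These distinctions can be checked directly with Python's comparison operators on sets, which implement subset (`<=`), proper subset (`<`), superset (`>=`), and proper superset (`>`); a small sketch:

```python
A = {"a", "b", "c"}

# Every set is a subset of itself, but not a proper subset of itself:
print(A <= A)  # subset: True
print(A < A)   # proper subset: False

# The empty set is a subset of every set...
print(set() <= A)  # True

# ...but being a subset does not make it an element:
print(frozenset() in A)  # False
```

(`frozenset` is used for the membership test because a mutable `set` cannot itself occur inside a set.)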

A ∩ B = {x ∣ x ∈ A and x ∈ B}

For example:

{a, b, c} ∩ {b, c, d} = {b, c}
{b} ∩ {b, c, d} = {b}
{a} ∩ {b, c, d} = ∅
{a, b} ∩ {a, b} = {a, b}

Intersection is very useful in natural language semantics. It can be used as the basis for a semantics of and. For instance, if someone tells you that John is a lawyer and a doctor, then you know that John is in the intersection between the set of lawyers and the set of doctors. If the dashed circle in the following diagram represents doctors, and the plain circle represents lawyers, then John is located somewhere in the area where the two circles overlap, i.e., where he is both a doctor and a lawyer.

[Figure: two overlapping circles – dashed: doctors; plain: lawyers – with John in the overlap]

If the intersection between two sets is empty, the two sets are DISJOINT. No can be thought of as a relation between two sets X and Y which holds if the two sets have no members in common, in other words, if the intersection between X and Y is empty. So no musician snores holds if there is no individual who is both a musician and a snorer.

The English determiner some could be thought of in terms of intersection as well: as a relation between two sets X and Y which holds if there is some member of X which is also a member of Y, i.e., if the intersection between X and Y is non-empty. For example, some musician snores should be true if there is some individual which is both a musician and a snorer, like so:

[Figure: two overlapping circles with an individual in the overlap]
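A sketch of these intersection-based meanings in Python; the individuals are invented for illustration:

```python
doctors = {"john", "ann"}
lawyers = {"john", "bea"}

# "John is a lawyer and a doctor": John is in the intersection.
print(doctors & lawyers)

musicians = {"mary", "sue"}
snorers = {"bob"}

# "No musician snores" holds iff the two sets are disjoint:
print(musicians.isdisjoint(snorers))  # True

# "Some musician snores" holds iff the intersection is non-empty:
print(len(musicians & snorers) > 0)   # False
```

The `&` operator is Python's intersection, and `isdisjoint` tests exactly the "empty intersection" condition used for no.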

Another useful operation on sets is union. The UNION of A and B, written A ∪ B, is the set of all entities x such that x is a member of A or x is a member of B.

A ∪ B = {x ∣ x ∈ A or x ∈ B}

For example:

{a, b} ∪ {b, c} = {a, b, c}
{a, b} ∪ {d, e} = {a, b, d, e}
{a, b} ∪ ∅ = {a, b}

As the reader can guess, union can be used to give a semantics for or. If someone tells you that John is a lawyer or a doctor, then you know that John is in the union of the set of lawyers and the set of doctors. (You might normally assume that he is not in the intersection of doctors and lawyers though – that he is either a doctor or a lawyer, but not both. This is called an exclusive interpretation for or, and we will get to that later on.)

We can also talk about SUBTRACTING one set from another. The DIFFERENCE of A and B, written A − B or A ∖ B, is the set of all entities x such that x is an element of A and x is not an element of B. This is also known as the RELATIVE COMPLEMENT of A and B.

A − B = {x ∣ x ∈ A and x ∉ B}

For example, {a, b, c} − {b, d, f} = {a, c}. A − B can also be read 'A minus B', or 'the result of subtracting B from A'. Sometimes people speak simply of the COMPLEMENT of a set A, without specifying what the complement is relative to. This is still implicitly a relative complement: it is relative to some assumed UNIVERSE of entities.
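Union and difference also correspond directly to Python set operators; a quick sketch with invented sets:

```python
A = {"a", "b", "c"}
B = {"b", "d", "f"}

print(A | B)  # union: all members of A or B
print(A - B)  # difference (relative complement): in A but not in B

# A "bare" complement is still relative to an assumed universe:
universe = {"a", "b", "c", "d", "e", "f"}
print(universe - A)
```

Note that `-` on Python sets is always a relative complement: there is no absolute complement operation, mirroring the point about an assumed universe.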

Exercises on sets

The following exercises are taken from Partee, ter Meulen and Wall, Mathematical Methods in Linguistics.

Exercise 2. Given the following sets:

A = {a, b, c}        E = {a, b, 2, 3, 4}
B = {a, b}           F = ∅
C = {c, {c}}         G = {{a, b}, {c, 2}}
D = {b, c}

classify each of the following statements as true or false.

(a) c ∈ A       (g) D ⊂ A       (m) B ⊆ G
(b) c ∈ F       (h) A ⊆ C       (n) {B} ⊆ G
(c) c ∈ E       (i) D ⊆ E       (o) D ⊆ G
(d) {c} ∈ E     (j) F ⊆ A       (p) {D} ⊆ G
(e) {c} ∈ C     (k) E ⊆ F       (q) G ⊆ A
(f) B ⊆ A       (l) B ∈ G       (r) {{c}} ⊆ E

Exercise 3. Consider the following sets:

S1 = {{∅}, {A}, A}       S6 = ∅
S2 = A                   S7 = {∅}
S3 = {A}                 S8 = {{∅}}
S4 = {{A}}               S9 = {∅, {∅}}
S5 = {{A}, A}

(a) Of the sets S1–S9, which are members of S1?
(b) Which are subsets of S1?
(c) Which are members of S9?

(d) Which are subsets of S9?
(e) Which are members of S4?
(f) Which are subsets of S4?

Exercise 4. Given the sets A–G from above, repeated here:

A = {a, b, c}        E = {a, b, 2, 3, 4}
B = {a, b}           F = ∅
C = {c, {c}}         G = {{a, b}, {c, 2}}
D = {b, c}

list the members of each of the following:

(a) B ∪ C     (g) A ∩ E     (m) B − A
(b) A ∪ B     (h) C ∩ D     (n) C − D
(c) D ∪ E     (i) B ∩ F     (o) E − F
(d) B ∪ G     (j) C ∩ E     (p) F − A
(e) D ∪ F     (k) B ∩ G     (q) G − B
(f) A ∩ B     (l) A − B

Exercise 5. Let A = {a, b, c}, B = {c, d}, and C = {d, e, f}. Calculate the following:

(a) A ∪ B
(b) A ∩ B

(c) A ∪ (B ∩ C)
(d) C ∪ A
(e) B ∪ ∅
(f) A ∩ (B ∩ C)
(g) A − B
(h) Is a a member of {A, B}?
(i) Is a a member of A ∪ B?

2.2 Ordered pairs and relations

The meanings of common nouns like cellist and intransitive verbs like snores are often thought of as sets (the set of cellists, the set of individuals who snore, etc.). Transitive verbs like love, admire, and respect are sometimes thought of as denoting relations between two individuals. Technically, a relation is a set of ordered pairs.

2.2.1 Ordered pairs

As stated above, a relation is a set of ordered pairs. An ordered pair always consists of two members, a first member and a second member:

⟨a, b⟩

Here, a is the FIRST MEMBER and b is the SECOND MEMBER. Sets are not ordered:

{a, b} = {b, a}

But the elements of an ORDERED PAIR written ⟨a, b⟩ are ordered:

⟨a, b⟩ ≠ ⟨b, a⟩

Like sets, the members of an ordered pair can be almost anything. Here is an ordered pair of

numbers:

⟨3, 4⟩

A member of an ordered pair could also be a set, as in the ordered pair whose first member is the set {a, b, c} and whose second member is the set {d, e, f}, written:

⟨{a, b, c}, {d, e, f}⟩

Alternatively, one or both of the members could be an ordered pair, as in the following:

⟨3, ⟨10, 12⟩⟩

In this ordered pair, the first member is the number 3, and the second member is the ordered pair ⟨10, 12⟩. Note that ⟨3, {10, 12}⟩ is not the same thing as ⟨3, ⟨10, 12⟩⟩. The first is an ordered pair whose second member is the set containing 10 and 12; the second is an ordered pair whose second member is the ordered pair ⟨10, 12⟩.

Exercise 6. True or false?

(a) {3, 3} = {3}
(b) {3, 4} = {4, 3}
(c) ⟨3, 3⟩ = ⟨3, 3⟩
(d) ⟨3, 4⟩ = ⟨4, 3⟩
(e) {⟨3, 3⟩} = ⟨3, 3⟩
(f) {⟨3, 4⟩} = {⟨3, 3⟩}
(g) ⟨3, {3, 4}⟩ = ⟨3, {4, 3}⟩
(h) {3, {3, 4}} = {3, {4, 3}}
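The contrast between sets and ordered pairs can be mimicked in Python with sets and tuples (which serve here as ordered pairs); a sketch:

```python
# Sets are unordered...
print({3, 4} == {4, 3})  # True

# ...but ordered pairs are ordered:
print((3, 4) == (4, 3))  # False

# A pair whose second member is a set is not the same as a pair
# whose second member is a pair:
print((3, frozenset({10, 12})) == (3, (10, 12)))  # False
```

Pairs nest freely, just as in the text: `(3, (10, 12))` is a pair whose second member is itself a pair.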

2.2.2 Relations

As mentioned above, transitive verbs like love, admire, and respect are sometimes thought of as relations between two individuals. The 'love' relation corresponds to the set of ordered pairs of individuals such that the first member loves the second member.¹ Suppose John loves Mary. Then the pair ⟨John, Mary⟩ is a member of this relation. Certain nouns like mother can also be thought of as expressing relations.

A relation has a DOMAIN and a RANGE. The domain is the set of objects from which the first members are drawn, and the range is the set of objects from which the second members are drawn. For example, the mother relation might be thought of as a relation from female animals to animals. If R is a relation between two sets A and B, then we say that R is a relation from A to B.

Sometimes it is useful to talk about the relation that results from pairing every element of one set with every element of another set. The CARTESIAN PRODUCT of two sets A and B, written A × B, is the set of ordered pairs that can be constructed by pairing a member of A with a member of B:

A × B = {⟨x, y⟩ ∣ x ∈ A and y ∈ B}

For example, the Cartesian product of {a, b, c} and {1, 0} is:

{⟨a, 1⟩, ⟨a, 0⟩, ⟨b, 1⟩, ⟨b, 0⟩, ⟨c, 1⟩, ⟨c, 0⟩}

If R is a relation from A to B, then R is a subset of the Cartesian product of A and B. In symbols:

R ⊆ A × B

¹ The set of ordered pairs such that the first member loves the second is called the EXTENSION of the 'love' relation. Sometimes a relation and its extension are kept conceptually separate; on such a view, a relation is some kind of abstract entity, such as an algorithm for determining whether two people love each other, and the extension is determined by it but not the same thing. Sometimes a relation and its extension are identified; on this view, a relation just is a set of ordered pairs. We will tend to blur the distinction between a relation and its extension for the purpose of cutting down on the length of our sentences.
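The Cartesian product, and the fact that any relation from A to B is a subset of A × B, can be sketched with Python's `itertools.product`:

```python
from itertools import product

A = {"a", "b", "c"}
B = {1, 0}

# A x B: every way of pairing a member of A with a member of B.
cartesian = set(product(A, B))
print(len(cartesian))  # 6 pairs, as in the example above

# Any relation from A to B is a subset of the Cartesian product:
R = {("a", 1), ("b", 0)}
print(R <= cartesian)  # True
```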

Exercise 7. All of the following are relations from the set containing the Nordic countries (Iceland, Norway, Sweden, Finland, and Denmark) to {0, 1}:

R1: {⟨Sweden, 1⟩, ⟨Norway, 1⟩, ⟨Denmark, 1⟩, ⟨Iceland, 0⟩, ⟨Finland, 0⟩}
R2: {⟨Sweden, 0⟩, ⟨Norway, 0⟩, ⟨Denmark, 0⟩, ⟨Iceland, 1⟩, ⟨Finland, 1⟩}
R3: {⟨Sweden, 0⟩, ⟨Sweden, 1⟩, ⟨Norway, 0⟩, ⟨Denmark, 0⟩, ⟨Iceland, 0⟩, ⟨Finland, 0⟩}
R4: {⟨Sweden, 1⟩, ⟨Norway, 1⟩, ⟨Denmark, 1⟩, ⟨Iceland, 1⟩, ⟨Finland, 1⟩}
R5: {Sweden, 1}

Questions:

(a) All of these relations have the same domain. What is it?
(b) All of these relations have the same range. What is it?
(c) True or false: All of these relations are subsets of the Cartesian product of {Iceland, Norway, Sweden, Denmark, Finland} and {0, 1}.

2.3 Functions

Now we are ready to introduce functions. Intuitively, a function is like a vending machine: It takes an INPUT (e.g., a specification of which item you would like to buy), and gives an OUTPUT (e.g., a chocolate bar). (Let us set aside the fact that vending machines typically also require an input of money.) This kind of idea can be treated mathematically as a relation from the set of inputs to

the set of outputs. Importantly, for something to be a function, it must be predictable exactly which output you are going to get for a given input. So every input should be paired with one and only one output. For example, {⟨0, 0⟩, ⟨0, 1⟩} is not a function because given the input 0, there are two outputs: 0 and 1. In Figure 2.1, the relations depicted are not functions. In Figure 2.2, the relations depicted are functions.

[Figure 2.1: Two non-functions. Each diagram shows a domain and a range; in each, some domain element is linked to more than one range element, or to none at all.]

Formally, a function is a special kind of relation. (Very special! Functions lie at the heart of modern formal semantics.) A relation R from A to B is a FUNCTION if and only if it meets both of the following conditions:

• Each element in the domain is paired with just one element in the range.
• The domain of R is equal to A.
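The two conditions in the definition can be checked mechanically over finite relations; a sketch (the helper name is our own):

```python
def is_function(relation, domain):
    """A relation (a set of pairs) from `domain` is a function iff
    every domain element is paired with exactly one output."""
    firsts = [x for (x, _) in relation]
    # Condition 2: every domain element appears as a first member.
    # Condition 1: no first member appears twice.
    return set(firsts) == set(domain) and len(firsts) == len(set(firsts))

print(is_function({(0, 0), (0, 1)}, {0}))     # False: input 0 has two outputs
print(is_function({(0, 1), (1, 0)}, {0, 1}))  # True
```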

[Figure 2.2: Two functions. Each diagram shows a domain and a range, with every domain element linked to exactly one range element.]

If F is a function that contains the ordered pair ⟨a, b⟩, then:

F(a) = b

This means that given a as input, F gives b as output. We write F(a) to denote 'the result of applying function F to argument a', or 'F of a', or 'F applied to a'. More properly speaking, we say that a is given to F as an ARGUMENT, and that b is the VALUE of the function F when a is given as an argument.

Exercise 8. Which of the relations R1–R5 are functions? It may help to draw them.

Exercise 9. Given R1 as defined above:

(a) What value does R1 take, given Norway as an argument?

(b) In other words, what is R1(Norway)?
(c) What is R1(Iceland)?
(d) What is R2(Norway)?

Characteristic function. As mentioned above, the semantics of common nouns like tiger and picnic and student are sometimes thought of in terms of sets – the set of tigers, the set of picnics, the set of students. But another way of treating their meaning (which will prove useful to us) is as functions from individuals to truth values. For example, the set of children in the Simpsons family is {Bart, Lisa, Maggie}. One possible analysis of the word child relative to the Simpsons domain is this set. But we could also treat it as a function that takes an individual and returns a truth value (0 for "false" and 1 for "true"):

{⟨Homer, 0⟩, ⟨Marge, 0⟩, ⟨Bart, 1⟩, ⟨Lisa, 1⟩, ⟨Maggie, 1⟩}

Here is an alternative way of writing the same thing:

⎡ Homer  → 0 ⎤
⎢ Marge  → 0 ⎥
⎢ Bart   → 1 ⎥
⎢ Lisa   → 1 ⎥
⎣ Maggie → 1 ⎦

This function, applied to Lisa, would yield 1 ("true"). Applied to Homer, it would yield 0 ("false"), because Homer is not a child (despite the fact that Lisa is more mature than he is). A function that yields 1 ("true") for every input that is in set S is called the CHARACTERISTIC FUNCTION of S. The function we have just been looking at is the characteristic function of the set {Bart, Lisa, Maggie}.
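The characteristic function of the set of Simpson children can be written out as a Python dict, directly mirroring the arrow notation above:

```python
domain = {"Homer", "Marge", "Bart", "Lisa", "Maggie"}
children = {"Bart", "Lisa", "Maggie"}

# The characteristic function of `children`, with `domain` as its domain:
child = {x: 1 if x in children else 0 for x in domain}

print(child["Lisa"])   # 1 ("true")
print(child["Homer"])  # 0 ("false")
```

The set and its characteristic function carry the same information: the set can be recovered as the inputs mapped to 1.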

Exercise 10.

(a) What set is R1 the characteristic function of? (You can define it by listing the elements.)
(b) What set is R2 the characteristic function of?
(c) Give the characteristic function of the set of Scandinavian countries, with the set of Nordic countries as its domain.

2.4 Negative polarity items

With some basic tools from set theory, we can start to get a grip on one of the classic puzzles of semantics. Here is the puzzle. There are certain words of English, including any, ever, yet, and anymore, which can be used in negative sentences but not positive sentences:

(1) a. Chrysler dealers don't ever sell any cars anymore.
    b. *Chrysler dealers ever sell any cars anymore.

These are called NEGATIVE POLARITY ITEMS (NPIs). It's not just negative environments where NPIs can be found. Here is a sampling of the data (Ladusaw, 1980):

(2) {No one / At most three people / Few students / *Someone / *At least three people / *Many students} who had ever read anything about phrenology attended any of the lectures.

(3) I {never / rarely / seldom / *usually / *always / *sometimes} ever eat anything for breakfast anymore.

(4) a. John voted {against / *for} ever approving any of the proposals.
    b. John finished his homework {without / *with} any help.

(5) John will replace the money {before / if / *after / *when} anyone ever misses it.

(6) It's {hard / difficult / *easy / *possible} to find anyone who has ever read anything much about phrenology.

(7) John {doubted / denied / *believed / *hoped} that anyone would ever discover that the money was missing.

(8) It {is unlikely / is doubtful / is surprising / amazed John / *is likely / *is certain / *is unsurprising} that anyone could ever discover that

the money was missing.

The issue is made a bit more complex by the fact that words differ as to where they license negative polarity items. No licenses NPIs throughout the sentence:

(9) a. No [student who had ever read anything about phrenology] [attended the lecture].
    b. No [student who attended the lectures] [had ever read anything about phrenology].

But the determiner every licenses NPIs only in the noun phrase immediately following it:

(10) a. Every [student who had ever read anything about phrenology] [attended the lecture].
     b. *Every [student who attended the lectures] [had ever read anything about phrenology].

This shows that the ability to license negative polarity items is not a simple yes/no matter for each lexical item. So, along with negation, there are words like hard and doubt and unlikely which license negative polarity items.

Ladusaw (1980) illustrated a correlation between NPI licensing and "direction of entailment". A simple, positive sentence containing the word cellist will typically entail the corresponding sentence containing the word musician:

(11) a. Mary is a cellist.
     b. ⇒ Mary is a musician.

We can describe this situation in terms of sets and subsets. If we think of these terms as being arranged visually in a taxonomic hierarchy with more specific concepts at the bottom, we can say that the inference in (11) proceeds from lower (more specific) to higher (more general), hence "upwards".

[Figure: a taxonomic hierarchy with musician at the top; string player and brass player below it; and cellist, violinist, ..., trombonist, ... at the bottom]

Here is another upward entailment:

(12) a. Some cellists snore.
     b. ⇒ Some musicians snore.

An entailment by a sentence of the form [... A ...] to a sentence of the form [... B ...] where A is more specific than B can thus be labelled an UPWARD ENTAILMENT. But negation and the determiner no reverse the entailment pattern:

(13) a. Mary isn't a musician.
     b. ⇒ Mary isn't a cellist.

(14) a. No musicians snore.
     b. ⇒ No cellists snore.

These entailments are called, as the reader may have guessed, DOWNWARD ENTAILMENTS, because they go from more general (higher) to more specific (lower). There is a correlation between NPI-licensing and downward entailment: NPIs occur where downward entailments occur. Compare the following examples to the NPI data for no, some and every above.

(15) a. No musician snores. ⇒ No cellist snores. (downward)

     b. No musician snores. ⇒ No musician snores loudly. (downward)

(16) a. Some cellists snore. ⇒ Some musicians snore. (upward)
     b. Some musician snores loudly. ⇒ Some musician snores. (upward)

(17) a. Every musician snores. ⇒ Every cellist snores. (downward)
     b. Every musician snores loudly. ⇒ Every musician snores. (upward)

Ladusaw's generalization was as follows: An expression licenses negative polarity items in its scope if it licenses downward entailments in its scope. The "scope" of an expression is the constituent it combines with syntactically. We can assume that a determiner like no, every, or some combines syntactically with the noun next to it, and that the resulting noun phrase combines syntactically with the verb phrase, yielding the following syntactic structure in particular:²

            S
      NP         VP
   D      N′    snores
   no  musician

So no licenses NPIs in the N′, and the expression no musician licenses NPIs and downward entailments in the VP. Every licenses

² S stands for "Sentence", NP stands for "Noun Phrase", VP stands for "Verb Phrase", D stands for "Determiner", and N′ should be read "N-bar"; it stands for an intermediate phrase level in between nouns and noun phrases. Do an internet search for "X-bar syntax" to find out more about it. The triangles in the trees indicate that there is additional structure that is not shown in full detail.

NPIs in the N′, but the expression every musician does not license NPIs or downward entailments in the VP.

Every X Y means that X is a subset of Y. Suppose that every X Y is true. Then X is a subset of Y. If that is true, then any subset X′ of X will also be a subset of Y. So, if Every musician snores is true, then Every cellist snores is also true, because every cellist is a musician. This can be visualized as follows. Let the circle drawn with a solid line represent the set of snorers, Y, and let the dashed line represent the musicians, X. Assume it is true that all musicians snore, so that the dashed circle is fully contained by the solid circle. Now let the dotted circle represent a subset X′ of X (e.g., the set of cellists). This will be fully contained by the dashed line.

[Figure: three nested circles – solid: snorers; dashed: musicians; dotted: cellists]

Since every X Y entails every X′ Y for every X′ that is a subset of X, we can say that every is LEFT DOWNWARD MONOTONE ("left" because it has to do with the element on the left, rather than the element on the right). In general, a determiner δ is left downward monotone if δX Y entails δX′ Y for all X′ that are subsets of X.

A determiner δ is RIGHT DOWNWARD MONOTONE if δX Y entails δX Y′ for any Y′ that is a subset of Y. Let us consider whether every is right downward monotone. Suppose that every X Y is true, so X is a subset of Y. Now we will take a subset Y′ of Y. Are we guaranteed that X is a subset of Y′? No! Consider the following scenario.

[Figure: X (dashed) is contained in Y (solid), but not in the smaller circle Y′ (dotted) inside Y]

Or think about it this way: Just because every musician snores doesn't mean that every musician snores loudly. So every is not right downward monotone.

Now let us consider some. Some X Y means that the intersection of X and Y contains at least one member. With some, we are not guaranteed that the sentence will remain true when we replace X with a subset X′: if we take a subset X′ of X, then we might end up with a set that has no members in common with Y. Suppose, for example, that Some musician snores is true. This does not mean that Some cellist snores is true, because it could be the case that none of the musicians who snore are cellists, like this:

[Figure: the cellists (dotted) lie inside the musicians (dashed) but outside the overlap with the snorers (solid)]

So some is not left downward monotone. By analogous reasoning, it isn't right downward monotone either.

Exercise 11. Is no left downward monotone? Is it right downward monotone? Explain. So, does Ladusaw's generalization work for no? Explain.

Exercise 12. For each of the examples in (2), (3), and (4), check whether Ladusaw's generalization works. Are downward entailments licensed in exactly the places where NPIs are licensed? (Note that the examples that you need to construct in order to test this need not contain NPIs; they can be examples like the ones in (15), (16) and (17).)
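The monotonicity reasoning above can be verified mechanically on small finite sets, treating determiners as relations between sets; a sketch (all individuals are invented):

```python
def every(X, Y): return X <= Y          # every X Y: X is a subset of Y
def some(X, Y): return bool(X & Y)      # some X Y: intersection is non-empty

musicians = {"m1", "m2"}
cellists = {"m1"}              # a subset of the musicians
snorers = {"m1", "m2"}
loud_snorers = {"m2"}          # a subset of the snorers

# every is left downward monotone: shrinking X preserves truth.
print(every(musicians, snorers), every(cellists, snorers))        # True True

# ...but not right downward monotone: shrinking Y can destroy truth.
print(every(musicians, snorers), every(musicians, loud_snorers))  # True False

# some is not left downward monotone: a counterexample.
print(some({"m2"}, {"m2"}), some(cellists, {"m2"}))               # True False
```

A single counterexample like the last line suffices to show non-monotonicity, while the monotone cases would have to hold for every choice of subset.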

Note that although Ladusaw's generalization works surprisingly well considering its simplicity, there are certain complications. One issue is how to explain the presence of negative polarity items in questions:

Did you have any problems?

On the basis of this and other data, Giannakidou (1999) offers a theory of negative polarity item licensing based on a notion called 'veridicality'. Another issue is that presupposition needs to be taken into consideration. For example, recall that before and if license NPIs, and after and when do not. So before and if should be downward-entailing, and after and when should not be. Based on the following examples, it seems that after and when are indeed not downward-entailing, and that if is downward-entailing – but before seems not to be, although intuitions are not entirely clear on this point.

(18) a. John will replace the money before he gets to France.
     b. ⇒/ John will replace the money before he gets to Paris.

(19) a. John will replace the money if he gets to France.
     b. ⇒ John will replace the money if he gets to Paris.

(20) a. John will replace the money after he gets to France.
     b. ⇒/ John will replace the money after he gets to Paris.

(21) a. John will replace the money when he gets to France.
     b. ⇒/ John will replace the money when he gets to Paris.

Whether or not one gets the intuition that (18a) entails (18b) seems to depend on whether or not one assumes that John will get to Paris. Assuming that John will get to Paris, (18a) does seem to imply (18b), because John will get to France either before he gets to Paris or simultaneously.³ This is the kind of observation that

³ Similarly, Condoravdi (2010) observes that Ed left before we were in the room does not intuitively imply Ed left before we were in the room standing by the window, but the inference does go through if we assume that we were standing by the window.

motivated von Fintel (1999) to introduce the concept of Strawson Downward-Entailment (although von Fintel did not address before per se). An environment is Strawson downward-entailing if it is downward-entailing under the assumption that all of the presuppositions of both sentences are true.

The discussion about how to account for the distribution and meaning of negative polarity items is a rich one that is still ongoing.

3 ∣ First order logic

This chapter introduces first order logic, which provides tools for explaining with satisfactory precision why, for example, the following argument is valid:

All men are mortal. (Premise 1)
Socrates is a man. (Premise 2)
Therefore, Socrates is mortal. (Conclusion)

The first premise of this argument, All men are mortal, can be represented in first-order logic as follows:

∀x [Man(x) → Mortal(x)]

This can be read, 'for all x, if x is a man, then x is mortal'. The second premise, Socrates is a man, can be written as follows:

Man(s)

where s is a so-called INDIVIDUAL CONSTANT referring to Socrates. The conclusion, Socrates is mortal, can be written:

Mortal(s)

The semantics of first-order logic is defined in such a way that any situation (or world, or model) in which the premises hold is also a situation in which the conclusion holds. Hence, the argument is valid in this logic. (And that seems to be a point in favor of this logic as a way of representing the meaning of the corresponding argument in English, as that argument does indeed appear to be valid according to our intuitions.)

3.1 Atomic formulas

3.1.1 Individual constants

Individual objects are named by INDIVIDUAL CONSTANTS, like s above, standing for Socrates. In this book, we adopt the convention that individual constants start with a lowercase letter, and may contain any sequence of letters and numbers and underscores, but no spaces. For example, s_and_j is a valid individual constant, but S is not, nor is s and j.

Expressions of first-order logic are interpreted relative to a particular world or scenario, called a MODEL. The model is associated with a set of objects; this set of objects is called the DOMAIN of the model. An individual constant denotes a particular individual object in the domain. Because they refer to individuals (rather than denoting a set or a relation or a truth value), individual constants are TERMS. Unlike names in English such as Mary, individual constants are not ambiguous: they pick out exactly one object. However, note that not every individual object in the model must have a name. For example, we might have a model whose domain consists of Bart, Lisa, Maggie, Marge, and Homer Simpson. It could nevertheless be the case that the only constants in the language are bart and lisa, corresponding to Bart Simpson and Lisa Simpson, respectively.

It is important to distinguish between the objects themselves – those objects that exist in the model, which are not part of the formal language – and the names for those objects. When we write "Bart Simpson" using normal font, we are referring to the individual Bart Simpson in our

meta-language. When we write bart, we are mentioning the individual constant that is part of the formal logical language that we are defining. Since we are talking about first-order logic, first-order logic is our object language for the moment.

3.1.2 Predicate symbols and atomic formulas

Along with individual constants, first-order logic has PREDICATE SYMBOLS. Mortal is an example of a UNARY PREDICATE, one which takes only one argument. The ARITY of a predicate is the number of arguments that it takes. Unary predicates, like Mortal above, take one argument, and therefore have an arity of 1. Binary predicates have an arity of 2; something like OlderThan would be a BINARY PREDICATE, relating two individuals. A TERNARY PREDICATE has an arity of 3. An example of a ternary predicate might be BETWEEN, which would hold of three objects x, y and z if x is between y and z.

Predicate symbols combine with the appropriate number of arguments to form ATOMIC FORMULAS. For example, Mortal(s) is an atomic formula, expressing the fact that Socrates is mortal. OlderThan(s, s) is also an atomic formula, expressing the absurd idea that Socrates is older than himself. (That sentence will presumably always be false.) In general: Given any predicate π, if n is the arity of π, and α1, ..., αn is a sequence of terms, then π(α1, ..., αn) is an atomic formula.

(Note that a summary of definitions like this will be compiled at the end of this section.)

Atomic formulas express claims that can be either true or false. Whether or not they are true or false depends on the denotations of the terms and predicates involved. Recall from above that an individual constant denotes an individual in the domain of the model. A unary predicate denotes a set of objects in the model. For example, Mortal denotes the set of individuals that are mortal according to the model. Let us assume further that the denotation of a sentence like Mortal(s) is a truth value: 1 if the sentence is true, and 0 if it is false. The sentence Mortal(s) is defined to be true relative to a given model just in case whatever s denotes in the model is in the set denoted by Mortal.

Let ⟦s⟧^M denote the semantic value of s with respect to model M, and let ⟦Mortal⟧^M denote the semantic value of Mortal with respect to model M. Then what we just said in the previous paragraph can be restated as follows:

⟦Mortal(s)⟧^M = 1 if ⟦s⟧^M ∈ ⟦Mortal⟧^M, and 0 otherwise.

In general: Let ⟦α⟧^M denote the semantic value of any expression α with respect to model M. If π is a unary predicate and α is a term, then:

⟦π(α)⟧^M = 1 if ⟦α⟧^M ∈ ⟦π⟧^M, and 0 otherwise.

This can be read, "the semantic value of pi applied to alpha with respect to model M is 1 if the semantic value of alpha with respect to M is an element of the semantic value of pi with respect to M, and 0 otherwise." To put it somewhat more elegantly: The predication of π upon α is true if and only if the denotation of α is a member of the set denoted by π.

A model for first order logic determines, again, not only a domain of individuals D, but also an INTERPRETATION FUNCTION I, which specifies the value for all of the constants in the language. When defining the semantics for a first-order language, then, we must specify how a binary predicate like OlderThan gets its value. A binary predicate denotes a binary relation. For instance, we might define a binary predicate OlderThan which denotes the set of ordered pairs ⟨x, y⟩ such that x is older than y. A predication of a binary predicate upon its two arguments is true if and only if the ordered pair consisting of the denotations of those two arguments is a member of the relation denoted by the binary predicate. For example, OlderThan(bart, lisa) is defined to be true with respect to M if and only if ⟨⟦bart⟧^M, ⟦lisa⟧^M⟩ is a member of ⟦OlderThan⟧^M. In general, if π is a binary predicate and α and β are terms, then:

⟦π(α, β)⟧^M = 1 if ⟨⟦α⟧^M, ⟦β⟧^M⟩ ∈ ⟦π⟧^M, and 0 otherwise.

This can be generalized to predicates of arbitrary arity as follows: If π is a predicate of arity n and α1, ..., αn is a sequence of terms, then:

⟦π(α1, ..., αn)⟧^M = 1 if ⟨⟦α1⟧^M, ..., ⟦αn⟧^M⟩ ∈ ⟦π⟧^M, and 0 otherwise.

How do we know what ⟦OlderThan⟧^M is? The answer is that it is determined by M. Formally, a model M can be defined as an ordered pair ⟨D, I⟩, where D is the domain of individuals and I is the interpretation function, which assigns a semantic value to every CONSTANT SYMBOL in the language. The constant symbols
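The model-theoretic definitions above can be sketched as a toy evaluator in Python; the model, its domain, and the interpretation function here are invented for illustration:

```python
# A toy model M = <D, I>: a domain plus an interpretation function,
# here represented as a dict from constant symbols to semantic values.
D = {"Bart", "Lisa"}
I = {
    "bart": "Bart",
    "lisa": "Lisa",
    "Mortal": {"Bart", "Lisa"},        # unary predicate: a set
    "OlderThan": {("Bart", "Lisa")},   # binary predicate: a set of pairs
}

def evaluate(pred, *terms):
    """[[pred(a1,...,an)]] = 1 iff the tuple of term denotations
    is in [[pred]]; for a unary predicate, iff the single denotation
    is in the set denoted by the predicate."""
    denotation = tuple(I[t] for t in terms)
    if len(denotation) == 1:
        return 1 if denotation[0] in I[pred] else 0
    return 1 if denotation in I[pred] else 0

print(evaluate("Mortal", "bart"))             # 1
print(evaluate("OlderThan", "lisa", "bart"))  # 0
```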

i. as it might have been..e.52 First order logic (or just “constants”) include individual constants as well as predi- cate symbols and function symbols (which will be introduced be- low). and so we could not put [on the right-hand side of the equation above] the [philosopher Socrates himself ] but rather have let [him] be represented by [his] conventional [name] in English.) If it had been possible to persuade Socrates to come and occupy the right-hand side of the equation above. To echo Dowty et al. The point is worth belaboring since it is central to the program of truth conditional semantics. etc. The value that I assigns to the constant Philosopher might be the set containing Socrates and Plato (a rather restrictive interpre- tation of the term. there would have been less of a risk of confusing the man himself with his individual con- stant. other “possible worlds”. that a connection is made between lan- guage and extra-linguistic realty. Plato} . “the world. in which case we could write: I (s) = Socrates It is worth reiterating the distinction between the individuals in the model and the names for them.” (The sanitizing quotes here are prompted by the fact that we will eventually want to consider not only the world in which we live as it actually is but also the world as it was... (1981): There would be little chance of confusion on this mat- ter were it not for the fact that we are communicating with reader by means of the printed page..e. For example. But as things stand. i. we must make do using his name in ordinary English to refer to him. the value that I assigns to the constant s might be Socrates. but one that saves a lot of typing): I (Philosopher) = {Socrates. as it will be..

In general:

    If α is any constant, and M = ⟨D, I⟩, then JαK^M = I(α).

Similarly, a binary predicate would be assigned a binary relation (a set of ordered pairs) as its semantic value. Suppose we have a particular model M1 = ⟨D1, I1⟩. Let D1 = {Maggie Simpson, Bart Simpson, Homer Simpson}. Suppose that in M1, everybody loves themselves and nobody loves anybody else, and the binary predicate Loves denotes the love relation.

Exercise 1. What is then the value of I1(Loves)? Specify the relation as a set of ordered pairs.

3.1.3 Function symbols

Recall that a TERM is an expression that denotes an individual in the domain. So far, the only kind of term that we have seen are individual constants. But it is also possible to form syntactically complex terms using FUNCTION SYMBOLS. An example of a function symbol would be mother, which, combined with the name of an individual, denotes the mother of that individual. For example, mother(s) could be used to denote the mother of Socrates. Note that this expression denotes a particular individual; it does not say something about Socrates, as in the case of a predicate, and the combination of a function with its argument does not make a truth-evaluable claim.

Predicate symbols and function symbols are easy to confuse with each other, because they both take arguments in parentheses. To distinguish between them, we will establish a convention whereby predicate symbols start with uppercase letters, and function symbols start with lowercase letters. Any sequence of numbers or letters or underscores may follow the initial letter, but no spaces. This way, it is always clear whether a given symbol is a predicate symbol or a function symbol.

Functions, like predicates, are associated with a particular arity. The mother function has arity 1 (i.e., it is a UNARY FUNCTION). An example of a function with arity 2 might be eldestChild. Something like: eldestChild(john, mary) could be used to denote the eldest child of John and Mary. In general:

    Given any function symbol γ, if n is the arity of γ, then γ(α1, ..., αn) is a term, where α1, ..., αn is a sequence of expressions that are themselves terms.

Thus all terms, i.e., both individual constants and complex terms formed with function symbols, denote individuals.

Semantically, function symbols combine with their arguments in a slightly different manner from the way predicates do. Function symbols denote functions (of the kind discussed in the previous chapter, i.e., relations of a certain type), and the denotation of a function symbol applied to a term is the result of applying the function denoted by the function symbol to the denotation of the term. For example, suppose that in model M, mother denotes a function that gives Phaenarete when given the individual Socrates as an argument. Then mother(s) denotes Phaenarete, i.e., Jmother(s)K^M = Phaenarete. In general:

If γ is a unary function symbol, and α is a term, then:

    Jγ(α)K^M = JγK^M(JαK^M)

This can be read, "the semantic value of gamma applied to alpha with respect to model M is equal to the semantic value of gamma with respect to M applied to the semantic value of alpha with respect to M." Note that we are using parentheses both in the object language and the meta-language here. The parentheses in the object language connect the function symbol to a term; the parentheses in the meta-language signify the application of the denoted function to the actual individual denoted by the term.

Binary functions take ordered pairs as arguments. In general:

    If γ is a binary function symbol, and α and β are terms, then:

    Jγ(α, β)K^M = JγK^M(⟨JαK^M, JβK^M⟩)

For example, in a realistic model according to which phaenarete denotes Phaenarete and sophroniscus denotes Sophroniscus, the following expression:

    eldestChild(phaenarete, sophroniscus)

would denote the result of applying the function denoted by eldestChild to the ordered pair ⟨Phaenarete, Sophroniscus⟩.

This definition can be generalized to accommodate functions of arbitrary arity:

    If γ is a function symbol of arity n, and α1, ..., αn is a sequence of n terms, then:

    Jγ(α1, ..., αn)K^M = JγK^M(⟨Jα1K^M, ..., JαnK^M⟩)

So much for the internal parts of atomic formulas. In the next section, we will start joining atomic formulas together into complex formulas.

3.1.4 Boolean connectives

Exercise 2. Some of the following are fallacies; some are valid argument forms. Based on your intuition, decide which are which.

(a) Affirming the consequent
    Premise 1: if P then Q
    Premise 2: Q
    Conclusion: P

(b) Reductio ad absurdum
    Premise 1: not P implies Q
    Premise 2: not P implies not Q
    Conclusion: P

(c) Hypothetical syllogism
    Premise 1: if P then Q
    Premise 2: if Q then R
    Conclusion: if P then R

(d) Modus Tollens
    Premise 1: if P then Q
    Premise 2: not Q
    Conclusion: not P

(e) Proof by cases
    Premise 1: P or Q
    Premise 2: if P then R
    Premise 3: if Q then R
    Conclusion: R

(f) Affirming a disjunct
    Premise 1: P or Q
    Premise 2: P
    Conclusion: not Q

(g) Denying the antecedent
    Premise 1: if P then Q
    Premise 2: not P
    Conclusion: not Q

(h) Disjunctive syllogism
    Premise 1: P or Q
    Premise 2: not Q
    Conclusion: P

(i) De Morgan (negation of a conjunction)
    Premise: not (P and Q)
    Conclusion: not P or not Q

(j) De Morgan (negation of a disjunction)
    Premise: not (P or Q)
    Conclusion: not P and not Q

(k) Constructive dilemma
    Premise 1: if P then Q
    Premise 2: if R then S
    Premise 3: P or R
    Conclusion: Q or S

(l) Material implication
    Premise 1: P implies Q
    Conclusion: not P or Q

(m) Illicit major
    Premise 1: P implies Q
    Premise 2: R implies not P
    Conclusion: R implies not Q

Suppose you ask me if I'm free today or tomorrow and I say no. That means that I'm not free today, and I'm not free tomorrow. So in general:

    not [P or Q] means not P, and not Q

On the other hand, suppose you ask me if I'm free today and tomorrow and I say no. That is not quite as strong; it means that either I'm not free today or I'm not free tomorrow (or both). Thus:

    not [P and Q] means not P, or not Q

These are DE MORGAN'S LAWS. In first-order logic, atomic formulas can be combined together to form larger formulas using CONNECTIVES. By specifying an interpretation for and, or, and not, we can capture the logical relationships between these sentences. In general, if φ is a formula and ψ is a formula, then [φ ∧ ψ] is also a formula, known as the CONJUNCTION of φ and ψ. The ∧ symbol represents 'and'. For example:

    [Happy(a) ∧ Happy(b)]

can be read 'a is happy and b is happy'.
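The argument forms in Exercise 2 can be checked mechanically once the connectives have truth-functional interpretations: an argument is valid iff there is no row of the truth table where all premises are true and the conclusion false. The helper below is our own sketch, written ahead of the formal definitions that follow.

```python
from itertools import product

def valid(premises, conclusion, n_vars=2):
    """Brute-force validity check over all assignments of truth values."""
    for vals in product([True, False], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False   # found a counterexample row
    return True

implies = lambda p, q: (not p) or q

# Modus ponens: if P then Q, P |- Q  (a valid form)
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))          # True

# Affirming the consequent: if P then Q, Q |- P  (a fallacy)
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))          # False
```

The counterexample the checker finds for affirming the consequent is the row where P is false and Q is true.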

The DISJUNCTION of φ and ψ is written [φ ∨ ψ]. For example:

    [Happy(a) ∨ Happy(b)]

can be read 'a is happy or b is happy'.

The word 'or' in English sometimes has an INCLUSIVE interpretation, where the condition is satisfied even when both disjuncts are true, and sometimes has an EXCLUSIVE interpretation. (A DISJUNCT is a phrase or sentence that is joined to another one by or.) A sentence like I'll visit him today or tomorrow typically implies that the speaker will visit the person on one of those days but not both. But with a sentence like If John volunteers Monday or Wednesday, then we have all the help we need, it is less likely that an exclusive interpretation is intended: that would suggest that there won't be enough help if he volunteers on both days. In this situation, it is natural to interpret or in an inclusive manner.

Let's express the rule using symbols to bring out its structure, as follows:

    P = John volunteers on Monday
    Q = John volunteers on Wednesday

Let us represent "John volunteers on Monday or Wednesday" as follows:

    P ∨ Q

Now, this ∨ symbol should be interpreted in such a way that P ∨ Q is true whenever at least one of P or Q is true. There are four possibilities: P is true and Q is false; P is false and Q is true; both are true; and both are false. Only in the last case should the disjunction be seen as false. This interpretation of ∨ can be represented using a truth table, as follows.

    P  Q  P ∨ Q
    1  1    1
    1  0    1
    0  1    1
    0  0    0
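The table above can also be generated mechanically. A small sketch (the variable names are ours):

```python
from itertools import product

# Reproducing the truth table for inclusive disjunction: P v Q is true
# whenever at least one disjunct is true. 1 stands for true, 0 for false.
rows = [(p, q, 1 if p or q else 0) for p, q in product([1, 0], repeat=2)]
for p, q, d in rows:
    print(p, q, d)
```

Only the row where both disjuncts are 0 yields 0, matching the table.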

We use the number 1 to represent "true" and the number 0 to represent "false". This table interprets ∨ as INCLUSIVE DISJUNCTION, because the statement is true even in the case where both of the disjuncts are true. EXCLUSIVE DISJUNCTION specifies that only one of the disjuncts is true:

    P  Q  P ⊕ Q
    1  1    0
    1  0    1
    0  1    1
    0  0    0

One might imagine that natural language or is ambiguous between inclusive and exclusive disjunction, but there is reason to believe that the inclusive reading is what or really denotes, and that the exclusive reading arises via a conversational implicature in certain contexts. One argument for this comes from negation: If I say I won't see you today or tomorrow, it means that I will not see you today and I won't see you tomorrow. We can get this interpretation by negating the inclusive interpretation ('it is not the case that at least one of the disjuncts is true'). Under the ambiguity analysis, it is not clear why an exclusive reading should disappear under negation. Conversational implicatures of the kind that would be involved here ('scalar implicatures') typically disappear under negation, so this fact is easily explained under the implicature analysis. (There is of course much more to say on this issue.)

If I say, I'll see you today and tomorrow, then my statement is true if and only if I'll see you today and I'll see you tomorrow. This is called a CONJUNCTION. Conjunction can be represented as follows:

    P ∧ Q

The truth table for conjunction is as follows:

    P  Q  P ∧ Q
    1  1    1
    1  0    0
    0  1    0
    0  0    0

So far, we have discussed three different BINARY CONNECTIVES: inclusive disjunction, exclusive disjunction, and conjunction. Conjunction, disjunction, and negation make up the so-called BOOLEAN CONNECTIVES, named after the logician George Boole. Whenever you do a "Boolean search", you are using these connectives.

There is one UNARY CONNECTIVE in the language, for negation, written ¬. If P is true, then ¬P is false. If P is false, then ¬P is true. Because negation is a unary connective, the truth table for negation is simple:

    P  ¬P
    1   0
    0   1

Let us consider when the negation of P ∧ Q is true. To find out, we first find out when P ∧ Q is true, and then apply negation to that.

    P  Q  P ∧ Q  ¬[P ∧ Q]
    1  1    1       0
    1  0    0       1
    0  1    0       1
    0  0    0       1

Note: We are using the brackets [ ] to show that we are applying negation to the conjunction of P and Q, rather than P.

3.1.5 Conditionals

All of the connectives we have seen are TRUTH-FUNCTIONAL, in the sense that the truth of formulas containing them depends on

the truth values of the formulas they join together. There is another truth-functional connective that corresponds roughly to "if..., then..." in natural language, called MATERIAL IMPLICATION, and written →. A formula of the form P → Q is false only when P is true and Q is false, and true otherwise. The truth table looks like this:

    P  Q  P → Q
    1  1    1
    1  0    0
    0  1    1
    0  0    1

Indeed, suppose I make the following claim: "If it's sunny, then there are people sitting outside." This claim would only be falsified by a scenario in which it is sunny, but there are no people sitting outside. It would not be falsified by a scenario in which it's sunny and there are people sitting outside, nor would it be falsified by any scenario in which it's not sunny. What I've said does not bear on what happens when it's not sunny.

Another motivation for treating material implication as the meaning of 'if..., then...' is the fact that this treatment explains why certain inferences involving 'if..., then...' are valid, and others are not. For example, it explains Modus Ponens:

    If P then Q (Premise 1)
    P (Premise 2)
    Therefore, Q (Conclusion)

(A particular instance: If it's sunny then there are people sitting outside. It's sunny. Therefore, there are people sitting outside.) An argument is valid if the conclusion is true whenever the conjunction of the premises is true. According to the truth table for material implication, P → Q and P are jointly true whenever both P and Q are true. Therefore, given this treatment of 'if..., then...', the conclusion, Q, is true whenever the premises are both true.
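Material implication can be spelled out directly from its truth table, and doing so also confirms the equivalence behind argument form (l) above: P → Q has the same truth table as ¬P ∨ Q. (An illustrative check; the function name is ours.)

```python
from itertools import product

def implies(p, q):
    """P -> Q is false only when P is true and Q is false."""
    return not (p and not q)

# Confirm the equivalence with (not P) or Q on every row.
for p, q in product([True, False], repeat=2):
    assert implies(p, q) == ((not p) or q)

print(implies(False, True))    # a false antecedent makes the whole true
```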

However, while it seems intuitively clear that a conditional is false when the antecedent is true and the consequent is false, it admittedly seems less intuitively clear that a conditional is true in the circumstance where the antecedent is false. Does that mean that If the moon is made of green cheese, then I had yogurt for breakfast this morning is true (given that the moon is not made of green cheese)? Intuitively not. In English, conditionals are used to express regularities, so one might reasonably argue that they cannot be judged as true or false in a single situation. If so, in order to capture the meaning of English conditionals, we need somewhat more sophisticated technology. (The interested reader is referred to the work of David Lewis, Robert Stalnaker, and Angelika Kratzer, among many others.) But if we are forced to identify the truth-functional connective that most closely resembles if... then..., we will have to choose material implication.

3.2 Equivalence and tautology

If two formulas are true under exactly the same circumstances, then they are EQUIVALENT: whenever one is true, the other is true, and whenever one is false, the other is false. For example, P and ¬¬P are equivalent:

    P  ¬P  ¬¬P
    1   0    1
    0   1    0

This is a general way to determine whether two formulas are equivalent: Construct a truth table with columns for both expressions, and check whether the pattern of 1s and 0s matches. If it does, then the two expressions are equivalent.
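The column-matching procedure just described can be automated. A sketch, under our own naming assumptions:

```python
from itertools import product

def column(f, n=1):
    """The truth-table column for a formula given as a Boolean function."""
    return [1 if f(*v) else 0 for v in product([True, False], repeat=n)]

def equivalent(f, g, n=1):
    """Two formulas are equivalent iff their columns match row by row."""
    return column(f, n) == column(g, n)

print(column(lambda p: p))            # [1, 0]
print(column(lambda p: not not p))    # [1, 0] -- the same pattern
print(equivalent(lambda p: p, lambda p: not not p))   # True
```

This is exactly the P versus ¬¬P comparison from the table above, done by machine.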

Exercise 3. Using truth tables, check whether the following pairs of formulas are equivalent.

(a) [P ∨ Q], ¬[¬P ∧ ¬Q]
(b) [P → Q], [¬P ∨ Q]
(c) ¬[P ∧ Q], ¬P ∨ ¬Q
(d) [P ∨ ¬Q], [¬P ∧ ¬Q]
(e) [P → Q], [¬Q → ¬P]
(f) [P → P], [P ∨ ¬P] (Note that the truth table for this one should only contain two rows, since it doesn't mention Q.)

Two logical expressions are CONTRADICTORY if for every assignment of values to their variables, their truth values are different. For example, P and ¬P are contradictory:

    P  ¬P
    1   0
    0   1

Another contradictory pair is P → Q and P ∧ ¬Q:

    P  Q  P → Q  ¬Q  P ∧ ¬Q
    1  1    1     0     0
    1  0    0     1     1
    0  1    1     0     0
    0  0    1     1     0

A TAUTOLOGY is an expression that is true in every situation. So you can tell whether an expression is a tautology by looking at

the pattern of 1s and 0s in the column underneath it in a truth table: If they're all true, then it is a tautology. Here is a tautology (e.g. It is raining or it is not raining):

    P ∨ ¬P

    P  ¬P  P ∨ ¬P
    1   0    1
    0   1    1

Fact: When two expressions are equivalent, the formula obtained by joining them with a biconditional is a tautology. For example, P ↔ ¬¬P is a tautology:

    P  ¬P  ¬¬P  P ↔ ¬¬P
    1   0    1      1
    0   1    0      1

Exercise 4. Which of the following are tautologies?

(a) P ∨ Q
(b) [P → Q] ∨ [Q → P]
(c) [P → Q] ↔ [¬Q ∨ ¬P]
(d) [[P ∨ Q] → R] ↔ [[P → R] ∧ [Q → R]]

Support your answer with truth tables.

3.3 Summary: L0

To summarize what we have covered so far, let us define a simple language called L0. We begin by listing all of the syntax rules, to define what counts as a well-formed expression of the language, and then give the rules for semantic interpretation.
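The all-1s test for tautologies, and the Fact relating equivalence to biconditional tautologies, can both be checked by brute force. An illustrative sketch (helper names are ours):

```python
from itertools import product

def tautology(f, n=1):
    """A formula is a tautology iff its truth-table column is all true."""
    return all(f(*v) for v in product([True, False], repeat=n))

print(tautology(lambda p: p or not p))        # P v not-P: True

# The Fact: P and not-not-P are equivalent, so P <-> not-not-P
# (rendered here as equality of truth values) is a tautology.
print(tautology(lambda p: p == (not (not p))))   # True

print(tautology(lambda p, q: p or q, n=2))       # (a) above: False
```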

It is worth emphasizing that a logic is a language (or a class of languages), and languages have both grammar and semantics. The grammar specifies the well-formed formulas of the language. The semantics specifies the semantic value of every well-formed formula, given a model.

3.3.1 Syntax of L0

1. Basic Expressions
   • Individual constants: d, n, j, m
   • Function symbols
     – Unary: mother, father
     – Binary: eldestChild
   • Predicate symbols
     – Unary: Happy, Bored, Linguist, Cellist, Philosopher, Smokes
     – Binary: Kissed, Loves, Admires, Siblings

2. Terms
   • Every individual constant is a term.
   • If π is a function symbol of arity n, and α1, ..., αn are terms, then π(α1, ..., αn) is a term.1

3. Atomic formulas
   • If π is a predicate of arity n and α1, ..., αn are terms, then π(α1, ..., αn) is an atomic formula.2

1 Special cases: If π is a unary function symbol and α is a term then π(α) is a term; if π is a binary function symbol and α and β are terms then π(α, β) is a term.
2 Special cases:

2. 4. then so are: • [φ ∧ ψ] ‘φ and ψ’ • [φ ∨ ψ] ‘φ or ψ’ • [φ → ψ] ‘if φ then ψ’ • [φ ↔ ψ] ‘φ if and only if ψ’ Note that the outer square brackets with binary connectives and quantifiers are always there according to the official rules of the syntax. – If π is a binary predicate and α and β are terms. 1. then Jπ(α1 .3. then π(α. Binary connectives: If φ is a formula and ψ is a formula. 3.2 Semantics of L0 The denotation of an expression α relative to a model M is written JαKM . Basic expressions M • If α is a constant. then ¬φ is a formula. 5. A model M = ⟨D. and α1 . . . then JαK = I(α).. then π(α) is a formula.. αn )K = JπK (⟨Jα1 K . Jαn K ⟩)3 M M M M – If π is a unary predicate and α is a term. Complex terms • If π is a function of arity n. αn is a sequence of n terms. Negation • If φ is a formula.First order logic 67 • If α and β are terms. .. then α = β is an atomic formula.. I ⟩ determines a domain of individuals D and an interpretation function I ... β) is formula... 3 Special cases: . but we sometimes drop them when they are not necessary for disambiguation..

– When π is a function symbol of arity 1, Jπ(α)K^M = JπK^M(JαK^M).
– When π is a function symbol of arity 2, Jπ(α, β)K^M = JπK^M(⟨JαK^M, JβK^M⟩).

3. Atomic formulas
   • Predication: If π is a predicate of arity n, and α1, ..., αn is a sequence of n terms, then Jπ(α1, ..., αn)K^M = 1 if ⟨Jα1K^M, ..., JαnK^M⟩ ∈ JπK^M, and 0 otherwise.4
   • Equality: If α and β are terms, then Jα = βK^M = 1 if JαK^M = JβK^M, and 0 otherwise.

4. Negation
   • J¬φK^M = 1 if JφK^M = 0, and 0 otherwise.

5. Binary connectives
   • Jφ ∧ ψK^M = 1 if JφK^M = 1 and JψK^M = 1, and 0 otherwise.
   • Jφ ∨ ψK^M = 1 if JφK^M = 1 or JψK^M = 1, and 0 otherwise.
   • Jφ → ψK^M = 1 if JφK^M = 0 or JψK^M = 1, and 0 otherwise.
   • Jφ ↔ ψK^M = 1 if JφK^M = JψK^M, and 0 otherwise.

4 Special cases:
– If π is a unary predicate and α is a term, then Jπ(α)K^M = 1 iff JαK^M ∈ JπK^M.
– If π is a binary predicate and α and β are terms, then Jπ(α, β)K^M = 1 iff ⟨JαK^M, JβK^M⟩ ∈ JπK^M.

3.4 Quantification

Consider the sentence Beth owns a truck. Construing ownership as a binary relation that holds between potential owners and the objects they own, this sentence is true in any model where Beth stands in the ownership relation to an object which is a truck. How can we express this formally? If we had names for all of the trucks

in the model, then we could express this using the tools we have by saying something along the lines of, "Beth owns Truck 1 or Beth owns Truck 2 or ...," and so on for all of the trucks. But this is quite inconvenient. We don't want to have to name all the trucks. All we want to say is that there is some object, call it x, such that Beth owns x and x is a truck. This can be done using variables. The condition that the object should satisfy may be written as follows:

    [Owns(beth, x) ∧ Truck(x)]

This is a well-formed formula of first-order logic, but it does not make a claim; it just describes a condition that some object x might or might not satisfy. This is because the variable x is not BOUND by any quantifier (so it is FREE). To make the claim that there is some object x that satisfies this condition, we may use the EXISTENTIAL QUANTIFIER, ∃:

    ∃x [Owns(beth, x) ∧ Truck(x)]

This can be read, "There exists an x such that Beth owns x and x is a truck." And this formula will be true in any model where there is a truck that Beth owns.

The other quantifier of first-order logic is the UNIVERSAL QUANTIFIER, written ∀. If we had used the universal quantifier instead of the existential quantifier in the formula above, we would have expressed the claim that everything satisfies the condition, i.e., that Beth owns everything and everything is a truck. That is probably not something one would ever feel the urge to express, but there are plenty of other practical uses for the universal quantifier. For example, consider the sentence Every athlete got an A. We might represent this as follows:

    ∀x [Athlete(x) → ReceivedGrade(x, a)]

This can be read, "For all x, if x is an athlete, then the grade x received was an A." Note that we would be saying something very

different if we had a conjunction symbol (∧) instead of a material implication arrow (→) in this formula, thus:

    ∀x [Athlete(x) ∧ ReceivedGrade(x, a)]

This says, "For all x, x is an athlete and x received an A", in other words, "Everything is an athlete and everything received an A."

Now consider the following formula:

    ∀x . [Linguist(x) → ∃y . [Philosopher(y) ∧ Admires(x, y)]]

If we were to read this aloud symbol for symbol, we would say, "For every x, if x is a linguist, then there exists a y such that y is a philosopher and x admires y." A more natural way of putting this would be "Every linguist admires a philosopher." But notice that "Every linguist admires a philosopher" is actually ambiguous. It could mean two things:

1. For every linguist, there is some philosopher that the linguist admires (possibly a different philosopher for every linguist).

2. There is one lucky philosopher such that every linguist admires that philosopher.

The first reading is the one where "every linguist" takes WIDE SCOPE over "a philosopher". On the second reading, "every linguist" has NARROW SCOPE with respect to "a philosopher". The latter reading could be rendered logically as follows:

    ∃y . [Philosopher(y) ∧ ∀x . [Linguist(x) → Admires(x, y)]]

What we have just seen is an instance of QUANTIFIER SCOPE AMBIGUITY. Predicate logic is thus a tool for teasing apart these kinds of ambiguities in natural language. Quantifiers can also take wide or narrow scope with respect to negation. Consider the sentence "everybody isn't happy". This could mean either one of the following:

    ∀x . ¬Happy(x)

    ¬∀x . Happy(x)

The one where the universal quantifier takes wide scope over negation says, "for every x, it is not the case that x is happy." The one where the quantifier has narrow scope with respect to negation says, "it is not the case that for every x, x is happy." The first one implies that nobody is happy. The second one implies merely that there is at least one person who is not happy.

Exercise 5. For each of the following formulas, say (i) how you would read the formula aloud, using phrases like 'for all x' and 'there exists an x such that', and (ii) give a natural paraphrase in English.

(a) ∀x . Bored(x)
(b) ∀x . [Bored(x) ∧ Happy(x)]
(c) ∃x . [Bored(x) ∧ Happy(x)]
(d) ∃x . [Bored(x) ∨ Happy(x)]
(e) ∀x . [Bored(x) → Happy(x)]
(f) ∀x . ¬Bored(x)
(g) ∃x . ¬Bored(x)
(h) ¬∃x . Bored(x)
(i) ∀x . ∃y . Loves(y)(x)

Exercise 6. For each of the following sentences, say which of the formulas above it matches (if any). (In some cases, the sentence might match two formulas.)

(a) Somebody is bored and happy.
(b) Everybody is bored and happy.
(c) Everybody who is bored is happy.
(d) Nobody is bored.
(e) Somebody is not bored.
(f) Somebody is bored or happy.
(g) Everybody loves somebody.
(h) Somebody loves everybody.

Exercise 7. Which of the following statements in first-order logic better represents the meaning of Every cellist smokes?

(a) ∀x . [Cellist(x) → Smokes(x)]
(b) ∀x . [Cellist(x) ∧ Smokes(x)]

Exercise 8. Express the following sentences in L1:

(a) There is a red car.
(b) All cars are red or green.
(c) No car is blue.
(d) Alan dislikes all cats.

Feel free to add as many logical constants as you need.
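Exercise 7 can be probed in a toy model: in a situation containing a non-cellist, the two candidate formulas come apart. The model below is our own illustrative construction, not a worked answer from the text.

```python
# A model with two cellists (both smokers) and one non-smoking bassist.
D = {"c1", "c2", "b1"}
cellist = {"c1", "c2"}
smokes = {"c1", "c2"}

# (a) forall x [Cellist(x) -> Smokes(x)], with -> as (not P) or Q
reading_a = all((x not in cellist) or (x in smokes) for x in D)

# (b) forall x [Cellist(x) and Smokes(x)]
reading_b = all((x in cellist) and (x in smokes) for x in D)

print(reading_a, reading_b)   # -> True False
```

Intuitively, Every cellist smokes is true in this situation, and only reading (a) comes out true: the bassist satisfies the conditional vacuously but falsifies the conjunction.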

Exercise 9. Express the following sentences in L1. Feel free to add as many logical constants as you need. In some cases, there may be quantifier scope ambiguity; in that case, give a representation in L1 corresponding to both interpretations.

(a) Every even number is divisible by two.
(b) Everything has a reason.
(c) Something is the reason for everything.
(d) Every human being has at least two mothers.
(e) All fathers are older than their children.
(f) If a person is a philosopher then he is mortal.
(g) Some statues are not of marble.
(h) All statues are not of marble.
(i) He who sins sleeps badly.

Now let us start defining the syntax of this language formally. We will allow an infinite number of variables v0, v1, v2, ..., all of type e, but use the following shorthands:

• x is v0
• y is v1
• z is v2

We will also add new formation rules for the universal quantifier ∀ and the existential quantifier ∃. Given any variable u, if φ is a formula, then [∀u . φ]

is a formula, and so is [∃u . φ]. In a formula of the form ∀u . φ or ∃u . φ, φ is called the SCOPE of the quantifier. This term sometimes comes in handy. As an abbreviatory shorthand, we may drop the dot after the variable when it is immediately followed by a bracket, e.g. ∀x [Happy(x) → Bored(x)].

∀x . Happy(x) is a valid formula according to these rules. Informally, ∀x . Happy(x) is true in a model M if (and only if) no matter which individual we assign as the interpretation of x, Happy(x) is true. Likewise, ∃x . Happy(x) is true iff we can find some individual to assign to x that makes Happy(x) true.

Now for the semantics. The formula Happy(x) does not make a claim that could be true or false relative to any given model. In order to decide whether Happy(x) is true, we need to know not only about the model, but also how to interpret x. If x is interpreted as someone who is happy, then the formula is true; otherwise not. The device that we will use to interpret variables is called an ASSIGNMENT FUNCTION. An assignment function is a function that specifies, for each variable, how that variable is to be interpreted.

We continue to treat models as pairs consisting of a domain and an interpretation function, so a given model M will be defined as ⟨D, I⟩, where D is the set of individuals in the domain of the model, and I is a function giving a value to every non-logical constant in the language. Here are some examples of assignment functions:

    g1:  x → Maggie        g2:  x → Bart
         y → Bart               y → Homer
         z → Bart               z → Bart
         ...                    ...

The domain of an assignment function is the set of variables (vn for all n). We typically use the letter g to stand for an assignment function. In order to interpret an expression, we need both a model and an assignment function. The denotation of the variable x with respect to model M and assignment function g, written Jx K^{M,g}, is simply whatever g maps x to. We can express this more formally as follows:

(1) Jx K^{M,g} = g(x)

where g stands for an assignment function. For example, Jx K^{M,g1} = g1(x) = Maggie, and Jx K^{M,g2} = g2(x) = Bart, for any model M.

Exercise 10.

(a) What is g1(y)?
(b) What is JyK^{M,g1} (for any model M)?
(c) What is g2(y)?
(d) What is JyK^{M,g2} (for any model M)?
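Assignment functions are naturally modeled as dictionaries. A minimal sketch of the clause Jx K^{M,g} = g(x), encoding g1 and g2 from above (the encoding is ours):

```python
# Assignment functions as dictionaries from variables to individuals.
g1 = {"x": "Maggie", "y": "Bart", "z": "Bart"}
g2 = {"x": "Bart", "y": "Homer", "z": "Bart"}

def denote_var(var, g):
    """A variable's denotation depends only on g, not on the model."""
    return g[var]

print(denote_var("x", g1))   # -> Maggie
print(denote_var("y", g2))   # -> Homer
```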

From now on, our semantic denotation brackets will have two superscripts: one for the model, and one for the assignment function. So instead of JφK^M we will now write:

    JφK^{M,g}

In some cases, the choice of assignment function will not make any difference for the semantic value of the expression. For example, because Happy is a constant, JHappyK^{M,g1} will be the same as JHappyK^{M,g2} for any two assignments g1 and g2. Its value depends only on the model. But the value of the formula Happy(x) depends on the value that is assigned to x, and this is given by the assignment function: whether Happy(x) is true or not depends on how x is interpreted.

Now let us consider the formula ∃x . Happy(x). This is true if we can find one individual to assign x to such that Happy(x) is true. Suppose we are trying to determine whether ∃x . Happy(x) is true with respect to a given model M and an assignment function g. We can show that the formula is true by considering a variant of g on which the variable x is assigned to some happy individual.

Let us use the expression g[u ↦ k] to describe an assignment function that is exactly like g save that g(u) = k. This lets us keep everything the same in g except for the variable of interest. If g already maps u to k, then g[u ↦ k] is the same as g. For example, using g1 from above:

    g1:  x → Maggie
         y → Bart
         z → Bart
         ...

g1[y ↦ Homer] would be as follows:

    g1[y ↦ Homer]:  x → Maggie
                    y → Homer
                    z → Bart
                    ...

We changed it so that y maps to Homer and kept everything else the same.

Exercise 11.

(a) What is g1[z ↦ Homer](x)? (I.e., what does g1[z ↦ Homer] assign to x?)
(b) What is g1[z ↦ Homer](y)?
(c) What is g1[z ↦ Homer](z)?

With this terminology, we can say:

    J∃x . Happy(x)K^{M,g} = 1 iff there is an individual k ∈ D such that: JHappy(x)K^{M,g[x↦k]} = 1.

What this says is that given a model M and an assignment function g, the sentence ∃x . Happy(x) is true with respect to M and g if we can modify the assignment function g in such a way that x has a denotation that makes Happy(x) true. Otherwise it is false.

Now, if we wanted to show that the formula ∀x . Happy(x) was true, we would have to consider assignments of x to every element of the domain, not just one. (To show that it is false is easier; then you just have to find one unhappy individual.) If Happy(x) turns out to be true no matter what the assignment function maps x to, then ∀x . Happy(x) is true. So the official semantics of the universal quantifier is as follows:

    J∀v . φK^{M,g} = 1 iff for all individuals k ∈ D: JφK^{M,g[v↦k]} = 1

3.4.1 Syntax of L1

Let us now summarize the syntactic rules of our language.

1. Basic Expressions
   • Individual constants: d, n, j, m
   • Individual variables: vn for every natural number n
   • Function symbols
     – Unary: mother, father
     – Binary: eldestChild
   • Predicate symbols
     – Unary: Happy, Bored, Linguist, Cellist, Philosopher, Smokes
     – Binary: Kissed, Loves, Admires, Siblings

2. Terms
   • Every individual constant is a term.
   • Every individual variable is a term.
   • If π is a function symbol of arity n, and α1, ..., αn are terms, then π(α1, ..., αn) is a term.5

3. Atomic formulas
   • Predication: If π is a predicate of arity n and α1, ..., αn are terms, then π(α1, ..., αn) is an atomic formula.6

5 Special cases: If π is a unary function symbol and α is a term then π(α) is a term; if π is a binary function symbol and α and β are terms then π(α, β) is a term.
6 Special cases: If π is a unary predicate and α is a term, then π(α) is a formula; if π is a binary predicate and α and β are terms, then π(α, β) is a formula.

• Equality: If α and β are terms, then α = β is an atomic formula.

4. Negation
• If φ is a formula, then ¬φ is a formula.

5. Binary connectives
If φ is a formula and ψ is a formula, then so are:
• [φ ∧ ψ] 'φ and ψ'
• [φ ∨ ψ] 'φ or ψ'
• [φ → ψ] 'if φ then ψ'
• [φ ↔ ψ] 'φ if and only if ψ'

6. Quantifiers
If u is a variable and φ is a formula, then both of the following are formulas:
• [∀u. φ] 'for all u: φ'
• [∃u. φ] 'there exists a u such that φ'

Variables are either free or bound in a given formula. Whether a variable is free or bound is defined syntactically as follows:
• In an atomic formula, any variable is free.
• The free variables in φ are also free in ¬φ, and the free variables in φ and ψ are free in [φ ∧ ψ], [φ ∨ ψ], [φ → ψ], and [φ ↔ ψ].
• All of the free variables in φ are free in [∀u. φ] and [∃u. φ], except for u, and every occurrence of u in φ is bound in the quantified formula.

A formula containing free variables is called an open formula. A formula containing no free variables is called a closed formula.

A closed formula is also called a sentence. Note that the distinctions introduced in this paragraph are syntactic, rather than semantic, in the sense that they only talk about the form of the expressions. However, as we will see, there are semantic consequences of this distinction.

Before moving on to the semantics, let us establish some abbreviatory conventions. We want to avoid unnecessary clutter in our representations, so we allow brackets to be dropped when it is independently clear what the scope of a quantifier is. In general, when reading a formula, you may assume that the scope of a binder (e.g. ∀x or ∃x) extends as far to the right as possible. (As a heuristic, you may think of the dot as a "wall" that forms the left edge of a constituent, which continues until you find an unbalanced right bracket or the end of the expression.) So, for example, ∀x. P(x) ∧ Q(x) is interpreted in such a way that the universal quantifier takes scope over the conjunction, rather than as the conjunction of ∀x. P(x) and Q(x); that is, ∀x. [P(x) ∧ Q(x)] can be rewritten as ∀x. P(x) ∧ Q(x). However, we will typically retain brackets around conjunctions, disjunctions, and implications. Thus instead of:

∀x. [Linguist(x) → [∃y. Admires(x, y)]]

we can write:

∀x. [Linguist(x) → ∃y. Admires(x, y)]

because it is clear that the scope of the existential quantifier does not extend any farther to the right than it does. Furthermore, we will drop the dot when we have multiple binders in a row. So instead of:

∀x. ∃y. Admires(x, y)

we can write:

∀x∃y. Admires(x, y)

We will retain all of these abbreviatory conventions in what follows, in order to avoid unnecessary clutter in our formulas.

Note that the dot is always optional in the Lambda Calculator (on its default setting).

3.4.2 Semantics of L1

Now for the semantics of L1. The semantic value of an expression is determined relative to two parameters:

1. a model M = ⟨D, I⟩, where D is the set of individuals and I is a function mapping each constant of the language to an element, subset, or relation over elements in D, depending on the nature of the constant;
2. an assignment function g mapping each individual variable in L1 to some element in D.

For any given model M and assignment function g, the denotation of a given expression α relative to M and g, written ⟦α⟧^{M,g}, is defined as follows:

1. Basic Expressions
• If α is a constant, then ⟦α⟧^{M,g} = I(α).
• If α is a variable, then ⟦α⟧^{M,g} = g(α).

2. Complex terms
• If π is a function symbol of arity n, and α1, ..., αn is a sequence of n terms, then: ⟦π(α1, ..., αn)⟧^{M,g} = ⟦π⟧^{M,g}(⟨⟦α1⟧^{M,g}, ..., ⟦αn⟧^{M,g}⟩)
  Special cases: When π is a function of arity 1, then ⟦π(α)⟧^{M,g} = ⟦π⟧^{M,g}(⟦α⟧^{M,g}). When π is a function of arity 2, then ⟦π(α, β)⟧^{M,g} = ⟦π⟧^{M,g}(⟨⟦α⟧^{M,g}, ⟦β⟧^{M,g}⟩).

3. Atomic formulas
• Predication: If π is a predicate of arity n, and α1, ..., αn is a sequence of n terms, then: ⟦π(α1, ..., αn)⟧^{M,g} = 1 if ⟨⟦α1⟧^{M,g}, ..., ⟦αn⟧^{M,g}⟩ ∈ ⟦π⟧^{M,g}, and 0 otherwise.
  Special cases: If π is a unary predicate and α is a term, then ⟦π(α)⟧^{M,g} = 1 if ⟦α⟧^{M,g} ∈ ⟦π⟧^{M,g}, and 0 otherwise. If π is a binary predicate and α and β are terms, then ⟦π(α, β)⟧^{M,g} = 1 if ⟨⟦α⟧^{M,g}, ⟦β⟧^{M,g}⟩ ∈ ⟦π⟧^{M,g}, and 0 otherwise.
• Equality: If α and β are terms, then ⟦α = β⟧^{M,g} = 1 if ⟦α⟧^{M,g} = ⟦β⟧^{M,g}, and 0 otherwise.

4. Negation
• ⟦¬φ⟧^{M,g} = 1 if ⟦φ⟧^{M,g} = 0, and 0 otherwise.

5. Binary Connectives
• ⟦φ ∧ ψ⟧^{M,g} = 1 if ⟦φ⟧^{M,g} = 1 and ⟦ψ⟧^{M,g} = 1, and 0 otherwise.
• ⟦φ ∨ ψ⟧^{M,g} = 1 if ⟦φ⟧^{M,g} = 1 or ⟦ψ⟧^{M,g} = 1, and 0 otherwise.
• ⟦φ → ψ⟧^{M,g} = 1 if ⟦φ⟧^{M,g} = 0 or ⟦ψ⟧^{M,g} = 1, and 0 otherwise.
• ⟦φ ↔ ψ⟧^{M,g} = 1 if ⟦φ⟧^{M,g} = ⟦ψ⟧^{M,g}, and 0 otherwise.

6. Quantification

• ⟦∀v. φ⟧^{M,g} = 1 if for all individuals k ∈ D: ⟦φ⟧^{M,g[v↦k]} = 1, and 0 otherwise.
• ⟦∃v. φ⟧^{M,g} = 1 if there is an individual k ∈ D such that: ⟦φ⟧^{M,g[v↦k]} = 1, and 0 otherwise.

Note that the choice of assignment function doesn't always make a difference for the interpretation of an expression. For example, in the formula Happy(x) the variable x is not bound by any quantifier (so it is a FREE VARIABLE), and the semantic value of this formula relative to M and g depends on what g assigns to x. In contrast, a closed formula such as ∀x. Happy(x) has the same value relative to every assignment function. The choice of assignment function only makes a difference when the formula contains free variables.

One important feature of the semantics for quantifiers and variables in first-order logic using assignment functions is that it scales up to formulas with multiple quantifiers. Recall the quantifier scope ambiguity in Every linguist admires a philosopher that we discussed at the beginning of the section. That sentence was said to have two readings, which can be represented as follows:

∀x. [Linguist(x) → ∃y. [Philosopher(y) ∧ Admires(y)(x)]]
∃y. [Philosopher(y) ∧ ∀x. [Linguist(x) → Admires(y)(x)]]

We will spare the reader a step-by-step computation of the semantic value for these sentences in a given model. We will just point

out that in order to verify the first kind of sentence, one would consider modifications of the input assignment for every member of the domain, and within that, try to find modifications of the modified assignment for some element of the domain making the existential statement true. To verify the second kind of sentence, in which an existential quantifier outscopes a universal quantifier, one would try to find a single modification of the input assignment for the outer quantifier (the existential quantifier), such that modifications of that modified assignment for every member of the domain verify the embedded universal statement. This procedure will work for indefinitely many quantifiers.

Exercise 12. Is ∀x. Happy(x) true in M1? Give a one-sentence explanation why or why not.

Exercise 13. Consider the following formulas.

(a) Happy(m) ∧ Happy(m)
(b) Happy(k)
(c) Happy(m, x)
(d) ¬¬Happy(n)
(e) ∀x. Happy(x)
(f) ∀x. Happy(y)
(g) ∃x. Loves(x, m)
(h) ∃x. Loves(x, z)
(i) ∃x. ∃z. Loves(x, z)
(j) ∃x. Happy(m)

Questions:
(a) Which of the above are well-formed formulas of L1?
(b) Of the ones that are well formed in L1, which of the above formulas have free variables in them? (In other words, which of them are open formulas?)
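The verification procedure just described can be run mechanically. The sketch below is ours, not the text's: the toy model (the domain and the denotations of Linguist, Philosopher and Admires) is invented for illustration, and the nested loops over the domain play the role of the nested assignment modifications. The pair admires encodes Admires(y)(x), i.e. "x admires y".

```python
# Illustrative sketch (invented toy model): checking the two readings of
# "Every linguist admires a philosopher" by nested search over the domain,
# mirroring the nested assignment-modification procedure described above.

D = {"al", "bo", "cy", "dee"}
linguist = {"al", "bo"}
philosopher = {"cy", "dee"}
admires = {("al", "cy"), ("bo", "dee")}  # each linguist admires a different philosopher

def reading1():
    # ∀x.[Linguist(x) → ∃y.[Philosopher(y) ∧ Admires(y)(x)]]:
    # for every way of setting x, some way of setting y verifies the matrix
    return all(
        x not in linguist
        or any(y in philosopher and (x, y) in admires for y in D)
        for x in D
    )

def reading2():
    # ∃y.[Philosopher(y) ∧ ∀x.[Linguist(x) → Admires(y)(x)]]:
    # a single way of setting y must work for every way of setting x
    return any(
        y in philosopher
        and all(x not in linguist or (x, y) in admires for x in D)
        for y in D
    )

print(reading1())  # True: each linguist finds some philosopher to admire
print(reading2())  # False: no single philosopher is admired by all linguists
```

In this model the two readings come apart, which is exactly why the scope ambiguity matters.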

Exercise 14. Recall our fantasy model Mf, where everybody is happy:

If(Happy) = [Bart → 1, Homer → 1, Maggie → 1]

(a) What is ⟦x⟧^{Mf,gBart}? Apply the L1 semantic interpretation rule for variables.
(b) What is ⟦Happy⟧^{Mf,gBart}? Apply the relevant L1 semantic interpretation rule.
(c) Which semantic interpretation rule do you need to use in order to put the meanings of Happy and x together, and compute the denotation of Happy(x)?
(d) Use the rule you identified in your answer to the previous question, and explain carefully why ⟦Happy(x)⟧^{Mf,gBart} = 1.

Exercise 15. Suppose that in M1, everybody loves themselves and nobody loves anybody else, and the binary predicate Loves denotes this love relation. Let g be defined such that x ↦ c, y ↦ b, and z ↦ a.

(a) Calculate:
  (a) ⟦x⟧^{M1,g}
  (b) ⟦m⟧^{M1,g}
  (c) ⟦Loves⟧^{M1,g}
  (d) ⟦Loves(x, m)⟧^{M1,g}

(b) List all of the value assignments that are exactly like g except possibly for the individual assigned to x, and label them g1, ..., gn.
(c) For each of those value assignments gi in the set {g1, ..., gn}, calculate ⟦Loves(x, m)⟧^{M,gi}.
(d) On the basis of these and the semantic rule for universal quantification, calculate ⟦∀x. Loves(x, m)⟧^{M,g} and explain your reasoning.

Exercise 16. If a formula has free variables then it may well be true with respect to some assignments and false with respect to others. Give an example of two variable assignments gi and gj such that ⟦Loves(x, m)⟧^{M,gi} ≠ ⟦Loves(x, m)⟧^{M,gj}.

Exercise 17. Determine by a detailed consideration of the relevant value assignments whether ∀x∀y[Kissed(x, y) → Kissed(y, x)] is true with respect to M1. If it is false, find a model M2 = ⟨D2, I2⟩ such that the sentence is true with respect to M2.9

Exercise 18. A sentence is valid if it is true in every model. Show that ∃x[Happy(x) → ∀y Happy(y)] is valid.10

Exercise 19. In the run-up to the 2016 presidential election, Stephen Colbert once joked as follows:

. tying with Rick Santorum.First order logic 87 In a recent CNN poll of New Hampshire Republicans. Which I say he can use to his advantage: “Jindal 2016: No one is more popular!” Explain this joke in painstaking detail by giving translations into L 1 of the two interpretations of the proposed slogan that the joke plays on. [Bobby] Jindal got 3% of respondents. Feel free to introduce as many logical constants as you need. and falling just short of “No-one” at 4%.


4 ∣ Typed lambda calculus

4.1 Lambda abstraction

In the next chapter we will define a system that translates expressions of (a defined fragment of) English into a formal logical language. In order to do this, we need to be able to assign meanings to a wide range of expressions. Consider the sentence Everybody loves Maggie, which might be translated into L1 as:

∀x[Person(x) → Loves(x, m)]

Some parts of the sentence Everybody loves Maggie can be straightforwardly mapped into expressions in L1. For example, we might say that (relative to a given model) the English name Maggie picks out a particular element of the domain, namely Maggie. So it makes sense to translate the name Maggie as an individual constant, such as m, as this is the sort of denotation that individual constants have. The English verb loves could be thought of as denoting a binary relation (a set of ordered pairs of individuals in the domain), the sort of thing denoted by a binary predicate. Let us therefore assume that Loves is a binary predicate and that the verb maps to it. But what does a verb phrase like loves Maggie map onto? Intuitively, it denotes a formula of the following form:

Loves(   , m)

where the first argument of Loves is missing. A similar problem arises with expressions like Everybody. Intuitively, it denotes something like the following:

∀x. Person(x) → ___(x)

where ___ is a placeholder for some predicate. The language of typed lambda calculus, which we present now, gives us the tools to express this idea of a placeholder. Using the λ symbol, we can ABSTRACT OVER the missing piece.

The first major change is in the inventory of syntactic categories of the language. Our language L0 had a finite set of syntactic categories: terms, unary predicates, binary predicates, and formulas. In the system we present next, we will have an infinite set of syntactic categories. These categories will be called TYPES. The set of types is defined as follows:

• e is a type
• t is a type
• If σ is a type and τ is a type, then ⟨σ, τ⟩ is a type.
• Nothing else is a type.

In the third clause, σ and τ are variables over types, rather than particular types. The set of types is infinite. For example, since both e and t are types, ⟨e, t⟩ is a type. And since ⟨e, t⟩ is a type, and e is a type of course, it follows that ⟨e, ⟨e, t⟩⟩ is a type. And so on. (Though not everything is a type: ⟨e⟩ is not a type according to this system, even though that notation is sometimes found in the literature; angle brackets are only introduced for complex types according to this definition.)

We use Dτ to signify the set of possible denotations for an expression of type τ (for any type τ). De is the set of individuals; Dt = {1, 0}. An expression of type e denotes an individual. An expression of type t is a formula, so its denotation should be either 1 or 0. An expression of type ⟨e, t⟩ denotes a function from individuals

to truth values. Etc.

The next change has to do with how unary and binary predicates are treated. In L1, we gave the semantics of unary and binary predicates SYNCATEGOREMATICALLY. (In other words, we did not give, for example, Loves a meaning of its own, but rather defined the circumstances under which a formula containing it would be true.) In Lλ, they will have their own denotations; that is to say, they receive a CATEGOREMATIC treatment. In fact, technically, we will not have any predicates in this language, of the kind we saw in first-order logic, only functions. For this reason, we will spell them with lowercase letters.

We will treat unary predicates as expressions of type ⟨e, t⟩, and binary predicates as expressions of type ⟨e, ⟨e, t⟩⟩. D⟨e,t⟩ is the set of functions that take as input an individual, and give a truth value as output. The interpretation of a unary predicate will be a function that takes an individual as input and spits back either 1 ("true") or 0 ("false"). For example, we could assume that the interpretation of the constant bored maps Maggie and Homer to "false", and Bart to "true":

(1) I1(bored) = [Bart → 1, Homer → 0, Maggie → 0]

For the unary predicate happy, let us assume that it denotes the following function:

(2) I1(happy) = [Bart → 1, Homer → 0, Maggie → 1]

Exercise 1. Is Bart happy in M1?
(a) What does I1(happy) give as output when given Bart as an input?

(b) In other words, what is I1(happy)(Bart)?

Exercise 2. True or false:
(a) I1(bored)(Bart) = 1
(b) I1(bored)(Bart) = I1(happy)(Bart)
(c) I1(n) = Bart
(d) I1(bored)(I1(n)) = 1
(e) I1(n) = I1(j)

Now for the fun part. A binary predicate will denote a function which, when given an individual, returns another function. For example, the following function returns another function which, given the individual Maggie again, returns 1 ('true'). This kind of function is the result of SCHÖNFINKELIZING the underlying binary relation (also known as CURRYING; see Heim & Kratzer 1998, 41 for scholarly notes on these terms).

(3) I1(loves) =
  [ Maggie → [Maggie → 1, Homer → 0, Bart → 0]
    Homer → [Maggie → 0, Homer → 1, Bart → 0]
    Bart → [Maggie → 0, Homer → 0, Bart → 1] ]

Let us call this function f. As you can see, this function corresponds to the 'same' relation: it returns true only when both arguments are the same. This describes a situation in which each individual loves him- or herself, and nobody else.

f(Maggie) = [Maggie → 1, Homer → 0, Bart → 0]

Hence f(Maggie) applied to Maggie is 1. This can be notated as follows (where it is implicit that f(Maggie) forms a constituent): f(Maggie)(Maggie) = 1. But f(Maggie)(Bart) = 0.

Note that this already allows us to give a denotation to verb phrases containing transitive verbs. The meaning of the verb phrase loves Maggie can for example be represented as loves(m).

Exercise 3.
(a) What is the value of I1(loves) applied to Maggie? Hint: It is a function. Specify the function as a set of input-output pairs (using either set notation or arrow notation).
(b) What is the value of that function (the one you gave in your previous answer) when applied to Maggie?
(c) In other words, what is I1(loves)(Maggie)(Maggie)?
(d) So, does Maggie love herself in M1?

Let us suppose that loves denotes this 'same' function and kissed denotes the 'other' function, which returns 1 if and only if the two

arguments are different:

I1(kissed) =
  [ Maggie → [Maggie → 0, Homer → 1, Bart → 1]
    Homer → [Maggie → 1, Homer → 0, Bart → 1]
    Bart → [Maggie → 1, Homer → 1, Bart → 0] ]

Let us assume that the first individual that the function combines with is the person being kissed, and the second individual is the kisser, because a transitive verb combines first with its object and then with its subject. Then this denotation for kissed means that Bart and Homer kissed Maggie, Maggie and Bart kissed Homer, and Maggie and Homer kissed Bart. Applied to Maggie, this function yields the function that returns 1 for Bart and Homer:

I1(kissed)(Maggie) = [Maggie → 0, Homer → 1, Bart → 1]

And I1(kissed)(Maggie)(Homer) = 1. Now we have a complete specification of all of the values for the basic expressions in the logic.

The introduction of an infinite set of syntactic categories sets the stage for the introduction of a new variable binder called a LAMBDA OPERATOR (or λ-operator), also known as an ABSTRACTION OPERATOR. The lambda operator allows us to describe a wide range of functions without having to name each one. For example:

λx. loves(x)(m)

denotes the characteristic function of the set of individuals that Maggie loves, while:

λx. loves(m)(x)

denotes the characteristic function of the set of individuals that love Maggie. In general, the expression [λx. φ] denotes a function that returns the value described by φ when given input x. Typically φ will contain the variable x. Such an expression is called a LAMBDA ABSTRACTION. You can think of the λ-operator analogously to predicate notation for building sets: for example, λx. loves(x)(m) denotes the characteristic function of the set {x ∣ Maggie loves x}.

More generally, if φ is an expression of type τ, and u is a variable of type σ, then λu. φ will be an expression of type ⟨σ, τ⟩. If φ is a formula (type t), and x is a variable of type e, then λx. φ will be an expression of type ⟨e, t⟩, denoting a function from individuals to truth values.

As it describes the value of the function given an argument, the part that comes after the dot is called the VALUE DESCRIPTION. For example, the value description in the expression λx. loves(x)(m) is loves(x)(m).

Exercise 4. Identify the value description in the following lambda expressions:
1. λx. happy(x)
2. λx. x
3. λx. λy. [loves(y)(x) ∨ loves(x)(y)]
4. λx. λy. λz. introduced(z)(y)(x)
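The Schönfinkelized tables for loves and kissed, and λ-terms like λx. loves(x)(m), can be mirrored with nested dictionaries and Python's own lambda. This is an illustrative sketch in our own notation (the helper name curry2 and the dict encoding are inventions), not part of the text.

```python
# Illustrative sketch: Schönfinkelizing a binary relation into a nested table,
# as in I1(loves) and I1(kissed) above. curry2 is our own helper name.

DOMAIN = ["Maggie", "Homer", "Bart"]

def curry2(pairs):
    """Build x ↦ (y ↦ 1/0) from a set of (first-argument, second-argument) pairs."""
    return {x: {y: int((x, y) in pairs) for y in DOMAIN} for x in DOMAIN}

loves = curry2({(x, x) for x in DOMAIN})                             # the 'same' function
kissed = curry2({(x, y) for x in DOMAIN for y in DOMAIN if x != y})  # the 'other' function

print(loves["Maggie"]["Maggie"])   # 1, i.e. f(Maggie)(Maggie) = 1
print(loves["Maggie"]["Bart"])     # 0, i.e. f(Maggie)(Bart) = 0
print(kissed["Maggie"]["Homer"])   # 1: Homer kissed Maggie (first argument = kissee)

# λx. loves(x)(m): the characteristic function of {x | Maggie loves x}
loved_by_maggie = lambda x: loves[x]["Maggie"]
print([x for x in DOMAIN if loved_by_maggie(x)])  # ['Maggie']
```

As in the text, the outer key plays the role of the first (object) argument and the inner key the second (subject) argument, so the verb phrase combines with its object first.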

The functions resulting from abstraction behave just like the functions we are already familiar with. As in L1, we indicate the arguments of a function using parentheses: if π is an expression denoting a function, and α is an expression that is of the right type to be used as an argument to π, then π(α) denotes the result of applying π to α. This is called APPLICATION. Thus bored(x) denotes the result of applying the function denoted by bored to the value of x. This principle also applies to syntactically complex function-denoting terms formed by lambda abstraction. For example:

[λx. loves(x)(m)](b)

denotes the result of applying the function 'is loved by Maggie' to Bart. This formula is equivalent to the simpler:

loves(b)(m)

where the λ-binder and the variable have been removed, and we have kept just the value description part, with the modification that the argument of the function is substituted for all instances of the variable. In general, the result of applying the function to the argument can be described more simply by taking the value description and replacing all occurrences of the lambda-bound variable with the argument. This kind of simplification is known as β-REDUCTION or β-CONVERSION. We can formulate the rule more succinctly as follows:

[λx. φ](α) can be β-reduced to φ[x := α]

where φ[x := α] stands for the result of replacing all free occurrences of x with α in φ. Note that β-reduction does not need to be made as an additional stipulation. It's actually a fact that follows from how the semantics of λ-abstraction is defined.

There is one caveat to the rule of β-reduction: it only works if α does not contain any free variables that would become bound in φ after substitution. For example,

consider the following expression:

[λx. ∀y. [loves(x)(x) → loves(y)(x)]](y)

If we take away λx and substitute y for x then we get:

∀y. [loves(y)(y) → loves(y)(y)]

Through overly enthusiastic substitution of y for x, the variable y accidentally became bound by the universal quantifier ∀. Before we can apply β-reduction in this case, we must change the name of the variable bound by the universal quantifier thus:

[λx. ∀z. [loves(x)(x) → loves(z)(x)]](y)

We changed y to z in the scope of the universal quantifier. This is harmless, because a bound variable can be replaced by any other bound variable without a change in meaning. This is the principle of α-EQUIVALENCE. Now when we apply β-reduction, we get the following result:

∀z. [loves(y)(y) → loves(z)(y)]

where the variable y no longer clashes with the variable bound by the universal quantifier.

The typed lambda calculus has played an important role in the history of computing, which we will not go into here. The use of types in that context was crucial to ensure that certain programs terminate, and useful in making sure that the right kinds of operations are combined with the right types of objects (e.g. numbers, strings, etc.). Linguists are typically not so worried about computability, as our focus is on natural language semantics, but use types very much in the latter function.

In the version of lambda calculus that linguists have inherited from Montague, variables can range not only over individuals, but also over functions. This, combined with lambda abstraction, allows us to express a wide range of potential meanings for natural

language expressions. Take for example the prefix non-, as in non-smoker. A non-smoker is someone who is not in the set of smokers. If smokes denotes a function of type ⟨e, t⟩, the characteristic function of the set of smokers, then the meaning of non-smoker can be represented as follows:

λx. ¬smokes(x)

This expression denotes a function which takes an individual (x) and returns 1 if and only if x is not a smoker.

The meaning of non- can be represented as a function that takes as its argument a predicate and then returns a new predicate which holds of an individual if the individual does not satisfy the input predicate. We can represent this function as follows, where P is a variable that ranges over functions from individuals to truth values:

λP. [λx. ¬P(x)]

This expression denotes a function which takes as input a predicate (P) and returns a new function. The new function takes an individual (x) and returns true if and only if x does not satisfy the predicate. If we apply this function to the predicate smokes, then we get the function denoted by λx. ¬smokes(x). This is the meaning of non-smoker, assuming that smoker denotes smokes.

The λ symbol will also let us give a meaning to phrases like every cellist and determiners like every. Recall that intuitively, every expresses a subset relation between two sets: to say Every cellist smokes is to say that the set of cellists is a subset of the set of things that smoke. Let X and Y be variables ranging over the characteristic functions of sets (type ⟨e, t⟩). The meaning of every can be represented like this:

λX. [λY. ∀x. [X(x) → Y(x)]]

This expression denotes a function which takes the characteristic function of a set (call it X), and then another (call it Y), and returns 1 (true) if and only if every X is a Y.
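Both λP.[λx.¬P(x)] and λX.[λY.∀x.[X(x) → Y(x)]] are higher-order functions, and they can be mirrored one-for-one with Python lambdas. The sketch below is ours: the domain and the denotations of smokes and cellist are invented for illustration.

```python
# Illustrative sketch (domain and denotations invented): the meanings of
# non- and every as higher-order functions over characteristic functions.

DOMAIN = ["Ann", "Ben", "Cam"]

smokes = lambda x: int(x in {"Ann", "Ben", "Cam"})
cellist = lambda x: int(x in {"Ann", "Ben"})

# λP.[λx.¬P(x)]: takes a predicate, returns its complement predicate
non = lambda P: (lambda x: int(not P(x)))

# λX.[λY.∀x.[X(x) → Y(x)]]: takes two predicates, checks the subset relation
every = lambda X: (lambda Y: int(all((not X(x)) or Y(x) for x in DOMAIN)))

non_smoker = non(smokes)
print([x for x in DOMAIN if non_smoker(x)])  # []: everyone smokes in this model

every_cellist = every(cellist)   # still "hungry" for a second predicate
print(every_cellist(smokes))     # 1: the cellists are a subset of the smokers
print(every(smokes)(cellist))    # 0: Cam smokes but is not a cellist
```

The partially applied every(cellist) is itself a function awaiting another predicate, matching the observation that every cellist is still hungry for a verb phrase meaning.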

The denotation of every cellist would be the result of applying this function to whatever cellist denotes:

[λX. [λY. ∀x. [X(x) → Y(x)]]](cellist)

This expression will turn out to be equivalent to:

λY. ∀x. [cellist(x) → Y(x)]

Thus the meaning of every, applied to the meaning of cellist, is a function that is still hungry for another unary predicate. If we feed it smokes, then we get a sentence that denotes a truth value:

[λY. ∀x. [cellist(x) → Y(x)]](smokes) ≡ ∀x. [cellist(x) → smokes(x)]

A crucial point to notice here is that the argument of the function is itself a function. Functions that take other functions as arguments are called higher-order. So what we are dealing with here is no longer first-order logic, but higher-order logic. Pretty fancy.

Exercise 5. Download the Lambda Calculator from http://lambdacalculator.com and install it on your computer. (It works with Mac, Windows and Linux operating systems.) Then open the 'Scratch Pad' and verify for yourself that the two reductions just given work as advertised.

4.2 Summary

4.2.1 Syntax of Lλ

Let us now summarize our new logic, Lλ, which is a version of the TYPED LAMBDA CALCULUS developed by logician Alonzo Church. The TYPES are defined recursively as follows: e and t are both

types; if σ is a type and τ is a type, then ⟨σ, τ⟩ is a type; nothing else is a type. A FORMULA is an expression of type t. The set of expressions of type τ, for any type τ, is defined recursively as follows:

1. Basic Expressions (now more general!)
For each type τ,
(a) the non-logical constants of type τ are the symbols of the form c_{n,τ} for each natural number n;
(b) the variables of type τ are the symbols of the form v_{n,τ} for each natural number n.

2. Predication (cf. 'Atomic Formulas' – more general!)
For any types σ and τ, if α is an expression of type ⟨σ, τ⟩ and β is an expression of type σ, then α(β) is an expression of type τ.

3. Equality
If α and β are terms, then α = β is an expression of type t.

4. Negation
If φ is a formula, then so is ¬φ.

5. Binary Connectives
If φ and ψ are formulas, then so are [φ ∧ ψ], [φ ∨ ψ], [φ → ψ], and [φ ↔ ψ].

6. Quantification
If φ is a formula and u is a variable of any type, then [∀u. φ] and [∃u. φ] are formulas.

7. Lambda abstraction (new!)
If α is an expression of type τ and u is a variable of type σ, then [λu. α] is an expression of type ⟨σ, τ⟩.

Recall that when reading a formula, you may assume that the scope of a binder (e.g. ∀x or ∃x) extends as far to the right as

possible. Recall also that we drop the dot whenever we have multiple binders in a row, so we can write, for example, λxλy. Admires(x, y). So λx. [P(x) ∧ Q(x)] can be rewritten as λx. P(x) ∧ Q(x); however, we will typically retain brackets in these cases. To further reduce clutter, we will add the following abbreviatory convention: Square brackets that are immediately embedded inside parentheses can be dropped, so we can write, for example, π(λx. x + 1) rather than π([λx. x + 1]). The Lambda Calculator does this.

Note that Loves(h, m) is not a formula anymore according to our rules; the 'Atomic Formulas' rule has been replaced by the rule of Predication, so this idea would be expressed as loves(m)(h) instead. But sometimes it feels easier to use the style that is familiar from first-order logic, and one could in principle augment the language to include formulas like Loves(h, m), treating Loves as a constant of type ⟨e × e, t⟩, where e × e is the type of ordered pairs of individuals. To incorporate this, we must expand the range of types so that if σ is a type and τ is a type, then σ × τ is a type. Then π(α, β) is well-formed when π designates a function that expects an ordered pair as input (a so-called PRODUCT TYPE).

Exercise 6. Consider the following expressions, assuming the following abbreviations:
• x is v_{0,e} (meaning that x is variable number 0 of type e)
• y is v_{1,e}
• P is v_{0,⟨e,t⟩}, Q is v_{1,⟨e,t⟩}, and X is v_{2,⟨e,t⟩}

• R is v_{0,⟨e×e,t⟩}
• a is c_{0,e} and b is c_{1,e}

1. [λx. P(x)](a)
2. [λx. P(x)(a)]
3. [λxλy. R(x, y)](b)(a)
4. [λxλy. R(x, a)](b)
5. [λxλy. R(a, a)](b)
6. [λx. [λy. R(x, y)](b)](a)
7. [λyλx. R(y, x)](b)(a)
8. [λX. ∃x. [P(x) ∧ X(x)]](λy. R(y, y))
9. [λX. ∃x. [P(x) ∧ X(x)]](λy. R(y, x))
10. [λX. ∃x. [P(x) ∧ X(x)]](λx. R(y, x))
11. [λX. ∃x. [P(x) ∧ X(x)]](λx. R(x, x))
12. [λX. ∃x. [P(x) ∧ X(x)]](Q)
13. [λX. ∃x. [P(x) ∧ X(x)]](X)
14. [λX. ∃x. [P(x) ∧ X(x)](λx. Q(x))]
15. [λyλx. R(y, x)](a)

For each of the above, answer the following questions:
(a) Is it a well-formed expression of Lλ (given both the official syntax and our abbreviatory conventions), and if yes, what is its type?

(b) If the formula is well-formed, give a completely β-reduced (λ-converted) expression which is equivalent to it. Use α-equivalence (relettering of bound variables) if necessary to avoid variable clash. You can check your answers using the Lambda Calculator.

Exercise 7. Assume that man and mortal are constants of type ⟨e, t⟩. Identify the type of each of the following. Check your work using the Lambda Calculator.

1. y
2. P(x)
3. P
4. a
5. x
6. λx. P(x)
7. [λx. P(x)](a)
8. P(a)
9. λyλx. R(x, y)
10. λy. R(y, a)
11. λx. R(y, x)
12. [λx. R(y, x)](a)
13. [λyλx. R(y, a)](b)

14. λxλy. R(a, b)
15. [P(x) ∧ Q(x)]
16. [λx. P(x) ∧ Q(x)](a)
17. [R(y)(a) ∧ Q(x)]
18. P
19. P(a)
20. λx. P(x)
21. ∃x. P(x)
22. [λP. P(x)](man)
23. man(x)
24. λP. ∃x. P(x)
25. [λPλx. P(x)](mortal)
26. ¬mortal(x)
27. ∃x. ¬mortal(x)
28. λP. ¬P(x)
29. [λP. ¬P(x)](mortal)
30. ∀x. ¬mortal(x)
31. [λx. ¬mortal(x)](a)
32. ¬mortal(a)
33. λQ. [man(x) → Q(x)]

34. [λQ. [man(x) → Q(x)]](mortal)
35. λPλQ. [P(x) → Q(x)]
36. [λPλQ. ∀x[P(x) → Q(x)]](man)
37. [λPλQ. ∀x[P(x) → Q(x)]](man)(mortal)
38. [λQ. ∀x[man(x) → Q(x)]](λx[mortal(x)])
39. [λPλx. ¬P(x)](mortal)
40. [λPλx. ¬P(x)](λx. mortal(x))

You can check your answers using the Lambda Calculator.

Exercise 8. Where possible, apply β-reduction to give a more concise version of each of the following. If the expression is fully reduced, just give the original expression.

1. [λx. x](a)
2. [λP. P](man)
3. [λP. P(x)](man)
4. [λx. P(x)](a)
5. [λP. ∃x. P(x)]
6. [λyλx. R(y, x)](a)
7. [λyλx. R(y, a)](b)
8. [λx. ∃x. [P(x) ∧ Q(x)]](a)
9. [λP. ∃x. P(x)](man)

10. [λP. P(x)](mortal)
11. ∀x. ¬mortal(x)
12. [λPλx. ¬P(x)](mortal)
13. [λx. ¬mortal(x)](a)
14. [λQ. [man(x) → Q(x)]](mortal)
15. [λPλx. [P(x) → Q(x)]](man)
16. [λPλQ. ∀x[P(x) → Q(x)]](man)(mortal)
17. [λx. P(x) ∧ Q(x)](a)
18. [λxλy. [R(y, a) ∧ Q(x)]](a)(b)
19. [λx. R(b, y)](y)
20. [λx. a](b)
21. [λx. ∃y. R(x, y)](a)
22. ∃P. [λx. [P(x) → ∃x. P(x)]](a)
23. [λQ. ∀x[mortal(x) → Q(x)]](λx[mortal(x)])
24. [λP. ¬P(x)](λx[mortal(x)])
25. [λPλx. P(x)](λx[¬mortal(x)])

You can check your answers using the Lambda Calculator.

4.2.2 Semantics

As in L1, the semantic values of expressions in Lλ depend on a model and an assignment function. As in L1, a model M = ⟨D, I⟩

is a pair consisting of the domain of individuals D and an interpretation function I , which assigns semantic values to each of the non-logical constants in the language. Types are associated with domains. The domain of individuals D e = D is the set of individuals, the set of potential denotations for an expression of type e. The domain of truth values D t contains just two elements: 1 'true' and 0 'false'. For any types a and b, D ⟨a,b ⟩ is the domain of functions from D a to D b . For every type a, I assigns an object of type a to every non-logical constant of type a.

Assignments provide values for variables of all types, not just those of type e. An assignment thus is a function assigning to each variable of type a a denotation from the set D a .

The semantic value of an expression is defined as follows:

1. Basic Expressions
(a) If α is a non-logical constant, then JαK M ,g = I (α).
(b) If α is a variable, then JαK M ,g = g (α).

2. Predication
If α is an expression of type ⟨a, b ⟩, and β is an expression of type a, then Jα(β)K = JαK(JβK).

3. Equality
If α and β are terms, then Jα = βK = 1 iff JαK M ,g = JβK M ,g .

4. Negation
If φ is a formula, then J¬φK M ,g = 1 iff JφK M ,g = 0.

5. Binary Connectives
If φ and ψ are formulas, then:
(a) Jφ ∧ ψK M ,g = 1 iff JφK M ,g = 1 and JψK M ,g = 1.
(b) Jφ ∨ ψK M ,g = 1 iff JφK M ,g = 1 or JψK M ,g = 1.
(c) Jφ → ψK M ,g = 1 iff JφK M ,g = 0 or JψK M ,g = 1.
(d) Jφ ↔ ψK M ,g = 1 iff JφK M ,g = JψK M ,g .
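The clauses above can be mimicked concretely in Python over a small finite model. The particular domain and facts below (two individuals and a one-way loving relation) are invented for illustration; interpretation-function values are curried characteristic functions returning 1 or 0, matching the truth values of the definition.

```python
# A toy finite model: a domain D and an interpretation function I.
# The individuals and facts are invented, not from the text.
D = {"homer", "maggie"}
I = {
    "mortal": lambda x: 1,  # everyone in this model is mortal
    "loves": lambda y: lambda x: 1 if (x, y) == ("homer", "maggie") else 0,
    "h": "homer",
    "m": "maggie",
}

# Clause 2 (Predication): apply the function value to the argument value.
loves_m_h = I["loves"](I["m"])(I["h"])   # loves(m)(h)

# Clause 4 (Negation) and clause 5a (Conjunction):
neg = lambda p: 1 - p
conj = lambda p, q: 1 if p == 1 and q == 1 else 0

# mortal(h) ∧ ¬loves(m)(h)
value = conj(I["mortal"](I["h"]), neg(loves_m_h))
```

Because Homer loves Maggie in this invented model, the conjunction with the negated conjunct comes out false, exactly as clauses 4 and 5a dictate.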

6. Quantification
(a) If φ is a formula and v is a variable of type a, then J∀v . φK M ,g = 1 iff for all k ∈ D a : JφK M ,g [v ↦k ] = 1.
(b) If φ is a formula and v is a variable of type a, then J∃v . φK M ,g = 1 iff there is an individual k ∈ D a such that: JφK M ,g [v ↦k ] = 1.

7. Lambda Abstraction
If α is an expression of type a and u a variable of type b, then Jλu . αK M ,g is that function h from D b into D a such that for all objects k in D b , h (k ) = JαK M ,g [u ↦k ] .

Exercise 9.
(a) Partially define a model for L λ giving values (denotations) to the constants loves, n, and d.
(b) Show that [λx . loves(n)(x )](d) and its β-reduced version loves(n)(d) have the same semantic value in your model using the semantic rules for L λ .

Exercise 10. Relational kinship terms like aunt can be thought of as denoting binary relations among individuals, where aunt(x, y ) holds between x and y if and only if x is y's aunt. We might therefore introduce a binary predicate aunt to represent the aunthood relation. Thus a sentence like Sue is Alex's aunt could be represented as aunt(sue, alex). But consider a sentence like Sue is an

aunt now! Such a sentence might be taken to express an existential claim like ∃x . aunt(sue, x ). In this expression, one of the arguments of the relation is existentially bound. We might imagine that there is a regular process that converts a relational noun like aunt into a noun denoting the property of standing in the relevant relation to some individual. On such a usage, the noun aunt might be taken to denote, rather than a binary relation, the property that someone has if there is someone that they are the aunt of: λy . ∃x . aunt( y, x ). Using L λ , describe a function that would take as input an arbitrary binary relation like the aunthood relation (type ⟨e, ⟨e, t ⟩⟩) and gives as output the property that an individual has if they stand in this relation to another individual. The answer should take the form of a λ expression of type ⟨⟨e, ⟨e, t ⟩⟩, ⟨e, t ⟩⟩.

Exercise 11. We normally consider eat a transitive verb, and according to the kind of analysis we have done here, this would imply a treatment as a binary relation, type ⟨e, ⟨e, t ⟩⟩. And yet we do have usages where the object does not appear, as in Have you eaten? One might imagine that a two-place predicate can be reduced to a one-place predicate through an operation that existentially quantifies over the object argument. Define a function that does this and express it as a well-formed lambda term in L λ . The input to the function should be a binary relation (type ⟨e, ⟨e, t ⟩⟩) and the output should be a unary relation (type ⟨e, t ⟩) where the object argument has been existentially quantified over.

Exercise 12. Like eat, the verb shave can be used both transitively and intransitively; consider The barber shaved John and The barber shaved. But in contrast to eat, the intransitive version does not mean that the barber shaved something. On such a usage, it means that the barber

shaved himself. Give an expression of L λ of type ⟨⟨e, ⟨e, t ⟩⟩, ⟨e, t ⟩⟩ which produces this sort of meaning from a two-place predicate. (Adapted from Dowty et al. (1981), Problem 4-7, p. 97.)

Further reading. This chapter has provided just the bare minimum that is needed for starting to do formal semantics. There is no trace of proof theory in this chapter, and there has been only scant presentation of model theory, so this can hardly be considered a serious introduction to the subject. Carpenter (1998) is an excellent introduction to the logic of typed languages for linguists who would like to deepen their understanding of such issues.
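The operation at the heart of Exercises 10 through 12, existentially closing one argument of a binary relation, can be pictured concretely over a finite domain. This sketch is not the exercises' required λ-term; it is the same idea rendered in Python, with an invented domain and an invented aunthood fact.

```python
# Invented finite domain and one invented fact: Sue is Alex's aunt.
D = {"sue", "alex", "pat"}
aunt_pairs = {("sue", "alex")}
aunt = lambda x: lambda y: (x, y) in aunt_pairs   # aunt(x)(y): x is y's aunt

# Analogue of λR.λx.∃y.R(x)(y): map a binary relation to the unary
# property of standing in that relation to some individual.
detrans = lambda R: lambda x: any(R(x)(y) for y in D)

is_an_aunt = detrans(aunt)
```

Applied to the invented relation, `is_an_aunt` holds of Sue (who is someone's aunt) but not of Alex.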

5 ∣ Translating to lambda calculus

Now we will begin to talk about natural language expressions and their denotations. To do so, we will translate expressions of English into expressions of L λ , with statements like this:

snores ↝ λx . snores(x )

where ↝ signifies the 'translates to' relation. We will often work with simplified (β-reduced) meaning representations, which are equivalent to, but not identical to, the primary translations.1 If we can assume that there is a designated, unique L λ expression to which a given natural language expression α is translated, then we can write ⟪α⟫ to denote 'the (primary) translation of α'.

(1) Notation convention
⟪α⟫ denotes the primary translation of α into L λ .

For example:

⟪snores⟫ = λx . snores(x )

Any expression in L λ that is semantically equivalent to ⟪α⟫ but syntactically distinct is technically not appropriately called ⟪α⟫, because ⟪α⟫ refers to a particular, unique, syntactically specified expression.2

1 So-called 'type-shifting rules' have been treated in indirect interpretation frameworks as providing additional translations; see for example Partee & Rooth (1983a). Here, we will represent type-shifting operations with an extra syntactic node, as is the typical practice in the Lambda Calculator.
2 It is very common in formal semantics to write things like:

We then let the semantics of the English expressions be inherited from the semantics of lambda calculus. This way, unlike in English as a Formal Language (Montague, 1974a), we have two object languages, English (a natural language) and lambda calculus (a formal language): we are translating from the natural language to the formal language, and specifying the semantics of the formal language in our metalanguage (which is also English, mixed with talk of sets and relations). One could in principle specify the meanings of the English expressions directly. Nevertheless, using indirect translation gives us better control over the representation language. The expressions of lambda calculus are not themselves the denotations; rather, they specify the denotations. So we will continue to think of our English expressions as having denotations, even though we will specify them indirectly via a translation to lambda calculus. This is also what Montague did: he specified a set of principles for translating English into his own version of lambda calculus called Intensional Logic, and he was very clear that this procedure was only meant to be a convenience.

2 (continued) JsnoresK = λx . snores(x )
It is not entirely straightforward how to interpret expressions like this. The use of denotation brackets around the natural language expression clearly suggests that a kind of direct interpretation is intended. The intention appears to be to use the expression on the right (as opposed to mentioning it, as we do in indirect translation) to describe the semantic value of the expression, in other words to name a particular function. But even if we assume that the expression is to be interpreted 'in the standard way', it cannot be interpreted in the absence of a given model (and assignment function, if there are free variables). Moreover, it is not obvious how these come into play.

3 An important difference between the tack we are taking here and the one
This is like what Montague (1974b) did in his famous work entitled The Proper Treatment of Quantification in Ordinary English ('PTQ' for short). With indirect translation, we can define new logical symbols, change how the logical symbols are interpreted, and add new evaluation parameters. A given sentence can then be said to be true with respect to a model and an assignment function if its translation is true with respect to that model and assignment function.

In this section, we will explore the limits of Functional Application, and define a larger fragment. To do so, we will need a few syntax rules.4

(2) Syntax
S → DP VP
S → Neg S
S → S JP
JP → J S
VP → V (DP∣AP∣PP)
AP → A (PP)
DP → D (NP)
NP → N (PP)
NP → A NP
PP → P DP

The vertical bar ∣ separates alternative possibilities, and the parentheses signify optionality, so the VP rule means that a VP can consist solely of a verb, or of a verb followed by an NP, or of a verb followed by an AP, etc.

3 (continued) taken in Heim & Kratzer's (1998) textbook is that here the λ symbol is used as an expression of lambda calculus (its original use), whereas in Heim and Kratzer the λ symbol is part of the meta-language, as an abbreviation for describing functions. One should carefully distinguish between these two ways of using it.
4 Here we are adopting the 'DP-hypothesis', where DP stands for 'Determiner Phrase', and we assume that the head of a phrase like the car is the determiner the. Since phrases like the car and proper names like Homer are of the same category, this means that Homer is also a DP. The difference is that proper names are 'intransitive', so they don't combine with an NP. We assume therefore that proper names are of category D, like articles, etc.
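The rewrite rules in (2) can be encoded as data, which makes the optionality conventions explicit. Expanding each optional or alternative element into a separate right-hand side is my encoding choice; the rules themselves are the ones above.

```python
# The fragment's rewrite rules: category -> list of possible right-hand
# sides, with (DP|AP|PP)-style options expanded out.
RULES = {
    "S":  [["DP", "VP"], ["Neg", "S"], ["S", "JP"]],
    "JP": [["J", "S"]],
    "VP": [["V"], ["V", "DP"], ["V", "AP"], ["V", "PP"]],
    "AP": [["A"], ["A", "PP"]],
    "DP": [["D"], ["D", "NP"]],
    "NP": [["N"], ["N", "PP"], ["A", "NP"]],
    "PP": [["P", "DP"]],
}

def licensed(parent, children):
    """Check whether one local tree (a parent and its daughters) is
    licensed by some rewrite rule of the fragment."""
    return list(children) in RULES.get(parent, [])
```

For instance, `licensed("S", ["DP", "VP"])` holds, while a VP with two verb daughters is correctly rejected.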

The terminal nodes of the syntax trees produced by these syntax rules may be populated by the following words:

(3) Lexicon
J: and, or
Neg: it is not the case that
V: smokes, loves, is
A: lazy, proud
N: drunkard, kid, baby, zebra
D: the, a, some, every, no
D: Bart, Homer, Maggie, somebody, everybody, nobody
P: of, with

Exercise 1. Which of the following strings are sentences of the fragment of English that we have defined (modulo sentence-initial capitalization)?

(a) Homer Simpson loves everybody.
(b) Some drunkard smokes every lazy zebra.
(c) Maggie is not a zebra.
(d) Homer Simpson is.
(e) Homer is.
(f) Homer is a baby zebra.
(g) Somebody is proud of the kid.
(h) A zebra loves proud of Bart.
(i) The proud zebra of Homer loves every lazy lazy lazy lazy drunkard.
(j) Bart smokes with nobody.
(k) Maggie and Bart are with Homer.
(l) Maggie is with Homer and Bart is with Homer.

Note: In the trees below, sometimes we "prune" non-branching nodes. For example, we might write:

DP
  Homer

instead of

DP
  D
    Homer

Now that we have defined the syntax of our fragment of English, we need to specify how the expressions generated by these syntax rules are interpreted. To do so, we will translate them into expressions of L λ . We will associate translations not only with words, but also with syntactic trees, so in general, translations go from trees to expressions of our logic. We can think of words as degenerate cases of trees. We will build up a theory of how these translations work by considering a series of examples.

5.1 Fun with Functional Application

5.1.1 Homer loves Maggie

Let us first consider how to analyze a simple transitive sentence like Homer loves Maggie. We will represent the meaning of the verb loves as follows:

(4) ⟪loves⟫ = λyλx . loves( y )(x )

The expression ⟪loves⟫ can be read "the translation of the word loves into L λ ". We refer to this mapping from a word to a meaning representation in the formal language as a LEXICAL ENTRY for the verb loves. Note that according to the way we have set things

up, where every argument corresponds to a λ-abstraction, we could simply have written loves on the right-hand side of the equals sign instead of λyλx . loves( y )(x ). But the longer style brings out the fact that we are dealing with a function of type ⟨e, ⟨e, t ⟩⟩ and is more in line with existing conventions in semantics, following Heim & Kratzer (1998).

Let us assume further that Homer is translated as h and that Maggie is translated as m.

• ⟪Homer⟫ = h
• ⟪Maggie⟫ = m

To put together the meanings of complex expressions, we depend primarily on the rule of Functional Application, which simply takes a function and applies it to an appropriate argument:

Composition Rule 1. Functional Application (FA)
Let γ be a tree whose only two subtrees are α and β. If ⟪α⟫ is of type ⟨σ, τ⟩ and ⟪β⟫ is of type σ, then:
⟪γ⟫ = ⟪α⟫(⟪β⟫)

For example, the tree [VP [V loves ] [NP Maggie ] ] has two subtrees, [V loves ] and [NP Maggie ]. Since the translation of the first one denotes a function that applies to the other, we can use FA to put together the meanings of these two expressions. So

⟪[VP [V loves ] [NP Maggie ] ]⟫ = ⟪[V loves ]⟫(⟪[NP Maggie ]⟫)

The next step is to give translations to [V loves ] and [NP Maggie ]. (We have specified the translation for loves but not for [V loves ].) For this, we will also need a rule for non-branching nodes.
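The translation procedure just described can be sketched as a small tree-walking function in Python. Trees are nested tuples, meanings are strings (for individuals) or curried functions, and the callable check stands in for type-driven application; all names here are illustrative stand-ins, not part of the fragment proper.

```python
# Lexical translations: individuals as constant names, the transitive
# verb as a curried function building a logical formula as a string.
LEX = {
    "Homer": "h",
    "Maggie": "m",
    "loves": lambda y: lambda x: f"loves({y})({x})",
}

def translate(tree):
    """Apply NN at unary nodes and FA at binary nodes."""
    if isinstance(tree, str):                 # terminal: look up the word
        return LEX[tree]
    if len(tree) == 2:                        # Non-branching Nodes (NN)
        return translate(tree[1])
    _, left, right = tree                     # Functional Application (FA)
    f, a = translate(left), translate(right)
    return f(a) if callable(f) else a(f)

s = ("S", ("DP", ("D", "Homer")),
          ("VP", ("V", "loves"), ("DP", ("D", "Maggie"))))
```

Running `translate(s)` composes loves with Maggie and then with Homer, yielding the β-reduced translation loves(m)(h).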

Composition Rule 2. Non-branching Nodes (NN)
If β is a tree whose only daughter is α, then ⟪β⟫ = ⟪α⟫.

Homer loves Maggie can then be analyzed as follows.

(5) S : t : loves(m)(h)
      DP : e : h
        Homer
      VP : ⟨e, t ⟩ : λx . loves(m)(x )
        V : ⟨e, ⟨e, t ⟩⟩ : λyλx . loves( y )(x )
          loves
        DP : e : m
          Maggie

Via Functional Application, the transitive verb loves combines with the object Maggie, and the VP combines with the subject Homer. The translation of the VP is an expression of type ⟨e, t ⟩, denoting a function from individuals to truth values. This applies to the denotation of Homer (an individual) to produce a truth value. Note that we have not shown the primary translations at every node, but rather have applied β-reduction in order to simplify the logical expressions wherever possible.

Exercise 2. For both of the following trees, give a fully β-reduced translation at each node. Give appropriate lexical entries for words that have not been defined above.

(a) S
      DP
        Marge
      VP
        V
          snores

(b) S
      DP
        Homer
      VP
        V
          respects
        DP
          Maggie

5.1.2 Homer is lazy

Now let us consider how to analyze a sentence with an adjective following is, such as Homer is lazy. The syntactic structure is as follows:

S
  DP
    Homer
  VP
    V
      is
    AP
      lazy

We will continue to assume that the proper name Homer is translated as the constant h, of type e. We can assume that lazy denotes a function of type ⟨e, t ⟩, the characteristic function of a set of individuals (those that are lazy). Let us use lazy as an abbreviation for a constant of type ⟨e, t ⟩, and translate lazy thus.

(6) ⟪lazy⟫ = λx . lazy(x )

Technically, according to the way we have set things up, we could write the same thing more simply by simply writing lazy. But the longer style brings out the fact that we are dealing with a function with one argument.

What is the meaning of is? Besides signaling present tense, it does not seem to accomplish more than to link the predicate 'lazy' with the subject of the sentence. Since we have not started dealing with tense yet, we will ignore the former function and focus on the latter. We can capture the fact that is connects the predicate to the subject by treating it as an identity function, a function that returns whatever it takes in as input. In this case, is takes in a function of type ⟨e, t ⟩, and returns that same function.

(7) ⟪is⟫ = λP . P

This implies that is denotes a function that takes as its first argument another function P , where P is of type ⟨e, t ⟩, and returns P . With these rules, we will end up with the following analysis for the sentence Homer is lazy:

(8) S : t : lazy(h)
      DP : e : h
        Homer
      VP : ⟨e, t ⟩ : λx . lazy(x )
        V : ⟨⟨e, t ⟩, ⟨e, t ⟩⟩ : λP . P
          is
        A : ⟨e, t ⟩ : λx . lazy(x )
          lazy
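The identity analysis of is in (7) is easy to sanity-check in Python: treating lazy as a characteristic function (with an invented membership set), applying the identity function to it and then to the subject reproduces the derivation in (8).

```python
# Characteristic function for "lazy"; the set of lazy individuals is
# an invented fact for this illustration.
LAZY = {"h"}
lazy = lambda x: x in LAZY

# (7): "is" as the identity function on predicates, λP.P
is_ = lambda P: P

# "Homer is lazy": is(lazy) applied to the subject h
result = is_(lazy)("h")
```

Since the identity function just passes lazy through, the sentence value is simply whether h is in the lazy set.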

Each node shows the syntactic category, the semantic type, and a fully β-reduced translation to lambda calculus. Functional Application is used at all of the branching nodes (S and VP), and Non-branching Nodes is used at all of the non-branching non-terminal nodes (DP, V, and A). The individual lexical entries that we have specified are used at the terminal nodes (Homer, is, and lazy).

Exercise 3. Provide an analysis of the following sentence that captures the fact that it contradicts Homer is lazy. Assume the syntax rules have been changed so as to allow this structure.

S
  DP
    Homer
  VP
    V
      is
    NegP
      Neg
        not
      AP
        A
          lazy

5.1.3 Homer is with Maggie

Like adjectives, prepositional phrases can also serve as predicates, as in, for example, Homer is with Maggie. Let us introduce the constant with, a function of type ⟨e, ⟨e, t ⟩⟩, and translate with as follows:

(9) ⟪with⟫ = λyλx . with( y )(x )

Here is an overview of how the derivation will go. Via Functional Application, the preposition with combines with its object Maggie, and the resulting PP combines with is to form a VP which

combines with the subject Homer. This applies to the denotation of Homer to produce a truth value. The translation of the VP is an expression of type ⟨e, t ⟩, denoting a function from individuals to truth values.

S : t
  DP : e
    Homer
  VP : ⟨e, t ⟩
    V : ⟨⟨e, t ⟩, ⟨e, t ⟩⟩
      is
    PP : ⟨e, t ⟩
      P : ⟨e, ⟨e, t ⟩⟩
        with
      DP : e
        Maggie

Exercise 4. Derive the translation into L λ for Homer is with Maggie by giving a fully β-reduced translation for each node.

Exercise 5. Suppose model M 1 is defined such that the following holds:

JwithK M 1 = Bart → [ Bart → 0, Maggie → 0, Homer → 1 ]
            Maggie → [ Bart → 0, Maggie → 0, Homer → 0 ]
            Homer → [ Bart → 1, Maggie → 0, Homer → 0 ]
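A curried function table like the one in Exercise 5 can be transcribed as nested dictionaries, which makes looking up values mechanical. The transcription convention here is mine: the outer key is the object argument y, the inner key the subject x, matching ⟪with⟫ = λyλx . with( y )(x ).

```python
# J with K in M1 as nested dicts: with_M1[y][x] corresponds to with(y)(x).
with_M1 = {
    "Bart":   {"Bart": 0, "Maggie": 0, "Homer": 1},
    "Maggie": {"Bart": 0, "Maggie": 0, "Homer": 0},
    "Homer":  {"Bart": 1, "Maggie": 0, "Homer": 0},
}

# A sentence of the form with(m)(d) is evaluated by applying to the
# object first, then the subject:
value = with_M1["Maggie"]["Homer"]
```

Querying the table this way mirrors how the curried denotation consumes its two arguments in sequence.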

and suppose that JdK M 1 = Homer and JmK M 1 = Maggie. What truth value does the translation for Homer is with Maggie have in M 1 ? Explain your answer.

5.1.4 Homer is proud of Maggie

Like prepositions, adjectives can denote functions of type ⟨e, ⟨e, t ⟩⟩. Proud is an example: in Homer is proud of Maggie, the adjective proud expresses a relation that holds between Homer and Maggie. We can capture this by assuming that proud translates to the constant proud of type ⟨e, ⟨e, t ⟩⟩, denoting a function that takes two arguments, first a potential object of pride (such as Maggie), then a potential bearer of such pride (e.g. Homer), and then returns true if the pride relation holds between them.

In contrast to with, the preposition of does not seem to signal a two-place relation in this context. We therefore assume that of is a function word like is, and also denotes an identity function. Unlike is, however, we will treat of as an identity function that takes an individual and returns an individual, so it will be of category ⟨e, e ⟩.

(10) ⟪of⟫ = λx . x

So the adjective phrase proud of Maggie will have the following structure:

AP : ⟨e, t ⟩
  A : ⟨e, ⟨e, t ⟩⟩
    proud
  PP : e
    P : ⟨e, e ⟩
      of
    DP : e
      Maggie

Exercise 6. Give a lexical entry for proud and a fully β-reduced form of the translation at each node for Homer is proud of Maggie.

Exercise 7. What would go wrong if we were to assume that of was of type ⟨e, ⟨e, t ⟩⟩, like with?

5.1.5 Homer is a drunkard

Let us consider Homer is a drunkard. The noun drunkard can be analyzed as an ⟨e, t ⟩ type property like lazy, the characteristic function of the set of individuals who are drunkards. The indefinite article a is another function word that appears to be semantically vacuous, at least on its use in the present context. We will assume that a, like is, denotes a function that takes an ⟨e, t ⟩-type predicate and returns it.

(11) ⟪a⟫ = λP . P

With these assumptions, the derivation will go as follows.

(12) S : t
      DP : e
        Homer
      VP : ⟨e, t ⟩
        V : ⟨⟨e, t ⟩, ⟨e, t ⟩⟩
          is
        DP : ⟨e, t ⟩
          D : ⟨⟨e, t ⟩, ⟨e, t ⟩⟩
            a
          NP : ⟨e, t ⟩
            drunkard

Exercise 8. Give fully β-reduced translations at each node of the tree for Homer is a drunkard, following the style we have developed so far.

Exercise 9. Can we treat a as ⟨⟨e, t ⟩, ⟨e, t ⟩⟩ in a sentence like A drunkard loves Maggie? Why or why not?

Exercise 10. Assume that radical and communist are both of type ⟨e, t ⟩. Is it possible to assign truth conditions to the following sentence using those assumptions? Why or why not?

S
  DP
    Obama
  VP
    V
      is
    DP
      D
        a
      NP
        A
          radical
        NP
          N
            communist

Exercise 11. Assume that the ditransitive verb introduce is of type ⟨e, ⟨e, ⟨e, t ⟩⟩⟩. Give a lexical entry for introduce of this type and analyze the following tree. You will also need to assume a lexical entry for to that works along with your assumption about introduce and the structure of the syntax tree.

S
  DP
    Obama
  VP
    V'
      V'
        V
          introduced
        DP
          Putin
      PP
        P
          to
        DP
          Cher

Exercise 12. In some languages, there is a morpheme (e.g. Middle Voice in Ancient Greek, Passive Voice in Finnish, a reflexivizing affix in Kannada, etc.) that attaches to the verb stem and reduces its arity by one. Let us take the following imaginary morphemes self1, self2, and self3, and give a denotation for each of these morphemes.

(a) For the sentence Carlos self1-shaves, make the structure below yield the meaning 'Carlos shaves himself' by supplying the denotation of self1, assuming the syntactic structure given.

S
  Carlos
  VP
    self1
    shaves

(b) For the sentence Carlos self2-introduced Paco, make the structure below yield the meaning 'Carlos introduced Paco to Carlos (himself)' by supplying the denotation of self2.

S
  Carlos
  VP
    V0
      self2
      introduced
    Paco

(c) For the sentence Carlos self3-introduced Paco,

make the structure below yield the meaning 'Carlos introduced Paco to Paco (himself)' by supplying the denotation of self3.

S
  Carlos
  VP
    V0
      self3
      introduced
    Paco

Make sure that your denotations work not just for sentences involving Carlos and Paco, but for arbitrary proper names. (Exercise due to Maribel Romero.)

5.1.6 Toy fragment

So far, we have developed the following toy fragment of English, consisting of a set of syntax rules, a lexicon, a set of composition rules, and a set of lexical entries.

Syntax
S → DP VP
S → Neg S
VP → V (DP∣AP∣PP)
AP → A (PP)
DP → D (NP)
NP → N (PP)
NP → A NP
PP → P DP

baby. zebra D: the. with Composition Rules • Functional Application (FA) Let γ be a tree whose only two subtrees are α and β. somebody. Maggie. τ⟩ and ⟪β⟫ is of type σ. no D: Bart. proud N: drunkard. P • ⟪with⟫ = λyλx . If ⟪α⟫ is of type ⟨σ. a. x • ⟪a⟫ = λP . loves. then: ⟪γ⟫ = ⟪α⟫(⟪β⟫) • Non-branching Nodes (NN) If β is a tree whose only daughter is α. lazy(x ) • ⟪is⟫ = λP . some. loves( y )(x ) • ⟪lazy⟫ = λx . nobody P: of. P • ⟪of⟫ = λx . Homer. is A: lazy. kid.128 Translating to lambda calculus Lexicon Neg: it is not the case that V: smokes. then ⟪β⟫ = ⟪α⟫. Lexical entries • ⟪Homer⟫ = d • ⟪Maggie⟫ = m • ⟪loves⟫ = λyλx . everybody. with( y )(x ) . every.

Exercise 13. Extend this fragment to assign representations in L λ to the following sentences. You may have to modify or add to the syntax rules, the lexicon, the composition rules, or the lexical entries. (Although you are free to add to the composition rules, the exercises can be solved without doing so.) For each sentence, give a parse tree with a fully beta-reduced representation at each node.

1. It is not the case that Maggie smokes.
2. Homer smokes and drinks.
3. Homer smokes and Maggie drinks.
4. Homer is a fan of Maggie.
5. Homer is Maggie's fan.

5.2 Predicate Modification

5.2.1 Homer is a lazy drunkard

Now let us consider the sentence Homer is a lazy drunkard. This one presents a bit of a problem. According to our lexical entry for lazy above, lazy denotes a function of type ⟨e, t ⟩, and so does drunkard. These two expressions are sisters in the tree, but neither one denotes a function that has the denotation of the other in its domain. So we can't use Functional Application to combine them, and so far, we have no other rules for combining expressions together.

(13) S : t
      DP : e
        Homer
      VP : ⟨e, t ⟩
        V : ⟨⟨e, t ⟩, ⟨e, t ⟩⟩
          is
        DP
          D : ⟨⟨e, t ⟩, ⟨e, t ⟩⟩
            a
          NP
            A : ⟨e, t ⟩
              lazy
            NP : ⟨e, t ⟩
              drunkard

If we want to use Functional Application here, we need lazy to be a function of type ⟨⟨e, t ⟩, ⟨e, t ⟩⟩. We can do this using a type-shifting rule which introduces a possible translation of type ⟨⟨e, t ⟩, ⟨e, t ⟩⟩ for every translation of type ⟨e, t ⟩ (Partee, 1995, p. 29). There are two technical ways we could implement type-shifting. One would be to specify that, in addition to the primary translation for lazy of type ⟨e, t ⟩, there is in addition a secondary translation of type ⟨⟨e, t ⟩, ⟨e, t ⟩⟩. Another way to do it would be to assume that type-shifters are present in the syntax tree. So the type-shifted version of lazy will technically be like this syntactically:

(14) A : ⟨⟨e, t ⟩, ⟨e, t ⟩⟩
      MOD
      A : ⟨e, t ⟩
        lazy

Under this second style, we don't have to bother with the distinction between primary and secondary translations. This is the style we will follow here, and it is consistent with how type-shifting is typically handled in the Lambda Calculator. However, we will use a special notation for trees with type-shifters in order to capture the intuition that the type-shifting operation induces a transformation of the meaning:

(15) A : ⟨⟨e, t ⟩, ⟨e, t ⟩⟩
      ⇑MOD
      A : ⟨e, t ⟩
        lazy

The type-shifting rule that we use is defined as follows:

Type-Shifting Rule 1. Predicate-to-modifier shift (MOD)
If ⟪α⟫ is of category ⟨e, t ⟩, then:
⟪MOD α⟫ = λP λx . [⟪α⟫(x ) ∧ P (x )]
(as long as P and x are not free in ⟪α⟫; in that case, use different variables of the same type).

By this rule, we can derive that:
⟪MOD lazy⟫ = λP λx . [lazy(x ) ∧ P (x )]
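The MOD shift can be sketched as a higher-order Python function: it takes a characteristic function and returns a modifier that conjoins it with another predicate. The membership sets are invented for testing.

```python
# Invented characteristic functions for two ⟨e,t⟩ predicates.
LAZY, DRUNKARD = {"h", "b"}, {"h", "m"}
lazy = lambda x: x in LAZY
drunkard = lambda x: x in DRUNKARD

# Predicate-to-modifier shift: MOD(A) = λP.λx.[A(x) ∧ P(x)]
MOD = lambda A: lambda P: lambda x: A(x) and P(x)

lazy_drunkard = MOD(lazy)(drunkard)
```

Only individuals in both sets satisfy the shifted-and-applied predicate, matching the derived ⟪MOD lazy⟫ above.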

so Homer is a lazy drunkard gets the following analysis:

(16) S : t : lazy(h) ∧ drunkard(h)
      DP : e : h
        Homer
      VP : ⟨e, t ⟩ : λx . [lazy(x ) ∧ drunkard(x )]
        V : ⟨⟨e, t ⟩, ⟨e, t ⟩⟩ : λP . P
          is
        DP : ⟨e, t ⟩ : λx . [lazy(x ) ∧ drunkard(x )]
          D : ⟨⟨e, t ⟩, ⟨e, t ⟩⟩ : λP . P
            a
          NP : ⟨e, t ⟩ : λx . [lazy(x ) ∧ drunkard(x )]
            A : ⟨⟨e, t ⟩, ⟨e, t ⟩⟩ : λP λx . [lazy(x ) ∧ P (x )]
              ⇑MOD
                lazy
            NP : ⟨e, t ⟩ : drunkard
              drunkard

This is essentially Montague's strategy for dealing with adjectives that modify nouns. In fact, Montague assumes that all adjectives are ⟨⟨e, t ⟩, ⟨e, t ⟩⟩. For a sentence like Homer is lazy, Montague assumes that the adjective combines with a silent noun. We would have to augment our syntax rules to allow for this, but if we did, then it would become mysterious why *Homer is lazy drunkard is not grammatical in English, and why we can't have silent nouns all over the place. So it seems necessary to assume that adjectives can at least sometimes be of type ⟨e, t ⟩.

Exercise 14. There is another possible type-shifting analysis, where the version with the more complicated type (⟨⟨e, t ⟩, ⟨e, t ⟩⟩) is basic, and the predicative, ⟨e, t ⟩ version is derived through a type-shifting rule. Specify an appropriate such rule.5

Instead of using a type-shifting rule to interpret attributive adjectives, another strategy is to let go of Frege's conjecture (that Functional Application is basically the only composition rule), and accept another composition rule. This rule would take two predicates of type ⟨e, t ⟩, and combine them into a new predicate of type ⟨e, t ⟩. The new predicate would hold of anything that satisfied both of the old predicates. This is how Predicate Modification is defined:

Composition Rule 3. Predicate Modification (PM)
If ⟪α⟫ and ⟪β⟫ are of type ⟨e, t ⟩, and γ is a tree whose only two subtrees are α and β, then:
⟪γ⟫ = λu . [⟪α⟫(u ) ∧ ⟪β⟫(u )]
where u is a variable of type e that does not occur free in ⟪α⟫ or ⟪β⟫.

This gives us the following derivation for the NP:

(17) NP : ⟨e, t ⟩ : λx . [lazy(x ) ∧ drunkard(x )]
      A : ⟨e, t ⟩ : lazy
        lazy
      NP : ⟨e, t ⟩ : drunkard
        drunkard
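Predicate Modification, unlike MOD, is a binary composition rule: it takes two predicates at once. A minimal Python sketch, again with invented membership facts, makes the contrast with the unary MOD shift visible.

```python
def PM(f, g):
    """Predicate Modification: λu.[f(u) ∧ g(u)] for two ⟨e,t⟩ predicates."""
    return lambda u: f(u) and g(u)

# Invented facts, as before.
lazy = lambda x: x in {"h"}
drunkard = lambda x: x in {"h", "m"}

lazy_drunkard = PM(lazy, drunkard)
```

The result is the same conjoined predicate as in (17); the difference is purely architectural, a new composition rule rather than a type shift on one daughter.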

g. and a fully β-reduced translation.g. texas) is type e.134 Translating to lambda calculus Exercise 15. Consider the sentence Maggie is a lazy baby. fond-of( y. in( y. and any constant that ap- pears with an argument list of length 2 (e.g. the syntac- tic category. city(x ) (d) texas (e) λy . Give two different analyses of the sentence. the type. Give your analysis in the form of a tree that shows for each node. and f is a variable of type ⟨e. • Any constant that appears without an argument list (e. x ) Assume: • x and y are variables of type e. (Feel free to use the Lambda Calculator for this. and one using Predicate Modification. f (g) λxλy . city) is a unary predicate. one using the Predicate- to-modifier shift. texas) (f) λ f . Identify the types of the following expressions: (a) λxλy . • Any constant that appears with an argument list of length 1 (e. in( y. . x ) (b) λx . t ⟩. in).) Exercise 16. x (c) λx .

The following exercises are adapted from Heim & Kratzer (1998), via Lucas Champollion's adaptation of them for the Lambda Calculator. In addition to the ones given above, adopt the following lexical entries, using the same assumptions about types as in the previous exercise:

1. ⟪cat⟫ = λx . cat(x )
2. ⟪city⟫ = λx . city(x )
3. ⟪gray⟫ = λx . gray(x )
4. ⟪gray2 ⟫ = λ f λx . gray(x ) ∧ f (x )
5. ⟪in⟫ = λxλy . in( y, x )
6. ⟪in2 ⟫ = λyλ f λx . f (x ) ∧ in( y )(x )
7. ⟪fond⟫ = λxλy . fond-of(x )( y )
8. ⟪fond2 ⟫ = λyλ f λx . f (x ) ∧ fond-of( y )(x )
9. ⟪Joe⟫ = joe
10. ⟪Texas⟫ = texas
11. ⟪Kaline⟫ = kaline
12. ⟪Lockhart⟫ = lockhart

Exercise 17. For each of the trees below, provide a fully β-reduced translation at each node, and state the type of the expression.

(a) S
      DP
        Joe
      VP
        is
        PP
          P
            in
          DP
            Texas

(b) S
      DP
        Joe
      VP
        is
        AP
          A
            fond
          PP
            of
            DP
              Kaline

(c) S
      DP
        Kaline
      VP
        is
        DP
          D
            a
          N
            cat

(d) S
      DP
        Lockhart
      VP
        is
        DP
          D
            a
          NP
            city
            PP
              P
                in
              DP
                Texas

(e) S
      DP
        Kaline
      VP
        is
        DP
          D
            a
          NP
            NP
              AP
                A
                  gray
              N
                cat
            AP
              A
                fond
              PP
                P
                  of
                DP
                  Joe
            PP
              P
                in
              DP
                Texas

(f) S
      DP
        Kaline
      VP
        is
        DP
          D
            a
          N'
            N'
              AP
                A
                  gray2
              N
                cat
            AP
              A
                fond2
              PP
                P
                  of
                DP
                  Joe
            PP
              P
                in2
              DP
                Texas
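The "2"-variant entries used in tree (f) are of modifier type ⟨⟨e,t⟩,⟨e,t⟩⟩ (or, for in2 and fond2, relational versions thereof): they take the nominal predicate as an argument rather than combining with it by Predicate Modification. A sketch with invented facts:

```python
# Invented facts about the model.
GRAY, CAT = {"kaline"}, {"kaline"}
gray = lambda x: x in GRAY
cat = lambda x: x in CAT

# ⟪gray2⟫ = λfλx.gray(x) ∧ f(x): a modifier applied to the noun by FA.
gray2 = lambda f: lambda x: gray(x) and f(x)

gray_cat = gray2(cat)
```

Here gray2 composes with cat by ordinary Functional Application, so no extra composition rule is needed, which is exactly the point of the "2" entries.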

Exercise 18. Give an appropriate denotation for is2 and derive a fully β-reduced translation for each node in the following trees. Treat Julius and Amherst as ordinary proper names.

S
  DP
    Julius
  VP
    V
      is2
    AP
      A
        gray2

S
  DP
    Julius
  VP
    V
      is2
    PP
      P
        in2
      DP
        Amherst

Exercise 19. Homer is a former drunkard does not entail Homer is a drunkard, and *Homer is former. In this sense, former is a non-intersective modifier. Which of the following are non-intersective modifiers? Give examples to support your point.

(a) yellow
(b) round
(c) alleged
(d) future

(e) intelligent
(f) good
(g) mere

Exercise 20. As discussed by Siegel (1976), in Russian, there is a morphological alternation between two forms of adjectives, a LONG FORM and a SHORT FORM. For example, the adjective 'good', which is feminine, is xoroša in the short form and xorošaja in the long form. Similarly, 'talented', which is also feminine, is talantliva in the short form and talantivaja in the long form. Masculine 'intelligent' is umen in the short form and umnyj in the long form. The two forms have different syntactic distributions. In attributive position (modifying a noun), only the long form is possible:

(18) xorošaja teorija
good-LONG theory
'good theory'

A version with the short form, *xoroša teorija, would be ungrammatical. But in predicative positions, both forms are possible:

(19) Naša molodež' talantlivaja i truduljubivaja
our youth talented-LONG and industrious-LONG
'Our youth is talented and industrious'

(20) Naša molodež' talantliva i truduljubiva
our youth talented-SHORT and industrious-SHORT
'Our youth is talented and industrious'

Construct an analysis (including lexical entries, syntactic rules, any type-shifting rules you wish to assume, and perhaps additional constraints) that accounts for this contrast.

5.3 Quantifiers

5.3.1 Quantifiers: not type e

Previously, we analyzed indefinite descriptions like a baby in sentences like Maggie is a baby. We have analyzed the meaning of a as an identity function on predicates, so a baby denotes a predicate (i.e., a function of type ⟨e,t⟩). But we still cannot account for the use of a baby as the subject of a sentence or the object of a transitive verb, as in the following sentences:

(21) a. A baby loves Homer.
b. Homer loves a baby.

This leaves us in the following predicament. In a sentence like A baby loves Homer, the VP loves Homer denotes a predicate of type ⟨e,t⟩, and a baby denotes a predicate of type ⟨e,t⟩ as well:

[S ???? [DP ⟨e,t⟩ [D ⟨⟨e,t⟩,⟨e,t⟩⟩ a] [N ⟨e,t⟩ baby]] [VP ⟨e,t⟩ [V ⟨e,⟨e,t⟩⟩ loves] [DP e Homer]]]

The only composition rule that we have for combining two expressions of type ⟨e,t⟩ is Predicate Modification. But this yields a predicate, and it doesn’t make sense to analyze a sentence like A baby loves Homer as denoting a predicate, because the sentence is

something that can be true or false. Its semantic value in a model should be true or false, depending on whether or not a baby loves Homer. A sentence is something that can be true or false, so the S node should be of type t. We have been assuming that the VP denotes a function of type ⟨e,t⟩. So what type is the subject in these examples?

[S t [DP ?] [VP ⟨e,t⟩ is lazy]]

We also lack an analysis for the following sentences:

(22) Somebody is lazy.
(23) Everybody is lazy.
(24) Nobody is lazy.
(25) {Some, every, no} linguist is lazy.
(26) {Few, most, many, several, at least one, at most one, more than two} linguists are lazy.

We could arrive at type t at the S node by treating these expressions as type e, like Homer, Maggie, and the baby. But there is considerable evidence that these expressions cannot be treated as type e. The analysis of these expressions as type e makes a number of predictions that are not borne out. First, an individual-denoting expression should validate subset-to-superset inferences, for example:

(27) John came yesterday morning. Everything that came yesterday morning came yesterday. Therefore, John came yesterday.

This is a valid inference if the subject, like John, denotes an individual. Here is why. If α is of type e,

then α came yesterday morning is true if the individual denoted by α is among the things that came yesterday morning. If that is true, then that individual is among the things that came yesterday. So if the first sentence is true, then the second sentence is true.

Some of the expressions in (22)–(26) fail to validate subset-to-superset inferences. For example:

(28) At most one letter came yesterday morning. Therefore, at most one letter came yesterday.

This inference is not valid, so at most one letter must not denote an individual.

Exercise 21. Which of the expressions in (25) validate subset-to-superset inferences? Give examples.

The second property that expressions of type e should have, and that these expressions do not always have, is that they validate the law of contradiction. In logic, the law of contradiction is that P ∧ ¬P is always false. In this context, the prediction is that sentences like the following should be self-contradictory:

(29) Mount Rainier is on this side of the border, and Mount Rainier is on the other side of the border.

This sentence is contradictory because Mount Rainier denotes an individual. Here is why. If α denotes an individual, then α is on this side of the border is true if and only if the individual that α denotes is on this side of the border. Nothing that is on this side of the border is on the other side of the border. This means that this individual is not on the other side of the border. So the second conjunct must be false, and so the conjunction (under a standard analysis of and) can never be true. But the following sentence is not contradictory:
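The contrast between John and at most one letter can be made concrete in a small model. The following Python sketch is not from the text: it models predicates as sets over a toy domain and checks whether a subject validates subset-to-superset inferences (all names and the domain are invented for illustration).

```python
# Toy model: predicates are sets of individuals; a type-e subject like 'John'
# is true of a predicate iff the individual is in it, while 'at most one
# letter' counts how many letters are in the predicate.

def john(P):
    # type-e subject: true iff the individual denoted is in P
    return "john" in P

def at_most_one_letter(P):
    letters = {"letter1", "letter2"}
    return len(letters & P) <= 1

came_morning = {"john", "letter1"}           # things that came yesterday morning
came = came_morning | {"letter2"}            # superset: things that came yesterday

# John validates the inference: true of the subset, still true of the superset.
assert john(came_morning) and john(came)

# 'At most one letter' does not: true of the subset, false of the superset.
assert at_most_one_letter(came_morning)
assert not at_most_one_letter(came)
```

Because any type-e denotation behaves like `john` here, a subject that fails this inference pattern cannot be of type e, which is exactly the argument in the text.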

(30) More than two mountains are on this side of the border, and more than two mountains are on the other side of the border.

So more than two mountains must not be type e.

Exercise 22. Which of the expressions in (25) fail to validate the law of contradiction? Give examples.

Exercise 23. This sentence is not contradictory: At most two mountains are on this side of the border, and at most two mountains are on the other side of the border. This proves that at most two mountains is not an expression of type e. Explain why. (Your answer could take the form, “If this expression were of type e, we would expect ..., but instead we find the opposite: ....”)

Finally, an expression of type e should validate the law of the excluded middle. The law of the excluded middle says that either P is true, or ¬P is true; it can’t be the case that neither is true. Any expression of type e will yield a tautology in a sentence like this. For example:

(31) I am over 30 years old, or I am under 40 years old.

This is a tautology, and that is because I is an expression of type e. Here is why. If α is of type e, then α is over 30 years old means that the individual that α denotes is over 30 years old, and α is under 40 years old means that the individual is under 40 years old. Everything is either over 30 years old or under 40 years old. Since everything satisfies at least one of these criteria, the disjunction (under a standard analysis of or) cannot fail to be true. But this sentence is not a tautology:

(32) Every woman in this room is over 30 years old, or every woman in this room is under 40 years old.

So every woman must not be of type e.

Exercise 24. Which of the expressions in (22)–(26) fail to validate the law of the excluded middle? Give examples.

5.3.2 Solution: Quantifiers

Let us recap. We have a bunch of expressions that are not of type e, and we want them to combine with the VP to produce something of type t. We assume that a VP denotes a predicate (type ⟨e,t⟩) and that a sentence denotes a truth value (type t). What can we do?

The solution is to feed the VP as an argument to the subject DP. A DP like something will denote a function that takes a predicate as an argument and returns a truth value. Its type will therefore be: ⟨⟨e,t⟩,t⟩. This is the type of a quantifier.

We can define something as a function that takes as input a predicate and returns true if and only if there is at least one satisfier of the predicate:

(33) ⟪something⟫ = λP.∃x.P(x)

In contrast, the function denoted by ⟪nothing⟫ returns true if there is nothing satisfying the predicate in the relevant model; λP.¬∃x.P(x) denotes a predicate that holds of a predicate over individuals if it has no satisfiers:

(34) ⟪nothing⟫ = λP.¬∃x.P(x)

⟪Everything⟫ returns true if everything satisfies the predicate:

(35) ⟪everything⟫ = λP.∀x.P(x)

You can think of quantifiers as predicates of predicates.
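The entries in (33)–(35) can be mimicked directly over a finite domain. This sketch is not part of the text: predicates are Boolean functions on individuals, and the domain and predicate names are invented for illustration.

```python
# Quantifiers as predicates of predicates: each takes a predicate P
# (a function from individuals to bool) and returns a truth value.

DOMAIN = {"homer", "maggie", "bart"}

something  = lambda P: any(P(x) for x in DOMAIN)       # (33) λP.∃x.P(x)
nothing    = lambda P: not any(P(x) for x in DOMAIN)   # (34) λP.¬∃x.P(x)
everything = lambda P: all(P(x) for x in DOMAIN)       # (35) λP.∀x.P(x)

lazy = lambda x: x in {"homer", "bart"}   # an invented ⟨e,t⟩ predicate

assert something(lazy)          # at least one satisfier of 'lazy'
assert not nothing(lazy)
assert not everything(lazy)     # maggie is not lazy in this model
```

Note the direction of application: the quantifier consumes the VP denotation, not the other way around, which is how the S node reaches type t.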

Using Functional Application (which, as you may recall, does not care about the order of the arguments), the quantifier will take the denotation of the VP as an argument and yield a truth value, thus:

[S t [DP ⟨⟨e,t⟩,t⟩ everything] [VP ⟨e,t⟩ [V vanished]]] vs. [S t [DP e Maggie] [VP ⟨e,t⟩ [V vanished]]]

Now what about determiners like every, no, and some? These should denote functions that take the denotation of a noun phrase and return a quantifier, because we want every girl to function in the same way as everyone. The input (e.g. girl) is type ⟨e,t⟩, and the output is a quantifier, type ⟨⟨e,t⟩,t⟩. So the type of these kinds of determiners will be:

⟨⟨e,t⟩,⟨⟨e,t⟩,t⟩⟩

In particular, these determiners can be defined as follows:

(36) ⟪some⟫ = λPλQ.∃x.[P(x) ∧ Q(x)]
(37) ⟪no⟫ = λPλQ.¬∃x.[P(x) ∧ Q(x)]
(38) ⟪every⟫ = λPλQ.∀x.[P(x) → Q(x)]

These will yield analyses like the following:
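The curried determiner entries (36)–(38) translate almost word for word into Python. The following is an illustrative sketch only (toy domain and predicate names are invented), with the conditional P(x) → Q(x) rendered as not P(x) or Q(x).

```python
# Determiners as curried functions: restrictor P first, then scope Q.

DOMAIN = {"homer", "maggie", "lisa"}

some  = lambda P: lambda Q: any(P(x) and Q(x) for x in DOMAIN)       # (36)
no    = lambda P: lambda Q: not any(P(x) and Q(x) for x in DOMAIN)   # (37)
every = lambda P: lambda Q: all(not P(x) or Q(x) for x in DOMAIN)    # (38)

baby     = lambda x: x in {"maggie"}
vanished = lambda x: x in {"maggie", "lisa"}

# 'Some baby vanished' / 'Every baby vanished' are true in this model;
# 'No baby vanished' is false.
assert some(baby)(vanished)
assert every(baby)(vanished)
assert not no(baby)(vanished)
```

Applying a determiner to its restrictor (`some(baby)`) yields exactly a quantifier in the sense above: a function from predicates to truth values.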

[S t: ∀x.[doubt(x) → vanished(x)]
  [DP ⟨⟨e,t⟩,t⟩: λQ.∀x.[doubt(x) → Q(x)]
    [D ⟨⟨e,t⟩,⟨⟨e,t⟩,t⟩⟩: λPλQ.∀x.[P(x) → Q(x)] every]
    [NP ⟨e,t⟩: doubt doubt]]
  [VP ⟨e,t⟩: vanished vanished]]

Historical note: Treating quantifiers as type ⟨⟨e,t⟩,t⟩, i.e. as relations between two sets, has roots in an influential paper by Barwise & Cooper (1981). In that paper, Barwise and Cooper argue that first-order logic is not sufficient for expressing the meanings of certain quantifiers in English, including most, as first-order logic does not have a means of expressing the concept ‘more than half’. They offered a more general theory of quantificational expressions, covering the full range of so-called GENERALIZED QUANTIFIERS.6

6 Recent theories of words like most, building especially on work by Hackl (2009), treat most as the superlative form of words like many. This approach reopens the question of what the existence of quantifiers like most signifies for linguistic theory.

Exercise 25. Give an analysis of A baby loves Homer using Functional Application and Non-Branching Nodes. Your analysis should take the form of a tree, specifying at each node the syntactic category, the semantic type, and a fully β-reduced translation to Lλ.
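Barwise and Cooper’s point about most becomes vivid when quantifiers are viewed as relations between sets: ‘more than half’ compares the cardinalities of two sets, which the curried first-order entries in (36)–(38) never need to do. A sketch (not from the text; toy sets invented for illustration):

```python
# 'most' as a relation between the restrictor set P and the scope set Q:
# true iff more than half of the P's are Q's, i.e. |P ∩ Q| > |P − Q|.

def most(P, Q):
    return len(P & Q) > len(P - Q)

linguists = {"al", "bo", "cy"}
lazy      = {"al", "bo"}

assert most(linguists, lazy)          # 2 of 3 linguists are lazy
assert not most(linguists, {"al"})    # 1 of 3 is not more than half
```

Nothing in this definition reduces to a fixed first-order formula over individuals, which is the informal content of Barwise and Cooper’s inexpressibility claim.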

The sentence should be true in any model where there is some individual that is both a baby and someone who loves Homer. You may have to introduce a new lexical entry for a.

Exercise 26. For each of the following trees, give the semantic type and a completely β-reduced translation at each node. Give appropriate lexical entries for words that have not been defined above, following the style we have developed:

• Adjectives, non-relational common nouns, and intransitive verbs are type ⟨e,t⟩.
• Proper names are type e. (You can treat onions as type e as well.)
• Transitive verbs are type ⟨e,⟨e,t⟩⟩.
• Quantificational DPs are type ⟨⟨e,t⟩,t⟩.
• Quantifiers are type ⟨⟨e,t⟩,⟨⟨e,t⟩,t⟩⟩.

The lexical entries should be assigned in a way that captures what a model should be like if the sentence is true. For example, Nobody likes onions should be predicted to be true in a model such that no individual stands in the ‘like’ relation to onions.

(a) [S [DP Everybody] [VP [V poops]]]

(b)
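The type inventory above supports a purely mechanical check of whether two sister nodes can combine by Functional Application. The following sketch is illustrative only (the encoding of types as nested tuples is an assumption, not anything from the text).

```python
# Types: 'e', 't', or a pair (sigma, tau) encoding ⟨σ,τ⟩.

E, T = "e", "t"
ET = (E, T)

def apply_types(a, b):
    """Result type of combining two sisters via Functional Application,
    or None if neither sister can take the other as its argument."""
    for fn, arg in ((a, b), (b, a)):   # FA does not care about argument order
        if isinstance(fn, tuple) and fn[0] == arg:
            return fn[1]
    return None

# [S [DP everybody ⟨⟨e,t⟩,t⟩] [VP poops ⟨e,t⟩]] composes to type t:
assert apply_types((ET, T), ET) == T

# A quantificational DP cannot combine directly with a transitive
# verb of type ⟨e,⟨e,t⟩⟩ — the type-mismatch behind object quantifiers:
assert apply_types((ET, T), (E, ET)) is None
```

A checker like this is a useful sanity test when working through the trees in Exercise 26 node by node.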

[S [DP Somebody] [VP [V hugged] [DP Merkel]]]

(c) [S [DP Everyone] [VP [V is] [AP [A afraid] [PP [P of] [DP Obama]]]]]

(d) [S [DP Nobody] [VP [V likes] [DP onions]]]

(e) [S [DP [D Some] [NP [N guy]]] [VP [V hugged] [DP Putin]]]

Exercise 27. How does the kind of treatment of quantificational expressions given in the preceding discussion account for these facts:

(a) More than two cats are indoors and more than two cats are outdoors is not a contradiction.
(b) Everybody here is over 30 or everybody here is under 40 is not a tautology.

Exercise 28. In the early 70’s, cases of VP coordination as in Sam smokes or drinks were analyzed using CONJUNCTION REDUCTION, a transformational rule that deletes the subject of the second clause under identity with the subject in the first clause, so this sentence would underlyingly be Sam smokes or Sam drinks.

1. What translation into Lλ would the conjunction reduction analysis predict for a case like Everybody smokes or drinks?
2. What is problematic about this translation?
3. Give an alternative lexical entry for or that avoids the problem with the conjunction reduction analysis.
4. Give a syntax tree and a step-by-step derivation of the truth conditions for Sam smokes or drinks using your analysis.
5. Explain how your analysis avoids the problem.

Figure 5.1: Left: A situation where each man is holding a different bottle. Right: A situation where one man is holding all of the bottles. (Images created by Benjamin

In the Algonquian language Passamaquoddy (spoken in Maine, United States, and New Brunswick, Canada), voice marking on the verb can affect which scope readings are available for quantifiers (Bruening, 2001, 2008). For example, (39) and (40) differ in voice-marking and are true in different circumstances. (The morphological glosses have been simplified.)

(39) Skitap psite ’sakolon-a puhtaya.
man all hold-DIRECT bottles
‘A man is holding all the bottles.’

In (39), the verb is in direct voice, and the agent of the verb hold corresponds to the bare noun skitap ‘man’, interpreted as an indefinite (‘a man’). The patient (the thing being held) corresponds to puhtaya ‘bottle’, which is associated with the universal quantifier psite ‘all’. Speakers of Passamaquoddy judge this sentence to be true in the situation on the right in Figure 5.1, but not in the situation on the left. (See exercise 29.)

(40) Psite puhtayak ’sakolon-ukuwal peskuwol skitapiyil.
all bottles hold-INDIRECT one man
‘All of the bottles are held by some man.’

Exercise 29.

Bruening for the Scope Fieldwork Project; see http://udel.edu/~bruening/scopeproject/materials.html.)

In (40), the verb is in indirect voice; the patient corresponds to ‘all bottles’, and again the agent corresponds to an indefinite noun phrase meaning ‘a man’. This version of the sentence can be interpreted in two ways: one where the picture on the left in Figure 5.1 makes it true, and one where the picture on the right makes it true.

(a) Write out representations in L1 for the two possible scope interpretations.
(b) Given that the version in direct voice is true only in the situation on the right, which of the two scope interpretations is correct for direct voice?
(c) More generally, what does this contrast suggest about how voice affects scope interpretation in Passamaquoddy?

5.4 The definite article

So far, we have seen two types for determiners:

• ⟨⟨e,t⟩,⟨e,t⟩⟩ for the indefinite article a in predicative descriptions such as a drunkard in Homer is a drunkard
• ⟨⟨e,t⟩,⟨⟨e,t⟩,t⟩⟩ for quantifiers

This section presents arguments for a treatment of definite determiners with yet a third type, namely ⟨⟨e,t⟩,e⟩. In a phrase like the moon, the definite determiner the takes as input the predicate moon and returns the unique individual who satisfies that predicate, if there is one. If there is not, then the phrase has an ‘undefined’ semantic value.

To motivate this analysis, let us first observe that definite descriptions convey uniqueness. Suppose that we were in Sweden, and you were not entirely sure who was in the royal family, and in particular whether there were any princesses, and if there were, how many there were. Suppose then that I were to tell you: Guess what! I’m having dinner with the princess tonight. You would probably infer that there is a princess, and that there is only one. Thus definite descriptions convey EXISTENCE (that there is a princess, in this case) and UNIQUENESS (that there is only one).

Russell (1905) proposes to analyze definite descriptions on a par with the quantifiers we have just analyzed. He proposes that The princess smokes means ‘There is exactly one princess and she smokes’:

∃x.[princess(x) ∧ ∀y.[princess(y) → x = y] ∧ smokes(x)]

According to Russell’s treatment, the definite article introduces a logical entailment both that there is a princess (existence) and that there is only one (uniqueness). This means that the sentence is predicted to be false if there are no princesses or multiple ones.

Exercise 30. Read the above formula aloud to yourself and then write out the words that you said. Which part of this formula ensures uniqueness?

Exercise 31. (a) Give a Russellian lexical entry for the. What is the type of the under your treatment? (b) Show this lexical entry in action in the following tree:

[S [DP [D the] [NP princess]] [VP [V smokes]]]
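Russell’s truth conditions can be checked mechanically in a small model. This sketch is not from the text (the domain and its members are invented); it treats the as a generalized quantifier built from existence and uniqueness, so that failure of uniqueness yields falsity, exactly as Russell predicts.

```python
# Russellian 'the': true iff exactly one individual satisfies P, and it
# also satisfies Q.

DOMAIN = {"vic", "est", "carl"}

def the(P):
    return lambda Q: any(
        P(x) and all(not P(y) or y == x for y in DOMAIN) and Q(x)
        for x in DOMAIN
    )

princess = lambda x: x in {"vic"}
smokes   = lambda x: x in {"vic", "carl"}

# Exactly one princess, and she smokes: true.
assert the(princess)(smokes)

# With two princesses, uniqueness fails, so the sentence comes out FALSE
# (not undefined) — the Russellian prediction Strawson objects to below.
two_princesses = lambda x: x in {"vic", "est"}
assert not the(two_princesses)(smokes)
```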

Strawson (1950) finds this analysis very strange, on the grounds that it does not deal well with so-called EMPTY DESCRIPTIONS: definite descriptions in which nothing satisfies the descriptive content. For example, since France is not a monarchy, the king of France is an empty description. Strawson agrees that definite descriptions signal existence and uniqueness of something satisfying the description, but he disagrees with Russell’s proposal that these implications are entailments. He writes:

To say, “The king of France is wise” is, in some sense of “imply”, to imply that there is a king of France. But this is a very special and odd sense of “imply”. “Implies” in this sense is certainly not equivalent to “entails” (or “logically implies”).

Putting it another way:7

When a man uses such an expression, he does not assert, nor does what he says entail, a uniquely existential proposition. But one of the conventional functions of the definite article is to act as a signal that a unique reference is being made – a signal, not a disguised assertion.

7 With “disguised assertion”, Strawson is alluding to Russell’s idea that the form of a sentence containing a definite description, where the definite description appears as a term, is misleading, and that the quantificational nature of definite descriptions is disguised by this form.

Strawson argues for this thesis as follows:

Now suppose someone were in fact to say to you with a perfectly serious air: The King of France is wise. Would you say, That’s untrue? I think it is quite certain that you would not. But suppose that he went on to ask

you whether you thought that what he had just said was true, or was false; whether you agreed or disagreed with what he had just said. I think you would be inclined, with some hesitation, to say that you did not do either; that the question of whether his statement was true or false simply did not arise, because there was no such person as the King of France. You might, if he were obviously serious (had a dazed, astray-in-the-centuries look), say something like: I’m afraid you must be under a misapprehension. France is not a monarchy. There is no King of France.

Strawson’s observation is that we feel squeamish when asked to judge whether a sentence of the form The F is G is true or false, when there is no F. We do not feel that the sentence is false; as Strawson put it, we feel that the question of its truth does not arise. Intuitively, the sentence is neither true nor false. This is an intuition that was expressed earlier by Frege (1892 [reprinted 1948]). According to Frege, a definite description like this, like a proper name, denotes an individual (corresponding to type e), and the reason that the sentence is neither true nor false is that there is an attempt to refer to something that does not exist.

One way of interpreting this idea is by introducing a third truth value: Along with ‘true’ and ‘false’, we will have ‘undefined’ or ‘nonsense’ as a truth value. Let us use m to represent this undefined truth value. If there is no king of France, then the truth value of the sentence ‘The king of France is bald’ will be m. Then the question becomes how we can set up our semantic system so that this is the truth value that gets assigned to such a sentence.

Regarding the negative square root of 4, Frege says:

We have here a case in which out of a concept-expression, a compound proper name is formed, with the help of the definite article in the singular, which is at any rate permissible when one and only one object falls under the concept. [emphasis added]

We assume that by “concept-expression”, Frege means an expression of type ⟨e,t⟩, and that by “compound proper name”, Frege means a complex expression of type e. Now, what does Frege mean by “permissible”? One reasonable interpretation is that the denotes a function of type ⟨⟨e,t⟩,e⟩ that is only defined for input predicates that characterize one single entity. In other words, the definite article presupposes existence and uniqueness.

To flesh out Frege’s analysis of this example further, Heim & Kratzer (1998) suggest that square root is a “transitive noun”, with a meaning of type ⟨e,⟨e,t⟩⟩, and that “of is vacuous.” Spelling this out yields the following structure:

[NP e [D ⟨⟨e,t⟩,e⟩ the] [N′ ⟨e,t⟩ [A negative] [N′ ⟨e,t⟩ [N ⟨e,⟨e,t⟩⟩ square root] [PP e [P of] [NP e four]]]]]

Here, square root applies to 4 via Functional Application, and the result of that composes with negative under Predicate Modification.

One might interpret Frege to mean that the king of France does not denote an individual if there is no King of France, even though it is an expression of type e. One way of handling this is to introduce a special ‘undefined individual’ of type e. We

will adopt this approach here, using the symbol mₑ in our meta-language to denote this individual, standing for a ‘completely alien entity’ not in the set of individuals.8 Note that we will not add this symbol to our logical representation language Lλ; rather, we use mₑ in our meta-language to refer to this ‘undefined entity’ we are imagining.

8 Other notations that have been used for the undefined individual include Kaplan’s (1977) †, Landman’s (2004) 0, pronounced ‘zilch’, and Oliver & Smiley’s (2013) O.

In order to express Frege’s idea, we need to introduce a new symbol into the logic that we are translating English into: ι, the Greek letter ‘iota’. Like the λ symbol, ι can bind a variable. Here is an example:

ιx.P(x)

This is an expression of type e denoting the unique individual satisfying P if there is exactly one such individual, and is otherwise undefined.

To add this symbol to our logic, first we add a syntax rule specifying that, given any variable u and formula φ, [ιu.φ] is an expression whose type matches that of u:

(41) Lλ syntax rule for ι
If φ is an expression of type t, and u is a variable of type σ, then [ιu.φ] is an expression of type σ.

As usual, we will drop the brackets when they are unnecessary. Then we add a rule specifying its semantics:

(42) Lλ semantic rule for ι
Jιu.φK^{M,g} = d if {k : JφK^{M,g[u↦k]} = 1} = {d}, and mₑ otherwise

Exercise 32. Read the semantic rule for ι aloud to yourself and then write down the words that you said. How does this definition ensure that ι expressions are undefined when existence and uniqueness are not satisfied?

With this formal tool in hand, we can now give a Fregean analysis of the definite determiner as follows:

(43) ⟪the⟫ = λP.ιx.P(x)

Applied to a predicate-denoting expression like president, this yields ιx.president(x), which denotes the unique president if there is one and only one president in the relevant domain:

(44) [NP e: ιx.president(x) [D ⟨⟨e,t⟩,e⟩: λP.ιx.P(x) the] [N′ ⟨e,t⟩: president president]]

(45) Jιx.president(x)K^{M,g} = d if {k : Jpresident K^{M,g[x↦k]} = 1} = {d}, and mₑ otherwise

For example, in a model corresponding to the White House in 2014, where the only president in the relevant domain is Barack Obama, the denotation of the phrase would be Barack Obama.

Now, let us assume that a predicate like bald only yields true or false for actual individuals, and yields the undefined truth value m when given the undefined individual mₑ as input. So: Jbald(α)K^{M,g}
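The semantic rule for ι and the percolation of undefinedness can be simulated in a few lines. This is an illustrative sketch, not part of the text: the sentinel strings standing in for mₑ and m, the toy domain, and the predicate names are all invented.

```python
# Fregean sketch: iota returns the unique satisfier of P, or a designated
# 'undefined individual' M_E; bald maps M_E to the undefined truth value M.

M_E, M = "UNDEF_E", "UNDEF_T"     # stand-ins for m_e and m
DOMAIN = {"obama", "biden"}

def iota(P):
    satisfiers = [x for x in DOMAIN if P(x)]
    return satisfiers[0] if len(satisfiers) == 1 else M_E

def bald(x):
    # undefined input yields the undefined truth value; otherwise true/false
    return M if x == M_E else (x in {"obama"})

president = lambda x: x == "obama"
king_of_france = lambda x: False        # an empty description

assert iota(president) == "obama"       # existence + uniqueness satisfied
assert iota(king_of_france) == M_E      # no satisfier: undefined individual
assert bald(iota(king_of_france)) == M  # the truth-value gap percolates up
```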

will be m if JαK^{M,g} = mₑ, and 1 or 0 otherwise, depending on whether JαK^{M,g} is in the extension of bald with respect to M and g. Since (the translation of) the king of France (used today) would have mₑ as its denotation, (the translation of) The King of France is bald would then have m as its denotation.

Exercise 33. Explain how this Fregean treatment of the definite article vindicates Strawson’s intuitions.

Exercise 34. Using the assumptions above, compute a derivation for the following tree:

[S [DP [D the] [NP [N king] [PP [P of] [DP France]]]] [VP [V is] [AP [A bald]]]]

This exercise can be solved using the Lambda Calculator.

Exercise 35. Compute a derivation for the following tree according to Frege’s intuitions, translating square root as a constant of type ⟨e,⟨e,t⟩⟩, and four as a constant of type e:

[DP [D the] [NP [AP [A negative]] [N′ [N square root] [PP [P of] [DP four]]]]]

This exercise can be solved using the Lambda Calculator.

Exercise 36. Assuming the following lexical entries:

1. ⟪book⟫ = λx.book(x)
2. ⟪on⟫ = λxλy.on(y, x)
3. ⟪pillow⟫ = λx.pillow(x)

which of the following trees gives the right kind of interpretation for the book on the pillow?

[DP [D the] [NP [NP book] [PP [P on] [DP [D the] [NP pillow]]]]]

[DP [DP [D the] [NP book]] [PP [P on] [DP [D the] [NP pillow]]]]

Explain your answer.

Let us consider another example. Consider what happens when the opera by Mozart is embedded in a sentence like the following:

(46) The opera by Mozart is beautiful.

This might have the following translation:

beautiful(ιx.[opera(x) ∧ by(mozart)(x)])

But this expression will denote m in a model where there are multiple operas by Mozart, assuming that beautiful yields the value m when applied to an expression whose semantic value is mₑ. Beethoven wrote one opera, namely Fidelio, but Mozart wrote quite a number of operas. So in a model reflecting this fact of reality, the phrase the opera by Beethoven has a defined value, but the opera by Mozart does not.9 So

9 Here we are setting aside the fact that beautiful is a so-called ‘predicate of personal taste’. If the models we are using are meant to correspond to possible worlds, one might argue that possible worlds do not settle the question of which items are beautiful and which are not. One way out is to see models as encoding both factual information and matters of opinion, as in outlook-based semantics (Coppock, under second-round review). A number of other options are available as well, which go under the headings of ‘relativism’, ‘contextualism’, ‘absolutism’ and ‘expressivism’.

again, the undefinedness of the definite description “percolates up”, as it were, to the sentence level. In this way, we explain Strawson’s intuition of ‘squeamishness’ regarding sentences like The king of France is bald, when no king of France exists.

Both The king of France is bald and The opera by Mozart is boring have an undefined value relative to the actual world, but for different reasons.

Exercise 37. Explain the difference.

What we have said about the definite article does not exhaust all there is to say about presupposition; we will give a fuller discussion in Chapter 8.

To summarize, we have seen a number of different types for determiners in this chapter:

• Type ⟨⟨e,t⟩,⟨e,t⟩⟩: for the lexical entry for a as an identity function on predicates
• Type ⟨⟨e,t⟩,⟨⟨e,t⟩,t⟩⟩: for generalized quantifiers
• Type ⟨⟨e,t⟩,e⟩: for the definite article the

To analyze the definite determiner, we have augmented our logical representation language with the ι symbol, which forms expressions of type e. These expressions denote the undefined individual if the description in question does not have exactly one satisfier.


6 ∣ Variables in Natural Language

6.1 Relative clauses

In this chapter, we will consider expressions in natural language that can be translated into Lλ as variables like x and y. We will consider two kinds of expressions that can be treated as variables:

• pronouns like he and she
• the traces left behind by syntactic movement

Two constructions thought to involve movement traces are wh-questions like (1) and relative clauses as in (2).

(1) What did Joe buy?
(2) The car [ which Joe bought ] is very fancy.

In both of these constructions, the verb buy is not followed by an object, but there is an element of the sentence that intuitively corresponds to the object of the verb (what or which). One way of understanding this connection is by assuming that there are (at least) two levels of syntactic representation: one where what/which occupies the object position of the verb (‘Deep Structure’), and another where it has moved to its surface position (‘Surface Structure’). But these wh-words do not disappear entirely from their original positions; they leave a TRACE signifying that they once were there.

The structure of a question like What did Joe buy? after movement would be:

(3) [CP Whatᵢ [C′ [C did] [S [DP Joe] [VP [V buy] [DP tᵢ]]]]]

where i is an arbitrary natural number. It can be instantiated as any natural number (1, 2, 3, ...). The subscript i on what is an INDEX, which allows us to link the wh-word to its base position. The element tᵢ is a trace of movement, co-indexed with the moved wh-word. The category label CP stands for ‘Complementizer Phrase’, because it is the type of phrase that can be headed by a complementizer in relative clauses (see below). The wh-word occupies the so-called ‘specifier’ position of CP (sister to C′).1

1 The term ‘specifier’ comes from the X-bar theory of syntax, where all phrases are of the form [XP (specifier) [X′ [X (complement)]]].

The relative clause in (2) can be analyzed similarly:

(4) [CP whichᵢ [C′ [C that] [S [DP Joe] [VP [V bought] [DP tᵢ]]]]]

This structure is derived through a syntactic transformation called RELATIVIZATION. In the structure in (4), the relative pronoun which occupies the specifier position of CP (sister to C′), and the C position is thought to be occupied by a silent version of the complementizer that; more on that below. The presence of a trace in a structure signifies that at some deeper level of representation, the relative pronoun whichᵢ occupied the object position of the verb. This assumption allows the wh-element to be interpreted as the object of the verb, even though it does not occupy the object position at the level where it is pronounced. As the wh-word and the trace bear the same index, we say that the two expressions are CO-INDEXED.

A relative clause like who likes Joe, as in the girl who likes Joe, in which the subject is extracted, will have the trace in subject position thus:

(5) [CP whoᵢ [C′ [C that] [S [DP tᵢ] [VP [V likes] [DP Joe]]]]]

In this tree, the relative pronoun who is co-indexed with a trace in the subject position of the embedded verb likes.

The silent complementizer that in (4) and (5) is pronounced in a relative clause like that Joe bought, as in the phrase car that Joe bought. In such a case, it is the complementizer that instead of the relative pronoun which that we hear, but there is still assumed to be a relative pronoun that undergoes movement:

(6) [CP whichᵢ [C′ [C that] [S [DP Joe] [VP [V bought] [DP tᵢ]]]]]

So at some deeper level of representation, a relative clause like car that Joe bought would be pronounced car which that Joe bought, with the relative pronoun which preceding the complementizer that. One reason to think that the word that is not of the same

category as a relative pronoun like which is that only relative pronouns participate in so-called ‘pied-piping’:

(7) good old-fashioned values [CP on which we used to rely ]

Compare: *...on that we used to rely. This contrast can be understood under the assumption that which originates as the complement of on, and moves together with it, while that is generated in its surface position. Furthermore, the complementizer that is not found only in relative clauses; it also serves to introduce other finite clauses, as in John thinks that Mary came.

To explain the fact that that and which cannot co-occur in English, we assume that either the relative pronoun or the complementizer that is deleted, in accordance with the ‘Doubly-Filled Comp Filter’ (Chomsky & Lasnik, 1977). In some languages, including Bavarian German, relative pronouns can actually co-occur with complementizers (Carnie, 2013, Ch. 12):

(8) I woass ned wann dass da Xavea kummt.
I know not when that the Xavea comes
‘I don’t know when Xavea is coming’

This provides additional evidence for the idea that relative pronouns like which and complementizers like that occupy distinct positions in relative clauses; the principle that either the relative pronoun or that must be silent is specific to English.2

2 See Carnie (2013, Ch. 12) for a more thorough introduction to the syntax of relative clauses.

These syntactic assumptions lay the groundwork for a satisfactory semantics of relative clauses. Semantically, relative clauses are much like adjectival modifiers; compare:

(9) The red car is very fancy.
(10) The car that Joe bought is very fancy.

We can capture the semantic contribution of relative clauses by assuming that they denote predicates and can combine via Predicate Modification:

(11) [NP ⟨e,t⟩ [NP ⟨e,t⟩ car] [CP ⟨e,t⟩ that Joe bought]]

Intuitively, what a relative clause like which Joe bought denotes is a predicate which holds of a given entity x if and only if Joe bought x. We can represent such a property with the following lambda expression:

(12) λx. bought(joe, x)

Then, the restrictive relative clause car which Joe bought obtains the following meaning through Predicate Modification:

(13) λx. car(x) ∧ bought(joe, x)

This is the goal. This goal can be reached by making the following assumptions:

• Relative clauses are formed through a movement operation that leaves a trace.
• Traces correspond to logical variables.
• A relative clause is interpreted by introducing a lambda operator that binds this variable.

Which variable does a trace like t₃ correspond to? Recall that in Lλ we have an infinite number of constants and variables in stock: for every natural number i and every type τ, we have a constant of the form c_{i,τ}

and a variable of the form v_{i,τ}. Assuming that traces always correspond to variables of type e, the trace t₇ is interpreted as the variable v_{7,e}: ⟪t₇⟫ = v_{7,e}. Not only is it natural to assume that the index of the trace matches the index of the variable that it is mapped to, it is also convenient to do so, as we will see later: Mapping a trace with index i to a variable with the same index puts us in a position to choose a matching variable for the lambda expression to bind when we reach the co-indexed relative pronoun in the tree. We have thus arrived at a new composition rule:

Composition Rule 4. Pronouns and Traces Rule
If α is an indexed trace or pronoun, ⟪α_i⟫ = v_{i,e}.³

The denotation of the variable v_{7,e} will depend on an assignment; recall that ⟦v_{7,e}⟧^{M,g} = g(v_{7,e}). With indirect interpretation, we translate natural language variables as logical variables, and these can be bound by lambda operators.

³ Contrast Heim and Kratzer's rule, given in a direct interpretation style, where an assignment function decorates the denotation brackets: ⟦α_i⟧^g = g(i). Thus in Heim and Kratzer's style, traces and pronouns are interpreted as the values of assignment functions, as in ⟦loves him_i⟧^g = λx. x loves g(i). Here the difference between direct and indirect interpretation becomes bigger than mere substitution of square brackets for angled brackets: In indirect interpretation, we only need assignment functions at one level (the logical level). In Heim and Kratzer's style, assignment functions must play a role at two levels: at the translation from English to logic, and in the interpretation of the logic. Note that the meta-language still contains its own variables; variables like x appear on the righthand side and variables like 'him_i' appear on the lefthand side. The distinction between the assignment functions at these two levels is rarely, if ever, spelled out.
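The assignment-dependent interpretation just described can be sketched computationally. The following is an illustration, not part of the book's formal system: variables are encoded as indexed tuples, an assignment g is a dictionary from indices to entities, and a lambda operator binds an index by overriding g. The entity names are invented.

```python
# A sketch (with an invented encoding) of assignment-dependent
# interpretation: a variable denotes whatever the assignment function g
# maps its index to, mirroring the clause for variables, and a lambda
# operator binds an index by overriding g.

def interpret(expr, g):
    """Interpret a tiny term language relative to an assignment g."""
    kind = expr[0]
    if kind == "var":            # ("var", 7) stands for v_7
        return g[expr[1]]
    if kind == "lam":            # ("lam", 7, body) stands for λv_7. body
        _, i, body = expr
        return lambda d: interpret(body, {**g, i: d})
    raise ValueError(kind)

g = {7: "homer"}
print(interpret(("var", 7), g))                        # homer
print(interpret(("lam", 7, ("var", 7)), g)("maggie"))  # maggie: binding shadows g
```

The second call shows why binding matters: once λv₇ binds index 7, the value that g assigns to 7 becomes irrelevant inside the body.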

We have called it the 'Pronouns and Traces Rule' because it will also be used for pronouns, for example: ⟪he₇⟫ = v_{7,e}. More on the pronoun side of this in §6.4.⁴ The idea of treating traces and pronouns as variables is rather controversial; see Jacobson (1999) and Jacobson (2000) for critique and alternatives.

⁴ Note: We will sometimes leave off e from the subscripts on variables when it is clear what the type is. So we may abbreviate v_{7,e} as v₇.

Exercise 1. Give translations at every node for the following tree:

[S [DP He₃] [VP [V bought] [DP it₅]]]

With these assumptions, we derive the representation bought(j, v₁) for Joe bought t₁:

[S t : bought(j, v₁)
  [DP e : j  (Joe)]
  [VP ⟨e,t⟩ : λx. bought(x, v₁)
    [V ⟨e,⟨e,t⟩⟩ : λyλx. bought(x, y)  (bought)]
    [DP e : v₁  (t₁)]]]

The translation corresponding to this S node, bought(j, v₁), is of type t. Suppose that the complementizer that is a trivial identity function of type ⟨t,t⟩, so ⟪that⟫ = λp. p, where p is a variable of type t. Then that Joe bought t₁ has the same translation, of type t.

How does the relative clause end up with a meaning of type ⟨e,t⟩? In particular, how do we reach our goal, λx. bought(j, x)? We can achieve our goal by assigning the relative clause an interpretation in which a lambda operator binds the variable v₁, thus: λv₁. bought(j, v₁). In principle, the trace might have any index, so we need to know which variable to let the lambda-operator bind. We can do this with the help of the index of the relative pronoun. The rule of Predicate Abstraction, triggered by the presence of an indexed relative pronoun, lambda-abstracts over the appropriate variable:

Composition Rule 5. Predicate Abstraction
If γ is an expression whose only two subtrees are α_i and β, and ⟪β⟫ is an expression of type t, then ⟪γ⟫ = λv_{i,e}. ⟪β⟫.

This gives us the following analysis of the relative clause:

(14) [CP ⟨e,t⟩ : λv₁. bought(j, v₁)
      [which₁]
      [C′ t : bought(j, v₁)
        [C ⟨t,t⟩ : λp. p  (that)]
        [S t : bought(j, v₁)
          [DP e : j  (Joe)]
          [VP ⟨e,t⟩ : λx. bought(x, v₁)
            [V ⟨e,⟨e,t⟩⟩ : λyλx. bought(x, y)  (bought)]
            [DP e : v₁  (t₁)]]]]]

We have reached our goal! The relative clause that Joe bought denotes the property of having been bought by Joe, as it should. This can combine via Predicate Modification with car, giving the property of being a car that Joe bought.
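Predicate Abstraction can be sketched in Python over a toy model. This is an illustration under invented facts (who bought what), not the book's formalism: the S node is a function of an assignment, and abstraction over index 1 yields the ⟨e,t⟩ predicate λv₁. bought(j, v₁).

```python
# A sketch of Predicate Abstraction with invented facts: the S node
# 'Joe bought t1' is modeled as a function of an assignment g (the
# trace looks up index 1 in g), and abstraction over index 1 turns it
# into an ⟨e,t⟩ predicate.

BOUGHT = {("joe", "beetle"), ("joe", "bike")}   # invented model facts

# Translation of [S Joe bought t1]: type t, relative to an assignment g.
s_node = lambda g: ("joe", g[1]) in BOUGHT

def predicate_abstraction(body, i):
    """Turn an assignment-dependent t node into an ⟨e,t⟩ predicate."""
    return lambda g: lambda x: body({**g, i: x})

# The relative clause 'which1 that Joe bought t1', under an empty assignment:
rel_clause = predicate_abstraction(s_node, 1)({})

print(rel_clause("beetle"))  # True
print(rel_clause("tank"))    # False
```

The resulting `rel_clause` is exactly the kind of ⟨e,t⟩ meaning that Predicate Modification can then conjoin with the head noun.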

Exercise 2.
(a) For each of the labelled nodes in the following tree, give: i) the type, ii) a fully β-reduced translation to Lλ, and iii) the composition rule that is used at the node.

[NP⁶ [NP⁵ man] [CP⁴ who₁ [S³ [DP¹ t₁] [VP² left]]]]

(b) Notice that you are not asked to give a type for who₁. Why not? Hint: Use the word 'syncategorematically'.

Exercise 3. Traditional grammar distinguishes between restrictive and non-restrictive relative clauses. Non-restrictive relative clauses are normally set off by commas in English, and they can modify proper names and other individual-denoting expressions:

1. John, who I like, is coming to the party.
2. *John who I like is coming to the party.
3. That guy, who I like, is coming to the party.
4. The guy who I like is coming to the party.

We have given a treatment of restrictive relative clauses in terms of Predicate Modification. Would an analysis using Predicate Modification in the same way be appropriate for non-restrictive relative clauses? Why or why not?

Exercise 4. For each node in the following tree, give the type and a fully β-reduced translation to Lλ.

[NP [NP man]
    [CP who₁ [S [DP t₁]
                [VP [V talked]
                    [PP [P to]
                        [DP [D the]
                            [NP [NP boy]
                                [CP who₂ [S [DP t₂]
                                            [VP [V visited] [DP him₁]]]]]]]]]]]

6.2 Quantifiers in object position

6.2.1 Quantifier Raising

The rule of Predicate Abstraction will also help with the interpretation of quantifiers. Recall that a sentence like every linguist offended John, with a quantificational noun phrase in subject position, receives an analysis like this:

(15) [S t
      [DP ⟨⟨e,t⟩,t⟩ [D ⟨⟨e,t⟩,⟨⟨e,t⟩,t⟩⟩ every] [NP ⟨e,t⟩ linguist]]
      [VP ⟨e,t⟩ [V ⟨e,⟨e,t⟩⟩ offended] [DP e John]]]

But a problem arises when the quantifier is in object position. If we try to interpret the following tree, we get a type-mismatch:

(16) [S ???
      [NP e John]
      [VP ???
        [V ⟨e,⟨e,t⟩⟩ offended]
        [NP ⟨⟨e,t⟩,t⟩ [D ⟨⟨e,t⟩,⟨⟨e,t⟩,t⟩⟩ every] [N′ ⟨e,t⟩ linguist]]]]

The transitive verb is expecting an individual, so the quantifier phrase cannot be fed as an argument to the verb. And the quantifier phrase is expecting an ⟨e,t⟩-type predicate, so the verb cannot be fed as an argument to the quantifier phrase. It is rather an embarrassment that this does not work, because it is clear what this sentence means! We could represent the meaning as follows:

(17) ∀x [linguist(x) → offended(john, x)]

According to the assumptions we have made so far, every linguist translates as:

(18) λQ ∀x [linguist(x) → Q(x)]

The appropriate value for Q here would be a function that holds of x if John offended x, representable by:

(19) λx. offended(john, x)

If we could separate out the quantifier from the rest of the sentence, and let the rest of the sentence denote this function, then

we could put the two components together and get the right meaning:

(20) [λQ ∀x [linguist(x) → Q(x)]](λx. offended(john, x))
     = ∀x [linguist(x) → offended(john, x)]

Exercise 5. Simplify the following expression step-by-step:
[λQ. ∀x [linguist(x) → Q(x)]](λv₁. offended(j, v₁))
Tip: Use the 'scratch pad' function in the Lambda Calculator.

We can get the components we need to produce the right meaning using the rule of Quantifier Raising. Quantifier Raising is a syntactic transformation moving a quantifier (an expression of type ⟨⟨e,t⟩,t⟩) to a position in the tree where it can be interpreted, leaving a DP trace in object position. This transformation occurs not between Deep Structure and Surface Structure, but rather between Surface Structure and another level of representation called Logical Form. At Logical Form, constituents do not necessarily appear in the position where they are pronounced, but they are in the position where they are to be interpreted by the semantics. Thus the structure in (21a) is converted to the Logical Form representation (21b):

(21) a. [S [DP John] [VP [V offended] [DP [D every] [NP linguist]]]]

b. [S [DP [D every] [NP linguist]] [LP 1 [S [DP John] [VP [V offended] [DP t₁]]]]]

The number 1 in the syntax tree plays the same role as a relative pronoun like which in a relative clause: It triggers the introduction of a lambda expression binding the variable corresponding to the trace. The derivation works as follows. Predicate Abstraction is used at the node we have called LP for 'lambda P'. Functional Application is used at all other branching nodes.
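The application in (20) can be checked over a toy model. This sketch uses invented facts about who offended whom; the generalized quantifier ⟪every linguist⟫ is a function from ⟨e,t⟩ predicates to truth values, and feeding it λx. offended(john, x) yields the truth conditions in (17).

```python
# A sketch checking the computation in (20) over an invented model.

DOMAIN = {"john", "lisa", "bart"}
LINGUIST = {"lisa", "bart"}
OFFENDED = {("john", "lisa"), ("john", "bart")}   # invented facts

# λQ. ∀x[linguist(x) → Q(x)]
every_linguist = lambda Q: all(Q(x) for x in DOMAIN if x in LINGUIST)

# λx. offended(john, x)
offended_by_john = lambda x: ("john", x) in OFFENDED

print(every_linguist(offended_by_john))  # True in this model
```

The point of QR is precisely to make the two pieces of this function application available as constituents at LF.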

(22) [S t : ∀x. [linguist(x) → offended(j, x)]
      [DP ⟨⟨e,t⟩,t⟩ : λQ. ∀x. [linguist(x) → Q(x)]
        [D ⟨⟨e,t⟩,⟨⟨e,t⟩,t⟩⟩ : λPλQ. ∀x. [P(x) → Q(x)]  (every)]
        [NP ⟨e,t⟩ : linguist  (linguist)]]
      [LP ⟨e,t⟩ : λv₁. offended(j, v₁)
        [1]
        [S t : offended(j, v₁)
          [DP e : j  (John)]
          [VP ⟨e,t⟩ : λx. offended(x, v₁)
            [V ⟨e,⟨e,t⟩⟩ : λyλx. offended(x, y)  (offended)]
            [DP e : v₁  (t₁)]]]]]

The Quantifier Raising solution to the problem of quantifiers in object position is embedded in a syntactic theory with several levels of representation:

• Deep Structure (DS): Where the derivation begins; words are in their original positions. For example, Who did you see? is You did see who? at Deep Structure, and active sentences (John kissed Mary) look the same as passive sentences (Mary was kissed by John).

• Surface Structure (SS): Where the order of the words corresponds to what we see or hear (after e.g. passivization or wh-movement).

• Phonological Form (PF): Where the words are realized as sounds (after e.g. deletion processes).

• Logical Form (LF): The input to semantic interpretation (after e.g. QR)⁵

Transformations map from DS to SS, and from SS to PF and LF:

    DS
    ↓
    SS
   ↙  ↘
  PF    LF

This is the so-called 'T-model', or (inverted) 'Y-model', of Government and Binding theory, motivated originally by Wasow (1972) and Chomsky (1973). Since the transformations from SS to LF happen "after" the order of the words is determined, we do not see the output of these transformations. These movement operations are in this sense COVERT.

Many other transformational generative theories of grammar have been proposed over the years, of course (see Lasnik & Lohndal 2013 for an overview), and many of these are also compatible with the idea of Quantifier Raising. Quantifier Raising is not an option in non-transformational generative theories of grammar such as Head-Driven Phrase Structure Grammar (Pollard & Sag, 1994) and Lexical-Functional Grammar (Bresnan, 2001); other approaches to quantifier scope are taken in conjunction with those syntactic theories. Of course, the crucial thing is that there is an interface with semantics (such as LF) at which quantifiers are in the syntactic positions that correspond to their scope.

⁵ Note that 'Logical Form' refers here to a level of syntactic representation. A Logical Form is thus a natural language expression, which will be translated into Lλ. It is natural to refer to the Lλ translation as the 'logical form' of a sentence, but this is not what is meant by 'Logical Form' in this context.

Exercise 6. Some linguist offended every philosopher is ambiguous: it can mean either that there was one universally offensive linguist, or that for every philosopher there was a linguist, where there may have been different linguists for different philosophers. Give an LF tree for both readings.

Exercise 7. Derive a translation into lambda calculus for Beth speaks a European language, assuming that a European language undergoes QR, that European and language combine via Predicate Modification, and that speaks is a transitive verb of type ⟨e,⟨e,t⟩⟩. Assume also that the indefinite article a can denote what some denotes. Start by drawing the LF, and specify the translation into Lλ at every node of your trees. You can do this in the Lambda Calculator.

Exercise 8. Provide a fragment of English with which you can derive truth conditions for the following sentences:

1. Chomsky dislikes every congressman.
2. Every conservative congressman smokes.
3. No congressman who smokes dislikes Chomsky.
4. Chomsky respects nobody who smokes.
5. Every congressman respects himself.
6. Some congressman from every state smokes.

You can do this in the Lambda Calculator. The fragment should include:

• a set of syntax rules
• lexical entries (translations of all of the words into Lλ)
• composition rules (Functional Application, Predicate Modification, Predicate Abstraction, Pronouns and Traces Rule, Non-branching Nodes, Terminal Nodes)

Then, for each sentence:

• draw the syntactic tree for the sentence
• for each node of the syntactic tree:
  – indicate the semantic type
  – give a fully β-reduced representation of the meaning in Lλ
  – specify the composition rule that you used to compute it

If the sentence is ambiguous, give multiple analyses, one for each reading. You can use the Lambda Calculator for this exercise.

6.2.2 A type-shifting approach

Quantifier Raising is only one possible solution to the problem of quantifiers in object position. Another approach is to interpret the quantifier phrase in situ, i.e., in the position where it is pronounced, using flexible types for the expressions involved. In this case one can apply a type-shifting operation to change either the type of the quantifier phrase or the type of the verb. This latter approach adheres to the principle of "Direct Compositionality", which rejects the idea that the syntax first builds syntactic structures which are then sent to the semantics for interpretation as a second step. With direct compositionality, the syntax and the semantics work in tandem, so that the semantics is computed as sentences are built up syntactically. Jacobson (2012) argues that this is a priori the simplest hypothesis and defends it against putative empirical arguments against it.

One can imagine type-shifting rules that have the power to repair the mismatch. These could target either the quantifier, making it into the sort of thing that could combine with a transitive verb, or the verb, making it into the sort of thing that could combine with a quantifier, or both. Another approach uses so-called Cooper Storage, which introduces a storage mechanism into the semantics (Cooper, 1983). In brief, the idea is that a syntax node is associated with a set of quantifiers that are "in store"; when a node of type t is reached, these quantifiers can be "discharged". This is done in Head-Driven Phrase Structure Grammar (Pollard & Sag, 1994).

Exercise 9. What is the problem of quantifiers in object position, and what are the main approaches to solving it? Explain in your own words.

Hendriks (1993) defines a general type-shifting schema called ARGUMENT RAISING (not because it involves "raising" of a quantifier phrase to another position in the tree, which it doesn't, but because it "raises" the type of the expression to a more complex type). On Hendriks's system, a type ⟨e,⟨e,t⟩⟩ predicate can be converted into one that is expecting a quantifier for its first or second argument. We will focus on one instantiation of this schema, called OBJECT RAISING, defined as follows.

Type-Shifting Rule 2. Object raising (RAISE)
If ⟪α⟫ is of type ⟨e,⟨e,t⟩⟩ then:
⟪RAISE α⟫ = λv_⟨⟨e,t⟩,t⟩ λy. v(λz. ⟪α⟫(z)(y))
(unless v, y or z is free in ⟪α⟫; in that case, use different variables)

So a sentence like Sam loves everybody will be analyzed as follows:

(23) [t : ∀z. loves(s, z)
      [e : s  (Sam)]
      [⟨e,t⟩ : λy. ∀z. loves(y, z)
        [⟨⟨⟨e,t⟩,t⟩,⟨e,t⟩⟩ : λv_⟨⟨e,t⟩,t⟩ λy. v(λz. loves(y, z))
          ⇑RAISE
          ⟨e,⟨e,t⟩⟩ : loves  (loves)]
        [⟨⟨e,t⟩,t⟩ : λP. ∀x. P(x)  (everybody)]]]

Exercise 10. Reproduce the tree in (23) using the Lambda Calculator, doing the β-reductions along the way. Since the Lambda Calculator does not support type-shifting, just treat the type shift as if it is a silent sister to loves.
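The Object Raising shift can be checked over a toy model. This sketch uses invented facts; the shifted verb takes the object quantifier directly, following ⟪RAISE α⟫ = λv λy. v(λz. ⟪α⟫(z)(y)).

```python
# A sketch of Object Raising over an invented model.

DOMAIN = {"sam", "ann", "bea"}
LOVES = {("sam", "ann"), ("sam", "bea"), ("sam", "sam")}  # invented facts

loves = lambda y: lambda x: (x, y) in LOVES   # λyλx. loves(x, y)

def raise_object(verb):
    """Object raising: λv λy. v(λz. verb(z)(y))."""
    return lambda v: lambda y: v(lambda z: verb(z)(y))

everybody = lambda P: all(P(x) for x in DOMAIN)

vp = raise_object(loves)(everybody)   # 'loves everybody', type ⟨e,t⟩
print(vp("sam"))  # True: Sam loves everyone in the domain
print(vp("ann"))  # False
```

The design point mirrors the text: the type mismatch is repaired on the verb, in situ, rather than by moving the quantifier.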

Note that this is not all that Hendriks's system can do. He defines a general schema of which the Object Raising rule is a particular instance. The general schema is as follows: If an expression has a translation α of type ⟨a⃗, ⟨b, ⟨c⃗, t⟩⟩⟩, where a⃗ and c⃗ are possibly null sequences of types, then that expression also has translations of the following form, where subscripts indicate the types of the variables:

λx⃗_a λv_⟨⟨b,t⟩,t⟩ λy⃗_c [v(λz_b [α(x⃗)(z)(y⃗)])]

A sentence with two quantifiers like Everybody loves somebody can be analyzed in two different ways, depending on the order in which the argument slots of the verb are lifted. The resulting type is the same: something that combines with two quantifiers. But the different orders in which the type-shifting operations are applied give two different scopes. It turns out that if you raise the object first,⁶ and then the subject,⁷ then you get a reading with the existential quantifier outscoping the universal quantifier. This is shown in the following tree:

⁶ with a⃗ as e, c⃗ as null, and b as e
⁷ with a as ⟨⟨e,t⟩,t⟩, c⃗ as null, and b as e

t ⟩.t ⟩. ∀x . t ⟩ λx ⟨⟨e. loves(z. ⟨e. t ⟩ ⟨⟨⟨e.8 and then the object. ⟨⟨⟨e. t ⟩. t ⟩. Ð Ð → → c as ⟨⟨e.t ⟩ . z ) ⟨⟨e. t ⟩. x (λz e′ . t ⟩ λP . v (λz e .186 Variables in Natural Language (24) t ∃z ∀x . t ⟩. t ⟩⟩ everybody λx e λv ⟨⟨e.t ⟩. loves(x. z ′ ))) λP . v (λz e . x )) Somebody ⟨⟨⟨e. t ⟩. loves(z. t ⟩.t ⟩. t ⟩. Ð → c as null. x )) ⇑RAISE ⟨e.t ⟩ .t ⟩. ∀x . t ⟩. ⟨⟨⟨e. t ⟩. P (z ) λv ⟨⟨e. t ⟩. ∃z .t ⟩ . and b as e . t ⟩⟩ loves loves If the subject is raised first. P (x ) ⇑RAISE ⟨e. loves(z.t ⟩ λv ⟨⟨e. t ⟩. t ⟩⟩ ⟨⟨e. 8 with Ð → a as e. and b as e 9 with a as null.9 then the univer- sal quantifier ends up scoping outside of the existential quantifier. v (λz e .

[Tree (25), abbreviated: the same sentence with the subject slot raised first and then the object slot, yielding ∀x ∃z. loves(x, z) at the root.]

6.2.3 Putative arguments for the movement approach

Heim & Kratzer (1998) give a number of arguments in favor of the QR approach.

Argument #1: Scope ambiguities. The QR approach delivers two different readings for scopally ambiguous sentences like Everybody loves somebody. Heim & Kratzer (1998) claim that this cannot be done with a flexible types approach. But they only consider flexible types approaches in which the quantifiers themselves undergo type-shifting operations; they do not consider the approach of Hendriks (1993), which does give both readings, as just shown in the previous section.

Argument #2: Inverse linking. A somewhat more challenging case for a flexible types approach falls under the heading of 'inverse linking', also known as 'binding out of DP'. Here is a classic example:

(26) One apple in every basket is rotten.

This does not mean: 'One apple that is in every basket is rotten'. However, most flexible types analyses, including Hendriks 1993, deliver only that reading. The QR analysis gives the right reading starting from the following LF:

(27) [S [DP every basket] [LP 1 [S [DP [D one] [NP [N apple] [PP [P in] t₁]]] [VP is rotten]]]]

Barker (2005) has a flexible types analysis of binding out of DP, so this does not constitute a knock-down argument in favor of QR.

Exercise 11. Derive the translation for (27) compositionally using the rules we have introduced so far. You may need to introduce new lexical entries.

Exercise 12. Explain why the correct reading of (26) cannot be derived via the Object Raising type-shifting rule. Start by applying the Object Raising type-shifting rule to the preposition in.

Argument #3: Antecedent-contained deletion. Another argument that has been made in favor of QR is based on examples like this:

(28) Mary read every novel that John did.

This example involves a kind of VP ELLIPSIS. VP ellipsis generally involves deletion of a VP under identity with an antecedent. For example, in the following case the deleted VP seems to be 'read War and Peace':

(29) I read War and Peace before you did.

so the underlying structure would be:

(30) I read War and Peace before you did read War and Peace.

But what happens if we try to fill in the antecedent VP in a case like (28)? The VP read every novel that John did contains the ellipsis site. Since the antecedent in this case seems to contain the VP that is deleted, the phenomenon in (28) is known as ANTECEDENT-CONTAINED DELETION. We might get an infinite regress if we tried to fill it in: Mary read every novel that John did read every novel that John did read every novel that...

But there is a solution! If the quantifier phrase in (28) undergoes QR, then the antecedent VP no longer contains the deleted VP:

(31) [S [NP [D every]
            [N′ [N novel]
                [CP which₁ [C′ [C that] [S [NP John] [VP [V did] [VP [V read] [DP t₁]]]]]]]]
        [LP 2 [S [DP Mary] [VP [V read] [DP t₂]]]]]

The antecedent VP is now identical to the elided VP except for the index on the trace. So, thanks to QR, examples like (28) can be accommodated.

Exercise 13. Give an analysis of (31) with fully β-reduced λ-expressions at each node. Use of the Lambda Calculator is encouraged.

Exercise 14. Explain how the phenomenon of antecedent-contained deletion provides an argument in favor of QR.

Note that Jacobson (1999) argues that this phenomenon does not constitute a clear argument in favor of QR. She proposes a way of handling it using a mode of composition called function composition, which we have not introduced here but which has a range of interesting applications.

Argument #4: Quantifiers that bind pronouns. Another argument that Heim and Kratzer give in favor of QR has to do with reflexive pronouns. When a reflexive pronoun is anaphoric to a proper name, the sentence can be paraphrased more or less by replacing the pronoun with its antecedent:

(32) a. Mary blamed herself.
     b. Mary blamed Mary.

But this is not the case when a reflexive pronoun is anaphoric to a quantifier:

(33) a. Every woman blamed herself.
     b. Every woman blamed every woman.

The problem is not unique to reflexive pronouns; it has to do with any use of a pronoun that is connected to a quantifier, including the following:

(34) No man noticed the snake next to him.

If we treat pronouns as variables and use QR, we can easily account for the meanings of these sentences:

(35) [S [DP [D every] [N′ woman]] [LP 1 [S [DP t₁] [VP [V blamed] [DP herself₁]]]]]

Suppose that the VP blamed herself has the following translation (more on this in Section 6.4):

(36) λy. blamed(y, v₁)

Assuming that herself₁ is translated as v₁, if we combine this directly with every woman, then we will end up with a formula that contains a free variable:

∀x. [woman(x) → blamed(x, v₁)]

where v₁ is free. This is not quite the right meaning. The translation corresponding to the tree in (35), by contrast, gives an appropriate result:

(37) ∀x [woman(x) → blamed(x, x)]

Assuming that pronouns are translated as variables, it is somewhat more difficult to imagine how to derive the right reading on a flexible types approach.

Exercise 15. Draw an LF tree for Every woman blamed herself assuming quantifier raising, and give a derivation showing how the meaning in (37) is derived.
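The way the LF in (35) delivers (37) can be sketched over a toy model with invented facts: because the trace and herself₁ share the index, the LP node denotes λv₁. blamed(v₁, v₁), and the raised quantifier binds both occurrences at once.

```python
# A sketch of the QR analysis in (35)-(37) over an invented model.

DOMAIN = {"mary", "sue", "bart"}
WOMAN = {"mary", "sue"}
BLAMED = {("mary", "mary"), ("sue", "sue"), ("sue", "bart")}  # invented

lp = lambda v1: (v1, v1) in BLAMED    # λv1. blamed(v1, v1)

every_woman = lambda P: all(P(x) for x in DOMAIN if x in WOMAN)

print(every_woman(lp))  # True: each woman blamed herself
```

Note that sue also blamed bart in this model; that extra fact is harmless, since (37) only requires self-blame for each woman.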

But there are a number of ways of dealing with reflexive pronouns without QR. For example, a reflexive pronoun can be analyzed as a function that takes as an argument a relation of type ⟨e,⟨e,t⟩⟩ and returns a reflexive predicate (Keenan, 1987; Szabolcsi, 1987):

(38) ⟪herself⟫ = λR_⟨e,⟨e,t⟩⟩ λx. R(x, x)

Applied to blamed, for example, this will yield the property of blaming oneself, which can be combined with every woman to give the right reading.

Exercise 16. Use the treatment of herself in (38) to derive a translation for every woman blamed herself.

Exercise 17. Does the lexical entry for herself in (38) give the right results for a ditransitive case, like Mary introduced herself to Sue? Why or why not?

Argument #5: The Extraction-Scope Generalization. Finally, it has been pointed out that there seem to be some syntactic constraints on QR, and these constraints seem to mirror constraints on movement in general. For example, the quantifier every country can take wide scope in (39a) but not in (39b):

(39) a. John knows a girl from every country.
     b. #John knows a girl who is from every country.

The latter has only the implausible reading implying the existence of an "international girl". Similarly, extraction of which country is possible from a prepositional phrase modifier but not a relative clause modifier:

(40) a. Which country does John know a girl from?

b. *Which country does John know a girl who is from?

Example (40b) is a violation of Ross's (1968) "Complex Noun Phrase Constraint", one of the so-called island constraints specifying islands for extraction (syntactic environments from which extraction is impossible). The oddness of (39b) might lead one to suspect that the constraints on scope are parallel to the constraints on wh-extraction. This correlation might suggest that scope readings are generated via the same kind of movement that puts wh-phrases in their place. Winter (2001a) refers to this parallel between scope and wh-extraction as the "Extraction Scope Generalization".

Another parallel has to do with coordination. A wide scope reading for every man is possible in (41a), but not in (41b), where the VP is coordinated with another VP:

(41) a. Some woman gave birth to every man.
     b. #Some woman gave birth to every man and will eventually die.

In (41b), we have only the implausible reading that every man has the same mother. Similarly, extraction of whom is possible in (42a) but not (42b), where there is coordination:

(42) a. I wonder whom Mary gave birth to.
     b. *I wonder whom Mary gave birth to and will eventually die.

The badness of (42b) falls under Ross's (1968) "Coordinate Structure Constraint". It appears that quantifiers are also subject to some kind of coordinate structure constraint. However, scope does not correlate completely perfectly with extraction, and QR is not the only possible explanation for the parallels that do exist.

Summary. While there are some phenomena that might seem prima facie to favor a QR approach, none of them constitutes a

knock-down argument in favor of it. Some may find QR easier to understand and work with and prefer it on those grounds; others may find direct compositionality more appealing. But whether QR has any solid empirical advantages or disadvantages seems to be an open question. QR is certainly an important idea to have a good grasp on, as it figures in so much writing in semantics.

6.3 Possessives

Let us now consider an application combining uniqueness presuppositions with quantification. It is often suggested that possessive descriptions contain a hidden the. One prima facie reason for believing this is that possessive descriptions with definite possessors in argument position typically behave like (constant) terms, i.e., behave as if they have denotations of type e. For example, Mary's rabbit gives rise to a contradiction in (43) whereas some rabbit does not:

(43) a. Mary's rabbit is in the cage and Mary's rabbit is outside the cage. (contradictory)
     b. Some rabbit is in the cage and some rabbit is outside the cage. (not contradictory)

This would be expected if Mary's rabbit denoted a single rabbit. Mary's rabbit also behaves like an individual-denoting expression when it comes to conjunction:

(44) a. Mary's rabbit is from South Africa and Mary's rabbit is a secret agent.
        ⟺ Mary's rabbit is a secret agent from South Africa.
     b. Some rabbit is from South Africa and some rabbit is a secret agent.
        ⇎ Some rabbit is a secret agent from South Africa.

However, as Löbner (2011) discusses, these tests do not always indicate type e status when the possessor is not definite. He points

out that at least one child's pet rabbit does not behave like it refers to a particular individual for the purposes of conjunction, as shown in (45):

(45) At least one child's pet rabbit is from South Africa and at least one child's pet rabbit is a secret agent.
     ⇎ At least one child's pet rabbit is a secret agent from South Africa.
     (not contradictory)

Yet there still appears to be a kind of uniqueness presupposition: even though it doesn't refer to a particular rabbit, at least one child's pet rabbit is felicitous only when at least one child in the domain of discourse has exactly one pet rabbit, as shown in (46):

(46) At least one child's pet rabbit isn't in the cage, and at least one child's rabbit is in the cage.

With the help of Quantifier Raising, we can make sense of the idea that at least one child's rabbit is an expression of type e, and we can coherently maintain that possessives are inherently definite in the face of data like this.

In order to analyze these cases, we need to have an analysis of possessives. It is a bit tricky, because in a case like John's sister, the possessive relation seems to come from the noun, whereas in John's car, the possessive relation comes from context. As Kamp (2001) observes, John could own the car, or have been assigned to the car, or any number of other things. Nouns like sister are called RELATIONAL NOUNS, and are typically analyzed as being of type ⟨e,⟨e,t⟩⟩. So sister would be translated as a two-place predicate.

One way to distinguish between relational and non-relational nouns is with of-PP complements as in (47a) vs. the so-called DOUBLE GENITIVE construction shown in (47b):

(47) a. a portrait of Picasso
     b. a portrait of Picasso's

The example without possessive 's in (47a) describes a portrait in which Picasso is depicted. In (47b), Picasso could have made the portrait, or he could own it, etc. The

restricted interpretation of (47a) can be explained based on the following assumptions. There is one sense of portrait on which it denotes a binary predicate; on that sense, portrait is a transitive noun, and requires an object, just as transitive verbs require an object. On that sense, the argument to the noun is the thing depicted in the portrait, and the of-PP construction in (47a) serves to feed the object to the noun. So the of-PP construction in (47a) is not a possessive construction at all; it is simply a construction in which a transitive noun takes an argument. There is another sense of portrait which is a simple unary predicate, and that is the sense that surfaces in (47b), where the possessive ’s invokes a contextually salient possessive relation.

To deal with this, we will take inspiration from Vikner & Jensen’s (2002) idea that in cases like John’s car, an ⟨e,t⟩-type noun undergoes a type shift to become an ⟨e,⟨e,t⟩⟩-type noun whose meaning introduces an underspecified possessive relation poss. The type-shift is defined as follows:

Type-Shifting Rule 3. Possessive shift
If ⟪α⟫ is of type ⟨e,t⟩, then:
  ⟪POSS α⟫ = λyλx.[⟪α⟫(x) ∧ poss(x,y)]
(unless y or x is free in ⟪α⟫; in that case, use different variables of the same type)

For example, this type-shift applied to car will produce λyλx.[car(x) ∧ poss(x,y)], so the possessive shift turns car into car of, so to speak.

One of the virtues of the possessive shift is that it makes it possible to account for the ambiguity of examples involving former such as the following:
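The effect of the possessive shift can be illustrated with a small executable sketch, modeling ⟨e,t⟩-type meanings as Python predicates over a toy domain. The domain, the car predicate, and the particular resolution of the underspecified poss relation are illustrative assumptions, not part of the text; here poss takes the possessed thing first, matching Type-Shifting Rule 3.

```python
# A minimal sketch of the possessive shift (Type-Shifting Rule 3),
# modeling <e,t>-type noun meanings as Python predicates.

def car(x):
    # illustrative <e,t> noun meaning over a toy domain
    return x in {"car1", "car2"}

# an underspecified possessive relation, resolved here by stipulation;
# pairs are (possessed thing, possessor), as in poss(x, y)
poss = {("car1", "john"), ("car2", "mary")}

def possessive_shift(noun):
    """Turn an <e,t> noun into an <e,<e,t>> 'noun of' meaning:
    lambda y . lambda x . noun(x) & poss(x, y)."""
    return lambda y: (lambda x: noun(x) and (x, y) in poss)

car_of = possessive_shift(car)
print(car_of("john")("car1"))  # True: car1 is a car that John 'possesses'
print(car_of("john")("car2"))  # False: car2 stands in poss to Mary instead
```

The shifted meaning is a curried two-place predicate, just like the translation of a relational noun such as sister.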

(48) John’s former mansion
     ‘the mansion that John formerly owned’
     ‘the former mansion that John currently owns’

If the possessive shift occurs between former and mansion, then we get one reading, but if it occurs above former mansion, then we get another reading. A full analysis of this particular example would require us to be able to talk about time, which we have so far ignored, so let us move on without getting into detail about that.

We will also assume that possessive ’s is an identity function on binary predicates, so that any noun that it combines with has to undergo the possessive shift:10

(49) ⟪’s⟫ = λR.R

(Recall that R is a variable of type ⟨e,⟨e,t⟩⟩.) The possessive ’s will form a constituent with the modified noun.11 This may strike the reader as odd, but it is not so strange in light of other constructions involving clitics. (Clitics are things that are in-between a morpheme and a word.) For example, in I’m tired, it is not unreasonable to say that ’m forms a syntactic constituent with tired. So the types for John’s rabbit will work out as follows:

10 Vikner & Jensen (2002), following Partee (1983/1997), assume a more complicated analysis of ’s.
11 Barker (1995) assumes that the possessive construction involves a silent head that combines with the following noun and serves as “glue” for the possessive construction. In effect, this analysis assumes that the possessive ’s sits in the position where Barker has a silent head.

(John’s rabbit)
  DP : ⟨e,t⟩
    DP : e   (John)
    D′ : ⟨e,⟨e,t⟩⟩
      D : ⟨⟨e,⟨e,t⟩⟩,⟨e,⟨e,t⟩⟩⟩   (’s)
      NP : ⟨e,t⟩ ⇑ ⟨e,⟨e,t⟩⟩   (rabbit, via the possessive shift)

To get a type e interpretation of John’s rabbit, we can assume that there is a silent definite article in the syntax, or we can assume that John’s rabbit undergoes a type-shifting operation that does the same job. In fact, Partee (1986) already posited a type-shifting operation that would do this job, called iota.

Type-Shifting Rule 4. Iota shift
If ⟪α⟫ is of type ⟨e,t⟩, then ⟪IOTA α⟫ = ιx.⟪α⟫(x)
(unless x is free in ⟪α⟫; in that case, choose a different variable).

If we assume that this type-shifting operation applies to John’s rabbit, then we will end up with the following translation for the phrase:

(50) ιx.[poss(j,x) ∧ rabbit(x)]
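The uniqueness presupposition behind (50) can be made concrete with a small sketch of iota over a finite domain. The domain and facts are invented for illustration, and, following the notation in (50), poss here takes the possessor as its first argument.

```python
# A sketch of Partee's iota shift over a finite domain: iota(P) returns
# the unique individual satisfying P, and signals presupposition failure
# otherwise. Domain and facts are illustrative assumptions.

DOMAIN = {"r1", "r2", "john", "mary"}
rabbit = {"r1", "r2"}
poss = {("john", "r1"), ("mary", "r2")}   # poss(possessor, thing), as in (50)

def iota(pred):
    witnesses = [x for x in sorted(DOMAIN) if pred(x)]
    if len(witnesses) != 1:
        raise ValueError("presupposition failure: no unique witness")
    return witnesses[0]

# (50): iota x . [poss(j, x) & rabbit(x)]
johns_rabbit = iota(lambda x: ("john", x) in poss and x in rabbit)
print(johns_rabbit)  # r1

# two individuals satisfy 'rabbit', so a bare 'the rabbit' fails here
try:
    iota(lambda x: x in rabbit)
except ValueError as e:
    print(e)
```

The failure branch corresponds to the case where the descriptive content does not pick out exactly one individual, which is where the definiteness tests in (43)-(46) get their bite.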

Exercise 18. Give a tree analysis of John’s rabbit, showing logical representations at each node. At each node, indicate which composition rules and type-shifting operations you use, if any.

Now let us consider how to analyze a case with a quantified possessor. We can do it with Quantifier Raising. The possessor, which is a quantifier, undergoes QR, leaving a trace in its original position. The possessive noun undergoes the iota shift, yielding, in effect, a definite description containing a variable bound by the possessor,12 so QR is making our lives easier here. The syntactic structure and types will be as follows:

12 This is an instance of the ‘binding out of DP’ problem. See Barker (2005) and Francez (2011) regarding directly compositional analyses of quantified possessives.

(51)
  S : t
    DP : ⟨⟨e,t⟩,t⟩   (some child)
    LP
      1
      S : t
        DP : ⟨e,t⟩ ⇑ e   (via the iota shift)
          DP : e   (t1)
          D′ : ⟨e,⟨e,t⟩⟩
            D : ⟨⟨e,⟨e,t⟩⟩,⟨e,⟨e,t⟩⟩⟩   (’s)
            NP : ⟨e,t⟩ ⇑ ⟨e,⟨e,t⟩⟩   (pet rabbit, via the possessive shift)
        VP : ⟨e,t⟩   (is a secret agent)

Exercise 19. Derive the translation for Some child’s pet rabbit is a secret agent step-by-step.

Exercise 20. Consider examples (45) and (46) again in light of the foregoing discussion. Is it possible to account for them while still assuming that the possessive DP at least one child’s pet rabbit is of type e? Explain why or why not.

6.4 Pronouns

What is the meaning of a pronoun like he? If I point to Jared Kushner and say:

(52) He is suspicious.

then he refers to Jared Kushner. But I could point to Jeff Sessions and say the same thing, in which case he would refer to Jeff Sessions. Alternatively, if Jeff Sessions is on TV then he is sufficiently salient for the same utterance to pick him out. In this case, I don’t have to point, of course. In the previous cases, the pronoun is used DEICTICALLY. I could also raise Jeff Sessions to salience by talking about him:

(53) Jeff Sessions lied under oath about his contacts with Russia. He is suspicious.

In this case, the pronoun is used ANAPHORICALLY, as it has a linguistic antecedent.

Both the deictic and the anaphoric uses can be accounted for under the following hypothesis:

Hypothesis 1. All pronouns refer to whichever individual is most salient at the moment when the pronoun is processed.

(Note that we are setting aside gender and animacy features for the moment.) Individuals can be brought to salience in any number of ways: through pointing, by being visually salient, or by being raised to salience linguistically.

The problem with Hypothesis 1 is that there are some pronouns that don’t refer to any individual at all. The following examples all have readings on which it is intuitively quite difficult to answer the question of which individual the pronoun refers to.

(54) No woman blamed herself.

(55) Neither man thought he was at fault.

(56) Every boy loves his mother.

(57) the book such1 that Mary reviewed it1

So not all pronouns are referential. Note that it is sometimes said that no woman and herself are “coreferential” in (54), but this is strictly speaking a misuse of the term “coreferential” because, as Heim and Kratzer point out, coreference implies reference.

Exercise 21. Give your own example of a pronoun that could be seen as referential, and your own example of a pronoun that could not be seen as referential.

The pronouns in examples (54)-(57) can be analyzed as bound variables, using Predicate Abstraction. We saw a preview of this above in connection with example (33a) (Every woman blamed herself), which was analyzed above as:

(58) ⟪blamed herself⟫ = λy.blamed(y, v₁)

If we combine this with every woman, then we will end up with a formula that contains a free variable:

     ∀x.[woman(x) → blamed(x, v₁)]

and this is not the right reading. In (57), there is coindexation between such and it. Such-relatives can be treated much like relative clauses; the trigger for the abstraction in this case is such, which is coindexed with a pronoun rather than a trace. The analysis works as follows:13

13 Let us assume that x is a distinct variable from v₁.

(59)
  DP : e   ιx.[book(x) ∧ read(m,x)]
    D : ⟨⟨e,t⟩,e⟩   λP.ιx.P(x)   (the)
    NP : ⟨e,t⟩   λx.[book(x) ∧ read(m,x)]
      NP : ⟨e,t⟩   λx.book(x)   (book)
      AP : ⟨e,t⟩   λv₁.read(m,v₁)
        such1
        CP : t   read(m,v₁)
          C : ⟨t,t⟩   λp.p   (that)
          S : t   read(m,v₁)
            DP : e   m   (Mary)
            VP : ⟨e,t⟩   λx.read(x,v₁)
              V : ⟨e,⟨e,t⟩⟩   λy.λx.read(x,y)   (read)
              DP : e   v₁   (it1)

Exercise 22. Give the types and a fully β-reduced logical translation for every node of the following tree.

  NP
    NP   (man)
    AP
      such2
      CP
        C   (that)
        S
          DP   (Ed)
          VP
            V   (read)
            DP
              D   (the)
              NP
                NP   (book)
                CP
                  which1
                  S
                    DP   (he2)
                    VP
                      V   (wrote)
                      DP   (t1)

Let us consider the possibility that pronouns should always be treated as bound variables.14

Hypothesis 2. All pronouns are translated as bound variables.

14 Heim & Kratzer (1998, pp. 116-118) define a distinction between free and bound variables in natural language in their ‘direct interpretation’ framework, where the interpretation is relative to an assignment function that applies directly to pronouns and traces, rather than their interpretations. Because we are using ‘indirect interpretation’, we can let the notions of ‘free’ and ‘bound’ for natural language expressions be inherited from their standard conceptions in logic.

What this means is that whenever a pronoun occurs in a sentence, the sentence translates to a formula in which the variable corresponding to the pronoun is bound. Predicate Abstraction is the only mechanism by which variables get bound, so if this condition holds, then the pronoun is part of an expression that translates as a lambda abstraction binding the logical variable corresponding to the pronoun, so there is a corresponding kind of binding at the logical level.

Hypothesis 2 has a number of undesirable consequences. It is not completely crazy to imagine that proper names can undergo QR, but it is counterintuitive that QR should allow a DP to move across a sentence boundary. It would mean that for cases like (53), we would have to QR Jeff Sessions to a position where it binds he in the second sentence somehow. If that were possible, then we would get all sorts of interpretations for quantifiers that we never get. For example, we should get a reading where everybody binds he in the sequence I don’t think everybody should be invited. He will just be a bore.

Moreover, we don’t want to treat all pronouns as bound variables because there are some ambiguities that depend on a distinction between free and bound interpretations. For example, in the movie Ghostbusters, there is a scene in which the three Ghostbusters Dr. Peter Venkman, Dr. Raymond Stantz, and Dr. Egon Spengler (played by Bill Murray, Dan Aykroyd, and Harold Ramis, respectively) are in an elevator. They have just started their Ghostbusters business and received their very first call, from a fancy hotel in which a ghost has been making disturbances. They have their proton packs on their backs and they realize that they have never been tested.

(60) Dr. Ray Stantz: You know, it just occurred to me that we really haven’t had a successful test of this equipment.
     Dr. Egon Spengler: I blame myself.
     Dr. Peter Venkman: So do I.

There are two readings of Peter Venkman’s quip: a sympathetic

reading and a reading on which he is, as usual, being a jerk. On the SLOPPY reading (the sympathetic reading), Peter blames himself, and on the STRICT reading (the asshole reading), Peter blames Egon. The strict/sloppy ambiguity exemplified in (60) can be explained by saying that on one reading, we have a bound pronoun, and on another reading, we have a referential pronoun. The anaphor so picks up the property ‘x blames x’ on the sloppy reading, which is made available through QR thus:

(61) [S [DP I] [LP 1 [S [DP t1] [VP [V blame] [DP myself1]]]]]

The strict reading can be derived from an antecedent without QR:

(62) [S [DP I] [VP [V blame] [DP myself1]]]

This suggests that pronouns are sometimes bound, and sometimes free. Note, however, that currently we have not said anything about how to interpret deictic pronouns like I, nor how we might ensure that myself is interpreted as co-referential with I, so a full explanation of this contrast awaits an answer to these questions.

Exercise 23. Which reading – strict or sloppy – involves a bound interpretation of the pronoun? Which reading involves a free interpretation?

Hypothesis 3 (Heim and Kratzer’s hypothesis). All pronouns are variables: referential pronouns are interpreted as free variables, and bound pronouns are interpreted as bound variables.

In the following examples, the pronoun in the sentence is free:

(63) [S [DP She1] [VP [V is] [A nice]]]

(64) [S [DP John] [VP [V hates] [DP [D his1] [NP father]]]]

But in these examples, the pronoun is bound:

(65) [S [DP Every boy] [LP 1 [S [DP t1] [VP [V loves] [DP [D his1] [NP father]]]]]]

(66) [S [DP John] [LP 1 [S [DP t1] [VP [V hates] [DP [D his1] [NP father]]]]]]

Whether or not QR takes place will be reflected in a free/bound distinction in the logical translation. The denotation of the sentences with free pronouns will depend on an assignment.

Exercise 24. What empirical advantages does Hypothesis 3 have over Hypotheses 1 and 2? Summarize briefly in your own words, using example sentences where necessary.

This way of treating pronouns suggests that assignment functions can be thought of as being provided by the discourse context. As Heim & Kratzer (1998) put it:

    Treating referring pronouns as free variables implies a new way of looking at the role of variable assignments. Until now we have assumed that an LF whose truth-value varied from one assignment to the next could ipso facto not represent a felicitous, complete utterance. We will no longer make this assumption. Instead, let us think of assignments as representing the contribution of the utterance situation.

Still, it is not appropriate to say She left! if your interlocutor has no idea who she refers to. We can capture this idea using dynamic semantics, discussed in the next chapter.
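The free/bound distinction can be illustrated with a toy sketch (not from the text): the denotation of a sentence containing a free pronoun varies from assignment to assignment, while a bound reading is assignment-independent. The model and predicate names are invented for illustration.

```python
# Toy illustration of free vs. bound pronouns relative to assignment
# functions g, which map variable indices to individuals.

likes = {("john", "mary"), ("john", "john")}   # illustrative facts

def free_pronoun_reading(index):
    # 'John likes him_i': the pronoun's value is supplied by g
    return lambda g: ("john", g[index]) in likes

def bound_reading():
    # 'John likes himself' with the variable bound: g-independent
    return lambda g: ("john", "john") in likes

g1 = {1: "mary"}
g2 = {1: "sue"}
print(free_pronoun_reading(1)(g1), free_pronoun_reading(1)(g2))  # True False
print(bound_reading()(g1) == bound_reading()(g2))                # True
```

On the view just quoted, the discourse context supplies something playing the role of g1 or g2, which is what the next chapter's dynamic machinery makes precise.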

7 | Dynamic semantics

7.1 Pronouns with indefinite antecedents

The way of treating pronouns just introduced in the previous chapter brings us very close to DYNAMIC SEMANTICS,1 where the meaning of an utterance is something that depends on and updates the current discourse context. In dynamic semantics, an indefinite noun phrase like a man introduces a new DISCOURSE REFERENT into the context, and an anaphoric pronoun or definite description picks up on the discourse referent. One of the main motivations for dynamic semantics comes from examples involving pronouns whose antecedents are indefinite descriptions, as in the following two-sentence discourse:

(1) John found a cat. Then it ran away.

So far, we have analyzed indefinite descriptions as existential quantifiers. This was Russell’s (1905) treatment. There are good reasons to favor Russell’s treatment of indefinites over one on which indefinites refer to some individual, as Heim (1982) discusses. First, it correctly captures the fact that (2) does not imply that there is a dog that John and Mary are both friends with.

(2) John is friends with a dog and Mary is friends with a dog.

1 Heim 1982, 1983b, 1991; Groenendijk & Stokhof 1990; Kamp & Reyle 1993; Muskens 1996, among others

If we assumed that a dog referred to some dog, then we would predict this sentence to have that implication. Second, Russell’s analysis correctly captures the fact that (3) does not say regarding some particular dog that it came in, in contrast to (4), which has a proper name referring to a dog and does have that implication.

(3) It is not the case that a dog came in.

(4) It is not the case that Fido came in.

Thirdly, Russell’s analysis correctly captures the fact that (5) can be true even if it is not the case that there is some particular dog that everybody owns. If a dog referred to a dog, then (5) should mean that every child owns that dog, as (6) does; but (5) does not have that implication.

(5) Every child owns a dog.

(6) Every child owns Fido.

However, there are some problems. If we analyze example (1) using Russell’s very sensible analysis, we will derive the following representation (assuming that the pronoun carries the index 3, and that a sequence of two sentences is interpreted as the conjunction of the two sentences):

(7) ∃x[cat(x) ∧ found(j,x)] ∧ ran-away(v₃)

Here the scope of the existential quantifier introduced by a cat does not extend all the way to include the variable v₃, so v₃ is an unbound variable outside the scope of the existential quantifier. (It doesn’t matter which variable we choose; even if we choose x, the variable will still be unbound, because it will be outside the scope of the existential quantifier, and there is no other variable-binder to bind it.)
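The scope problem in (7) can be made concrete by evaluating the two conjuncts against a toy model; the model and the Python encoding are illustrative assumptions. The first conjunct is assignment-independent because x is bound by the existential, but the truth of the second conjunct depends entirely on what the assignment supplies for v₃.

```python
# Sketch of why the pronoun variable in (7) stays unbound: the
# existential closes off x within the first conjunct only, so the second
# conjunct's variable must be supplied by the assignment g.

DOMAIN = {"felix", "rex"}
cat = {"felix"}
found = {("john", "felix")}
ran_away = {"felix"}

def conjunct1(g):
    # exists x [cat(x) & found(j, x)]  -- x is bound, g plays no role
    return any(x in cat and ("john", x) in found for x in DOMAIN)

def conjunct2(g):
    # ran-away(v3)  -- v3 is free, so truth varies with g
    return g[3] in ran_away

print(conjunct1({3: "rex"}))                            # True for any g
print(conjunct2({3: "felix"}), conjunct2({3: "rex"}))   # True False
```

Nothing in the formula forces g to map 3 to the found cat, which is exactly the missing connection between the pronoun and its intuitive antecedent.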

Exercise 1. Give LF trees and derivations for the two sentences in (1). (Feel free to treat ran away as a single verb.) Explain why these representations do not capture the connection between the pronoun and its intuitive antecedent.

One imaginable solution to this problem is to allow QR to move quantifiers to take scope over multiple-sentence discourses, so we could get the following representation:

(8) ∃x[cat(x) ∧ found(j,x) ∧ ran-away(x)]

Regarding this imaginable solution, Heim (1982, 13) writes the following:

    This analysis was proposed by Geach [1962, 126ff]. [The formula] provides the truth condition for the bisentential text as a whole, but it fails to specify, and apparently even precludes specifying, a truth condition for the [first] sentence. It implies as a general moral that the proper unit for the semantic interpretation of natural language is not the individual sentence, but the text.

Heim (1982) also presents a number of empirical arguments against this kind of treatment. One comes from dialogues like the following:

(9) a. A dog came in.
    b. What did it do next?

(10) a. A man fell over the edge.
     b. He didn’t fall, he jumped.

What would a Geachian analysis be for a case like (9)? If we let the existential quantifier take scope over the entire discourse, we

would get the meaning ‘there exists an x such that x is a man and x fell over the edge and x didn’t fall over the edge and x jumped’. This is self-contradictory. But the English sentence would not be. Example (10) presents a similarly puzzling challenge.

Another argument that Heim makes against the Geachian analysis is based on the following example:

(11) John owns some sheep. Harry vaccinated them.

On the Geachian analysis, the interpretation would be something along the lines of ‘there exists an x such that x is a bunch of sheep and John owns x and Harry vaccinated x’. This sentence should be false in a situation where John owns six sheep, of which Harry vaccinated three; but the Geachian representation would be true in such a situation, so this is not a welcome prediction.

Thirdly, Geach’s proposal would mean that existential quantifiers have different scope properties from other quantifiers. Consider the following examples:

(12) A dog came in. It lay down under the table.

(13) Every dog came in. #It lay down under the table.

(14) No dog came in. #It lay down under the table.

In neither (13) nor (14) can it be bound by the quantifier in the first sentence.2 Heim (1982, 17) concludes:

2 There is a phenomenon called telescoping, counterexemplifying the generalization that every cannot take scope beyond the sentence boundary. Examples include:
(i) Every story pleases these children. If it is about animals, they are excited; if it is about witches, they are enchanted; and if it is about humans, they never want me to stop.
(ii) Each degree candidate walked to the stage. He took his diploma from the dean and returned to his seat.
(From Poesio & Zucchi 1992, “On Telescoping”)

    The generalization behind this fact is that an unembedded sentence is always a “scope-island,” i.e., a unit such that no quantifier inside it can take scope beyond it. This generalization (which is just a special case of the structural restrictions on quantifier-scope and pronoun-binding that have been studied in the linguistic literature) is only true as long as the putative cases of pronouns bound by existential quantifiers under Geach’s analysis are left out of consideration.

Thus it seems that Geach’s solution will not do, and we need another alternative.

So-called ‘donkey anaphora’ is another type of case involving pronouns with indefinite antecedents that motivates dynamic semantics. The classic ‘donkey sentence’ is:

(15) If a farmer owns a donkey, then he beats it.

This example is naturally interpreted as a universal statement, representable as follows:

(16) ∀x∀y[[farmer(x) ∧ donkey(y) ∧ own(x,y)] → beats(x,y)]

But the representation that we would derive for it using the assumptions that we have built up so far would be:

(17) ∃x∃y[farmer(x) ∧ donkey(y) ∧ own(x,y)] → beats(x′,y′)

where the existential quantifiers have scope only over the antecedent of the conditional. This analysis leaves the pronouns unbound; clearly it does not deliver the right meaning. Similar problems arise with indefinite antecedents in relative clauses:

(18) Every man who owns a donkey beats it.

Exercise 2. Give a representation in Lλ capturing the intuitively correct truth conditions for (18). Then give an LF tree and a derivation for (18) using the assumptions that we have built up so far. Does this derivation give an equivalent result? If so, explain. If not, give a situation (including a particular assignment function) where one would be true but the other would be false.

According to Geach (1962), we must simply stipulate that indefinites are interpretable as universal quantifiers that can have extra-wide scope when they are in conditionals or in a relative clause. But this is more of a description of the facts than an explanation for what is happening. Moreover, it is not as if just any relative clause allows for a wide-scope universal reading of an indefinite within it:

(19) A friend of mine who owns a donkey beats it.

There is no wide-scope universal reading for a donkey here.

Heim’s (1982) idea is that indefinites have no quantificational force of their own, but are like variables, which may get bound by whatever quantifier there is to bind them. This is supported by the fact that their quantificational force seems quite adaptable; witness the following equivalences:

(20) In most cases, if a table has lasted for 50 years, it will last for 50 more. ⇐⇒ Most tables that have lasted for 50 years will last for another 50.

(21) Sometimes, if a cat falls from the fifth floor, it survives. ⇐⇒ Some cats that fall from the fifth floor survive.

(22) If a person falls from the fifth floor, he or she will very rarely survive. ⇐⇒ Very few people that fall from the fifth floor survive.

The quantificational force arises from the indefinite’s environment. However, on Heim’s view, indefinites are unlike pronouns in that they introduce a ‘new’ referent, while pronouns pick up an ‘old’ referent: an indefinite contributes a new referent, and a pronoun picks out an established discourse referent. This idea of novelty is formulated in the context of dynamic semantics, where, as a sentence or text unfolds, we construct a representation of the text using discourse referents.

The idea of a DISCOURSE REFERENT is laid out by Karttunen (1976), which opens as follows:

    Consider a device designed to read a text in some natural language, interpret it, and store the content in some manner, say, for the purpose of being able to answer questions about it. To accomplish this task, the machine will have to fulfill at least the following basic requirement. It has to be able to build a file that consists of records of all the individuals, that is, events, objects, etc., mentioned in the text and, for each individual, record whatever is said about it.

Karttunen characterizes discourse referents as follows: “the appearance of an indefinite noun phrase establishes a discourse referent just in case it justifies the occurrence of a coreferential pronoun or a definite noun phrase later in the text.” For Karttunen, any kind of anaphor-antecedent relationship qualifies as coreference, in this sense.3 Thus a discourse referent need not correspond to any actual individual, and does not necessarily imply a referent: there are examples in which the occurrence of a coreferential pronoun or definite noun phrase is justified, as in No man wants his reputation dragged through the mud, but no particular individual is talked about, even if reference does not take place. A discourse referent is more like a

3 Here, Karttunen is using “coreference” in a looser manner than the one Heim & Kratzer (1998) advocate when they say that “coreference implies reference”.

placeholder for an individual, very much like a variable. According to Karttunen, one of the virtues of this notion is that it “allows the study of coreference to proceed independently of any general theory of extralinguistic reference” (p. 367).

Karttunen (1976) also pointed out that discourse referents have a certain LIFESPAN. Here is an example where a discourse referent dies:

(23) Bill didn’t find a cat and keep it. #It is black.

The pronoun it in the second sentence cannot refer back to the discourse referent that the it in the first sentence picks up. The lifespan of that discourse referent ends with the scope of negation. Examples (13) and (14) above provide further examples in which one can see evidence of lifetime limitations for discourse referents. So while indefinites seem to introduce discourse referents with an unusually long life span, compared to other apparently quantificational expressions, the discourse referents they introduce are not immortal: they do not license subsequent anaphora in perpetuity. A good theory should account for both sides of this tension.

7.2 File change semantics

Heim’s (1982) FILE CARD SEMANTICS conceptualizes discourse referents as file cards, very much building on Karttunen’s metaphor. In file card semantics, an indefinite introduces a new file card, and subsequent anaphoric reference updates the file card. For example, consider the discourse in (24):

(24) a. A woman was bitten by a dog.
     b. She hit him with a paddle.
     c. It broke in half.
     d. The dog ran away.

The first sentence contains two indefinites, a woman and a dog. These trigger the introduction of two new file cards; call them file card 1 and file card 2. File card 1 is associated with the properties ‘woman’ and ‘bitten by 2’, and file card 2 is associated with the properties ‘dog’ and ‘bit 1’. Pictorially, we can represent the situation like this:

  1: woman; bitten by 2
  2: dog; bit 1

After the second sentence, a third card is added to the file, and the first two cards are updated thus:

  1: woman; bitten by 2; hit 2 with 3
  2: dog; bit 1; was hit by 1 with 3
  3: paddle; used by 1 to hit 2

And so forth, so that by the end of the discourse, the file looks like this:

(25)
  1: woman; bitten by 2; hit 2 with 3
  2: dog; bit 1; was hit by 1 with 3; ran away
  3: paddle; used by 1 to hit 2; broke in half

The definite description the dog is assumed to behave just as an anaphoric pronoun, and the descriptive content (dog) serves merely to identify the appropriate discourse referent.

Exercise 3. Add a sentence to (24) and show what the file would look like afterwards.

Like Karttunen, Heim wishes to distinguish between discourse referents (i.e., file cards) and the things that they talk about. But

she reasons that such an identification would be absurd:

    ...an indefinite NP [introduces] a discourse referent... Some people might disagree with this identification and maintain that discourse referents are... what the file cards describe, not a set of discourse referents... But such a distinction gains us nothing and creates puzzling questions: File cards usually describe more than one thing equally well.

This conception of file cards as descriptions is key to understanding how truth is conceptualized in file change semantics, because a file card is just a description and in principle it could match any number of individuals. In file change semantics, it is not formulas that are true or false, but files (i.e., sets of file cards). The truth of a file like (25) depends on whether it is possible to find a sequence of individuals that match the descriptions on the cards. For example, consider the following two worlds. Assume that in both worlds, Joan and Sue are women, Fido and Pug are dogs, and Paddle is a paddle.

  World 1                            World 2
  Pug bit Joan                       Fido bit Joan
  Joan hit Pug with Paddle           Joan hit Fido with Paddle
  Paddle broke in half               Paddle broke in half
  Pug ran away                       Fido ran away

In both worlds, it is possible to find a sequence of individuals that match the descriptions. In World 1, the sequence is ⟨Joan, Pug, Paddle⟩ (corresponding respectively to file cards 1, 2, and 3), and in World 2, it is ⟨Joan, Fido, Paddle⟩. So the file is true relative to both worlds. More technically, we say that a given sequence of individuals SATISFIES a file in a given possible world if the first individual in the sequence fits the description on card number 1 in the file (according to what is true in the world), the second fits the description on card number 2, and so on. A file is TRUE (a.k.a. SATISFIABLE) in a possible world if and only if there is a sequence that satisfies it in that world.

To make this precise, we need a conceptualization of files that is amenable to formal definitions. The boxes we have drawn give a rough idea, but they do not lend themselves to this purpose. Notice that a sequence of individuals is very much like an assignment function, mapping variables to individuals. We therefore identify a file with the set of world-sequence pairs such that the sequence satisfies the file in the world. For instance, the pair consisting of World 1 and the sequence ⟨Joan, Pug, Paddle⟩ would be in the set of world-sequence pairs making up the file represented by (25). So would the pair consisting of World 2 and the sequence ⟨Joan, Fido, Paddle⟩.

In file change semantics, the context is represented as a file, and the meaning of a sentence is constituted by its potential to update the context: a CONTEXT CHANGE POTENTIAL. The meaning of a sentence is not any particular file; rather, it constitutes a set of instructions for updating a given file, so the meaning of a sentence is a FILE CHANGE POTENTIAL. Recall that in a static framework, the meaning of a sentence can be identified with a set of world-assignment pairs (or model-assignment pairs): we talk about (the translation of) a sentence being true with respect to world w (or model M) and assignment function g, and the set of model-assignment pairs that satisfy the formula represent the truth conditions for the sentence. Thus the difference between static semantics and dynamic semantics can be seen as follows: Whereas in static semantics, the meaning of a sentence corresponds to a set of world-assignment pairs, the meaning of a sentence in dynamic semantics corresponds to a relation between sets of world-assignment pairs. On this view, the meaning of a sentence corresponds to an update to the file in the discourse. As the meaning of a sentence in a dynamic framework is something that relates an input context to an output context, the meaning would thus be a relation between two sets of world-sequence pairs.
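The definition of satisfaction can be sketched executably: a file is a set of conditions on card numbers, and a sequence satisfies the file in a world if the individuals, taken in order, verify every condition there. The encoding of worlds and conditions below is an illustrative assumption, simplifying the World 1 example.

```python
# Sketch of satisfaction in file change semantics. A condition is a tuple
# whose first element is a property/relation name and whose remaining
# elements are card numbers; card n is matched by seq[n-1].

from itertools import product

file_conditions = [("woman", 1), ("dog", 2), ("bit", 2, 1)]

world1 = {("woman", "Joan"), ("woman", "Sue"),
          ("dog", "Pug"), ("dog", "Fido"),
          ("bit", "Pug", "Joan")}

def satisfies(seq, conditions, world):
    def resolve(cond):
        # substitute the sequence's individuals for card numbers
        return (cond[0],) + tuple(seq[i - 1] for i in cond[1:])
    return all(resolve(c) in world for c in conditions)

print(satisfies(("Joan", "Pug"), file_conditions, world1))   # True
print(satisfies(("Joan", "Fido"), file_conditions, world1))  # False

def true_in(world, conditions, domain):
    # a file is true (satisfiable) in a world iff SOME sequence satisfies it
    n = max(i for c in conditions for i in c[1:])
    return any(satisfies(seq, conditions, world)
               for seq in product(domain, repeat=n))

print(true_in(world1, file_conditions, {"Joan", "Sue", "Pug", "Fido"}))  # True
```

The existential quantification over sequences in true_in is exactly what makes a file card a description rather than a name: any sequence that fits will do.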

7.3 Discourse Representation Theory

File card semantics is not the only dynamic theory of meaning. Another very well-developed and well-known one is DISCOURSE REPRESENTATION THEORY (Kamp & Reyle, 1993), in which DISCOURSE REPRESENTATION STRUCTURES take the place of files. Discourse representation structures (DRSs) are in a way one big file card, with information about all of the discourse referents combined together. A DRS has two parts:

• a UNIVERSE, containing a set of discourse referents
• a SET OF CONDITIONS

Conditions can be simple, like woman(x), or complex, like ¬K or K ⇒ K′, where K and K′ are both DRSs. For example, the DRS for the discourse in (24) would look as follows:

    x y z
    woman(x)
    dog(y)
    paddle(z)
    bit(y, x)
    hit-with(x, y, z)
    ran-away(y)

Just as in file card semantics, this kind of structure is thought to be built up over the course of a discourse, and the meaning of a sentence can be seen as its potential to affect any DRS representing the current state of the discourse. An indefinite adds a new discourse referent to the universe, and subsequent anaphora can update the information associated with that discourse referent. So, a sentence with two indefinites like a farmer owns a donkey, spoken out of the blue, would give rise to the following DRS:

    x y
    farmer(x)
    donkey(y)
    owns(x, y)

The same sentence used as the antecedent of a conditional would appear as a DRS contained in a larger DRS, as follows:

    x y
    farmer(x)
    donkey(y)
    owns(x, y)
  ⇒
    beats(x, y)

Truth in DRT is defined relative to a DRS. Informally, a DRS K is considered to be true in a model M if there is a way of associating individuals in the universe of M with the discourse referents of K so that each of the conditions in K is verified in M. But to give an idea of how this is made precise: an EMBEDDING is a function that maps discourse referents to individuals (like an assignment or sequence). The domain of this function will always be some set of discourse referents, but it may or may not include all of the possible discourse referents; in this sense, the function may be a PARTIAL FUNCTION on the set of discourse referents. A DRS is defined to be TRUE relative to a model if there is an embedding that VERIFIES it in the model. Which embeddings verify a given DRS is determined by semantic clauses for DRSs.

So A farmer owns a donkey will be represented:

    x y
    farmer(x)
    donkey(y)
    owns(x, y)

A function g verifies this DRS with respect to a model M if:

• the domain of g contains at least x and y
• according to M, it is the case that g(x) is a farmer, g(y) is a donkey, and g(x) owns g(y)

What this means is that an embedding g verifies the DRS for A farmer owns a donkey if it assigns x to a farmer and y to a donkey that the farmer owns. Recall that an indefinite will introduce a new discourse referent into the discourse, and add the condition that the descriptive content apply to the discourse referent.

Whether or not a given embedding g verifies a given condition depends on the nature of the condition. In general, verification of a DRS is defined as follows:

(26) Verification of a DRS
Embedding g VERIFIES DRS K in model M if and only if g verifies every condition in K, and the domain of g includes every discourse referent in the universe of K.

As in predicate logic, we have models M = ⟨D, I⟩, where I assigns an extension to every predicate (farmer, donkey, owns, etc.): I(farmer) will be a set of individuals, and I(owns) will correspond to a relation. Let us use the notation M, g ⊧ φ to denote 'g verifies condition φ in model M'. The rule for deciding whether a given embedding verifies a condition like farmer(x), where a predicate applies to an argument, is defined as follows:

(27) Verification of a predication condition
M, g ⊧ π(x) iff g(x) ∈ I(π), where π is a predicate symbol and M = ⟨D, I⟩.

So g verifies farmer(x) with respect to model M = ⟨D, I⟩ if and only if g(x) ∈ I(farmer).
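The verification clauses (26) and (27) can be sketched directly. The encoding of DRSs, models, and embeddings below is my own, not the book's:

```python
# A DRS is a pair (universe, conditions); farmer(x) is encoded as
# ("farmer", "x") and owns(x,y) as ("owns", "x", "y").  An embedding is a
# dict from discourse referents to individuals.  Model values are invented.
drs = (["x", "y"], [("farmer", "x"), ("donkey", "y"), ("owns", "x", "y")])

model = {  # the interpretation function I
    "farmer": {"a", "b", "c"},
    "donkey": {"d", "e", "f"},
    "owns": {("a", "d"), ("b", "e"), ("b", "f")},
}

def verifies_condition(model, g, cond):
    """(27): M,g |= pi(x1,...,xn) iff the values of the arguments under g
    are in I(pi)."""
    pred, *args = cond
    vals = tuple(g[a] for a in args)
    ext = model[pred]
    return (vals[0] in ext) if len(vals) == 1 else (vals in ext)

def verifies_drs(model, g, drs):
    """(26): g verifies K iff dom(g) covers K's universe and g verifies
    every condition in K."""
    universe, conditions = drs
    return all(u in g for u in universe) and \
           all(verifies_condition(model, g, c) for c in conditions)

print(verifies_drs(model, {"x": "a", "y": "d"}, drs))  # True
print(verifies_drs(model, {"x": "a", "y": "e"}, drs))  # False: a owns no e
```

The DRS is true in the model because at least one embedding (x → a, y → d) verifies it.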

According to the rules that we have set out, this DRS will be true in M if there is an embedding g with a domain that includes x and y which verifies all three of the conditions: in other words, if there are indeed x and y such that x is a farmer and y is a donkey and x owns y. Under this treatment, indefinites in unembedded sentences like A farmer owns a donkey are interpreted essentially as existential quantifiers.

Exercise 4. Suppose that your friend doesn't understand why this is so. Explain it to them so that they say 'Aha!'.

Another kind of atomic condition is equality:

(28) Verification of an equality condition
M, g ⊧ x = y iff g(x) = g(y)

This says that embedding g verifies the condition 'x = y' in model M if g(x) is the same entity as g(y).

Verifying a negated condition such as the following is a bit more complex. Suppose that this is the representation for Pedro does not own a donkey:

    x
    Pedro(x)
    ¬   y
        donkey(y)
        owns(x, y)

Intuitively, this should be true if and only if there is no way to assign a value to x such that x is Pedro and there is some individual y such that y is a donkey and x owns y. The verification of negated conditions is defined with the help of some auxiliary notions:

• Compatibility: We say that two functions f and g are COMPATIBLE if they assign the same values to those arguments for which they are both defined; i.e., f and g are compatible if for any a which belongs to the domain of both f and g: f(a) = g(a).

• Extension: g is called an EXTENSION of f if g is compatible with f and the domain of g includes the domain of f. Thus if g is an extension of f, then f and g assign the same values to all arguments for which f is defined, while g may (though it need not) be defined for some additional arguments as well.

Returning to negation:

(29) Verification of a negated condition
An embedding function f verifies a condition of the form ¬K with respect to model M iff there is no function g such that:
• g extends f
• g verifies K

Thus, a function f verifies the negated condition in the DRS for Pedro does not own a donkey iff:
• f verifies Pedro(x), and
• there is no function g such that: (i) g extends f, and (ii) g verifies

    y
    donkey(y)
    owns(x, y)

This gives us results for negated sentences containing indefinites on par with Russell's treatment: just as with negated existentials, a negated sentence containing an indefinite that takes scope under the negation will be true only if there is no object in the model satisfying the relevant description. Furthermore, the fact that the discourse referent is introduced in a DRS that is nested within another DRS, "shielded" from the top level by a negation symbol, as it were, gives us the tools to account for the fact that a donkey cannot serve as the antecedent for a pronoun in a later sentence. We will not go through how this works here; suffice it to say that the discourse referent is not ACCESSIBLE for subsequent anaphora in this position.

Exercise 5. Partially specify a model M = ⟨D, I⟩ where Pedro does not own a donkey is true, by specifying the value of I for the relevant constants. Then give an embedding function f that verifies the negated condition in the DRS for Pedro does not own a donkey in M, and explain why it verifies that condition.

The semantics of conditionals uses the concept of extensions among embedding functions as well:

(30) Verification of a conditional condition
An embedding function f verifies a condition of the form K ⇒ K′ with respect to model M if and only if: for all extensions g of f that verify K, there is an extension h of g that verifies K′.

The intuitive idea is something like the following: to verify a conditional statement, first consider what kind of embedding would be necessary to verify the antecedent.
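Both the negation clause (29) and the conditional clause (30) rely on searching over extensions. As a quick mechanical check of (29), here is a sketch of the Pedro example over a finite domain; the toy model (in which Pedro owns no donkey) and the helper names are invented for illustration:

```python
from itertools import product

domain = ["p", "d1"]
I = {"Pedro": {"p"}, "donkey": {"d1"}, "owns": set()}  # Pedro owns nothing

def verifies(g, conds):
    """g verifies each atomic condition pred(args) iff the values under g
    are in I(pred)."""
    for pred, *args in conds:
        vals = tuple(g[a] for a in args)
        val = vals[0] if len(vals) == 1 else vals
        if val not in I[pred]:
            return False
    return True

def extensions(f, new_vars):
    """All extensions of f that additionally assign values to new_vars."""
    for values in product(domain, repeat=len(new_vars)):
        yield {**f, **dict(zip(new_vars, values))}

# (29): f verifies the negated condition iff no extension g of f verifies
# the embedded DRS [y : donkey(y), owns(x, y)].
f = {"x": "p"}
pedro_ok = verifies(f, [("Pedro", "x")])
negation_holds = not any(verifies(g, [("donkey", "y"), ("owns", "x", "y")])
                         for g in extensions(f, ["y"]))
print(pedro_ok and negation_holds)  # True: Pedro owns no donkey here
```

Adding ("p", "d1") to I["owns"] would make the search find a verifying extension, so the negated condition would fail, as expected.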

Consider the DRS for the donkey sentence:

(31)
    x y
    farmer(x)
    donkey(y)
    owns(x, y)
  ⇒
    beats(x, y)

For an arbitrary embedding f, we want to determine whether every extension g of f that verifies the antecedent DRS has an extension h of g that verifies the consequent DRS. It turns out that this semantics for conditionals allows for a unified account of indefinites across the full range of uses we have seen. In particular, although unembedded indefinites get an existential interpretation, indefinites acquire universal import in conditionals, and indefinites can bind from antecedent to consequent.

Suppose we have a model M = ⟨D, I⟩ such that the following statements hold:

(32) I(Pedro) = a
     I(farmer) = {a, b, c}
     I(donkey) = {d, e, f}
     I(owns) = {⟨a, d⟩, ⟨b, e⟩, ⟨b, f⟩}
     I(beats) = {⟨a, d⟩, ⟨b, e⟩, ⟨b, f⟩}

Let f be the null embedding, the one that has the empty set as its domain. The extensions g of f that verify the antecedent are the ones that assign x to a farmer and y to a donkey that is owned by the farmer. For example, this criterion would be satisfied by an embedding that assigns x to a and y to d, like this:

    g = [ x → a, y → d ]
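Before walking through the reasoning by hand, note that the conditional condition (30) can be checked mechanically over the model in (32); the encoding below is mine:

```python
from itertools import product

domain = {"a", "b", "c", "d", "e", "f"}
I = {
    "farmer": {"a", "b", "c"},
    "donkey": {"d", "e", "f"},
    "owns": {("a", "d"), ("b", "e"), ("b", "f")},
    "beats": {("a", "d"), ("b", "e"), ("b", "f")},
}

def verifies_antecedent(g):
    return (g["x"] in I["farmer"] and g["y"] in I["donkey"]
            and (g["x"], g["y"]) in I["owns"])

def verifies_consequent(g):
    return (g["x"], g["y"]) in I["beats"]

# (30): the null embedding verifies K => K' iff every extension g that
# verifies K has an extension h verifying K'.  The consequent DRS has an
# empty universe, so the only extension of g we need to consider is g.
gs = [{"x": x, "y": y} for x, y in product(domain, repeat=2)]
antecedent_embeddings = [g for g in gs if verifies_antecedent(g)]
ok = all(verifies_consequent(g) for g in antecedent_embeddings)

print(len(antecedent_embeddings))  # 3: (a,d), (b,e), (b,f)
print(ok)                          # True: the conditional is verified
```

The three embeddings found are exactly the farmer-donkey ownership pairs, and each one already verifies the consequent.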

Now, given that embedding, consider whether or not the consequent has to hold. In this case, there is an extension h of g that verifies the consequent, namely g itself, since a beats d. Indeed, since the own relation is exactly the same as the beat relation in this model, given any assignment g that verifies the antecedent, there will always be an extension h of g that verifies the consequent, namely g itself. In other words, for every case where we have a pair x and y where x is a farmer and y is a donkey owned by the farmer, the farmer in that pair also beats that donkey. If that condition did not hold, then the conditional condition would be false. Hence we have universal import for indefinites in conditional sentences.

Exercise 6. Using the assumptions about the model given in (32), list all of the embeddings g that verify the antecedent DRS in (31). For each of those embeddings, give an embedding h that verifies the consequent.

Exercise 7. Change the model specified in (32) so that the condition in (31) is not satisfied, and name the embedding g that verifies the antecedent but does not have an extension h that verifies the consequent.

Exercise 8. Draw a DRS for If a farmer beats a donkey, then he beats a friend of the donkey, and give a model in which the conditional is (non-trivially) satisfied. Give an example of an embedding g and an extension h of g such that g verifies the antecedent and h verifies the consequent. (You may start from the empty embedding ∅.)

This chapter has only given a taste of dynamic semantics, enough to show that it has the power to deal smoothly with the apparently variable force of indefinites. Geurts & Beaver (2011) provide a more thorough overview, including more on the notion of 'accessibility', which constrains the 'life-span' of discourse referents.

The interested student is encouraged to start there and work backwards from the references cited there. Note also that we have not touched on how composition works, i.e., how files or DRSs are to be constructed from an LF representation. This is a tricky area, but the interested student is encouraged to consult Muskens (1996) for an elegant approach to composition in dynamic semantics, called Compositional DRT.

8 ∣ Presupposition

8.1 Definedness conditions

In Chapter 5, we saw one example of a presuppositional expression, namely the definite determiner the. We translated the definite determiner using ι-expressions, which fail to denote when nothing satisfies the description. In that case, we assumed that the denotation was a special 'undefined individual', denoted #e, pronounced 'zilch', standing for a 'completely alien entity' not in the set of individuals.1 But the definite article is one of many presupposition triggers, and even if we are satisfied with our treatment of the definite article, we still need a more general way of dealing with presupposition.

The determiners both and neither, for example, come with presuppositions. In a context with three candidates for a job, it would be quite odd for someone to say either of the following:

(1) a. Both candidates are qualified.
    b. Neither candidate is qualified.

If there are two candidates and both are qualified, then (1a) is clearly true and (1b) is clearly false. But if there is any number of candidates other than two, then it is hard to say whether these sentences are true or false. This suggests that both candidates and neither candidate come with a presupposition that there are exactly two candidates.

1 Other notations that have been used for the undefined individual include Kaplan's (1977) †, Landman's (2004) 0, and Oliver & Smiley's (2013) O.

As the reader may have noticed, we have been assiduously avoiding plural noun phrases so far, so we will ignore both and focus on neither. We can model this presupposition by treating neither as a variant of every that is only defined when its argument is a predicate with exactly two satisfiers. Let us use |P| = 2 (suggesting 'the cardinality of P is 2') as a way of writing the idea that predicate P has exactly two satisfiers.2 This is what is presupposed. To signify that it is presupposed, we will use Beaver & Krahmer's (2001) ∂ 'partial' operator. A formula like this:

∂[|P| = 2]

can be read, 'presupposing that there are exactly two Ps'. The lexical entry for neither can be stated using the ∂ operator as follows:

(2) neither ↝ λPλQ. [∂(|P| = 2) ∧ ¬∃x. [P(x) ∧ Q(x)]]

This says that neither is basically a synonym of no, carrying an extra presupposition: that there are exactly two Ps.

In order to be able to give translations like this, we need to augment Lλ to handle formulas containing the ∂ symbol. Let us call our new language ∂L. To implement this, we must add the following clause to our syntax rules:

(3) Syntax of definedness conditions
If φ is an expression of type t, then ∂(φ) is an expression of type t.

In this new language, ∂(φ) will be a kind of expression of type t. Its value will be 'true' if φ is true and 'undefined' otherwise. We define the semantics of these expressions as follows:

(4) Semantics of definedness conditions
If φ is an expression of type t, then:

J∂(φ)KM,g = 1 if JφKM,g = 1
          = m otherwise

2 |P| = 2 is short for ∃x∃y[x ≠ y ∧ P(x) ∧ P(y) ∧ ¬∃z[z ≠ x ∧ z ≠ y ∧ P(z)]].
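The definedness operator and the lexical entry in (2) can be sketched with three truth values, using the weak Kleene conjunction that will be defined in Table 8.1 below (1 and 0 are the classical values, 'm' is undefined; the encoding is mine, and the candidate sets are invented):

```python
M = "m"  # the undefined truth value

def partial(phi):
    """(4): the ∂ operator is 1 if phi is 1, undefined otherwise."""
    return 1 if phi == 1 else M

def conj(p, q):
    """Weak Kleene conjunction: undefinedness is infectious."""
    if p == M or q == M:
        return M
    return 1 if p == 1 and q == 1 else 0

def neither(P, Q):
    """(2): ∂(|P| = 2) ∧ ¬∃x[P(x) ∧ Q(x)], with predicates as sets."""
    presup = partial(1 if len(P) == 2 else 0)
    body = 0 if P & Q else 1
    return conj(presup, body)

print(neither({"c1", "c2"}, set()))        # 1: two candidates, none qualified
print(neither({"c1", "c2", "c3"}, set()))  # m: presupposition failure
print(neither({"c1", "c2"}, {"c1"}))       # 0: a qualified candidate exists
```

With three candidates the sentence comes out undefined rather than false, which is the intended effect of ∂.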

The lexical entry in (2) will give us the following analysis for (1b), where β-reduced variants of the translations are given at each node:3

(5)  ∂(|candidate| = 2) ∧ ¬∃x. [candidate(x) ∧ qualified(x)]

     neither candidate: λQ. [∂(|candidate| = 2) ∧ ¬∃x. [candidate(x) ∧ Q(x)]]
         neither: λPλQ. [∂(|P| = 2) ∧ ¬∃x. [P(x) ∧ Q(x)]]
         candidate: candidate
     is qualified: qualified

The translation for the whole sentence (at the top of the tree) should have a defined value in a model if |candidate| = 2 is true in the model. If it has a defined value, then its value is equal to that of ¬∃x. [candidate(x) ∧ qualified(x)].

The quantifier every is also sometimes argued to come with a presupposition. For example, one might hesitate to judge the following sentence as true:

(6) Every unicorn in the world is white.

Assuming that there are no unicorns in the world, this sentence is true given the analysis of every that we have given so far: since there are no unicorns in the world, there are no unicorns in the world that are not white, and this is sufficient to make the sentence true according to the semantics we have given. But something feels wrong with (6). It seems to imply, in some sense of imply (as Strawson would say), that there are unicorns, suggesting that there might be something wrong with our analysis of every.

3 Notice that β-reduction works as usual under this way of doing things. Although the notation here is quite similar to Heim & Kratzer's (1998) notation for partial functions, the style of handling undefinedness laid out in Heim & Kratzer (1998) does not allow for β-reduction to work properly, so the present system is a bit cleaner in this respect, and the expressive power is the same.

But (6) does not entail that there are unicorns; if there are no unicorns, we would hesitate to judge it as true or false. We can capture this using the following kind of analysis of every:

(7) ⟪every⟫ = λPλQ. [∂(∃x. P(x)) ∧ ∀x. [P(x) → Q(x)]]

This will give rise to an undefined value for (6) in models where there are no unicorns (such as the one corresponding to reality), capturing the intuition that the sentence is neither true nor false.

Exercise 1. Notice that Anna isn't the only girl does not presuppose that there is only one girl (and in fact asserts the opposite). Explain why this is a problem for the Fregean analysis of definite descriptions. Based on examples like this, Coppock & Beaver (2015) argue for an analysis using the following lexical entries (where |P| ≤ 1 means 'the cardinality of P is less than or equal to 1' and is technically a shorthand for a more complicated well-formed formula in first-order logic):

• ⟪the⟫ = λPλx. ∂(|P| ≤ 1) ∧ P(x)
• ⟪only⟫ = λPλx. P(x) ∧ ∀y[y ≠ x → ¬P(y)]

Draw a tree for Anna isn't the only girl and give a compositional derivation. (Note that the only girl will turn out to be an expression of type ⟨e, t⟩ under this analysis; uses of definites in argument positions are thought to involve type-shifting operations on this view.) Explain why the truth and definedness conditions that you derive for the sentence are satisfied only when there are multiple girls.

Exercise 2. In preparation for an interview with Jimmy Carter, Stephen Colbert explained that Carter was "the only president to

have lived in public housing, other than all of them" (emphasis added). Assuming that this is a funny joke, explain why it is funny, using ∂L to articulate the contribution of the emphasized part to the meaning.

In setting up a logic with three truth values, a number of decisions have to be made. For example, what if φ is undefined and ψ is true: is φ ∧ ψ undefined or false? If we take undefinedness to represent 'nonsense', then presumably the conjunction of nonsense with anything is also nonsense; same for disjunction. And the negation of an undefined formula is also presumably undefined. This leads to the truth tables in Table 8.1. These connectives are called the WEAK KLEENE connectives. In the truth tables for the binary connectives, the truth value of the lefthand conjunct (or disjunct) is represented by the row labels and the truth value of the righthand conjunct (or disjunct) is represented by the column labels; the value in the table is the value for the conjoined (or disjoined) formula.

    ∧ | T F m      ∨ | T F m      ¬ |        ∂ |
    T | T F m      T | T T m      T | F      T | T
    F | F F m      F | T F m      F | T      F | m
    m | m m m      m | m m m      m | m      m | m

Table 8.1: Truth tables for the connectives in ∂L

Now, what happens when a universal quantifier scopes over a presupposition operator? Consider the following example:

(8) Every boy loves his cat.

This would be translated:

(9) ∀x. [boy(x) → loves(x, ιy[cat(y) ∧ has(x, y)])]

This formula will be true in a model where every element of D satisfies the formula boy(x) → loves(x, ιy[cat(y) ∧ has(x, y)]). But the formula will take on the value 'undefined' when x is a boy who doesn't happen to have a cat. What if there are such boys? Should that make the sentence as a whole have an undefined truth value? Or should we say that the sentence is true as long as every boy who has a cat loves it? In other words, if the assortment of truth values that the scope proposition takes on contains both 1 and #, should the truth value for the universal proposition be 1 or should it be #?

Different authors have advocated different answers to this question. Haug (2013) allows a universal claim to be true as long as it contains no 0s (unless it is always undefined), while LaPierre (1992) and Muskens (1995) assign the value # to a universal claim consisting of 1s and #s. As Muskens (1995) discusses, however we set things up, if we want to maintain that ∀xφ is equivalent to ¬∃x¬φ, it should be the case that our universal quantifiers 'match' our treatment of conjunction, and our existential quantifiers 'match' our treatment of disjunction. So a universal claim should be seen as a big conjunction, and an existential claim should be seen as a big disjunction. This leads to the following treatment of universal quantification:

J∀xφKM,g = 1 if JφKM,g[x→k] = 1 for all k ∈ D
         = # if JφKM,g[x→k] = # for some k ∈ D
         = 0 otherwise

So a universal claim is false only if the scope proposition never takes on an undefined value and is not always true. Maintaining the equivalence between ∀xφ and ¬∃x¬φ then yields the following interpretation of the existential quantifier:

J∃xφKM,g = 0 if JφKM,g[x→k] = 0 for all k ∈ D
         = # if JφKM,g[x→k] = # for some k ∈ D
         = 1 otherwise

So an existential claim is true only if the scope proposition never takes on an undefined value and is not always false.

Should the undefined individual be considered part of D? Let's assume not, because if we did, then too many universal and existential claims would turn out to be undefined.

Another slightly thorny issue is identity. Under what circumstances do we want to say that a given sentence of the form α = β is true, given that α or β might denote the undefined individual? We certainly don't want it to turn out that the king of France is the grand sultan of Germany is a true statement. To deal with this issue, LaPierre (1992) defines identity between two terms as follows:

• If neither α nor β denotes the undefined individual, then α = β is true with respect to M and g if JαKM,g = JβKM,g, and false otherwise.
• If one of α or β denotes the undefined individual, then α = β is false.
• If both denote the undefined individual, then α = β is undefined (the rationale being that not enough is "known" about the objects to determine whether they are the same or distinct).

This treatment avoids the conclusion that the king of France is grand sultan of Germany is true. In general, predications involving the undefined individual will themselves be undefined. But there is one case in which we might want the predication to be true.
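Over a finite domain, these quantifier clauses and LaPierre's identity clause can be sketched as follows (the Python encoding, including the object standing in for the undefined individual, is mine; the boy names are invented):

```python
M = "m"          # the undefined truth value
ME = object()    # stands in for the undefined individual of type e

def forall(domain, phi):
    """Universal quantification: # beats 0, matching big weak Kleene conjunction."""
    vals = [phi(k) for k in domain]
    if all(v == 1 for v in vals):
        return 1
    return M if M in vals else 0

def exists(domain, phi):
    """Existential quantification: # beats 1, matching big weak Kleene disjunction."""
    vals = [phi(k) for k in domain]
    if all(v == 0 for v in vals):
        return 0
    return M if M in vals else 1

def equals(a, b):
    """LaPierre's identity: undefined only when both terms are undefined."""
    if a is ME and b is ME:
        return M
    if a is ME or b is ME:
        return 0
    return int(a == b)

# Scope values for 'x loves x's cat': true of b1 and b3, undefined of b2
# (a boy with no cat):
val = {"b1": 1, "b2": M, "b3": 1}
print(forall(val, lambda k: val[k]))  # m
print(exists(val, lambda k: val[k]))  # m
print(equals(ME, "a"))                # 0: e.g. the king of France is not a
```

Note that ∀ and ∃ remain duals under negation here, and identity with a single undefined term comes out false rather than true.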

Consider the following famous sentence discussed by Russell (1905):

(10) The golden mountain does not exist.

One possible treatment of this sentence is with an existence predicate, which is true of actual individuals and false of undefined individuals. Let us denote this predicate exists. Then the translation of (10) would be:

(11) ¬exists(ιx. [golden(x) ∧ mountain(x)])

(This of course is not the only possible treatment of this example.) The semantics of exists might then be defined as follows:

(12) Jexists(α)KM,g = 1 if JαKM,g ≠ #e, and 0 otherwise

Assuming that ιx. [golden(x) ∧ mountain(x)] denotes #e, the sentence is correctly predicted to be true under this treatment.

We are now ready to give the full semantics for ∂L. We leave the syntax of the language implicit, and just give the semantics here. As usual, types are associated with domains. Type e is associated with the domain of individuals De = D, and type t is associated with the domain of truth values Dt = {1, 0, #}. For functional types ⟨σ, τ⟩, there is a domain D⟨σ,τ⟩ consisting of the (total) functions from Dσ to Dτ. For every type τ, there is also an 'undefined individual' of that type, which we refer to as mτ.

Expressions are interpreted with respect to a model and an assignment. A model is a tuple ⟨D, I⟩ subject to the following constraints:

• The domain of individuals D contains at least one individual.
• I is an interpretation function, assigning a denotation to all of the constants of the language. The denotation of a constant of type τ is a member of Dτ.

An assignment g is a total function whose domain consists of the variables of the language such that if u is a variable of type τ then

g(u) ∈ Dτ. We use g[x → d] to denote an assignment function which is exactly like g with the possible exception that it maps x to d.

The semantic rules are the following.

1. Basic Expressions (same as before)
(a) If α is a non-logical constant, then JαKM,g = I(α).
(b) If α is a variable, then JαKM,g = g(α).

2. Predication (same as before)
If α is an expression of type ⟨a, b⟩, and β is an expression of type a, then Jα(β)K = JαK(JβK).

3. Equality
If α and β are terms, then:
Jα = βKM,g = 1 if JαKM,g ≠ me and JβKM,g ≠ me and JαKM,g = JβKM,g
           = # if JαKM,g = me and JβKM,g = me
           = 0 otherwise

4. Connectives
The semantics of the connectives is defined as in Table 8.1.

5. Quantification
(a) If φ is a formula and v is a variable of type a, then
J∀vφKM,g = 1 if JφKM,g[v→k] = 1 for all k ∈ D
         = # if JφKM,g[v→k] = # for some k ∈ D
         = 0 otherwise
(b) If φ is a formula and v is a variable of type a, then
J∃vφKM,g = 0 if JφKM,g[v→k] = 0 for all k ∈ D
         = # if JφKM,g[v→k] = # for some k ∈ D
         = 1 otherwise

6. Lambda Abstraction (same as before)
If α is an expression of type a and u a variable of type b, then Jλu.αKM,g is that function h from Db into Da such that for all objects k in Db, h(k) = JαKM,g[u↦k].

This is just one example of a complete system; other design choices are of course possible.

8.2 Projection problem

This treatment of presupposition captures the fact, discussed in the first chapter, that presuppositions project. If there are no candidates, then Every candidate is qualified has no truth value, and neither does It is not the case that every candidate is qualified: the presupposition projects over negation. Both imply, in some sense of imply, that there are candidates. The presupposition also projects from the antecedent of a conditional: If every candidate is qualified, then it doesn't matter who we pick, as a whole, communicates that the speaker takes there to be at least two candidates, as does the question Is every candidate qualified?

But presuppositions do not always project, as Karttunen (1973) discussed. Consider the following examples:

(13) If there is a king of France, then the king of France is wise.
(14) Either there is no king of France or the king of France is wise.

Neither of these sentences as a whole implies that there is a king of France. Imagine the presuppositions floating up from deep inside the sentence, so to speak, and getting trapped when they meet if/then or either/or. In Karttunen's terms, if/then and either/or are FILTERS, which do not let all presuppositions "through". The problem of determining when a presupposition projects is called the PROJECTION PROBLEM.
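The projection prediction can be seen directly in the three-valued system: undefinedness percolates up through the weak Kleene connectives. A minimal sketch (my own encoding):

```python
M = "m"  # undefined

def neg(p):
    return M if p == M else int(not p)

def conj(p, q):
    return M if M in (p, q) else int(p and q)

# 'Every candidate is qualified' with no candidates: presupposition
# failure makes the sentence undefined...
sentence = M
# ...and negating it, or conjoining it with something true, stays undefined:
print(neg(sentence))      # m
print(conj(sentence, 1))  # m
```

So in the static system, embedding a presupposition-failing sentence under negation or conjunction cannot rescue it; filtering requires something more.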

Operators like if/then and either/or do let some presuppositions through, however. In (13), for example, the consequent (the king of France is wise) presupposes that there is a king of France, and the antecedent of the conditional is there is a king of France. The antecedent entails of course that there is a king of France, so the presupposition gets filtered out. In (14), the second disjunct (the king of France is wise) presupposes that there is a king of France. The first disjunct is there is no king of France, whose negation is there is a king of France, which again entails of course that there is a king of France, so the presupposition gets filtered out.

When no such entailment holds, the presupposition "passes through the filter", so to speak, for example:

(15) If France is neutral, then the king of France is wise.
(16) Either France is lucky or the king of France is wise.

In (15), the antecedent, France is neutral, amounts to France is staying out of the war, which doesn't entail that there is a king of France, so the presupposition does not get filtered out. In (16), the negation of the first disjunct likewise does not entail that there is a king of France, so the presupposition does not get filtered out.

Karttunen gave the following generalization:

(17) When the antecedent of the conditional (the if-part) entails a presupposition of the consequent (the then-part), the presupposition gets filtered out.

With a disjunction, the generalization is as follows:

(18) A presupposition of one disjunct gets filtered out when the negation of another disjunct entails it.

Observe that generalizations (17) and (18) use the word entail. In (13) and (14), the part of the sentence that is supposed to entail the presupposition is simply equivalent to the presupposition. But it could also be stronger, and strictly entail the presupposition. Consider the following example from Karttunen (1973):

(19) Either Geraldine is not a mormon or she has stopped wearing her holy underwear.

The first disjunct is Geraldine is not a mormon, and the second disjunct (she has stopped wearing her holy underwear) presupposes that Geraldine has holy underwear. If we assume that all mormons have holy underwear, then the negation of the first disjunct (Geraldine is a mormon) entails that Geraldine has holy underwear. So Karttunen's generalization correctly captures the fact that (19) does not presuppose that Geraldine has holy underwear.

The system that we have introduced for dealing with presuppositions might seem to predict that presuppositions will always project, since undefinedness tends to "percolate up", so to speak. However, note that Beaver & Krahmer (2001) argue that presupposition projection can in fact be handled in an empirically adequate manner in a static semantics with three truth values (true, false and undefined), using the ∂ operator, as we have done here. The projection problem has also been dealt with elegantly using dynamic semantics, where the meaning of a sentence is a "context change potential": a function that can update a discourse context. This will be discussed in the next section.

8.3 Presupposition in Dynamic Semantics

The two generalizations about when presuppositions get filtered that were just identified (in (17) and (18)) can be stated concisely and illuminatingly using Karttunen's (1974) concept of LOCAL CONTEXT: in general, a presupposition gets filtered out if it is entailed by the appropriate local context. The local context for the consequent of a conditional is its antecedent, and the local context for one disjunct of a disjunction is the negation of the other disjunct. In (19), the local context for the second disjunct is the negation of the first disjunct, i.e., Geraldine is a mormon.

This idea builds on Stalnaker's (1978) ideas about the pragmatics of presupposition. Stalnaker introduces the concept of the

CONTEXT SET, which is conceived of as the set of worlds that the speakers all publicly consider possible candidates for being the actual world: the possible worlds compatible with what is presupposed. It is PROPOSITIONS that are presupposed, functions from possible worlds into truth-values. If a proposition holds in every world in the context set, it is PRESUPPOSED. Here is how Stalnaker characterizes it:

    Roughly speaking, the presuppositions of a speaker are the propositions whose truth he takes for granted as part of the background of the conversation. A proposition is presupposed if the speaker is disposed to act as if he assumes or believes that the proposition is true, and as if he assumes or believes that his audience assumes or believes that it is true as well. Presuppositions are what is taken by the speaker to be the COMMON GROUND of the participants in the conversation, what is treated as their COMMON KNOWLEDGE or MUTUAL KNOWLEDGE. But the more fundamental way of representing the speaker's presuppositions is not as a set of propositions, but rather as a set of possible worlds. This set, which I will call the CONTEXT SET, is the set of possible worlds recognized by the speaker to be the "live options" relevant to the conversation. A proposition is presupposed if and only if it is true in all of these possible worlds. The motivation for representing the speaker's presuppositions in terms of a set of possible worlds in this way is that this representation is appropriate to a description of the conversational process in terms of its essential purposes. To engage in conversation is, essentially, to distinguish among alternative possible ways that things may be. The presuppositions define the limits of the set of alternative possibilities among which

speakers intend their expressions of propositions to distinguish.

When an assertion is made, and all of the interlocutors agree to it, the contents of the assertion become part of the common ground; that is, they enter the context set. But an assertion is only felicitous when its presuppositions already hold in the context set. Let us say that the presuppositions of a sentence are SATISFIED in a given context if the context entails the presuppositions. This definition depends on a notion of entailment that can hold between contexts and sentences, which we must make precise.

Recall that a sentence φ ENTAILS another sentence ψ (written φ ⊧ ψ) if and only if whenever φ is true, ψ is true. Ignoring assignment functions for the moment, and speaking of possible worlds rather than models, let us say that the proposition expressed by a sentence is the set of possible worlds (i.e., models) in which the sentence is true. Then we can say that φ entails ψ if and only if the proposition expressed by φ is a subset of the proposition expressed by ψ: every φ-world is a ψ-world.

For example, suppose that Bart is president in w1, w2, and w3, so the proposition expressed by 'Bart is president' is:

{w1, w2, w3}

Assume further that in every world, Bart is a child. Thus a child is president in all of these worlds. But there are also other worlds where Lisa, who is also a child, is the president. Call these w4 and w5. Then the proposition expressed by 'A child is president' is:

{w1, w2, w3, w4, w5}

Since {w1, w2, w3} ⊆ {w1, w2, w3, w4, w5}, 'Bart is president' entails 'A child is president': all worlds in which the former holds are worlds in which the latter holds.
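The subset view of entailment is easy to make concrete. Here is a minimal sketch in Python, treating propositions as sets of possible worlds; the world labels and propositions are just the illustrative ones from the Bart example, not part of any official fragment:

```python
# Propositions modeled as sets of possible worlds (illustrative labels).
bart_president = {"w1", "w2", "w3"}
child_president = {"w1", "w2", "w3", "w4", "w5"}

def entails(p, q):
    """phi entails psi iff every phi-world is a psi-world (a subset test)."""
    return p <= q

print(entails(bart_president, child_president))  # True: Bart is a child in every world
print(entails(child_president, bart_president))  # False: Lisa is president in w4, w5
```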

Now, what does it mean for a sentence to be entailed by a context? As we said above, a context consists of all of the information that is presupposed; in other words, all of the information that is agreed upon, or taken for granted. We could think of this information as a set of sentences, or as the set of propositions expressed by these sentences. Or, more simply, as a single proposition, namely that proposition which is the conjunction of all the elements of the set. If propositions are sets of possible worlds, then what is the conjunction of a set of propositions? Here is a concrete example:

P = {w1, w2, w3} ['Bart is president']
Q = {w1, w2, w3, w4, w5} ['A child is president']
R = {w2, w3} ['Bart has a girlfriend']
S = {w1, w2, w4} ['Santa's little helper is sick']
W = {w1, w2, w3, w4, w5, w6, w7, w8, w9, w10} [the set of all worlds]

What is the conjunction of P and S, i.e., the conjunction of the proposition that Bart is president and the proposition that Santa's little helper is sick? It is the set of worlds where both propositions are true. That is the intersection (not the union) of the two:

P ∩ S = {w1, w2, w3} ∩ {w1, w2, w4} = {w1, w2}

So the context will constitute a set of possible worlds: those possible worlds in which all of the presupposed facts hold, i.e., the intersection of all of the agreed-upon propositions. As Heim (1983c, 399) puts it, a context is here construed, more or less, as a proposition. In dynamic terms, we can say that the update crashes when the presuppositions of the sentence are not satisfied. Recall from Chapter 7 that in dynamic semantics, the meaning of a sentence

is a context change potential, rather than a characterization of the world (as it would be if we took the meaning of a sentence to simply be a set of possible worlds). Ignoring assignment functions, the update that a sentence makes is to narrow down the set of worlds in the context set to just those in which the proposition expressed by the sentence holds. A sentence like John is happy, for example, would eliminate all worlds where John is not happy from the context set. Let us write:

c + φ

to denote the result of updating c with the proposition expressed by φ.

Exercise 3. Suppose that the context c consists of the following worlds: {w1, w2, w3, w4, w5}, and that among these worlds, it is raining in {w2, w4}. What is the result of updating c with It is raining?

Now, suppose that we have a sentence like John's son is bald, which presupposes that John has a son. If there are some worlds in the context set where John does not have a son, then the presuppositions of the sentence are not satisfied in the context set. In such a situation, we say that the context set does not ADMIT the sentence. Karttunen's idea is that in order for a context to admit a sentence, the context must entail the presuppositions of the sentence. Admittance is defined in terms of SATISFACTION:

(20) Satisfaction
Let Pφ be the set of worlds where the presuppositions of φ are satisfied. A context c satisfies the presuppositions of φ if c ⊆ Pφ.

(21) Admittance
A context c ADMITS φ if and only if c satisfies the presuppositions of φ.

Now, given a context that does not satisfy the presuppositions of a given sentence, it is easy enough to repair it so that the presuppositions are taken for granted; this process is called ACCOMMODATION (Karttunen, 1974, 184). But the idea is nevertheless that the update cannot proceed until the context is such that all the presuppositions of the sentence are satisfied.

A simple, non-compound sentence will have a set of BASIC PRESUPPOSITIONS. For example, All of Homer's children are bald presupposes that Homer has at least three children; this is a basic presupposition of this non-compound sentence. Non-compound sentences are admitted by a context as long as the context entails all of their basic presuppositions:

(22) Admittance conditions for non-compound sentences
If φ is a simple, non-compound sentence, then: a context c admits φ if and only if c satisfies the basic presuppositions of φ.

Exercise 4. Suppose that the sentence φ = All of Homer's children are bald presupposes that Homer has at least three children. Assume the following:

P = {w1, w2, w3} ['Bart is president']
Q = {w1, w2, w3, w4, w5} ['A child is president']
R = {w2, w3} ['Bart has a girlfriend']
S = {w1, w2, w4} ['Santa's little helper is sick']
W = {w1, w2, w3, w4, w5, w6, w7, w8, w9, w10} [the set of all worlds]

Suppose that in worlds w1, ..., w8, Bart, Lisa and Maggie are all children of Homer, but in w9 and w10, Maggie is actually adopted, and therefore strictly speaking not Homer's child. So the proposition that Homer has at least three children (call it K) is the set of worlds w1, ..., w8:

K = {w1, w2, w3, w4, w5, w6, w7, w8}
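Satisfaction, admittance, and update can be sketched directly as set operations. The following Python sketch is informal and not part of the formal fragment; the world names and propositions are illustrative stand-ins for the John's son example:

```python
# Contexts and propositions as sets of worlds (illustrative labels).
def satisfies(c, presup):
    """Definition (20): c satisfies the presuppositions iff c is a subset of P_phi."""
    return c <= presup

def update(c, phi, presup=None):
    """c + phi: intersect c with phi, provided c admits phi (definition (21))."""
    if presup is not None and not satisfies(c, presup):
        raise ValueError("presupposition failure: context does not admit phi")
    return c & phi

has_son = {"w1", "w2"}       # worlds where John has a son
son_bald = {"w1"}            # worlds where John's son is bald
c1 = {"w1", "w2"}            # a context entailing that John has a son
c2 = {"w1", "w2", "w3"}      # w3: John has no son

print(update(c1, son_bald, presup=has_son))   # {'w1'}
# update(c2, son_bald, presup=has_son) raises: c2 is not a subset of has_son
```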

Since our sentence φ is a simple, non-compound sentence, it is admitted in contexts c that entail K. K is a basic presupposition of φ; let us pretend that it is the only one. So Pφ = K = {w1, w2, w3, w4, w5, w6, w7, w8}.

• Suppose that c = P ∩ S. Does c admit φ? Why or why not?
• Suppose instead that c = W. Does c admit φ? Why or why not?

Now, consider (again) the contrast between the following two conditional sentences:

(23) If baldness is hereditary, then John's son is bald.
≫ John has a son.

(24) If John has a son, then John's son is bald.
≫/ John has a son.

In the first example, the sentence as a whole presupposes that John has a son (as indicated by the symbol ≫). In the second example, the sentence as a whole does not at all convey that the speaker believes that John has a son. The speaker appears quite open to the possibility that he does not. Again, in a conditional sentence of the form If A then B, if the antecedent A satisfies the presuppositions of B, then the conditional as a whole does not carry the presuppositions of B. Karttunen (1974) makes sense of this by imagining that we first update the global discourse context with A, and that it is in this temporary, hypothetical context that the presuppositions of B have to be satisfied. For conditionals, Karttunen proposes the following:

(25) Admittance conditions for a conditional sentence
Context c admits "If φ then ψ" just in case (i) c admits φ, and (ii) c + φ admits ψ.

Here c + φ designates 'c updated with φ'. The result of this update will be the same as if φ were asserted in context c: it will be the result of eliminating all worlds where φ is not true, and it will be defined if the presuppositions are satisfied.

Consider the following examples:

(26) If Homer has at least 3 children, then all of his children are bald.
(27) If Homer is bald, then all of his children are bald.

Assume the following:

A = {w1, w2, w3, w4, w5, w6, w7, w8} ['Homer has ≥3 children']
W = {w1, w2, w3, w4, w5, w6, w7, w8, w9, w10} [the universe]

Suppose that c = W. Does c admit (26)? According to the admittance conditions for conditional sentences in (25), it does just in case (i) c admits Homer has at least three children and (ii) c + Homer has at least three children admits all of his children are bald. Since Homer has at least three children carries no presuppositions, the first condition is satisfied. What about the second condition? The result of updating c with Homer has at least three children is the set A, the set of worlds where Homer indeed has at least three children. Now the question is whether this set, A, admits the non-compound sentence all of his children are bald. Since it is a non-compound sentence, the rule for non-compound sentences (22) applies. What all of his children are bald presupposes is that Homer has at least three children. This is satisfied in all of the worlds in A, so the second condition is satisfied as well. Hence c does admit (26). But the same does not hold for (27).
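The two-clause check in (25) can be sketched as a function. In the Python sketch below, the propositions are the ones assumed for (26) and (27); the world-set for 'Homer is bald' (B) is hypothetical, chosen only so that it does not entail K:

```python
# A sketch of the admittance checks in (22) and (25).
def admits_simple(c, presups):
    """(22): c admits a non-compound sentence iff c entails every basic presupposition."""
    return all(c <= p for p in presups)

def admits_conditional(c, antecedent, antecedent_presups, consequent_presups):
    """(25): c admits 'If phi then psi' iff c admits phi and c + phi admits psi."""
    return (admits_simple(c, antecedent_presups)
            and admits_simple(c & antecedent, consequent_presups))

W = {f"w{i}" for i in range(1, 11)}
A = {f"w{i}" for i in range(1, 9)}   # 'Homer has at least three children'
K = A                                # presupposition of 'all of his children are bald'
B = {"w1", "w9"}                     # 'Homer is bald' (hypothetical world-set)

print(admits_conditional(W, A, [], [K]))  # True: W admits (26)
print(admits_conditional(W, B, [], [K]))  # False: W does not admit (27)
```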

Figure 8.1: Example propositions

Exercise 5. Explain step-by-step why c (as defined in the foregoing discussion) does not admit (27).

Exercise 6. Refer to Figure 8.1, and assume that the result of updating a context with The king has a son is the intersection of A with the context if the presuppositions of 'The king has a son' are satisfied, undefined otherwise. Does C admit If the king has a son, then the king's son is bald? Why or why not? Does K admit it? Why or why not? Explain using the definition of admittance.

So now we are in a position to explain why the presupposition that Homer has at least three children projects in a case like (27), and not in a case like (26). In order for the conditional as a whole to be admitted by a given context, both of the conditions in (25)

must be met. The first condition will be met only if the presuppositions of the antecedent are already satisfied in the global context. Hence presuppositions always project from the antecedent of a conditional. The second condition will be met either if (i) the antecedent entails the presuppositions of the consequent or (ii) the global context already entails them. If the antecedent of the conditional does not entail the presuppositions of the consequent, then the global context must already entail them. Such is the situation in a case like (27), where the antecedent of the conditional does not entail the presuppositions of the consequent: in order for that sentence to be admitted in a given context, the context must already entail the presuppositions of the consequent. Hence the presuppositions project in that case.

Another way of putting Karttunen's insight is as follows: the global context incremented by the antecedent is the LOCAL CONTEXT for the consequent (Heim, 1983c, 399). This idea is quite general. We can identify a range of local contexts (c here stands for the global context):

• the consequent of a conditional → c + the antecedent
• the second conjunct in a conjunction → c + the first conjunct
• the second disjunct in a disjunction → c + the negation of the first disjunct
• the complement of a propositional attitude verb → the beliefs of the holder of the propositional attitude (e.g., Hans wants the ghost in his attic to be quiet tonight presupposes that Hans believes that there is a ghost in his attic)

In general:

(28) A context c admits a sentence S just in case each of the constituent sentences of S is admitted by the corresponding local context.

For example, consider (29) above, repeated here:

(29) Either Geraldine is not a mormon or she has stopped wearing her holy underwear.

Here we have a disjunction. The local context for the second disjunct is c + the negation of the first disjunct. The first disjunct (Geraldine is not a mormon) is itself negated; let us assume that the negation of the negated sentence can be obtained simply by removing the 'not'. So the local context for the second disjunct is c + Geraldine is a mormon. The second disjunct, she has stopped wearing her holy underwear, presupposes that Geraldine has holy underwear. Suppose it is common ground in the global context that all mormons have holy underwear. Then the local context entails that Geraldine has holy underwear. Since the local context entails this proposition, the global context need not entail it, so the presupposition is filtered out.

Exercise 7. Give another example of a disjunction in which the negation of the first disjunct entails the presuppositions of the second disjunct, and explain how the presuppositions of the second disjunct get filtered out.
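The filtering computation for (29) can be sketched as set manipulation: the local context for the second disjunct is the global context intersected with the negation of the first disjunct. The world names and propositions in this Python sketch are hypothetical, chosen so that it is common ground that all mormons have holy underwear:

```python
# A hypothetical model: every mormon-world is an underwear-world.
W = {f"w{i}" for i in range(1, 7)}
mormon = {"w1", "w2", "w3"}
has_underwear = {"w1", "w2", "w3", "w4"}

c = set(W)              # global context
local = c & mormon      # c + 'Geraldine is a mormon' (negation of the first disjunct)

print(local <= has_underwear)   # True: the local context satisfies the presupposition
print(c <= has_underwear)       # False: the global context need not entail it
```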

9 ∣ Coordination and Plurals

9.1 Coordination

Let us now consider coordination in more detail. We may include sentences with and and or among the well-formed expressions of our language by extending our syntax and lexicon as follows:

(1) Syntax
S → S JP
JP → J S

(2) Lexicon
J: and, or

To translate these into the lambda calculus, we can simply write the following (here, p and q are variables over truth values):

(3) a. andS ↝ λqλp. p ∧ q
    b. orS ↝ λqλp. p ∨ q

This will work for coordinations of sentences. For example, here is a tree for Homer smokes and Marge drinks:

(4) [Tree for Homer smokes and Marge drinks: and (J, type ⟨t,⟨t,t⟩⟩, λqλp. p ∧ q) combines with the S Marge drinks (type t, drinks(m)) to form the JP λp. p ∧ drinks(m) (type ⟨t,t⟩); this combines with the S Homer smokes (type t, smokes(h), built from Homer (NP, type e, h) and smokes (VP, type ⟨e,t⟩, λx. smokes(x))) to yield smokes(h) ∧ drinks(m) (type t).]

Exercise 1. Perform the beta reductions involved in Tree (4). This exercise can be solved in the Lambda Calculator.

Sentences are not the only kinds of expressions that can be coordinated. Here are a few examples:

(5) a. Somebody smokes and drinks. (VP and VP)
    b. No man and no woman arrived. (DP and DP)
    c. John caught and ate the fish. (V and V)

It is clear that we need to extend our grammar. Since these examples do not cover all the possibilities, though, it will not do to introduce fixes to the syntax and semantics one at a time. Instead, we

need to formulate a general pattern and then extend our syntax and semantics according to it.

An early style of analysis consisted in analyzing all coordinations, even those of constituents other than sentences, as underlyingly sentential. For example, VP coordination was analyzed as involving deletion, whereby the subject of the second sentence is silent (indicated here as strikethrough):

(6) a. Homer smokes and Homer drinks.
    b. Somebody smokes and somebody drinks.

It was soon found that this would not work. If VP coordination really was sentential coordination in disguise, then all VP coordinations should be semantically equivalent to their sentential relatives. This may be the case for simple sentences, as above. But quantifiers break this equivalence. The following two sentences are not paraphrases, as their translations into logic show:

(7) a. Somebody smokes and drinks. ∃x. smokes(x) ∧ drinks(x)
    b. Somebody smokes and somebody drinks. ∃x. smokes(x) ∧ ∃x. drinks(x)

(8) a. Everybody smokes or drinks. ∀x. smokes(x) ∨ drinks(x)
    b. Everybody smokes or everybody drinks. ∀x. smokes(x) ∨ ∀x. drinks(x)

Exercise 2. For each of the two sentence pairs above, establish that they are not equivalent by describing a scenario in which one of them is true and the other one is false.

Luckily, it is also possible to design a grammar in which coordinated constituents are directly generated syntactically, and directly interpreted semantically. We can extend the syntax by pairs of rules of the following kind, one pair for each category:

(9) Syntax
X → X JP    where X ∈ {S, DP, VP, V, ...}
JP → J X

The semantic side is trickier. It is not obvious if we can give a single meaning for each conjunction that covers all of its uses across categories. So we will first look at a few cases individually, and then generalize over them. For VP coordination, the following entries for and and or will do:

(10) a. andVP ↝ λP′λPλx. P(x) ∧ P′(x)
     b. orVP ↝ λP′λPλx. P(x) ∨ P′(x)

The following tree shows the entry for and in action. The result is what we want: the quantifier somebody takes scope over and.

(11) [Tree for Somebody smokes and drinks: and (J, type ⟨⟨e,t⟩,⟨⟨e,t⟩,⟨e,t⟩⟩⟩, λP′λPλx. P(x) ∧ P′(x)) combines with drinks (VP, type ⟨e,t⟩, λx. drinks(x)) to form the JP λPλx. P(x) ∧ drinks(x); this combines with smokes (VP, type ⟨e,t⟩, λx. smokes(x)) to yield the VP λx. smokes(x) ∧ drinks(x), which combines with Somebody (type ⟨⟨e,t⟩,t⟩, λP. ∃x. P(x)) to yield ∃x. smokes(x) ∧ drinks(x) (type t).]

The following entries can be used to coordinate transitive verbs:

(12) a. andV ↝ λR′λRλyλx. R(y)(x) ∧ R′(y)(x)
     b. orV ↝ λR′λRλyλx. R(y)(x) ∨ R′(y)(x)

Exercise 3. Using the lexical entry above, draw the tree for John caught and ate the fish. This exercise can be solved in the Lambda Calculator.

We can approach noun phrase coordination in this way as well. Let us first look at conjunctions of quantifiers:

(13) a. Every man and every woman arrived.
        [∀x. man(x) → arrived(x)] ∧ [∀x. woman(x) → arrived(x)]
     b. A man or a woman arrived.
        [∃x. man(x) ∧ arrived(x)] ∨ [∃x. woman(x) ∧ arrived(x)]

Since quantifiers have a higher type, they take verb phrases as arguments. This makes the entries for and and or very similar to their VP-coordinating counterparts:

(14) a. andDP ↝ λQ′λQλP. Q(P) ∧ Q′(P)
     b. orDP ↝ λQ′λQλP. Q(P) ∨ Q′(P)

Exercise 4. Using the lexical entries above, draw the trees for Every man and every woman arrived and A man or a woman arrived. This exercise can be solved in the Lambda Calculator.

In all of the examples so far, we have always coordinated two constituents of the same semantic type. That is not always the case. As the following example shows, a type-e noun phrase like John can be coordinated with a type-⟨⟨e,t⟩,t⟩ noun phrase:

(15) John and every woman arrived.
     arrived(j) ∧ [∀x. woman(x) → arrived(x)]

In order to be able to reuse the lexical entry above, and in order to avoid deviating from the pattern we have established so far, we will adjust the type of John to make it equal to that of every woman. For this purpose, we introduce a new type-shifting rule that introduces a possible translation of type ⟨⟨e,t⟩,t⟩ for every translation of type e:

Type-Shifting Rule 5. Entity-to-quantifier shift
If α ↝ ⟪α⟫, where ⟪α⟫ is of category e, then: α ↝ λP. P(⟪α⟫) as well.

This rule, which goes back to Montague (1973), encapsulates the insight that an individual x can be recast as the set of all the properties that x has. For example, if John ↝ j then also John ↝ λP. P(j). That translation is of type ⟨⟨e,t⟩,t⟩. Essentially, the rule inverts the predicate-argument relationship between the subject and the verb phrase of a sentence. In a sentence like John arrived, the shifted subject can take the verb phrase as an argument. In a sentence like John and every woman arrived, we are able to conjoin it with every woman using the entry andDP.

Exercise 5. Draw the tree for John and every woman arrived. This exercise can be solved in the Lambda Calculator.

So far, we have been able to keep the number of entries for each conjunction down to one per syntactic category: ⟨e,t⟩ for VP coordination, ⟨⟨e,t⟩,t⟩ for DP coordination, and ⟨e,⟨e,t⟩⟩ for coordination of transitive verbs. In fact, we can do better than that. All of the entries for conjunction and for disjunction have ∧ and ∨ at their core, respectively. And all of them operate on types that end in t. The following recursive definition will work for every type that ends in t:

(16) ⟪and⟫⟨τ,⟨τ,τ⟩⟩
     = λqλp. p ∧ q                                              if τ = t
     = λXτ λYτ λZσ1. ⟪and⟫⟨σ2,⟨σ2,σ2⟩⟩ (X(Z)) (Y(Z))            if τ = ⟨σ1, σ2⟩

For more details on this approach, see for example Partee & Rooth (1983b) and Winter (2001b).
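The recursion in (16) can be sketched in Python, with type t modeled as booleans and functional types as one-place functions. The function name gand is hypothetical; pointwise application mirrors the schema, and since logical conjunction is commutative, the order of X(Z) and Y(Z) does not matter here:

```python
def gand(x, y):
    """Generalized conjunction: conjoin booleans, else conjoin pointwise."""
    if isinstance(x, bool):
        return x and y                       # base case: type t
    return lambda z: gand(x(z), y(z))        # recursive case: type <s1, s2>

# VP coordination: two <e,t> predicates.
smokes = lambda ind: ind in {"homer"}
drinks = lambda ind: ind in {"homer", "marge"}
smokes_and_drinks = gand(smokes, drinks)
print(smokes_and_drinks("homer"))  # True
print(smokes_and_drinks("marge"))  # False

# Montague lift: an entity as the set of its properties; then DP coordination.
lift = lambda ind: (lambda P: P(ind))
homer_and_marge = gand(lift("homer"), lift("marge"))
print(homer_and_marge(drinks))     # True: both drink
print(homer_and_marge(smokes))     # False: Marge does not smoke
```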

(17) ⟪or⟫⟨τ,⟨τ,τ⟩⟩
     = λqλp. p ∨ q                                              if τ = t
     = λXτ λYτ λZσ1. ⟪or⟫⟨σ2,⟨σ2,σ2⟩⟩ (X(Z)) (Y(Z))             if τ = ⟨σ1, σ2⟩

For example, here is how the schema in (16) derives DP-coordinating and. The type of DP (after lifting entities to quantifiers if necessary) is ⟨⟨e,t⟩,t⟩. So the type of DP-coordinating and is ⟨τ,⟨τ,τ⟩⟩, where τ = ⟨⟨e,t⟩,t⟩. Since τ ≠ t, we look for σ1 and σ2 such that τ = ⟨σ1,σ2⟩. This works for σ1 = ⟨e,t⟩ and σ2 = t. We plug in these definitions into the last line of (16) and get:

(18) λX⟨⟨e,t⟩,t⟩ λY⟨⟨e,t⟩,t⟩ λZ⟨e,t⟩. ⟪and⟫⟨t,⟨t,t⟩⟩ (X(Z)) (Y(Z))

To resolve ⟪and⟫⟨t,⟨t,t⟩⟩, we apply Definition (16) once more. This time, τ = t, so the result is simply logical conjunction:

(19) and⟨t,⟨t,t⟩⟩ ↝ λqλp. p ∧ q

We plug this into the previous line and get the final result:

(20) λX⟨⟨e,t⟩,t⟩ λY⟨⟨e,t⟩,t⟩ λZ⟨e,t⟩. Y(Z) ∧ X(Z)

This is indeed equivalent to our entry for DP-coordinating and in (14a). The entry will only work if both DPs are of type ⟨⟨e,t⟩,t⟩. If necessary, one or both DPs may need to be lifted into that type first by applying the type shifter above.

(21) Homer and Marge smoke.
     smoke(h) ∧ smoke(m)

Exercise 6. Draw the tree for Homer and Marge smoke and make sure you get the result above. You will need to apply the type shifter once on each conjunct. This exercise can be solved in the Lambda Calculator.

9.2 Mereology

All of the occurrences of and that we have seen so far can be related to the meaning of logical conjunction. The schema in (16) encapsulates this relation by reducing various uses of and to logical conjunction. This will not work in every case, though:

(22) a. Homer and Marge met (in the park).
     b. Homer and Marge are a happy couple.

There is no obvious way to formulate the truth conditions of (22a) or (22b) using logical conjunction. For example, (22a) cannot be represented as met(h) ∧ met(m), since this would entail met(h) as well as met(m): it would have the nonsensical entailments that Homer met and that Marge met. This is nonsensical because a singular individual can't meet; only two or more people can. Similarly, only two or more people can be a happy couple. Predicates like meet and be a couple are called collective. They apply to collections of individuals directly, without applying to those individuals; in this case, the "collective individual" Homer-and-Marge. In the sentences above, then, the word and does not seem to amount to logical conjunction but to the formation of a collection.

To talk about such collections, we need to extend our formal setup. On the semantic side, we will add collections of individuals to our model. You might suspect that we would represent these collections as sets, so that Homer and Marge would be represented as the set that contains just these two individuals. Instead, we will extend our formal toolbox by borrowing from mereology, the study of parthood. There are many reasons for this choice. One is that using mereology for this purpose has been standard practice in formal semantics since Link (1983). Another reason is that set theory makes formal distinctions that turn out not to be needed in mereology. Where set theory is founded on two relations (∈ and ⊆), mereology collapses them into one: the parthood

relation. This relation holds both between Homer and Homer-and-Marge (where in set theory, we would use ∈), and also between Homer-and-Marge and Homer-and-Marge-and-Bart (where in set theory, we would use ⊆).

We will assume that parthood is reflexive, transitive, and antisymmetric. Reflexivity means that everything is part of itself. (This may not be intuitive, but it is a mere formal convenience, and it can be eliminated by defining a distinct notion of proper parthood: a is a proper part of b just in case a is both part of and distinct from b.) Transitivity means that if a is part of b and b is part of c, then a is also part of c. Antisymmetry means that two distinct things cannot both be part of each other.

Mereology also provides an operator, ⊕, that allows us to put individuals together to form collections. The formal objects that represent these collections in mereology are called sums. For example, the collection Homer-and-Marge is represented formally as the sum h ⊕ m. Collective predicates apply directly to such sums. For example, Homer and Marge met can be represented as met(h ⊕ m).

Exercise 7. Formulate an additional lexical entry for andDP that conjoins two entities of type e and returns their sum. Draw the tree for Homer and Marge met. Since the sum h ⊕ m is of type e, the type of the VP is ⟨e,t⟩ as usual. This exercise can be solved in the Lambda Calculator.

Mereology can be axiomatized in a way that gives rise to algebraic structures, or as it is called in mathematics, a "partial order". An algebraic structure is essentially a set with a binary operation (in this case, ⊕) defined on it. Figure 9.2 illustrates such a structure. The circles stand for the individuals Tom, Dick, and Harry, and for the sums that are built up from them. We will use the word individual to range over all the circles in this structure. We will refer to Tom, Dick, and Harry as atomic individuals; the other circles stand for individuals which are not atomic. The lines between the circles stand for the parthood relations that hold between the various individuals.

To express that the atomic individual Tom is part of the sum individual Tom-and-Dick, we will write t ≤ t ⊕ d.

Figure 9.1: An algebraic structure [atoms t, d, h at the bottom; the sums t⊕d, t⊕h, d⊕h above them; t⊕d⊕h at the top]

The branch of formal semantics that uses algebraic structures and parthood relations to model various phenomena is known as algebraic semantics. So far, we have only considered one sort, namely individuals (type e). We will assume that all individuals, including sums, will be of type e.

In figures like (9.2), t is part of t⊕d, and t⊕d is part of t⊕d⊕h; by transitivity, t is also part of t⊕d⊕h. Finally, since t is part of t⊕d, it follows that t⊕d is not part of t, according to Figure 9.2.

The fundamental assumption in algebraic semantics is that any nonempty set of things of the same sort (for example individuals or events) has one and exactly one sum: the sum of any nonempty set of individuals P is always the lowest individual that sits above every element of P. (As we will see later, this corresponds to the mathematical notion of "least upper bound".) For example, if P consists of the two atomic individuals t and d, then the lowest individual that sits above these two is t⊕d. This condition is very intuitive.
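One concrete way to realize sums and parthood, used only as an illustration and not as the textbook's official model theory, is to encode atoms as singleton frozensets, ⊕ as union, and ≤ as the subset order. The sum of a set of individuals is then automatically its least upper bound:

```python
# Atoms as singleton frozensets; sums as unions of atoms.
t, d, h = (frozenset({a}) for a in "tdh")

def osum(*xs):
    """The sum operator ⊕, generalized to one or more individuals."""
    return frozenset().union(*xs)

def part_of(a, b):
    """Parthood ≤ as the subset order."""
    return a <= b

print(part_of(t, osum(t, d)))               # True: t ≤ t⊕d
print(part_of(osum(t, d), osum(t, d, h)))   # True: t⊕d ≤ t⊕d⊕h
print(part_of(t, osum(t, d, h)))            # True: transitivity
print(part_of(osum(t, d), t))               # False: t⊕d is not part of t
print(osum(t, osum(t, d)) == osum(t, d))    # True: the sum of {t, t⊕d} is t⊕d again
```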

Sometimes the sum of P can be a member of P. For example, if P consists of t and t ⊕ d, then its sum is t ⊕ d again. And if P consists of just one individual, then its sum is that individual itself.

9.3 The plural

Coordinations of proper nouns are not the only way to talk about sums:

(23) a. Tom, Dick and Harry met.
     b. The boys met.
     c. Some boys met.
     d. Three boys met.

In each of these sentences, the collective predicate met applies to a sum x. Only (23a) fully specifies the parts of that sum, while (23b)–(23d) describe it only partially. What is the meaning of the noun boys? One way to describe it is in terms of the conditions it imposes on x: boys requires x to be the sum of a set of boys. In general, the meaning of a plural noun can be described in terms of the meaning of its corresponding singular noun. If we take P to be the set of all the entities in the denotation of the singular noun, then the plural noun denotes the set that contains any sum of things taken from P. This operation is captured by the notion of algebraic closure, which has been proposed to underlie the meaning of the plural (Link, 1983):

(24) Definition: Algebraic closure
The algebraic closure ∗P of a set P is the set that contains any sum of any nonempty subset of P.

The most straightforward way to implement this idea is to identify the meaning of the plural morpheme with the "star operator":

(25) -s ↝ λP. ∗P

For example, suppose that we are in a model with just three boys: Tom, Dick and Harry. Then the denotation of the noun boy might be modeled as {t, d, h}. The denotation of the noun boys is the algebraic closure of that set: {t, d, h, t⊕d, t⊕h, d⊕h, t⊕d⊕h}. This set contains everything that is either a boy or a sum of two or more boys.

It might seem strange to include individual boys in this set. After all, it sounds strange to say Tom are boys. Link himself proposed excluding them. But this leads to a different problem: it makes boys essentially synonymous with two or more boys. On that view, the sentence Some doctors are in the room is false if only one doctor is in the room, and No doctors are in the room is synonymous with No two or more doctors are in the room. But these sentences are not synonymous: consider the case where a single doctor is in the room; here only one of the two sentences is true. That is, boys literally means one or more boys, and the singular is a special case of the plural. For this reason we will continue to use (25) as the meaning of the plural, and rule out Tom are boys on pragmatic grounds: boy and boys are in competition, and the singular form blocks the plural form because it is more specific (Sauerland et al., 2005; Spector, 2007).

Link gave plural individuals the status of first-class citizens in the logical representation of natural language. This allowed him to represent collective predicates like meet as predicates that apply directly to sum individuals:

(26) a. Tom, (and) Dick and Harry met. ↝ meet(t ⊕ d ⊕ h)
     b. Some boys met. ↝ ∃x. ∗boy(x) ∧ meet(x)

As seen in (26a), noun phrase coordination is here translated using ⊕. Link represented sentential conjunction in a different way than noun phrase conjunction. This has the consequence that even the translations of equivalent sentences can look very different:

(27) a. Tom is a boy and Dick is a boy. ↝ boy(t) ∧ boy(d)
     b. Tom and Dick are boys. ↝ ∗boy(t ⊕ d)
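The star operator of (24)/(25) can be sketched with the same frozenset encoding of sums used above; this is an illustrative implementation choice, not the book's own formalism:

```python
from itertools import combinations

def star(P):
    """∗P: the set containing the sum of every nonempty subset of P."""
    elems = list(P)
    out = set()
    for r in range(1, len(elems) + 1):
        for combo in combinations(elems, r):
            out.add(frozenset().union(*combo))  # sum = union of the members
    return out

boy = {frozenset({"t"}), frozenset({"d"}), frozenset({"h"})}
boys = star(boy)
print(len(boys))                        # 7: three atoms, three pairs, one triple
print(frozenset({"t", "d"}) in boys)    # True: t⊕d is in ∗boy
print(boy <= boys)                      # True: the singular is a special case
```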

Exercise 8. Draw trees for the sentences in (26) and (27), using the appropriate entries for and in each case. Assume that is, a and are denote identity functions, or treat them as vacuous nodes. If binary sum is realized as and, you can use the same entry for some as in Chapter (5). Make sure that the result is as in (26) and (27). This exercise can be solved in the Lambda Calculator.

Suppose that two denotes the property of being a sum of exactly two atomic individuals (for which we will write card(x) = 2), and that it combines with boys via Predicate Modification:

(28) two ↝ λx. card(x) = 2

In order to deal with sentences like Two boys met as well as The two boys met, we will assume that in the first case there is a silent determiner with the semantics of a generalized existential quantifier:

(29) ∅D ↝ λPλP′. ∃x. [P(x) ∧ P′(x)]

But is there a linguistic expression that is able to build the sum of more than two individuals? The definite article the comes to mind as a possible candidate. When used with a plural noun as in sentence (23b), the boys could correspond to the sum of all the entities to which the predicate denoted by boys applies. In a model where the boys are Tom, Dick and Harry, this is exactly what we want: as a glance at Figure (9.2) will confirm, the sum of the set denoted by boys, and therefore the denotation of the boys, is just t⊕d⊕h.

But in other cases, such as the two boys, we run into a problem. In our model, the set denoted by two boys is {t⊕d, t⊕h, d⊕h}. As can be checked with Figure (9.2), the sum of this set is

So we end up with the rather odd prediction that the two boys refers to this sum! Intuitively, the two boys should be a presupposition failure, because there are three boys in our model. So we should build a source of presupposition failure into our meaning for the plural definite. Let us therefore interpret the P as the single individual of which P holds that contains every other individual of which P also holds (Montague, 1979):

(30) the ↝ λP ιz [P(z) ∧ ∀x [P(x) → x ≤ z]]

It turns out that this representation even works for the singular definite article. In any model where there is exactly one boy, the set denoted by boy is a singleton, and since everything is part of itself, the representation in (30) picks out the only member of that singleton. In all other models, the ι operator will not be defined.

Exercise 9. In the model where the boys are Tom, Dick, and Harry, what (if anything) do the expressions the boy, the boys, the two boys and the three boys denote? In each case, explain which presupposition arises and whether it is satisfied.

Exercise 10. Translate The boys met, Two boys met, and The two boys met. This exercise can be solved in the Lambda Calculator.

We have now seen two ways to treat conjunction of proper names. We can either represent it as ⊕ and combine the two individuals directly, or we can lift the individuals to generalized quantifiers and use the schema in (16). The two interpretive schemes lead to different representations:

(31) a. John and Bill arrived. ↝ arrive(j) ∧ arrive(b)
     b. John and Bill arrived. ↝ arrive(j ⊕ b)
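The maximality-based entry for the in (30) can be sketched in the same illustrative frozenset model, with frozenset inclusion playing the role of parthood ≤ (the encoding is ours, not the text's):

```python
from itertools import combinations

def star(pred):
    """Algebraic closure: all sums (unions) of nonempty subsets of pred."""
    return {frozenset().union(*s) for n in range(1, len(pred) + 1)
            for s in combinations(list(pred), n)}

def the(pred):
    """(30): the member z of pred of which every member of pred is a part;
    None models presupposition failure (the ι operator being undefined)."""
    for z in pred:
        if all(x <= z for x in pred):  # frozenset <= plays the role of ≤
            return z
    return None

boy = {frozenset({"t"}), frozenset({"d"}), frozenset({"h"})}
print(the(star(boy)) == frozenset({"t", "d", "h"}))  # True: the boys = t⊕d⊕h
two_boys = {s for s in star(boy) if len(s) == 2}
print(the(two_boys))                                 # None: the two boys fails
```

Applied to the closure of boy, the entry returns t⊕d⊕h, while the two boys comes back undefined in the three-boy model, mirroring the presupposition failure.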

What (31b) says is that arrive holds of the collective individual j⊕b. To make sense of this, we can assume that a collective individual arrives if and only if all the atomic individuals arrive that it consists of. Predicates with this property are called distributive. As an alternative, we can assume that arrive only holds of atomic individuals, but that whenever it combines with a plural noun phrase, it first combines with a silent plural morpheme ∅ which has the same meaning as the plural morpheme (Landman, 1989):

(32) −∅ ↝ λP. ∗P

On this approach, the relationship between sentential conjunction and conjunction of individuals can be modeled in a parallel way no matter if they combine with a noun (as in (27) with is a boy / are boys) or with a verb (as in the following pair):

(33) a. John arrived and Bill arrived. ↝ arrive(j) ∧ arrive(b)
     b. John and Bill arrived-∅. ↝ ∗arrive(j ⊕ b)

The idea is that if arrive applies to each of a given set of individuals, say a set that contains among others j and b, then ∗arrive applies to any sum drawn from that set, say j ⊕ b. In fact, some researchers assume that the star operator is inserted on every verb even when it combines with a singular subject (Krifka, 1998; Kratzer, 2007):

(34) John arrived. ↝ ∗arrive(j)

This might seem surprising but it works because ∗arrive will be true of anything arrive is also true of. Once again, the singular is a special case of the plural.

9.4 Cumulative readings

So far, we have seen three kinds of predicates that apply to sums: plural nouns like boys, pluralized distributive predicates like arrived-∅, and collective predicates like met.

All these are one-place predicates. Sums can also be related by two-place predicates, as in the following sentences:

(35) a. The men in the room are married to the girls across the hall. (Kroch, 1974)
     b. 600 Dutch firms use 5000 American supercomputers. (adapted from Scha, 1981)
     c. Tom, Dick and Harry (between them) own (a total of) four gadgets.

Sentence (35a) is true in a scenario where each of the men in the room is married to one of the girls across the hall, and each of the girls is married to one of the men. Sentence (35b) (on its relevant reading) is true in a scenario where there are a collection of 600 Dutch firms and a collection of 5000 American supercomputers, such that each of the firms uses one or more of the supercomputers, and each of the computers is used by one or more of the firms. Sentence (35c) is true in a scenario where Tom, Dick and Harry own gadgets in such a way that a total of four gadgets are owned. A widespread view is that these scenarios correspond to genuine readings of these sentences, rather than special circumstances under which they are true. These readings are then called cumulative readings.

Let us take a closer look at the ways the plural entities in these sentences are related. Just like distributive readings, cumulative readings can be modeled via algebraic closure. The idea is that if Tom owns gadget g1, Dick owns gadget g2, and Harry owns gadget g3 and also gadget g4, then the sum of Tom, Dick and Harry stands in the algebraic closure of the owning relation to the sum of the four gadgets. In order to formalize this, we need to generalize the definition of algebraic closure from sets (which correspond to one-place predicates) to n-place relations (which correspond to n-place predicates):

(36) Definition: Sum of a set of tuples
The sum of a set of tuples is the tuple whose first element is the sum of the first elements of these tuples, whose second element is the sum of the second elements of these tuples, and so on.

(37) Definition: Tuple of an n-place predicate
For a given n-place relation R, a tuple of R is any n-tuple ⟨x1, x2, . . . , xn⟩ such that R(x1)(x2) . . . (xn).

(38) Definition: Algebraic closure of an n-place predicate
The algebraic closure ∗R of an n-place predicate R is the set that contains any sum of any nonempty subset of tuples of R. We write ∗R(a, b) for ∗R(⟨a, b⟩).

We can then represent cumulative readings by using the algebraic closure of transitive verbs:

(39) Tom, Dick and Harry own four gadgets.
     ↝ ∃x. ∗gadget(x) ∧ card(x) = 4 ∧ ∗own(t ⊕ d ⊕ h, x)

An example model which verifies the formula in (39) is the one described above, where Tom owns gadget 1, Dick owns gadget 2, and Harry owns gadgets 3 and 4. The tuples of the relation denoted by "own" are the pairs (2-tuples) ⟨t, g1⟩, ⟨d, g2⟩, ⟨h, g3⟩ and ⟨h, g4⟩. The sum of these four pairs is ⟨t ⊕ d ⊕ h, g1 ⊕ g2 ⊕ g3 ⊕ g4⟩.

9.5 Formal mereology

Intuitively, the sum of some things is that which you get when you put them together. For many purposes, this rough intuition along with a quick glance at figures like (9.2) is sufficient. But if we want to prove that the logical representation of one sentence entails that of another sentence, and if these representations involve parthood or sums, then we need a precise framework in which we can substantiate our intuitions.

For example, if we want to prove that (27b) logically follows from (27a), we need to show that this is the case given certain basic assumptions about the properties of parthood and sum.

The most commonly used framework for describing the formal behavior of parthood and sum in natural language semantics is known as Classical Extensional Mereology (CEM). One of the advantages of CEM is that there are intuitive similarities between its parthood relation and set-theoretical subsethood, and between its sum operation and set-theoretical union. There are different formulations of CEM. Here is one, based on Hovda (2009). In the following, P is either an arbitrary predicate from first-order logic or an arbitrary set. Depending on the choice, the resulting system is first-order or second-order. This makes a difference because there are more sets than predicates.

(40) Axiom: Transitivity
Any part of any part of a thing is also part of that thing.

(41) Definition: Proper parthood
If something is part of a thing but not identical to it, we say that it is a proper part of that thing.

(42) Definition: Disjointness
If two things do not have any part in common, we say that they are disjoint.

(43) Axiom: Weak supplementation
Any proper part of any thing is disjoint from one of the parts of that thing.

(44) Definition: Upper bound
Something of which everything in P is part is called an upper bound of P.

(45) Definition: Least upper bound / Sum
If an upper bound of P is part of every upper bound of P, we call it a least upper bound of P, or a sum of P.
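Definitions (44) and (45) can be tried out in a small finite model. In this sketch (our illustration), the domain consists of the nonempty sums of three atoms, and parthood is modeled as subsethood; every nonempty P then turns out to have exactly one sum, as Theorem (50) below asserts:

```python
from itertools import combinations

atoms = ["t", "d", "h"]
# Domain: all nonempty sums of atoms, modeled as frozensets.
domain = [frozenset(s) for n in range(1, len(atoms) + 1)
          for s in combinations(atoms, n)]
part = lambda x, y: x <= y  # parthood modeled as subsethood

def upper_bounds(P):
    """(44): things of which everything in P is part."""
    return [u for u in domain if all(part(x, u) for x in P)]

def sums(P):
    """(45): upper bounds of P that are part of every upper bound of P."""
    ubs = upper_bounds(P)
    return [u for u in ubs if all(part(u, v) for v in ubs)]

P = [frozenset({"t"}), frozenset({"d"})]
print(sums(P) == [frozenset({"t", "d"})])  # True: the unique sum is t ⊕ d
```

In this model the least upper bound of {t, d} is t⊕d, just as Figure (9.2) would suggest.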

(46) Axiom: Existence of sums
Every nonempty P has a sum.

(47) Axiom: Filtration
Every part of any sum of P has a part in common with something in P.

These axioms jointly define classical extensional mereology. Some theorems we can prove from these axioms:

(48) Theorem: Reflexivity
Everything is part of itself. (TODO: prove this.)

(49) Theorem: Antisymmetry
Two distinct things cannot both be part of each other. (Proof: Suppose otherwise. Then there are two distinct things a and b that are both part of each other. Since they are distinct, they must be proper parts of each other. By Axiom (43), there is a part of b from which a is disjoint. Call that part c. Then c is part of b, and since b is part of a, by Axiom (40) c is part of a. By Theorem (48), c is also part of c. So a and c have a part in common, namely c, and cannot be disjoint. Contradiction.)

(50) Theorem: Uniqueness of sums
Every nonempty set has exactly one sum. (Proof: Suppose otherwise. Then there is a nonempty set S that has two distinct sums a and b. By definition (45), each of these sums is an upper bound of S and is part of every upper bound of S. So each of these sums is part of the other one. This contradicts Theorem (49).)

Theorem (50) justifies our decision to talk about "the sum" rather than "a sum" of a set P.

(51) Theorem: Associativity of sums
The sum of the sum of two things with a third thing is the same as the sum of one of these things with the sum of the second of these things with the third thing. In other words, the order in which you sum up three things doesn't matter. (TODO: prove this.)

Now we can say more precisely what we mean with the notation we have already introduced informally above: We write ⊕P for that sum.

In the special case where P consists of just two members, a and b, we abbreviate ⊕{a, b} as a ⊕ b.

We can now prove that various entailment relations hold between sentences that we had represented in ways that look rather different from each other. For example, we can prove that the assumption boy(j) ∧ boy(b) entails the conclusion ∗boy(j ⊕ b). According to Definition (24), ∗boy is the set that contains any sum of any nonempty set of boys. So, ∗boy(j ⊕ b) is true if and only if j ⊕ b is the sum of some nonempty set of boys. The obvious candidate is {j, b}. So we need to show two things: that {j, b} is a nonempty set of boys, and that j ⊕ b is its sum. By assumption, boy(j) ∧ boy(b), hence j and b are boys. So {j, b} is a nonempty set of boys. That j ⊕ b is the sum of this set follows from the definition of ⊕. This concludes the proof.

9.6 A formal fragment

Finally, we need to extend the syntax and semantics of our logic in order to adapt it to the new entities and relations we have added.

9.6.1 Logic syntax

The syntax is defined as in three-valued type logic (Lλ with three truth values as in Chapter 8), plus the following additions. We add the following primitive symbol to our syntax:

1. Parthood
If α and β are terms of type e, then α ≤ β is an expression of type t.

In addition, we have the following abbreviation conventions:

1. If φ is an expression of type ⟨e, t⟩, we write ⊕φ (read as: "the sum of φ") for the expression [ιx.[∀y.[φ(y) → y ≤ x]] ∧ [∀z.[∀z′.[φ(z′) → z′ ≤ z]] → x ≤ z]]. (That is, ⊕φ denotes the least upper bound, or sum, of φ – see Definition (45). By Theorem (50), this will be defined whenever φ applies to at least one entity.)

2. If α and β are terms of type e, we write [α ⊕ β] (read as: "the sum of α and β") for the expression ⊕[λx.x = α ∨ x = β].

3. An expression of the form [[α ⊕ β] ⊕ γ] or [α ⊕ [β ⊕ γ]] can be simplified to [α ⊕ β ⊕ γ].

9.6.2 Logic semantics

Expressions are interpreted with respect to both:

• a model M = ⟨D, I, ≤⟩ where D and I are defined as usual and ≤ is the parthood relation over individuals that obeys the conditions listed above;

• an assignment g defined as usual.

For every well-formed expression α, JαK^{M,g} is defined recursively as usual. We add the following rule:

(52) Parthood
If α and β are expressions of type e, then Jα ≤ βK = 1 if JαK ≤ JβK; otherwise Jα ≤ βK = 0.

9.6.3 English syntax

Syntax rules. We add the following rules for coordination:

(53) Syntax
X → X JP where X ∈ {S, DP, VP, . . .}
JP → J X

In addition, we add the following rule for the plural:

N → N Pl

Lexicon. Lexical items are associated with syntactic categories as follows:

D: ∅D
A: two, three etc.
J: and, or
Pl: -s
V: met, own

9.6.4 Translations

Type ⟨e, t⟩:

1. smokes ↝ λx. ∗smoke(x)
2. drinks ↝ λx. ∗drink(x)
3. two ↝ λx. card(x) = 2

Type ⟨e, ⟨e, t⟩⟩:

1. ate ↝ λyλx. ∗eat(x, y)
2. caught ↝ λyλx. ∗catch(x, y)
3. own ↝ λyλx. ∗own(x, y)

Type e:

1. Homer ↝ h
2. Marge ↝ m
3. Tom ↝ t
4. Dick ↝ d
5. Harry ↝ h

Type ⟨t, ⟨t, t⟩⟩:

1. andS ↝ λqλp. p ∧ q
2. orS ↝ λqλp. p ∨ q

Type ⟨⟨e, t⟩, ⟨⟨e, t⟩, t⟩⟩:

1. ∅D ↝ λPλQ. ∃x. [P(x) ∧ Q(x)]

Type ⟨⟨e, t⟩, e⟩:

1. the ↝ λP ιz [P(z) ∧ ∀x [P(x) → x ≤ z]]

Type ⟨⟨e, t⟩, ⟨e, t⟩⟩:

1. is ↝ λP. P
2. a ↝ λP. P
3. -s ↝ λP. ∗P

Type ⟨⟨e, t⟩, ⟨⟨e, t⟩, ⟨e, t⟩⟩⟩:

1. andVP ↝ λP′λPλx. P(x) ∧ P′(x)
2. orVP ↝ λP′λPλx. P(x) ∨ P′(x)

Type ⟨⟨e, ⟨e, t⟩⟩, ⟨⟨e, ⟨e, t⟩⟩, ⟨e, ⟨e, t⟩⟩⟩⟩:

1. andV ↝ λR′λRλyλx. R(y)(x) ∧ R′(y)(x)
2. orV ↝ λR′λRλyλx. R(y)(x) ∨ R′(y)(x)
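The pointwise coordination entries such as andVP and orVP can be tried out by modeling ⟨e, t⟩ predicates as Python functions (the toy model and names below are our illustration, not part of the fragment):

```python
# andVP ↝ λP′λPλx. P(x) ∧ P′(x);  orVP ↝ λP′λPλx. P(x) ∨ P′(x)
def and_vp(p_prime): return lambda p: lambda x: p(x) and p_prime(x)
def or_vp(p_prime):  return lambda p: lambda x: p(x) or p_prime(x)

smokes = lambda x: x in {"homer"}
drinks = lambda x: x in {"homer", "marge"}

smokes_and_drinks = and_vp(drinks)(smokes)
print(smokes_and_drinks("homer"), smokes_and_drinks("marge"))  # True False
```

The result of coordinating two ⟨e, t⟩ predicates is again an ⟨e, t⟩ predicate, which is what lets coordination apply at any verbal projection.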

10 ∣ Event semantics

10.1 Why event semantics

One of the advantages of translating natural language into logic is that it helps us account for certain entailment relations between natural language sentences. Suppose that whenever a sentence A is true, a sentence B is also true. If the translation of A logically entails that of B, then we have an explanation for this entailment. Take the following sentences:

(1) a. Homer smokes and Marge drinks.
    b. ⇒ Homer smokes.

This argument is captured by the following logical entailment:

(2) a. smokes(h) ∧ drinks(m)
    b. smokes(h)

Every model for (2a) is also a model for (2b). This pattern of inference – a longer sentence entails a shorter one – also shows up in other places. Adverbial modification is one example:

(3) a. Jones buttered the toast slowly.
    b. ⇒ Jones buttered the toast.

Here is how we would represent (3a) given the previous chapters (we are treating the toast as if it was a constant rather than a definite description, but nothing will hinge on this):

(4) butter(j, t)

If this representation is correct, (3a) is about only two entities: Jones and the toast. Which entity does slowly describe in (3a)? Is it perhaps Jones who is slow? Then we might represent the meaning of that sentence as follows:

(5) butter(j, t) ∧ slow(j)

Since (5) logically entails (4), we have an account of the entailment from (3a) to (3b). But there is a problem. Clearly we ought to represent (6a) as (6b):

(6) a. Jones buttered the bagel quickly.
    b. butter(j, b) ∧ quick(j)

But then, in any model where (5) and (6b) are both true, the following will also be true as a matter of logical consequence!

(7) slow(j) ∧ quick(j)

Unless we want to countenance the possibility that Jones is both slow and quick at the same time, our account clearly has a problem.

In an influential paper, Davidson (1967) suggested that it is not Jones but the action – or, as we will say, the event – of buttering the toast that is slow in (3a). On Davidson's view, events are taken to be concrete entities with locations in space and in time, and natural language provides means to provide information about them, refer to them, etc. Although not all sentences that are about events necessarily provide explicit clues to that effect, some do. For example, the subjects in these two sentences arguably have an event as their referent (Parsons, 1990):

(8) a. Jones' buttering of the toast was artful.
    b. It happened slowly.

b. Moreover. the conjunction of (9a) and (9b) no longer entails that something is both slow and quick at the same time. e ) ∧ slow(e ) b. d.Event semantics 279 So let us assume that in (3a) it is the event of buttering the toast that is slow. e ) There is a logical entailment from (9a) to (10). rather than Jones himself. ⇒ Jones buttered the toast slowly. butter(j. are not only talking about Jones and the things he is buttering but also about the buttering events. since the two formulas could (and typically will) be true in virtue of different events. t. Adverbs like quickly and slowly are not the only phenomena in natural language that have been given an event semantic treat- ment – far from it. as desired. ∃e . ∃e . ⇒ Jones buttered the toast. In these respects. then. According to Davidson (1967). But un- like before. Prepositional adjuncts. butter(j. their correct logical representations are not (5) and (6b) but rather something like the following. . Here are a few other examples. t. they behave just like the adverbs quickly and slowly that we have already seen: (11) a. c. ⇒ Jones buttered the toast slowly in the kitchen. Jones buttered the toast slowly in the kitchen at noon. and in (6a) it is the event of buttering the bagel that is quick. b. butter(j. The two sentences. when a sentence has multiple adverbs and adjuncts then one or more can be dropped. Adjuncts like in the kitchen and at noon can be dropped from ordinary true sentences without affecting their truth value. where e ranges over events: (9) a. e ) ∧ quick(e ) A sentence like (3a) would then be represented as: (10) ∃e .

Event semantics provides a straightforward account of these entailment patterns:

(12) a. ∃e. butter(j, t, e) ∧ slow(e) ∧ loc(e, k) ∧ time(e, noon)
     b. ∃e. butter(j, t, e) ∧ slow(e) ∧ loc(e, k)
     c. ∃e. butter(j, t, e) ∧ slow(e)
     d. ∃e. butter(j, t, e)

Perceptual reports. Since events are concrete entities with a location in spacetime, it stands to reason that we can see and hear them. This idea can be exploited to give semantics of direct perception reports (Higginbotham, 1983):

(13) a. John saw Mary leave.
     b. ⇒ Mary left.

(14) a. ∃e∃e′. saw(j, e′, e) ∧ leave(m, e′)
     b. ∃e′. leave(m, e′)

The relation between adjectives and adverbs. If adverbs ascribe properties to events, it is plausible to assume that the same is true of adjectives that are derivationally related to these adverbs (Parsons, 1990):

(15) a. Brutus stabbed Caesar violently.
     b. ⇒ There was something violent.

(16) a. ∃e. stab(b, c, e) ∧ violent(e)
     b. ∃e. violent(e)

10.1.1 The Neo-Davidsonian turn

As we have seen, Davidson equipped verbs with an additional event argument. Later authors, however, take the event to be the only argument of the verb (e.g. Castañeda, 1967; Parsons, 1990). This came to be known as "Neo-Davidsonian" event semantics.

The relationship between this event and syntactic arguments of the verb is then expressed by a small number of semantic relations with names like agent, theme, instrument, and beneficiary. On the Neo-Davidsonian view, the agent initiates the event, or is responsible for the event; the theme undergoes the event; the instrument is used to perform an event; the beneficiary is the entity for which the event was performed; and so on. These relations represent ways entities take part in events and are generally called thematic roles. There is no consensus on the full inventory of thematic roles, but role lists of a large number of English verbs have been compiled in Levin (1993) and Kipper-Schuler (2005). A commonly held view on thematic roles is that they encapsulate generalizations over shared entailments of argument positions in different predicates. Additional thematic roles that specify the location of an event in space and time are often proposed.

In Neo-Davidsonian event semantics, Jones buttered the toast might be represented as follows:

(17) ∃e. butter(e) ∧ agent(e, j) ∧ theme(e, t)

On this view, there is no fundamental semantic distinction between syntactic arguments such as the subject and object of a verb, and syntactic adjuncts such as adverbs and prepositional phrases. For example, in the following representation of Jones buttered the toast with a knife, the conjunct that represents the prepositional phrase is essentially parallel to those conjuncts that represent Jones and the toast. (For simplicity, we represent a knife as if it was a constant. As in the case of the toast, this is not essential.)

(18) ∃e. butter(e) ∧ agent(e, j) ∧ theme(e, t) ∧ with(e, k)

One of the advantages of the Neo-Davidsonian view is that it allows us to capture semantic entailment relations between different syntactic subcategorization frames of the same verb, such as causatives and their intransitive counterparts (Parsons, 1990):

(19) a. Mary opened the door.
     b. ⇒ The door opened.

(20) a. ∃e. open(e) ∧ agent(e, m) ∧ theme(e, d)
     b. ∃e. open(e) ∧ theme(e, d)

(21) a. Mary felled the tree.
     b. ⇒ The tree fell.

(22) a. ∃e. fall(e) ∧ agent(e, m) ∧ theme(e, t)
     b. ∃e. fall(e) ∧ theme(e, t)

The Neo-Davidsonian approach raises important questions, many of which have been answered in different ways in the semantic literature. Do semantic roles have syntactic counterparts? If so, how should we think of them? One common perspective on thematic roles in generative syntax is that when no preposition is around, they correspond to silent functional heads, often called theta roles (Chomsky, 1995). For example, presumably the thematic role of Mary in (23a) – perhaps beneficiary – matches the one of Mary in (23b). We might think of this role as the meaning of to in (23a), but in (23b) there is no corresponding word we can point to.

(23) a. John gave the ball to Mary.
     b. John gave Mary the ball.

Another question is whether each verbal argument corresponds to exactly one role, or whether the subject of a verb like fall is both the agent and the theme of the event (Parsons, 1990). Relatedly, it is often assumed that each event has at most one agent, at most one theme, and so on.

This view, often called the unique role requirement or thematic uniqueness, is widely accepted in semantics (Carlson, 1984; Parsons, 1990; Landman, 2000). Thematic uniqueness has the effect that thematic roles can be represented as partial functions. This is often reflected in the notation, as in (24).

(24) ∃e. butter(e) ∧ agent(e) = j ∧ theme(e) = t

A differing view holds that one can touch a man and his shoulder in the same event (Krifka, 1992). There is currently no universally accepted way to settle the question.

10.2 Quantification in event semantics

Building Neo-Davidsonian semantics into our fragment requires us to decide how events, event quantifiers, and thematic roles enter the compositional process. A common approach is that verbs and verbal projections (such as VPs and IPs) denote predicates of events and are intersected with their arguments and adjuncts, until an existential quantifier is inserted at the end and binds the event variable (Carlson, 1984; Parsons, 1990). A more recent approach views this existential quantifier as part of the lexical entry of the verb, and arguments and adjuncts as adding successive restrictions to this quantifier (Champollion, 2015). The first approach is more widespread and is sufficient for simple purposes, while the second leads to a cleaner interaction with certain other components of the grammar such as conjunction, negation and quantifiers. We discuss both of them here. Both strategies are compatible with the idea that adjuncts and prepositional phrases are essentially conjuncts that apply to the same event. There are also other strategies that we will not discuss. For example, Landman (1996) assumes that the lexical entry of a verb consists of an event predicate conjoined with one or more thematic roles. Kratzer (2000) argues that verbs denote relations between events and their internal arguments, while external arguments (subjects) are related to verbs indirectly by theta roles.

10.2.1 The first strategy: Verbs as events

On the first strategy, verbs denote predicates of events:

(25) a. bark ↝ λe. bark(e)
     b. butter ↝ λe. butter(e)

These lexical entries conform with the Neo-Davidsonian view in that they do not contain any variables for the arguments of the verb. Since these variables need to be related to the event by thematic roles, we need to provide means for these roles to enter the derivation. One way to do so is to allow each noun phrase a way to "sprout" a theta role head θ:

(26) Syntax
DP → θ DP

At this point, we would normally need to make sure that the right syntactic argument gets mapped to the right thematic roles. For example, the subject is typically, but not exclusively, mapped to the agent role. Operations such as passivization change the order in which arguments get mapped to thematic roles. This is what theories of argument structure are about (e.g. Wunderlich, 2012). We will ignore this problem here and simply assume that each θ head gets mapped to the "right" role. We then write lexical entries that map these heads to suitable roles:

(27) Lexicon
θ: [agent], [theme], . . .

Next, we map these theta roles to thematic roles:

(28) a. [agent] ↝ λxλe. agent(e) = x
     b. [theme] ↝ λxλe. theme(e) = x
     c. . . .

Here, and in what follows, v stands for the type of events, so ⟨v, t⟩ is the type of an event predicate. Finally, we introduce an operation that existentially binds the event variable at the sentence level. We can handle this operation as a type-shifting rule.

Type-Shifting Rule 6. Existential closure
If α ↝ ⟪α⟫, where ⟪α⟫ is of category ⟨v, t⟩, then α ↝ ∃e.⟪α⟫(e) as well (as long as e is not free in ⟪α⟫; in that case, use a different variable of the same type).

A sample derivation that shows all of the elements we have introduced is shown in (29). The subject and the verb phrase both denote predicates of events, and combine via Predicate Modification. The resulting event predicate is mapped to a truth value by the Existential Closure type-shifting rule.
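The ingredients just introduced (event predicates of type ⟨v, t⟩, Predicate Modification, and Existential Closure) can be mimicked directly in code. In this illustrative encoding (ours, not the text's), events are dictionaries and event predicates are functions from events to booleans:

```python
# Event predicates (type ⟨v,t⟩) as functions from events to booleans.
bark = lambda e: e["kind"] == "bark"                 # bark ↝ λe. bark(e)
def agent(x): return lambda e: e.get("agent") == x   # [agent] ↝ λxλe. agent(e) = x
def pm(p, q): return lambda e: p(e) and q(e)         # Predicate Modification
def ex_close(p, events):                             # Existential Closure
    return any(p(e) for e in events)

events = [{"kind": "bark", "agent": "spot"},
          {"kind": "meow", "agent": "tom"}]
spot_barks = pm(agent("spot"), bark)   # subject and VP combine, still ⟨v,t⟩
print(ex_close(spot_barks, events))    # True
```

Note that pm maps two ⟨v, t⟩ predicates to another ⟨v, t⟩ predicate, so arguments never change the type of the verbal projection, just as in the derivation described above.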

(29) S
     t
     ∃e. bark(e) ∧ agent(e) = s
     ⇑
     ⟨v, t⟩
     λe. bark(e) ∧ agent(e) = s

     DP                    VP
     ⟨v, t⟩                ⟨v, t⟩
     λe. agent(e) = s      λe. bark(e)

     θ               DP    barks
     ⟨e, ⟨v, t⟩⟩     e
     λxλe.           s
     agent(e) = x
     [agent]         Spot

The existential closure type-shifting rule applies at the root of the tree. Since both VP and S have the same type, one might wonder what prevents it from applying at VP. In that case, the type of VP would be t and there would be no way for the subject to combine with it. As long as the syntax requires that a subject is present, this derivation will not be interpretable.

Let us now add the adjunct slowly to our fragment. This adverb is quite free in terms of where it can occur in the sentence: before the sentence, between subject and VP, and at the end of the sentence. This is captured in the following rules:

(30) Syntax
S → AdvP S
VP → AdvP VP
VP → VP AdvP
AdvP → Adv

(31) Lexicon
Adv: slowly


As we have seen above, slowly is interpreted as an event pred-
icate. Its lexical entry is therefore very simple:

(32) slowly ↝ λe . slow(e )

The tree in (33) shows the application of slowly. Like the sub-
ject and object, it is a predicate of type ⟨v, t ⟩ and it combines with
its sister node via Predicate Modification:


(33) S
∃e . butter(e ) ∧ agent(e ) = j
∧theme(e ) = t ∧ slow(e )

⟨v, t ⟩
λe.butter(e ) ∧ agent(e ) = j
∧theme(e ) = t ∧ slow(e )

⟨v, t ⟩ ⟨v, t ⟩
λe . agent(e ) = j λe . butter(e )∧
theme(e ) = t∧
θ DP slow(e )
⟨e, ⟨v, t ⟩⟩ e
λxλe . j
agent(e ) = x
Jones VP AdvP
[agent] ⟨v, t ⟩ ⟨v, t ⟩
λe . butter(e )∧ λe . slow(e )
theme(e ) = t

⟨v, t ⟩ ⟨v, t ⟩
λe . butter(e ) λe . theme(e ) = t

θ DP
⟨e, ⟨v, t ⟩⟩ e
λxλe . t
theme(e ) = x
the toast

The fact that, in the derivation in (33), syntactic arguments do not
change the type of the verbal projections they attach to is a hallmark of


Neo-Davidsonian event semantics. The object maps a predicate
of type ⟨v, t ⟩ (the V) to another one that is also of type ⟨v, t ⟩ (the
VP). The subject maps a predicate of type ⟨v, t ⟩ (the VP) to another
one that is also of type ⟨v, t ⟩ (the S). This is very different from
what we have seen in previous chapters, where V, VP and S all
had different types (namely, ⟨e, ⟨e, t ⟩⟩, ⟨e, t ⟩, and t respectively).
In Neo-Davidsonian semantics, syntactic arguments are seman-
tically indistinguishable (as far as types are concerned) from ad-
juncts, which map a VP of a certain type (here, ⟨v, t ⟩) to another
VP of the same type and which do not change the type of the VP.

10.3 Quantification in event semantics
The system we have seen so far is sufficient for many purposes,
including the sentences discussed at the beginning of the chap-
ter. Most papers that use event semantics assume some version of
it, although the details differ. Things become more complicated,
though, when we bring in quantifiers like every cat and no dog. As
we have seen in Chapter 6, these quantifiers are able to take scope
in various positions in the sentence. We have seen that this can be
explained using quantifier raising or type-shifting. Since the event
variable is bound by a silent existential quantifier, we might ex-
pect that in this case too any overt quantifiers in the sentence can
take scope either over or under it. But this is not the case. Rather,
the event quantifier always takes scope below anything else in the
sentence. Sentence (34), for example, is not ambiguous. Its only
reading corresponds to (35), where the event quantifier takes low
scope. As for (36), that is not a possible reading of the sentence.

(34) No dog barks.

(35) a. ¬∃x . dog(x ) ∧ ∃e . bark(e ) ∧ agent(e ) = x
b. “There is no barking event that is done by a dog”

(36) a. ∃e . bark(e ) ∧ ¬∃x . dog(x ) ∧ agent(e ) = x


b. “There is an event that is not a barking by a dog”

Exercise 1. How can you tell that (36) is not a possible reading of
sentence (34)?

As it turns out, each of the two strategies for the interpretation
of quantifiers — quantifier raising and type-shifting — generates
one of these two formulas. Quantifier raising no dog above the
sentence level leads to the only available reading (35), while ap-
plying Hendriks’ object raising rule (or rather, the general schema)
to the theta role head leads to the unavailable reading (36). This is
shown in (37) and (38), respectively.


(37) S
¬∃x . dog(x ) ∧ ∃e . bark(e )∧
agent(e ) = x

⟨⟨e, t ⟩, t ⟩ ⟨e, t ⟩
λP ¬∃x . dog(x ) λv 1 ∃e . bark(e ) ∧ agent(e ) = v 1
∧P (x )

D N 1 S
⟨⟨e, t ⟩, ⟨⟨e, t ⟩, t ⟩⟩ ⟨e, t ⟩ t
λP ′ λP ¬∃x . λx . dog(x ) ∃e . bark(e ) ∧ agent(e ) = v 1
P ′ (x ) ∧ P (x ) ⇑
dog ⟨v, t ⟩
no λe.bark(e ) ∧ agent(e ) = v 1

⟨v, t ⟩ ⟨v, t ⟩
λe . agent(e ) = v 1 λe . bark(e )

θ DP
⟨e, ⟨v, t ⟩⟩ e
λxλe . agent(e ) = x v1

[agent] t1


(38) S
∃e . bark(e )∧
¬∃x . dog(x ) ∧ agent(e ) = x

⟨v, t ⟩
λe . bark(e )∧
¬∃x . dog(x ) ∧ agent(e ) = x

⟨v, t ⟩ ⟨v, t ⟩
λe ¬∃x . dog(x )∧ λe . bark(e )
agent(e ) = x

θ DP
⟨⟨⟨e, t ⟩, t ⟩, ⟨v, t ⟩⟩ ⟨⟨e, t ⟩, t ⟩
λQλe .Q (λx . λP ¬∃x . dog(x )
agent(e ) = x ) ∧P (x )

⟨e, ⟨v, t ⟩⟩
λxλe . agent(e ) = x
⟨⟨e, t ⟩, ⟨⟨e, t ⟩, t ⟩⟩ ⟨e, t ⟩
[agent] λP ′ λP ¬∃x . λx . dog(x )
P ′ (x ) ∧ P (x )

The interim conclusion, then, is that event semantics seems
to commit us to a quantifier-raising based treatment of quantifi-
cational noun phrases.
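The truth-conditional difference between (35) and (36) shows up in any model that contains a dog but no barking events at all: (35) is then true while (36) is false. A toy model-checker (our illustration) confirms this:

```python
def reading_35(dogs, events):
    # ¬∃x. dog(x) ∧ ∃e. bark(e) ∧ agent(e) = x
    return not any(any(e["kind"] == "bark" and e["agent"] == x
                       for e in events)
                   for x in dogs)

def reading_36(dogs, events):
    # ∃e. bark(e) ∧ ¬∃x. dog(x) ∧ agent(e) = x
    return any(e["kind"] == "bark" and e["agent"] not in dogs
               for e in events)

dogs, events = {"rex"}, []          # a dog, but nothing barks at all
print(reading_35(dogs, events))     # True: no dog barks
print(reading_36(dogs, events))     # False: there is no event at all
```

Since No dog barks is intuitively true in such a model, only the wide-scope-negation formula in (35) matches the sentence's meaning.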


10.3.1 A formal fragment
Let us recapitulate the additions to our fragment. The syntax is
defined as in three-valued type logic (Lλ with three truth values as
in Chapter 8), plus the following additions:

Syntax rules. We add the following rule:

(39) Syntax
DP → θ DP

Lexicon. Lexical items are associated with syntactic categories
as follows:

θ: [agent], [theme], . . .

Translations. Verbs get new translations, and we add thematic
roles. We will use the following abbreviations:

• e is v0,v,

• bark and butter are constants of type ⟨v, t⟩,

• agent and theme are constants of type ⟨v, e⟩.

Type ⟨v, t ⟩:

1. bark ↝ λe . bark(e )

2. butter ↝ λe . butter(e )

Type ⟨v, e ⟩:

1. [agent] ↝ λxλe . agent(e ) = x

2. [theme] ↝ λxλe . theme(e ) = x

10.3.2 The second strategy: Verbs as event quantifiers

In the tree in (37), we needed to apply quantifier raising to no dog
in order to give it scope above the event quantifier, which was
introduced by the existential-closure rule at sentence level. If the
event quantifier was introduced lower than no dog, there would be
no need to raise it. This brings us to the second strategy for the
compositional treatment of event semantics. On this approach,
verbs come equipped with their own event quantifiers. Verbs no
longer denote event predicates but rather generalized existential
quantifiers over events. This means that quantificational noun
phrases can be interpreted without applying quantifier raising.

To implement this approach, we need to revise our semantics.
Our grammar will continue to map verbal projections (verbs, VPs
and Ss) to the same type. But this type is no longer ⟨v, t⟩ but
⟨⟨v, t⟩, t⟩. For this reason, we will no longer rely on predicate
modification, but instead use function application to combine
syntactic arguments with verbal projections.

The new representations for verbs are as follows:

(40) a. bark ↝ λf ∃e . bark(e) ∧ f(e)
     b. butter ↝ λf ∃e . butter(e) ∧ f(e)

This means that our thematic roles look more complicated than
before:

(41) a. [agent] ↝ λxλV λf . V(λe . agent(e) = x ∧ f(e))
     b. [theme] ↝ λxλV λf . V(λe . theme(e) = x ∧ f(e))

In a simple case such as Spot barks, the root of the tree is of type
⟨⟨v, t⟩, t⟩, so we need to map it to a truth value. As mentioned,
the root will be true of any set of events f so long as f contains
(possibly among other things) an event that satisfies the relevant
event predicate. Whether this is true can be checked by testing
whether the set of all events whatsoever, λe . true, contains such
an event:

(42) a. λf ∃e [bark(e) ∧ ag(e) = s ∧ f(e)](λe . true)
     b. ∃e [bark(e) ∧ ag(e) = s ∧ (λe . true)(e)]
     c. ∃e [bark(e) ∧ ag(e) = s ∧ true]
     d. ∃e [bark(e) ∧ ag(e) = s]

To formalize this idea, we introduce the type-shifting rule of
Quantifier Closure:

Type-Shifting Rule 7 (Quantifier Closure). If α ↝ ⟪α⟫, where ⟪α⟫
is of category ⟨⟨v, t⟩, t⟩, then: α ↝ ⟪α⟫(λe . true) as well.

The full derivation of the sentence is shown in (43).

(43) S : t
       ∃e . bark(e) ∧ agent(e) = s
       (⇑ from S : ⟨⟨v, t⟩, t⟩: λf ∃e . bark(e) ∧ agent(e) = s ∧ f(e))
       DP : ⟨⟨⟨v, t⟩, t⟩, ⟨⟨v, t⟩, t⟩⟩
         λV λf . V(λe . agent(e) = s ∧ f(e))
         θ [agent] : ⟨e, ⟨⟨⟨v, t⟩, t⟩, ⟨⟨v, t⟩, t⟩⟩⟩
           λxλV λf . V(λe . agent(e) = x ∧ f(e))
         DP Spot : e
           s
       VP barks : ⟨⟨v, t⟩, t⟩
         λf ∃e . bark(e) ∧ f(e)

This is as expected. We do not need to apply quantifier raising,
because the quantifier is contained in the entry for the verb, so
the subject already takes syntactic scope over it.

We are now ready to interpret a quantificational noun phrase.
This time, applying Hendriks' raising schema to the theta role
gives the right result, as shown in (44).

(44) S : t
       ¬∃x . dog(x) ∧ ∃e . bark(e) ∧ agent(e) = x
       (⇑ from S : ⟨⟨v, t⟩, t⟩: λf ¬∃x . dog(x) ∧ ∃e . bark(e) ∧
        agent(e) = x ∧ f(e))
       DP : ⟨⟨⟨v, t⟩, t⟩, ⟨⟨v, t⟩, t⟩⟩
         λV λf ¬∃x . dog(x) ∧ V(λe . agent(e) = x ∧ f(e))
         θ [agent] : ⟨⟨⟨e, t⟩, t⟩, ⟨⟨⟨v, t⟩, t⟩, ⟨⟨v, t⟩, t⟩⟩⟩
           λQλV λf . Q(λx . V(λe . agent(e) = x ∧ f(e)))
           (⇑ by Hendriks' schema from ⟨e, ⟨⟨⟨v, t⟩, t⟩, ⟨⟨v, t⟩, t⟩⟩⟩:
            λxλV λf . V(λe . agent(e) = x ∧ f(e)))
         DP no dog : ⟨⟨e, t⟩, t⟩
           λP ¬∃x . dog(x) ∧ P(x)
           D no : ⟨⟨e, t⟩, ⟨⟨e, t⟩, t⟩⟩
             λP′λP ¬∃x . P′(x) ∧ P(x)
           N dog : ⟨e, t⟩
             λx . dog(x)
       V barks : ⟨⟨v, t⟩, t⟩
         λf ∃e . bark(e) ∧ f(e)

Let us now see how syntactic adjuncts, such as adverbs, are
treated on this approach. Just like syntactic arguments, adjuncts
are combined with verbal projections using Function Application
instead of Predicate Modification. This makes the representations
of adverbs more complicated:

(45) slowly ↝ λV λf . V(λe . slow(e) ∧ f(e))

An example of a derivation that uses this adverb is shown in (46).
To save space, the VP buttered the toast is shown as a unit, and as
before, we pretend that the toast is a constant rather than a
definite description. Nothing of consequence would change if we
didn't.

(46) S : t
       ∃e . agent(e) = j ∧ butter(e) ∧ theme(e) = t ∧ slow(e)
       (⇑ from S : ⟨⟨v, t⟩, t⟩: λf ∃e . agent(e) = j ∧ butter(e) ∧
        theme(e) = t ∧ slow(e) ∧ f(e))
       DP : ⟨⟨⟨v, t⟩, t⟩, ⟨⟨v, t⟩, t⟩⟩
         λV λf . V(λe . agent(e) = j ∧ f(e))
         θ [agent] : ⟨e, ⟨⟨⟨v, t⟩, t⟩, ⟨⟨v, t⟩, t⟩⟩⟩
           λxλV λf . V(λe . agent(e) = x ∧ f(e))
         DP Jones : e
           j
       VP : ⟨⟨v, t⟩, t⟩
         λf ∃e . butter(e) ∧ theme(e) = t ∧ slow(e) ∧ f(e)
         AdvP slowly : ⟨⟨⟨v, t⟩, t⟩, ⟨⟨v, t⟩, t⟩⟩
           λV λf . V(λe . slow(e) ∧ f(e))
         VP buttered the toast : ⟨⟨v, t⟩, t⟩
           λf ∃e . butter(e) ∧ theme(e) = t ∧ f(e)

From what we have seen so far, the choice between the two
approaches depends mainly on whether the preferred way to deal
with quantificational noun phrases is by quantifier raising or by
type shifting. The next sections compare the two systems with
respect to two other phenomena, conjunction and negation.

10.4 Conjunction in event semantics

In Chapter 9, we have seen that many uses of and can be
subsumed under a general schema, discussed by Partee & Rooth
(1983b) among others. This schema is repeated here:

(47) ⟪and⟫⟨τ,⟨τ,τ⟩⟩ =
       λqλp . p ∧ q                                     if τ = t
       λXτ λYτ λZσ1 . ⟪and⟫⟨σ2,⟨σ2,σ2⟩⟩(X(Z))(Y(Z))     if τ = ⟨σ1, σ2⟩

What does this rule amount to in the case of VP-modifying and,
as in Homer smoked and drank? On the first approach, VPs are of
type τ = ⟨v, t⟩. On the second approach, VPs are of type
τ = ⟨⟨v, t⟩, t⟩. Applying rule (47) in each case results in the
following:

(48) a. andVP ↝ λf′λf λe . f(e) ∧ f′(e)
     b. andVP ↝ λV′λV λf . V(f) ∧ V′(f)

As you can see in (50) and (51), these two choices lead to very
different translations: (49a) and (49b) respectively.

(49) a. ∃e . smoke(e) ∧ drink(e) ∧ agent(e) = h
     b. ∃e . smoke(e) ∧ agent(e) = h ∧
        ∃e′ . drink(e′) ∧ agent(e′) = h

Exercise 2. Show how rule (47) leads to these two representations.

Now, (49a) cannot be the right representation of Homer smoked
and drank. The entry in (49a) sends us right back to square one.
Part of the point of introducing events was to avoid having to
attribute contradictory properties to the same entity. If this
sentence is true, for all we know he might have smoked slowly
and drunk quickly. In (49a) there is only one event for two such
contradictory adverbs to modify, and we would end up with the
same kind of problem we already encountered earlier in
connection with Jones buttered the toast slowly and buttered the
bagel quickly. We fare much better with (49b), because it provides
us with the two events we need to avoid the problem.

Exercise 3. Add slowly and quickly to the tree in (51) and show
how the resulting formula avoids the attribution of contradictory
properties to the same event.
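The contrast between the two conjunction entries can also be
verified computationally. The Python sketch below is an
illustrative assumption, not part of the fragment: Homer smokes
in one event and drinks in another, and no single event is both.

```python
# A sketch contrasting the two VP-conjunction entries. In this assumed
# model Homer smokes in e1 and drinks in e2; no event is both.

EVENTS = ["e1", "e2"]
SMOKE, DRINK = {"e1"}, {"e2"}
AGENT = {"e1": "homer", "e2": "homer"}

# First approach: VPs are event predicates, conjoined pointwise
and_a = lambda f2: lambda f1: lambda e: f1(e) and f2(e)
vp_a = and_a(lambda e: e in DRINK)(lambda e: e in SMOKE)
# Predicate Modification with the agent, then existential closure
reading_a = any(vp_a(e) and AGENT[e] == "homer" for e in EVENTS)

# Second approach: VPs are event quantifiers,
# conjoined by lambda V' lambda V lambda f . V(f) & V'(f)
smoke_q = lambda f: any(e in SMOKE and f(e) for e in EVENTS)
drink_q = lambda f: any(e in DRINK and f(e) for e in EVENTS)
and_b = lambda V2: lambda V1: lambda f: V1(f) and V2(f)
agent_homer = lambda V: lambda f: V(lambda e: AGENT[e] == "homer" and f(e))
reading_b = agent_homer(and_b(drink_q)(smoke_q))(lambda e: True)

print(reading_a)  # False: no single event is a smoking and a drinking
print(reading_b)  # True: the two separate events suffice
```

The first reading demands one event that is both a smoking and a
drinking, and so comes out false in this model; the second is
satisfied by the two separate events.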

t ⟩. t ⟩ [agent] Homer λe . ⟨v. t ⟩ λe . ⟨v. agent(e ) = x h VP JP ⟨v. drink(e ) λ f ′ λ f λe . t ⟩. t ⟩⟩ e λxλe . smoke(e ) ∧ drink(e ) ∧agent(e ) = h ⇑ ⟨v.smoke(e ) ∧ drink(e ) ∧agent(e ) = h DP VP ⟨v. t ⟩ λe. t ⟩⟩⟩ λe . smoke(e ) ∧drink(e ) θ DP ⟨e. t ⟩ ⟨v. t ⟩ smoked ⟨⟨v. smoke(e ) J VP ⟨⟨v. 302 Event semantics (50) S t ∃e . f (e ) ∧ f ′ ( e ) drank and . ⟨v. agent(e ) = h λe .

t ⟩. t ⟩. t ⟩⟩⟩ h λ f ∃e . t ⟩. ⟨⟨v. ⟨⟨v. t ⟩⟩⟩ drink(e ′ ) smoke λV ′ λV λ f . t ⟩. t ⟩ ⟨⟨v. t ⟩⟩ ⟨⟨v. t ⟩. smoke(e )∧ agent(e ) = h ∧ f (e )∧ ∃e ′ . t ⟩. t ⟩. drink(e ′ )∧ agent(e ′ ) = h ⇑ ⟨⟨v. t ⟩ λ f ∃e . t ⟩. e ⟨⟨v. We can still formulate an en- try for VP-level conjunction that is compatible with event pred- . t ⟩. t ⟩. t ⟩. smoke(e ) ∧ f (e )∧ agent(e ) = h ∧ f (e )) ∃e ′ . [agent] ⟨⟨v. smoke(e )∧ agent(e ) = h∧ ∃e ′ . t ⟩. ∧ f (e ′ ) V ( f ) ∧ V ′( f ) drank and Does this mean that we cannot represent conjunction on the first approach? No: all we have seen is that the Partee & Rooth schema is not compatible with it. λ f ∃e . t ⟩.V (λe . t ⟩ agent(e ) = x ∧ f (e )) ∧ f (e ) ⟨⟨⟨v. drink(e ′ )∧ agent(e ′ ) = h ∧ f (e ) DP VP ⟨⟨⟨v. t ⟩. t ⟩ λV λ f .V (λe . Homer smoke(e ) ⟨⟨⟨v. λ f ∃e ′ . drink(e ′ ) ∧ f (e ′ ) θ DP VP JP ⟨e. J VP λxλV λ f . ⟨⟨⟨v. t ⟩. Event semantics 303 (51) S t ∃e .

This is similar to DP-level conjunction. 10. Hint: use sums of events.304 Event semantics icates. Make sure it predicts the right truth conditions for Homer smoked slowly and drank quickly. where in Chap- ter 9 we have encountered both schema-based and non-schema- based entries. I’ve created the trees but delayed writing this section until I know if we’ll have a discussion of negation somewhere earlier in the text. Exercise 4. .5 Negation in event semantics Note to Liz: The textbook hasn’t introduced a compositional treat- ment of VP-level negation at this point. Formulate an entry for VP-level conjunction that is compatible with the event-predicate based approach.

(52) S : t
       ∃e . ¬bark(e) ∧ agent(e) = s
       (⇑ from S : ⟨v, t⟩: λe . ¬bark(e) ∧ agent(e) = s)
       DP : ⟨v, t⟩
         λe . agent(e) = s
         θ [agent] : ⟨e, ⟨v, t⟩⟩
           λxλe . agent(e) = x
         DP Spot : e
           s
       VP : ⟨v, t⟩
         λe . ¬bark(e)
         Aux did : ⟨⟨v, t⟩, ⟨v, t⟩⟩
           λf . f
         VP : ⟨v, t⟩
           λe . ¬bark(e)
           Neg not : ⟨⟨v, t⟩, ⟨v, t⟩⟩
             λf λe . ¬f(e)
           VP bark : ⟨v, t⟩
             λe . bark(e)

(53) S : t
       ¬∃e . bark(e) ∧ agent(e) = s
       (⇑ from S : ⟨⟨v, t⟩, t⟩: λf ¬∃e . bark(e) ∧ agent(e) = s ∧ f(e))
       DP : ⟨⟨⟨v, t⟩, t⟩, ⟨⟨v, t⟩, t⟩⟩
         λV λf . V(λe . agent(e) = s ∧ f(e))
         θ [agent] : ⟨e, ⟨⟨⟨v, t⟩, t⟩, ⟨⟨v, t⟩, t⟩⟩⟩
           λxλV λf . V(λe . agent(e) = x ∧ f(e))
         DP Spot : e
           s
       VP : ⟨⟨v, t⟩, t⟩
         λf ¬∃e . bark(e) ∧ f(e)
         Aux did : ⟨⟨⟨v, t⟩, t⟩, ⟨⟨v, t⟩, t⟩⟩
           λV . V
         VP : ⟨⟨v, t⟩, t⟩
           λf ¬∃e . bark(e) ∧ f(e)
           Neg not : ⟨⟨⟨v, t⟩, t⟩, ⟨⟨v, t⟩, t⟩⟩
             λV λf . ¬V(f)
           VP bark : ⟨⟨v, t⟩, t⟩
             λf ∃e . bark(e) ∧ f(e)

10.5.1 A formal fragment

Let us recapitulate the additions to our fragment. The syntax is
defined as in three-valued type logic (Lλ with three truth values
as in Chapter 8), plus the following additions:

Syntax rules. We add the following rule:

(54) Syntax
     DP → θ DP

Lexicon. Lexical items are associated with syntactic categories
as follows:

θ: [agent], [theme], . . .

Translations. Verbs get new translations, and we add thematic
roles. We will use the following abbreviations:

• e is v0,v (as before),

• f is a variable of type ⟨v, t⟩,

• V is a variable of type ⟨⟨v, t⟩, t⟩,

• bark and butter are constants of type ⟨v, t⟩ (as before),

• agent and theme are constants of type ⟨v, e⟩ (as before).

The following entries replace the previous ones:

Type ⟨⟨v, t⟩, t⟩:

1. bark ↝ λf ∃e . bark(e) ∧ f(e)

2. butter ↝ λf ∃e . butter(e) ∧ f(e)

Type ⟨e, ⟨⟨⟨v, t⟩, t⟩, ⟨⟨v, t⟩, t⟩⟩⟩:

1. [agent] ↝ λxλV λf . V(λe . agent(e) = x ∧ f(e))

2. [theme] ↝ λxλV λf . V(λe . theme(e) = x ∧ f(e))

Type ⟨⟨⟨v, t⟩, t⟩, ⟨⟨⟨v, t⟩, t⟩, ⟨⟨v, t⟩, t⟩⟩⟩:

1. andVP ↝ λV′λV λf . V(f) ∧ V′(f)

We have introduced the following type-shifter:

Type-Shifting Rule 8 (Quantifier Closure). If α ↝ ⟪α⟫, where ⟪α⟫
is of category ⟨⟨v, t⟩, t⟩, then: α ↝ ⟪α⟫(λe . true) as well.

11 ∣ Intensional semantics

11.1 Opacity

The following might seem like a well-founded principle to adopt
in everyday reasoning:

(1) Leibniz's law
    If two expressions have the same denotation, then if one is
    substituted for the other in any given sentence, the truth
    value of the sentence remains the same.

For instance, Scott Pruitt is currently the head of the
Environmental Protection Agency (EPA), and given this, it seems
to follow from (2) that (3) is true.

(2) [Scott Pruitt] is a climate change denier.

(3) [The head of the EPA] is a climate change denier.

Although the latter may feel more ironic than the former, and
indeed it feels that one is licensed to conclude the latter from
the former: given that Scott Pruitt is the head of the EPA, the
truth value has not been changed by the substitution of the one
for the other.

But there are examples where Leibniz's law does not hold. For
example, the Morning Star happens to be the Evening Star. (Both
name Venus.) Perhaps you did not know this already. That would
not reflect poorly on your reasoning abilities, because it could
have been otherwise; this is not a necessary truth. But suppose
someone were to tell you that the Morning Star is the Morning
Star. You

might be offended at the suggestion that you did not already know
this. Concomitantly, the following two sentences differ in their
truth value, despite differing only in the substitution of one term
for another that co-refers with it:

(4) Necessarily, the Morning Star is the Morning Star.

(5) Necessarily, the Morning Star is the Evening Star.

Contexts in which the substitution of one term for a co-referential
term can lead to a change in truth value are called REFERENTIALLY
OPAQUE CONTEXTS. Our formal apparatus is currently not equipped
to handle these.

Another kind of opaque context is belief. For example, if Mary
does not know that Cicero is Tully, then Mary believes that Cicero
is a great orator does not imply that Mary believes that Tully is a
great orator. As Quine (1956) pointed out, this opacity opens up
an ambiguity which can be seen in sentences like the following:

(6) John believes that a Republican will win.

On one interpretation, there is a specific Republican who John
believes will win. This interpretation is called a DE RE
INTERPRETATION, where de re is Latin for 'of the thing'. On
another interpretation, there is no specific Republican that John
believes will win; he just believes that whoever wins will be of
that party. John may not even know that the person in question is
a Republican. This is called a DE DICTO INTERPRETATION, where the
content of the description is crucially part of the belief.

Verbs like want, seek and need also form opaque contexts, and
allow for both de re and de dicto interpretations. Consider the
following example:

(7) John is seeking a unicorn.

On a de re interpretation, there is a particular unicorn such that
John is seeking it. Such an interpretation would have to be false in

the actual world, since there are (sadly) no unicorns here. But the
sentence is perfectly apt to be true on a de dicto interpretation;
it merely signals that John believes there to be unicorns (and aims
to be in a state where he has found one), not that unicorns exist.

This kind of ambiguity occurs not only in philosophy texts, but
also 'in the wild', for example in legal statutes. Andersen (2014)
describes the following case. In the fall of 2001, the accounting
firm Arthur Andersen directed a large scale destruction of
documents regarding its client Enron. Expecting a federal
subpoena of records as a wave of accounting scandals unfolded,
the firm urged its employees to begin shredding papers in October,
shortly before the SEC began an official investigation into Enron.
The shredding ceased abruptly on November 9th, immediately on the
heels of the SEC's subpoena. In 2005, the Supreme Court reversed
Arthur Andersen's conviction for "knowingly . . . corruptly
persuad[ing] another . . . with intent to . . . induce any person
to . . . withhold a record, document, or other object, from an
official proceeding." The conviction was defective in part because
the jury instructions did not make clear that the defendant's
actions had to be connected to a particular official proceeding
that it had in mind. The ruling followed a line of obstruction of
justice decisions dating back to the nineteenth century in holding
that, if in its frenzy of paper shredding the defendant firm was
not specific about the particular official proceeding to be
obstructed (which in this case had not been initiated at the time
of the shredding), the statute could not have been violated. On
the de re interpretation (for both the document and the proceeding
from it) of the statute, it is violated when there is a particular
document from a particular official proceeding which

the perpetrator intends to withhold. On a de dicto interpretation,
it is violated when the intent is such that there is an official
proceeding from which documents are withheld. It is fair to say
that Andersen would be guilty on a de dicto interpretation, and
was acquitted on the basis of a de re interpretation.

Exercise 1. Consider the following case from Andersen (2014):

In 1869, an English court considered the case of Whiteley v.
Chappell, in which a man who had voted in the name of his deceased
neighbor was prosecuted for having fraudulently impersonated a
"person entitled to vote." There had been voter fraud by
impersonation, certainly. But the court fixated on the object of
the impersonation and concluded that because a dead person could
not vote, the defendant had not impersonated a "person entitled to
vote." The court acquitted him, albeit reluctantly. The court
attributed the mismatch between this result and the evident
purpose of the statute to an oversight of the drafters: "The
legislature has not used words wide enough to make the personation
of a dead man an offence."

How would you characterize the de re and de dicto interpretations,
respectively, in this case? Which interpretation does the court
appear to have taken? Is there an interpretation on which the man
is guilty? Explain why or why not.

A related puzzle is that two sentences that have the same truth
value can lead to different truth values when embedded under an
attitude predicate like believe or hope:¹

¹Adapted from https://mitcho.pdf

(8) a. I hope that [ tomorrow is a public holiday ]
    b. I hope that [ the final exam is cancelled ]

One of these sentences can be true while the other is false, even
if both of the embedded sentences have the same truth value. The
conclusion we are forced to draw is that the semantics of attitude
verbs like believe and hope must depend not just on the truth
value of the embedded clause, but on some more fine-grained aspect
of its meaning. The tools we will develop in this section will
allow us to capture such facts.

11.2 Modal Logic

11.2.1 Priorean tense logic

Following the presentation of Dowty et al. (1981, ch. 5), we will
approach modal logic with a brief presentation of TENSE LOGIC in
the style developed by Arthur Prior. In this logic, a formula can
vary in its truth value across time. Thus John is asleep might be
true at time t, but false at time t′. A future sentence like John
will be asleep can then be said to be true at time t if there is a
time t′ later than t at which John is asleep.

To achieve this, we will add to our models so that they consist
not only of a domain of individuals D and an interpretation
function I but also a set of times T and a linear ordering relation
< among the times. A TEMPORAL MODEL for a language L is then a
quadruple ⟨D, T, <, I⟩ such that D is a set of individuals, T is a
set of times, < is the 'earlier than' relation among the times,
and I is an interpretation function which maps the non-logical
constants to appropriate denotations at the various times. The
function I will thus take two arguments: a constant, and a time.

For example, suppose we have a model in which the domain D is
{a, b, c}, the set of times T is {t1, t2, t3}, and

we have two individual constants john and mary, and one predicate
constant happy. The interpretation function I might then be
defined as follows:

I(t1, john) = b    I(t1, mary) = a    I(t1, happy) = {a, b, c}
I(t2, john) = b    I(t2, mary) = a    I(t2, happy) = {a, b}
I(t3, john) = b    I(t3, mary) = a    I(t3, happy) = {c}

So, throughout time, the name john always denotes the same
individual, namely b, and so does the name mary. But who is happy
changes. At first, everyone is happy; then c becomes unhappy, but
c has the last laugh in the end.

Truth will be relative not only to a model and an assignment
function, but also to a time, so we will have expressions like:

Jhappy(mary)KM,g,t1 = 1

Jhappy(mary)KM,g,t3 = 0

Both of these meta-language statements happen to be true according
to the way we have set things up.

This framework allows for the definition of future and past
operators. To the syntax of the language, we add the following
rules:

• If φ is a formula, then Fφ is a formula.

• If φ is a formula, then Pφ is a formula.

(Fφ can be read: 'it will be the case that φ', or 'future φ'; Pφ
can be read: 'it was the case that φ', or 'past φ'.) These kinds
of statements can be given truth values relative to a particular
time that depend on what value φ takes on at times preceding or
following the evaluation time, respectively:

• JFφKM,g,t = 1 iff JφKM,g,t′ = 1 for some t′ such that t < t′.

• JPφKM,g,t = 1 iff JφKM,g,t′ = 1 for some t′ such that t′ < t.
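This temporal model and the truth clauses for F and P can be
rendered as a small Python sketch. The encoding below (times as
integers, the extension of happy at each time as a dictionary) is
an assumption for illustration:

```python
# A sketch of the temporal model: three times, the extension of happy at
# each, and the Priorean operators F and P as in the truth clauses above.

T = [1, 2, 3]                                  # t1 < t2 < t3
HAPPY = {1: {"a", "b", "c"}, 2: {"a", "b"}, 3: {"c"}}
MARY, JOHN = "a", "b"

def happy(x, t):
    return x in HAPPY[t]

def F(phi, t):                    # true at t iff phi holds at some later t'
    return any(phi(t2) for t2 in T if t < t2)

def P(phi, t):                    # true at t iff phi holds at some earlier t'
    return any(phi(t2) for t2 in T if t2 < t)

print(happy(MARY, 1))                          # True
print(happy(MARY, 3))                          # False
print(F(lambda t: not happy(MARY, t), 1))      # True: Mary will be unhappy
print(P(lambda t: happy(JOHN, t), 3))          # True: John was happy
# The operators iterate, e.g. F(P(phi)):
print(F(lambda t: P(lambda u: happy(MARY, u), t), 1))  # True
```

The last line evaluates a formula of the shape FPφ, of the kind
discussed next.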


The way we have set things up, these formulas can be iterated ad
infinitum, letting us model statements like ‘John will have seen
the report’, which can take the form of FPφ or ‘A child was born
that would become the ruler of the world’ (Kamp, 1971), which
might be modeled using a future operator in the scope of a past
operator.
But let us not get too married to this system, because it suffers
from a number of difficulties as a theory of tense. We will discuss
these further in Chapter 12 on tense. Here, it serves mainly as a
warm-up for the study of the so-called ‘alethic’ modalities of ne-
cessity and possibility, which we turn to next.

11.2.2 Alethic logic
Just as sentences might be felt to vary in their truth value across
time, so can they vary in their truth value depending on the facts.
A statement like

(9) Donald Trump is President of the U.S.

expresses a truth that is certainly not a necessary truth; in princi-
ple, it is possible that Hillary Clinton could have won. In contrast,
a sentence like:

(10) The President of the U.S. is President of the U.S.

is necessarily true. It could not possibly be false, in other words.
In other words, the statement in (9) is CONTINGENTLY TRUE, while
the statement in (10) is NECESSARILY TRUE. We can also divide
false statements into those that are necessarily false and those
that are contingently false in an analogous manner.

Exercise 2. Give one example of a contingently false statement
and one example of a necessarily false statement.


A logical system representing concepts like it is necessary that
and it is possible that is called an ALETHIC LOGIC or MODAL LOGIC.
The term ‘modal logic’ is somewhat more common, but it also has
a broader usage, sometimes also applying to tense logics of the
Priorean kind.
standardly represented as a box, ◻, and ‘it is possible that’ is rep-
resented as a diamond, ◇. Thus in alethic logic the syntax rules
are extended with the following:
• If φ is a formula, then ◻φ is a formula.

• If φ is a formula, then ◇φ is a formula.
From the perspective of the system that we have developed so
far, it may seem natural to define the semantics of ◻φ by saying
that the formula is true if φ is true in every first-order model. This
is how Rudolf Carnap defined it. A slight variant on this view
is due to Saul Kripke, who contributed a new notion of model. A
model in Kripke’s framework contained a set of first-order models,
each representing a different possible state of affairs, or a POSSI -
BLE WORLD . In this way, Kripke formalized an idea from Leibniz
that a necessary truth is one that is true in all possible worlds.
These so-called K RIPKE MODELS had a flexibility that was absent
from Carnap’s system.
Now, a first-order model consists of a domain and an interpre-
tation function. So in principle, the possible worlds in a Kripke
model might have different domains. But we will assume for sim-
plicity (and not without good philosophical reason) that there is
a single domain of individuals that is shared across all possible
worlds. A model for modal logic will therefore consist of a set of
possible worlds W , in addition to a domain of individuals D and
an interpretation function I . Unlike in tense logic, the worlds are
not ordered. Thus a model will be a triple:
⟨D,W, I ⟩
where D is a set of individuals, W is a set of worlds, and I is an
interpretation function. Just as in tense logic, the interpretation


function I will take two arguments: a non-logical constant, and,
this time, a world. So if there are three worlds w 1 , w 2 , and w 3 , and
three individuals a, b and c, it might be the case that

I (w 1 , happy) = {a, b, c }

I (w 3 , happy) = {c }
Truth of a formula will in general be relative to a model M , an as-
signment function g , and a possible world w. So assuming that
john maps to a in every possible world, we have:

Jhappy(john)KM ,g ,w 1 = 1

Jhappy(john)KM ,g ,w 3 = 0
Note that there is controversy as to how possible worlds should
be conceived of. On David Lewis’s view they are physical objects,
such as the universe we actually live in. On another view, which
can be attributed to Ludwig Wittgenstein, they are merely ways
the world can be.2 The formalization here does not depend on a
particular conception of possible worlds, but the authors tend to
prefer Wittgenstein’s perspective.
The semantics of the modal operators can be defined syncat-
egorematically as follows:3
• J◻φKM,g,w = 1 iff JφKM,g,w′ = 1 for all w′

• J◇φKM,g,w = 1 iff JφKM,g,w′ = 1 for some w′

It turns out that, using these definitions, certain intuitively valid
sentences are indeed valid, for example:
³This is one of many possibilities; the simplest system is known as S5. See
Hughes & Cresswell (1968) for a fuller presentation.


• ◻φ ↔ ¬ ◇ ¬φ
‘It is necessarily the case that φ if and only if it is not possible
that not φ’

• ◻φ → φ
‘Necessarily φ implies φ’

• φ → ◇φ
‘If φ, then possibly φ’

The first statement implies that ◇ is the DUAL of ◻. (In the same
way, ∃ is the dual of ∀, since ∀xφ is equivalent to ¬∃x ¬φ.) In fact,
often a statement of the semantics of ◇ is left out, and the ◇ is
defined as a syntactic abbreviation of ¬ ◻ ¬.
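These clauses and the validities above can be checked by brute
force in a small Python sketch. The three-world model and all
names below are illustrative assumptions, not from the text:

```python
# A sketch of the S5 clauses: box quantifies over all worlds and diamond
# over some world, with no accessibility restriction. Model assumed.

W = ["w1", "w2", "w3"]
HAPPY = {"w1": {"a", "b", "c"}, "w2": {"a"}, "w3": {"c"}}
JOHN = "a"                            # john denotes a in every world

def box(phi):                         # true at w iff phi true at all w'
    return lambda w: all(phi(w2) for w2 in W)

def diamond(phi):                     # true at w iff phi true at some w'
    return lambda w: any(phi(w2) for w2 in W)

happy_john = lambda w: JOHN in HAPPY[w]

print(happy_john("w1"))               # True
print(box(happy_john)("w1"))          # False: a is not happy in w3
print(diamond(happy_john)("w3"))      # True: a is happy in w1

# Validities can be checked world by world, e.g. box(phi) -> phi
# and phi -> diamond(phi):
assert all(not box(happy_john)(w) or happy_john(w) for w in W)
assert all(not happy_john(w) or diamond(happy_john)(w) for w in W)
```

Since both ◻ and ◇ ignore the evaluation world entirely, this
sketch also makes the S5 character of the system visible.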
Given our semantics for the quantifiers from previous chap-
ters, the following formulas are also valid:

• ∀x ◻ φ → ◻∀xφ

• ∃x ◇ φ → ◇∃xφ

The first of these is known as the B ARCAN FORMULA; the second is
actually equivalent. But as Dowty et al. (1981, 129) explain:

[I]t is somewhat controversial whether [these two state-
ments] should be formally valid. It has been suggested
that ∀x ◻ φ ought to mean “every individual x that ac-
tually exists is necessarily such that φ,” whereas ◻∀xφ
ought to mean “in any possible world, anything that
exists in that possible world is such that φ.” Similarly,
∃x ◇ φ ought to mean that “some individual x that ac-
tually exists is in some world such that φ”, whereas
◇∃xφ should mean that “in some world it is the case
that some individual which exists in that world is such
that φ.” To make these pairs of formulas semantically
distinct would require a model theory in which each
possible world has its own domain of individuals over


which quantifiers range (though the domains would,
in general, overlap partially). In this way, there could
be “possible individuals” that are not actual individu-
als, and perhaps actual individuals that do not “exist”
in some other possible worlds. The question whether
there are such individuals has, not surprisingly, been
the subject of considerable philosophical debate. It
is possible to construct a satisfactory model theory
on this approach (and in fact Kripke’s early treatment
in Kripke 1963 adopted it), but it is technically more
complicated than the approach we have adopted here,
and it was not adopted by Montague (for discussion
see Hughes & Cresswell 1968, pp. 170–184).

Note that treating possible worlds as first-order models, as Carnap
did, naturally suggests that different possible worlds may well be
associated with different domains of individuals. This not only
makes things more complicated, it also raises issues related to
how one might recognize a given individual as ‘the same’ individ-
ual across worlds, which of course is important for capturing the
semantics of sentences like I could have been a millionaire. Lewis
(1968) advocates an extreme version of the differing-domains view,
on which no two worlds share individuals. Rather, individuals are
identified across worlds through a COUNTERPART RELATION. In a
system where there is a fixed domain for all possible worlds, this
problem does not arise.
However, there are examples that seem to suggest that the verb
exist denotes a contingent property (examples from Coppock & Beaver):

(11) My university email account no longer exists.

(12) If that existed, then I would have heard of it!

A famous example discussed by Russell (1905) is:

(13) The golden mountain does not exist.


This sentence is felt to be true; but in that case, what does the
golden mountain refer to? One way of capturing these facts in
a fixed-domain framework is to introduce an existence predicate
exists, understood to be true of an individual at a world if that in-
dividual really exists at that world. Thus we can distinguish be-
tween two kinds of ‘existence’: BROAD EXISTENCE, which holds of
everything a quantifier can range over, that is, everything in the
domain of individuals, and NARROW EXISTENCE, which is a con-
tingent property of individuals, holding at some worlds but not
others. The verb exist can be taken to denote the narrow, contin-
gent kind of existence, captured by the existence predicate.
Now, it is possible to combine tense logic with alethic logic,
letting models contain both possible worlds and times, and hav-
ing complex points of evaluation for formulas consisting of a time
and a world. A formula might be true at ⟨w 1 , t 1 ⟩ but false at ⟨w 2 , t 3 ⟩,
for example. As a broad cover-term for the points at which for-
mulas are evaluated for truth, these pairs are called INDICES. But
given that we will advocate a non-Priorean, so-called ‘referential’
theory of tense in Chapter 12, we will continue to just use possible
worlds as our indices in this text.

11.3 Intensional Logic

11.3.1 Introducing Intensional Logic
In the previous section, we defined the semantics of ◻ and ◇ syn-
categorematically, rather than giving ◻ a meaning of its own. It
is common to do this with negation as well. But with negation,
unlike necessity, it is possible to give the symbol a meaning of its
own, one that is a function of the truth value of its complement.
The semantic value of the ¬ symbol can be defined as a function
that returns 0 if it receives 1 as input, or 1 if it receives 0 as input.
The same is not the case for ◻, because the truth of a ◻ statement
depends not on the truth value of its complement at a particular


world. We saw this above with Frege’s examples, repeated here:

(14) Necessarily, the Morning Star is the Morning Star.

(15) Necessarily, the Morning Star is the Evening Star.

Both of the embedded sentences are true, but only one of the full
sentences is.
Indeed, the truth of a necessity statement depends on the whole
range of truth values that the inner formula takes on across all
worlds. So if ◻ denotes a function, it does not take as input a truth
value. Rather, it must take as input a specification of all of the
truth values that the sentence takes on, across all worlds. In other
words, the input to the function that ◻ denotes must be a propo-
sition. In this section, we will develop tools that make it possible
to give ◻ a denotation of its own, and feed the right kind of object
to it as an argument.
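To see concretely why ¬ can denote a truth function while ◻ cannot, here is a minimal illustrative sketch in Python. The names and the three-world model are invented for illustration; nothing here is part of the formal fragment.

```python
# Negation can be a function of a truth value, but necessity cannot:
# it needs the whole pattern of truth values across worlds, i.e. a
# proposition (here modeled as a dict from worlds to truth values).

def neg(t):
    """Truth function for ¬: returns 0 given 1, and 1 given 0."""
    return 1 - t

WORLDS = ["w1", "w2", "w3"]

def box(proposition):
    """◻ takes a proposition (world -> truth value), not a truth value."""
    return 1 if all(proposition[w] == 1 for w in WORLDS) else 0

p = {"w1": 1, "w2": 1, "w3": 1}   # true at every world
q = {"w1": 1, "w2": 0, "w3": 1}   # true at w1, but contingent

# p and q have the same truth value at w1, yet ◻p and ◻q differ, so no
# truth function could serve as the meaning of ◻:
assert p["w1"] == q["w1"] == 1
assert box(p) == 1 and box(q) == 0
```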
The technique we will use (due to Carnap) is to associate with
each expression both an INTENSION and an EXTENSION. The in-
tension is a function from possible worlds to the denotation of
the expression at that world. The denotation of an expression at a
world is called the extension (of the expression at that world).
• A name (type e), which denotes an individual, has an inten-
sion that is a function from possible worlds to individuals.
A function from possible worlds to individuals is called an
INDIVIDUAL CONCEPT.

• A unary predicate (type ⟨e, t⟩), which denotes a set of indi-
viduals (or characteristic function thereof), has a function
from possible worlds to (characteristic functions of) sets of
individuals as its intension. Such a function is called a PROP-
ERTY.

• A formula (type t), which denotes a truth value, has as its
intension a function from possible worlds to truth values.
A function from possible worlds to truth values is called a
PROPOSITION.
The extension of an expression α at world w (with respect to
model M and assignment function g) is denoted ⟦α⟧^{M,g,w}. The
intension of an expression α is that function f such that f(w) =
⟦α⟧^{M,g,w}. That function is sometimes denoted as follows:

⟦α⟧^{M,g}_¢

with the cent sign ¢ as a subscript on the denotation brackets,
and no world variable superscript.

Note that it is not possible to figure out the intension from the
extension at a particular world. In order to get the intension, you
need to know the extension at every possible world. So there is
no function from extensions to intensions.

Note also that every expression in the language gets an extension,
even variables. But since the denotation of a variable is always
determined by an assignment function, its intension relative to g
will be a function that yields the same value for every possible
world given as input.

Now let us return to the problem of giving a compositional,
non-syncategorematic semantics for necessity and belief. Recall
that if the ◻ operator denotes any function, it denotes one whose
input is a proposition, rather than a truth value. The relevant
proposition is of course the intension of the formula with which
it combines. The strategy that Montague followed in order to do
so was to introduce a device that forms from any expression α a
new expression denoting the intension of α. The device is called
the 'hat operator', and it looks like this:

ˆα

Relative to any given world, this expression has as its extension
the intension of α. For example, the formula happy(m) has either
1 or 0 as its extension in every world. In w1, the extension of this
formula might be 1; in w2, the extension might be 0;

in w3, the extension might be 1. The intension is a function from
worlds to truth values. But the expression

ˆhappy(m)

has the intension of happy(m) as its extension. (Put more sim-
ply: The extension of ˆhappy(m) is the intension of happy(m).)
The official semantic rule for ˆ is as follows:

• If α is an expression of type τ, then ⟦ˆα⟧^{M,g,w} is that func-
tion f with domain W such that for all w′ ∈ W: f(w′) is
⟦α⟧^{M,g,w′}.

With the help of this 'hat' operator, a formula, which normally
denotes a truth value, can be converted into an expression that
denotes a proposition. This new expression is the right kind of
input for an expression that denotes necessity or belief.

Before showing how that works, it will be convenient to define
an extension of the type system that allows for the new kinds of
expressions that are formed using this operator, which denote
functions from possible worlds to other sorts of things. Letting s
stand for the type of possible worlds, we now have, for every type
τ, a new type ⟨s, τ⟩. Any expression of type ⟨s, τ⟩ will denote a
function from possible worlds to D_τ, where D_τ is the domain of
entities denoted by expressions of type τ. The complete type
system is now as follows:

• t is a type
• e is a type
• If σ and τ are types, then so is ⟨σ, τ⟩
• If τ is any type, then ⟨s, τ⟩ is a type

Our syntax rules will be extended so that if α is an expression of
type τ, then ˆα is an expression of type ⟨s, τ⟩.
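The point that intensions cannot be recovered from a single extension can be made concrete with a small Python sketch. The toy worlds and names here are invented for illustration.

```python
# The intension of a formula is the function from worlds to its
# extensions; knowing the extension at one world is not enough.

WORLDS = ["w1", "w2", "w3"]

happy_m = {"w1": 1, "w2": 0, "w3": 1}   # extension of happy(m), per world
taut = {"w1": 1, "w2": 1, "w3": 1}      # a formula true at every world

def intension(ext_per_world):
    """The intension: that function f with f(w) = the extension at w."""
    return lambda w: ext_per_world[w]

f = intension(happy_m)
h = intension(taut)

# The two formulas have the same extension at w1...
assert f("w1") == h("w1") == 1
# ...but different intensions, so no function maps extensions
# (truth values at a world) to intensions.
assert [f(w) for w in WORLDS] != [h(w) for w in WORLDS]
```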

The hat operator also has a partner, ˇ, which moves from in-
tensions to extensions. If α is an expression of type ⟨s, τ⟩, then
ˇα is an expression of type τ. Its semantics is defined as follows:

• If α is an expression of type ⟨s, τ⟩, then ⟦ˇα⟧^{M,g,w} is the
result of applying the function ⟦α⟧^{M,g,w} to w.

With these tools in hand, let us now consider how we might
get a handle on the de dicto / de re ambiguity and related puz-
zles. Let us introduce a constant bel, which relates a proposition
(denoted by an expression of type ⟨s, t⟩) with an individual (de-
noted by an expression of type e). Given that believe combines
first with its clausal complement and then with its subject, its
type should then be ⟨⟨s, t⟩, ⟨e, t⟩⟩.

The de dicto reading of a sentence like John believes that a Re-
publican will win can then be represented as follows:

bel(john, ˆ∃x [repub(x) ∧ win(x)])

The de re reading can be represented:

∃x [repub(x) ∧ bel(john, ˆ[win(x)])]

In the latter formula, the existential quantifier and the predicate
repub occur outside the scope of the belief operator. So on the de
re reading, John's belief does not have to do with the property of
being a Republican; it's about the particular individual. Not so
for the de dicto reading, on which the content of John's belief
involves that property.

Let us consider one more example of a de dicto / de re ambi-
guity, this time involving a proper name. The following sentence
can be understood in two ways:

(16) John believes that Miss America is bald.
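The two representations of John believes that a Republican will win can be checked against a toy model. This is only a sketch: the chapter does not define the semantics of bel, so we model John's belief Hintikka-style, as truth in a set of invented 'belief worlds'; all names are made up.

```python
WORLDS = ["w1", "w2"]                    # w1 = the actual world
INDIVIDUALS = ["ann", "bob"]
REPUB = {"w1": {"ann"}, "w2": {"bob"}}   # Republicans, per world
WIN = {"w1": {"bob"}, "w2": {"ann"}}     # winners, per world

DOX_JOHN = {"w2"}   # the worlds compatible with John's beliefs

def believes(prop):
    """bel(john, p): p is true in every world compatible with John's beliefs."""
    return all(prop(w) for w in DOX_JOHN)

# De dicto: bel(john, ˆ∃x[repub(x) ∧ win(x)]) -- repub evaluated inside
# John's belief worlds.
de_dicto = believes(lambda w: any(x in REPUB[w] and x in WIN[w]
                                  for x in INDIVIDUALS))

# De re: ∃x[repub(x) ∧ bel(john, ˆwin(x))] -- repub checked at the
# actual world w1, outside the scope of bel.
de_re = any(x in REPUB["w1"] and believes(lambda w, x=x: x in WIN[w])
            for x in INDIVIDUALS)

assert de_re and not de_dicto   # the two readings come apart in this model
```

In this model John believes ann will win, and ann is a Republican in the actual world but not in John's belief worlds, so the de re reading is true while the de dicto reading is false.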

The two interpretations can be represented as follows:

(17) [λx [bel(john, ˆbald(x))]](m)

(18) bel(john, ˆbald(m))

The former captures the de re interpretation; the latter captures
the de dicto one. On the de re interpretation, John might believe
of some individual, who happens to be Miss America without
John knowing it, that she is bald. On the de dicto interpretation,
John might not have any acquaintance at all with the individual
who is Miss America, but believe that, whoever they are, they are
bald. When m is in the scope of the bel operator, its interpreta-
tion may vary from world to world, but when it is outside, it just
denotes whoever Miss America is in the current world.

This example has an important consequence: lambda-conversion
is not in general a valid principle anymore. When the lambda-
bound variable is found in the scope of an intensional operator,
lambda conversion can change the meaning. See Dowty et al.
(1981, ch. 6, pp. 166-167) for a somewhat fuller discussion of
this issue.

11.3.2 Formal fragment

Let us define a new logic, IL 'Intensional Logic', following Mon-
tague. The language is not exactly the same as Montague's Inten-
sional Logic, but it is fundamentally similar in spirit.

The types are defined recursively as follows:

• t is a type
• e is a type
• If σ and τ are types, then so is ⟨σ, τ⟩

• If τ is any type, then ⟨s, τ⟩ is a type.

The set of expressions of type τ, for any type τ, is defined re-
cursively as follows:

1. Basic Expressions
For each type τ,
(a) the non-logical constants of type τ are the symbols of the
form c_{n,τ} for each natural number n;
(b) the variables of type τ are the symbols of the form v_{n,τ}
for each natural number n.

2. Predication
For any types σ and τ, if α is an expression of type ⟨σ, τ⟩ and β
is an expression of type σ then α(β) is an expression of type τ.

3. Negation
If φ is a formula, then so is ¬φ.

4. Binary Connectives
If φ and ψ are formulas, then so are [φ ∧ ψ], [φ ∨ ψ], [φ → ψ],
and [φ ↔ ψ].

5. Equality
If α and β are terms, then α = β is an expression of type t.

6. Quantification
If φ is a formula and u is a variable of any type, then [∀u . φ]
and [∃u . φ] are formulas.

7. Lambda abstraction
If α is an expression of type τ and u is a variable of type σ then
[λu . α] is an expression of type ⟨σ, τ⟩.

8. Alethic modalities (new!)
If φ is a formula, then ◻φ and ◇φ are formulas.

9. Intensionalization (new!)
If α is an expression of type τ, then ˆα is an expression of type
⟨s, τ⟩.

10. Extensionalization (new!)
If α is an expression of type ⟨s, τ⟩, then ˇα is an expression of
type τ.

The semantic values of expressions in IL depend on a model,
an assignment function, and a world. A model M = ⟨D, I, W⟩ is
a triple consisting of the domain of individuals D, an interpreta-
tion function I which assigns semantic values to each of the non-
logical constants in the language, and a set of worlds W.

Types are associated with domains. The domain of individuals
D_e = D is the set of individuals, the set of potential denotations
for an expression of type e. The domain of truth values D_t con-
tains just two elements: 1 'true' and 0 'false'. For any types a and
b, D_{⟨a,b⟩} is the domain of functions from D_a to D_b. For every
type a, I assigns an object in D_a to every non-logical constant of
type a.

Assignments provide values for variables of all types, not just
those of type e. An assignment thus is a function assigning to
each variable of type a a denotation from the set D_a.

The semantic value of an expression is defined as follows:

1. Basic Expressions
(a) If α is a non-logical constant, then ⟦α⟧^{M,g} = I(α).
(b) If α is a variable, then ⟦α⟧^{M,g} = g(α).

2. Predication
If α is an expression of type ⟨a, b⟩, and β is an expression of
type a, then ⟦α(β)⟧^{M,g} = ⟦α⟧^{M,g}(⟦β⟧^{M,g}).

3. Equality
If α and β are terms, then ⟦α = β⟧^{M,g} = 1 iff ⟦α⟧^{M,g} = ⟦β⟧^{M,g}.

4. Negation
If φ is a formula, then ⟦¬φ⟧^{M,g} = 1 iff ⟦φ⟧^{M,g} = 0.

5. Binary Connectives
If φ and ψ are formulas, then:

(a) ⟦φ ∧ ψ⟧^{M,g} = 1 iff ⟦φ⟧^{M,g} = 1 and ⟦ψ⟧^{M,g} = 1.
(b) ⟦φ ∨ ψ⟧^{M,g} = 1 iff ⟦φ⟧^{M,g} = 1 or ⟦ψ⟧^{M,g} = 1.
(c) ⟦φ → ψ⟧^{M,g} = 1 iff ⟦φ⟧^{M,g} = 0 or ⟦ψ⟧^{M,g} = 1.
(d) ⟦φ ↔ ψ⟧^{M,g} = 1 iff ⟦φ⟧^{M,g} = ⟦ψ⟧^{M,g}.

6. Quantification

(a) If φ is a formula and v is a variable of type a then
⟦∀v . φ⟧^{M,g} = 1 iff for all k ∈ D_a: ⟦φ⟧^{M,g[v↦k]} = 1.
(b) If φ is a formula and v is a variable of type a then
⟦∃v . φ⟧^{M,g} = 1 iff there is some k ∈ D_a such that:
⟦φ⟧^{M,g[v↦k]} = 1.

7. Lambda Abstraction
If α is an expression of type a and u a variable of type b then
⟦λu . α⟧^{M,g} is that function h from D_b into D_a such that for
all objects k in D_b, h(k) = ⟦α⟧^{M,g[u↦k]}.

8. Alethic modalities (new!)

(a) ⟦◻φ⟧^{M,g,w} = 1 iff ⟦φ⟧^{M,g,w′} = 1 for all w′ ∈ W.
(b) ⟦◇φ⟧^{M,g,w} = 1 iff ⟦φ⟧^{M,g,w′} = 1 for some w′ ∈ W.

9. Intensionalization (new!)
If α is an expression of type τ, then ⟦ˆα⟧^{M,g,w} is that func-
tion f with domain W such that for all w′ ∈ W: f(w′) is
⟦α⟧^{M,g,w′}.

10. Extensionalization (new!)
If α is an expression of type ⟨s, τ⟩, then ⟦ˇα⟧^{M,g,w} is the re-
sult of applying the function ⟦α⟧^{M,g,w} to w.

Exercise 3. Formulate a lexical entry for a necessity operator, and
show how it accounts for the examples involving the Morning Star
and the Evening Star.

Exercise 4. Consider (8) in light of the presentation just given.
Give logical translations for the two sentences involved, and ex-
plain how they can differ in their truth value even when the em-
bedded statements have the same truth value, by giving an exam-
ple model where this is the case.

Exercise 5. Formalize the de dicto and de re readings of what
the statute prohibits in the Andersen example on page 315, and
describe in your own words what the truth conditions are under
these two readings; in other words, describe what properties a
model would have to have in order for the reading to be true.

Exercise 6. Is it possible to give a non-syncategorematic treatment
of the hat operator ˆ? Explain why or why not.

11.4 Fregean sense and hyperintensionality
Frege’s assessment of his puzzle about identity built on a distinc-
tion between sense and reference. For Frege, the expressions the
Morning Star and the Evening Star have the same referent, but
they differ in their sense. Frege was not fully explicit about what
a sense was, but described it as a ‘mode of presentation’. A Car-
napian intension is like a Fregean sense insofar as it provides a
more fine-grained notion of meaning, but one might question whether
it really captures what Frege had in mind. Perhaps Frege’s notion
of sense is even more fine-grained than the notion of intension.
Certainly, intensions in Carnap’s sense are not sufficiently fine-
grained to capture entailment relations among belief sentences.
For example, the sentence 2 + 2 = 4 is a mathematical truth, so it
is true in every possible world. And there are many other math-
ematical truths that are true in exactly the same possible worlds
(namely all of them), such as Toida’s conjecture. But from (19), it
does not follow that (20) is true.

(19) John believes that 2 + 2 = 4.

(20) John believes that Toida’s conjecture is true.

This problem is not limited to tautologies; it also holds for
pairs of contingent but logically equivalent propositions where
the logical equivalence might be cognitively difficult to compute.
For example, the law of contraposition is sometimes difficult for
human beings to compute, so (21) does not imply (22) (exam-
ple from Muskens 2005):

(21) John believes that the cat is in if the dog is out.

(22) John believes that the dog is in if the cat is out.
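The problem can be made vivid with a toy check: the two belief complements in (21) and (22) are contrapositives of one another, so they are true at exactly the same worlds, hence they have the same intension. The four-world model below is invented for illustration.

```python
# Worlds are pairs (cat_in, dog_in) of truth values.
WORLDS = [(c, d) for c in (0, 1) for d in (0, 1)]

def sent21(w):
    """'the cat is in if the dog is out': dog_out -> cat_in."""
    cat_in, dog_in = w
    return 1 if (dog_in or cat_in) else 0

def sent22(w):
    """'the dog is in if the cat is out': cat_out -> dog_in."""
    cat_in, dog_in = w
    return 1 if (cat_in or dog_in) else 0

# Identical intensions: any intension-based analysis of belief must
# therefore validate the inference from (21) to (22).
assert all(sent21(w) == sent22(w) for w in WORLDS)
```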

Both of these cases exemplify the PROBLEM OF LOGICAL OMNI-
SCIENCE: in general, people do not believe all of the logical con-
sequences of their beliefs. Phenomena in which the substitution
of one expression for another that has the same intension leads
to a difference in truth value are called HYPERINTENSIONAL. Such
cases clearly show that the analysis of belief given in the previous
section is inadequate, and moreover that a more fine-grained no-
tion of meaning is required. Recent perspectives on the problem
are collected in a special volume of the journal Synthese; see Jes-
persen & Duží 2015 for an overview.
But the existence of hyperintensionality does not negate the
existence of the intensional 'layer' of meaning, as it were. Inten-
sions are like the shadows of hyperintensions. And intensions are
quite a bit more straightforward to deal with and more standard,
at the time of writing. Therefore, in order to keep things manage-
able, we will continue to work 'at the intensional level' as it were,
keeping in mind that any fully adequate theory ought to be hyper-
intensional.

11.5 Explicit quantification over worlds
The astute reader may have noticed that in the system described
in §11.3, there were no expressions of type s. This is partly be-
cause Montague and his contemporaries believed that there were
no expressions that made reference to possible worlds. That as-
sumption has since been challenged, and for that reason among
others, it is generally preferred nowadays to use a formal system
in which there is explicit quantification and binding of possible
worlds. On this view, rather than writing

ˆbald(m)

one writes something along the lines of:

λw . bald_w (m)

where the subscript w on bald is meant to indicate that bald de-
notes a function that takes a possible world as an argument, in
addition to
an individual. A famous example of this sort of system is Gallin’s
(1975) Ty2 , so called because it has two types (s and e) other than
t . The semantics literature still uses both styles, as each has its
own advantages, although Gallin’s style is gaining in popularity.
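The contrast between the two styles can be sketched in Python; the names and the toy extensions are invented for illustration.

```python
# Gallin-style (Ty2): the predicate takes a world argument directly,
# and propositions are built by explicit abstraction over worlds.

WORLDS = ["w1", "w2"]
BALD = {"w1": {"m"}, "w2": set()}   # extension of bald, per world

def bald(w, x):
    """bald_w(x): world-relative from the start."""
    return 1 if x in BALD[w] else 0

# λw . bald_w(m): a proposition, explicitly abstracting over w.
prop = lambda w: bald(w, "m")

assert prop("w1") == 1 and prop("w2") == 0
```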


11.5.1 Modal auxiliaries
So far we have discussed two types of modal expressions: the ad-
jectives necessary and possible, denoting alethic modalities, and
attitude verbs like believe and hope. We have not said anything
about modal auxiliaries like may, must, can, and have to. For a
more thorough and pedagogical discussion of this topic than what
can be achieved here, the reader is encouraged to consult Chapter
3 of von Fintel & Heim 2011, which motivates a particular context-
sensitive analysis of these elements.

11.6 Logic of Indexicals
We are now in a position to appreciate Kaplan’s (1977) theory of
indexicals, published in a famous article called Demonstratives.
An INDEXICAL may be defined as “a word whose referent is de-
pendent on the context of use, which provides a rule which deter-
mines the referent in terms of certain aspects of the context” (Ka-
plan, 1977, 490). Examples include I, my, you, that, this, here, now,
tomorrow, yesterday, actual, and present. Kaplan distinguishes be-
tween two sorts of indexicals:

• DEMONSTRATIVES : indexicals that require an associated demon-
stration. Examples: this and that.

• PURE INDEXICALS: indexicals for which no demonstration
is required. Examples: I, now, here, tomorrow (although
here has a demonstrative use: "In two weeks, I will be
here").
What is particularly interesting about indexicals is that they ex-
hibit what Kaplan calls a ‘deviant logic’. To illustrate, consider the
following sentence:

(23) I am here now.


This sentence is similar to a tautology in that it is true whenever
it is uttered (given normal interpretations of the terms involved;
here we are setting aside answering machine cases where I am not
here right now is in fact perfectly apt to be true). But (23) is not a
necessary truth, because the circumstances could be otherwise.
Any given speaker of such a sentence might well have ended up
somewhere else that day.
This puzzle can be solved with the help of two levels of mean-
ing, one which he calls CHARACTER and another which he calls
CONTENT. He bases this conclusion on the following two princi-
ples:

• The referent of a pure indexical depends on the context, and
the referent of a demonstrative depends on the associated
demonstration.

• Indexicals, pure and demonstrative alike, are directly refer-
ential.
Kaplan defines DIRECTLY REFERENTIAL as follows: an expression
is directly referential if its referent, once determined, is taken as
fixed for all possible circumstances. By CIRCUMSTANCES, he means
possible worlds (or indices, more generally). So a directly referen-
tial expression would have a constant intension, i.e., one whose
value does not change depending on the input possible world.
Kripke (1980) used the term RIGID DESIGNATOR for more or less
the same concept. On Kripke's view, proper names like John are
rigid designators, and definite descriptions like the president are
not. For example, consider the following sentence:

(24) The president is a Democrat.

This sentence is false in the actual world. But there is an alterna-
tive world where Hillary is president (and still a Democrat); rel-
ative to that world, the president picks her out and the sentence
is true. What affects the truth of the sentence at a given world is
whether or not whoever happens to be president in that world is
a Democrat at that world. But suppose we try:

(25) Donald Trump is a Democrat.

This sentence is also false in the actual world. But if we move to an
alternative world where Hillary is president (and still a Democrat),
the sentence does not suddenly become true at that world. Of
course, the sentence could be true, if Donald Trump himself is a
Democrat in that world. But who is president does not matter.
What matters to the truth of this sentence is that Donald Trump
himself has the property of being a Democrat.
Now suppose that Donald Trump decides to refer to himself
with a first-person pronoun rather than in the third person, and
says:
(26) I am a Democrat.

Again, the sentence is false in this world. And its ‘modal profile’ as
it were is exactly the same as the one for the proper name. If we
move to another world where Hillary Clinton is president (and
still a Democrat), the sentence does not necessarily become true.
What matters is whether Trump himself is a Democrat. So:

• Donald Trump designates the same individual in every pos-
sible world; this expression is directly referential.

• The president can designate different individuals in differ-
ent possible worlds.

• When Donald Trump says I, he means Donald Trump. I is
directly referential too.

(A slight complication comes from the fact that there are so-called
DESCRIPTIVE uses of indexicals, as Nunberg (1992) discusses. His
example is the following, spoken by a person on death row:

(27) I am traditionally allowed a last meal.

But let us ignore this fact for the time being.)

Recall that an expression is called 'directly referential' if its ref-
erent, once determined, is taken as fixed for all possible circum-
stances. However, as Kaplan continues, "This does not mean it
could not have been used to designate a different object." In a
different context, it might have. For example, if Hillary Clinton
says I am a Democrat, then I picks out Hillary Clinton rather
than Donald Trump. But regardless of the circumstance of eval-
uation, it picks out the same object. The word I picks out the
same individual in every circumstance of evaluation, but differ-
ent individuals in different contexts of utterance. Whereas the
meaning of the indexical I is fixed across all circumstances of
evaluation, the meaning of a definite description is variable across
circumstances of evaluation (and arguably variable across con-
texts of use as well).

It therefore becomes important to distinguish between the fol-
lowing two concepts:

• CONTEXT OF UTTERANCE: Who is speaking to whom, when,
where, what they are gesturing to, etc.

• CIRCUMSTANCE OF EVALUATION: A possible world at which
the truth of the utterance might be evaluated.

Now consider the following two utterances:

(28) a. (May 11, 2010, uttered by Elizabeth Coppock:) I am
turning 30 today.
b. (May 12, 2010, uttered by Elizabeth Coppock:) I am
turning 30 today.

Do they have the same meaning or different meanings? How
about the following pair:

(29) a. (May 11, 2010, uttered by Elizabeth Coppock:) I am
turning 30 today.
b. (May 12, 2010, uttered by Elizabeth Coppock:) I turned
30 yesterday.

Intuitions differ. In some sense, even though an indexical does
not always refer to the same thing, it has some sort of descriptive
meaning: the indexical I means something like 'the speaker of
the sentence'. Kaplan resolves this tension by distinguishing be-
tween two levels of meaning, CHARACTER and CONTENT.

• CHARACTER is the aspect of meaning that two utterances of
the same sentence share across different contexts of utter-
ance, formally a function from contexts to contents. The
character of a sentence is something that, given a context of
utterance, gives you a content.

• CONTENT is the proposition expressed by an utterance, with
the referents of all of the indexicals resolved. For sentences,
the content is a proposition, a function from possible worlds
to truth values.

So under these definitions, the pair of sentences in (28) have the
same character, while the pair in (29) have the same content. But,
given a context of utterance, the contribution of an indexical to
the content of a sentence is fixed.

Kaplan's 'contents' are essentially the same as Carnap's 'inten-
sions': they are functions from possible worlds (a.k.a. 'circum-
stances of evaluation') to extensions, and in that sense, they play
indistinguishable roles for our purposes. So in a nutshell, the
Kaplanian picture is as follows:

• Character + Context of utterance ⇒ Content
• Content + Circumstance of evaluation ⇒ Extension

(We could have written 'Intension' in place of 'Content' here.)
Kaplan says

g . and a world w. So now. “this meaning is relevant only to determining a referent in a context of use and not to determining a relevant individual in a circumstance of evaluation. The extension of an expression α is denoted as follows: JαKM . Nonsense! If that were the correct analysis.w. yet not a NECESSARY TRUTH. The modal op- . then ◻φ is true in all models. Similarly. In other words. because the circumstances could be otherwise.c The M . So as Kaplan (1999) describes it. leads to an absurd conclusion. Suppose I do not exist is true in a circumstance of eval- uation if and only if the speaker (assuming there is one) of the circumstance does not exist in the circum- stance. we will also determine the extension of expressions relative to a context of utterance c. taking the descriptive content to be part of the content. Kaplan’s theory does not. how does it work? Very simply. an assignment function g . All that is needed in or- der to capture this deviant logic is to add context as a parameter of evaluation. it is true. 498).Intensional semantics 337 (p. Imagine if it were otherwise! Kaplan writes. indexicals produce “a distinctive and deviant pattern of logical consequence”. Kaplan’s theory can explain why I am here now is a LOGICAL TRUTH . rather than the character. So. then JαK M . what I said could not be true. For example.” In other words. This is unusual: Normally logical truths are necessary truths! If φ is true in all models. From which it follows that: It is impossible that I do not exist. in the sense that whenever it is uttered. but not the content. g and w parameters work just as we have already seen. if α is a variable.g .c = g (α). in addition to a model M .w. the descriptive meaning is part of the character.

(They also contain times and po- sitions but we will ignore those here. .c = l (c ) The designated circumstance of evaluation w (c ) is intuitively the world in which the utterance takes place.c = sp (c ) (31) JuKM .w. a set of possible worlds.g . The models of Kaplan’s logic of indexicals determine a set of contexts.338 Intensional semantics erators are also defined just as before: J◻φK = 1 iff JφK M .g . The context parameter c represents the context of utterance.c = M . in addition to a set of individuals. u ‘you’ n ‘now’. so in general. f (c ) = JαK M .w.c = t (c ) (33) JhKM . an addressee ad (c ). Now. Truth in a context can then be defined as follows: An occurrence of φ in c is true iff the content expressed by φ in this context is true when evaluated with respect to the circumstance of the context. The indexical constants i ‘I’. For example.w. I .W. and a designated cir- cumstance of evaluation w (c ).w ′ .w. and C is a set of contexts. a location of utterance l (c ).g .C ⟩ where D is a set of individuals. there are certain constraints on these models.g .g . the character of an expression α will be a function f such that for any c.g .c 1 for all w ′ . I is a world-relative interpretation function.w. and an interpretation function.) So a simplest model M for a logic of indexicals would be a tuple: M = ⟨D. W is a set of possible worlds. and h ‘here’ can be defined as follows: (30) JiKM .w.c = ad (c ) (32) JnKM .g . The character can be defined for- mally by abstracting over the context parameter. It determines a speaker sp (c ).c . a time of ut- terance t (c ).
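The two-step Kaplanian picture (character + context ⇒ content; content + circumstance ⇒ extension) can be sketched in Python. Contexts and worlds are modeled as simple records; every name below is invented for illustration.

```python
WORLDS = ["w_actual", "w_alt"]
# Who turns 30 on which day, per world:
BIRTHDAY = {"w_actual": {("coppock", "2010-05-11")},
            "w_alt": set()}

def character(context):
    """Character of 'I am turning 30 today': context -> content,
    where a content is a function from worlds to truth values."""
    speaker, day = context["speaker"], context["day"]
    return lambda w: 1 if (speaker, day) in BIRTHDAY[w] else 0

c1 = {"speaker": "coppock", "day": "2010-05-11"}
c2 = {"speaker": "coppock", "day": "2010-05-12"}

content1, content2 = character(c1), character(c2)

# Same character, different contents, as in (28): the two contents
# disagree at the actual world.
assert content1("w_actual") == 1 and content2("w_actual") == 0
```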

the speaker of any context must be in the extension of the exis-
tence predicate exists at the world of the context.⁴ Formally:

(34) If c ∈ C, then sp(c) ∈ I_{w(c)}(exists).

This condition on well-formed models requires that for any con-
text c in the model, the interpretation function I must be such
that the extension of the exists predicate in the world of c con-
tains the speaker of c. This condition guarantees that the char-
acter of 'I exist', formally exists(i), will be a function from con-
texts to contents such that the content is true in the world of the
context, for any context. In other words, 'I exist' is a logical
truth. But it is not a necessary truth. The content expressed is
one that is contingent.

Kaplan's logic of indexicals contains necessity and possibility
operators defined in the standard way. So

⟦◻exists(i)⟧^{M,g,w,c} = 1 iff ⟦exists(i)⟧^{M,g,w′,c} = 1 for all w′.

If, relative to c, i denotes an individual that does not exist in
every world, then ◻exists(i) will be false. Otherwise, the sen-
tence will be true in the context. Thus ◻exists(i) is neither a
logical truth nor a necessary truth.

Exercise 7. Explain why I do not exist is logically false yet not
necessarily false in this framework.

Exercise 8. In this framework, pronouns and indexicals depend
on different parameters of the function that assigns semantic val-
ues to expressions. Which parameter do pronouns and indexicals
depend on, respectively?

⁴ Cf. p. 544 of Kaplan (1977), conditions 10 and 11.
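The effect of condition (34) can be sketched as follows. The two-world model and all names are invented; the point is just that exists(i) is true in every well-formed context but not at every world.

```python
WORLDS = ["w1", "w2"]
EXISTS = {"w1": {"ann", "bob"}, "w2": {"bob"}}   # I_w(exists)

# Contexts obey (34): sp(c) is in I_{w(c)}(exists).
CONTEXTS = [{"sp": "ann", "w": "w1"}, {"sp": "bob", "w": "w2"}]

def exists_i(c, w):
    """The extension of exists(i) at world w in context c."""
    return 1 if c["sp"] in EXISTS[w] else 0

def true_in_context(c):
    """An occurrence is true in c iff its content is true at w(c)."""
    return exists_i(c, c["w"])

# 'I exist' is a logical truth: true in every well-formed context...
assert all(true_in_context(c) == 1 for c in CONTEXTS)
# ...but not a necessary truth: ann does not exist at w2, so
# ◻exists(i) is false relative to ann's context.
assert not all(exists_i(CONTEXTS[0], w) == 1 for w in WORLDS)
```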

Kaplan's framework captures the fact that words like possible
and necessary do not affect the interpretation of indexicals. As
Kaplan (1977, 499) puts it, "See how rigidly the indexicals cling
to the referent determined in the context of use:"

(35) It is possible that in Pakistan, in five years, only those who
are actually here now are envied.

Moving to another circumstance of evaluation with it is possible
that, another location with in Pakistan, and another time with in
five years does not change the interpretation of actually, here,
and now. So these operators may shift the circumstance of evalu-
ation, but they don't shift the context.

Now, one might ask if there are operators that are similar to ◻
except that they quantify over different possible contexts of utter-
ance rather than different possible circumstances of evaluation.
Kaplan asks this question in the following way (p. 510):

Are there such operators as 'In some contexts it is
true that', which when prefixed to a sentence yields
a truth if and only if in some context the contained
sentence (not the content expressed by it) expresses
a content that is true in the circumstances of that
context?

Let us try it:

(36) In some contexts it is true that I am not tired now.

For [(36)] to be true in the present context it suffices that some
[speaker] of some context not be tired at the time of that context.
I am not saying we could not construct a language with such op-
erators, just that English is not one. And such operators could
not be added to it. He continues:


There is a way to control an indexical, to keep it from
taking primary scope, and even to refer it to another
context (this amounts to changing character). Use quo-
tation marks. If we mention the indexical rather than
use it, we can, of course, operate directly on it. Carnap
once pointed out to me how important the difference
between direct and indirect quotation is in:

Otto said “I am a fool.”
Otto said that I am a fool.

Operators like ‘In some context it is true that’, which
attempt to meddle with character, I call MONSTERS. I
claim that none can be expressed in English (without
sneaking in a quotation device).

As it turned out, some years later, monsters were found in the
Semitic language Amharic (Schlenker, 2003), where the word-by-
word translation of John said that I am a hero means ‘John said
that he was a hero’, and it can be shown that quotation is not in-
volved. So apparently monsters do exist after all, and they have
proved quite rewarding to study.
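The contrast between an ordinary modal and a Kaplanian monster can be sketched in Python: ◇ shifts the world while holding the context fixed, whereas a monster shifts the context itself. The model and all names are invented for illustration.

```python
WORLDS = ["w1", "w2"]
CONTEXTS = [{"sp": "ann", "time": "t1", "w": "w1"},
            {"sp": "bob", "time": "t2", "w": "w2"}]
TIRED = {("ann", "t1"): 1, ("ann", "t2"): 0,
         ("bob", "t1"): 0, ("bob", "t2"): 0}

def i_am_tired(c, w):
    """'I am tired now': 'I' and 'now' resolved from the context c."""
    return TIRED[(c["sp"], c["time"])]

def diamond(phi, c, w):
    """◇φ: shift the world, keep the context fixed."""
    return 1 if any(phi(c, v) for v in WORLDS) else 0

def monster(phi, c, w):
    """'In some context it is true that φ': shift the context itself,
    evaluating at the world of the shifted context."""
    return 1 if any(phi(c2, c2["w"]) for c2 in CONTEXTS) else 0

c = CONTEXTS[0]   # ann speaking at t1; she is tired
# ◇(I am not tired now) is false for ann: 'I' and 'now' stay fixed.
assert diamond(lambda c_, w_: 1 - i_am_tired(c_, w_), c, "w1") == 0
# But the monster finds a context (bob's) whose speaker is not tired.
assert monster(lambda c_, w_: 1 - i_am_tired(c_, w_), c, "w1") == 1
```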
Let us take stock. One way of looking at what Kaplan achieved
in Demonstratives is as “the scientific realization of a Strawsonian
semantics of use.” Strawson was one of the philosophers who protested
against the logical approach to natural language semantics under
the slogan, ‘meaning is use’. Here is a representative quotation
from Strawson (1950):

Meaning (in at least one important sense) is a func-
tion of the sentence or expression; mentioning and
referring and truth or falsity, are functions of the use
of the sentence or expression. To give the meaning
of an expression (in the sense in which I am using the
word) is to give general directions for its use to refer to
or mention particular objects or persons; to give the


meaning of a sentence is to give general directions for
its use in making true or false assertions.

Kaplan (1999), reflecting on Demonstratives, says the following:

In [Demonstratives], I tried to show that by adding
context as a parameter, Strawson’s “conventions for
referring” as he calls them, even if neglected by lo-
gicians, could be accommodated within the range of
our methods. And at the time I regarded my work
as extending current semantical methods just to the
degree necessary to incorporate the indexicals. I re-
garded what I was doing as a sort of epicycle on Car-
nap’s method of extension and intension and I didn’t
think of it as involving any different conception of se-
mantics or what semantics was supposed to do.
Some years ago, it occurred to me that the analysis of
indexicals in Demonstratives could be seen as the sci-
entific realization of a Strawsonian semantics of use.
Ask not after other-worldly meanings, ask only after
rules of use. [Emphasis added]

12 ∣ Tense

In which we give a theory of tense.



Andersen, Jill. 2014. Misreading like a lawyer: Cognitive bias in
statutory interpretation. Harvard Law Review 127. 1563–68.

Barker, Chris. 1995. Possessive descriptions. Stanford: CSLI Publications.

Barker, Chris. 2005. Remark on Jacobson 1999: Crossover as a lo-
cal constraint. Linguistics and Philosophy 28(4). 447–472.

Barwise, Jon & Robin Cooper. 1981. Generalized quantifiers and
natural language. Linguistics and Philosophy 4. 159–219.

Beaver, David & Emiel Krahmer. 2001. A partial account of pre-
supposition projection. Journal of Logic, Language and Infor-
mation 10. 147–182.

Bresnan, Joan. 2001. Lexical-functional syntax. Malden, MA: Blackwell.

Bruening, Benjamin. 2001. Syntax at the edge: Cross-clausal phe-
nomena and the syntax of Passamaquoddy: MIT dissertation.

Bruening, Benjamin. 2008. Quantification in Passamaquoddy. In
Quantification: A cross-linguistic perspective, 67–103. Emerald
Group Publishing.

Carlson, Gregory N. 1984. Thematic roles and their role in seman-
tic interpretation. Linguistics 22(3). 259–280. doi:10.1515/ling.



Carnie, Andrew. 2013. Syntax: A generative introduction. Blackwell.

Carpenter, Bob. 1998. Type-logical semantics. MIT Press.

Castañeda, Hector-Neri. 1967. Comments. In Nicholas Rescher
(ed.), The logic of decision and action. Pittsburgh, PA: University of Pittsburgh Press.

Champollion, Lucas. 2015. The interaction of compositional se-
mantics and event semantics. Linguistics and Philosophy 38(1).
31–66. doi:10.1007/s10988-014-9162-8.

Champollion, Lucas, Josh Tauberer & Maribel Romero. 2007. The
Penn Lambda Calculator: Pedagogical software for natural language
semantics. In Tracy Holloway King & Emily M. Bender
(eds.), Proceedings of the Grammar Engineering Across Frameworks
(GEAF) 2007 workshop. CSLI On-line Publications.

Chierchia, Gennaro & Sally McConnell-Ginet. 2000. Meaning and
grammar: An introduction to semantics. Cambridge, MA: MIT
Press, 2nd edn.

Chomsky, Noam. 1973. Conditions on transformations. In
Stephen Anderson & Paul Kiparsky (eds.), A festschrift for Morris
Halle, 232–286. New York: Holt, Rinehart and Winston.

Chomsky, Noam. 1995. The minimalist program. Cambridge, MA:
MIT Press.

Chomsky, Noam & Howard Lasnik. 1977. Filters and control. Lin-
guistic Inquiry 8. 435–504.

Condoravdi, Cleo. 2010. NPI licensing in temporal clauses. Natu-
ral Language and Linguistic Theory 28. 877–910.

Cooper, Robin. 1983. Quantification and syntactic theory. Reidel.


Coppock, Elizabeth. under second-round review. Outlook-based
semantics. Submitted to Linguistics and Philosophy.

Coppock, Elizabeth & David Beaver. 2015. Definiteness and deter-
minacy. Linguistics and Philosophy 38(5). 377–435.

Davidson, Donald. 1967. The logical form of action sentences.
In Nicholas Rescher (ed.), The logic of decision and action, 81–
95. Pittsburgh, PA: University of Pittsburgh Press. doi:10.1093/

Dowty, David, Robert E. Wall & Stanley Peters. 1981. Introduction
to Montague semantics. Dordrecht: Kluwer.

von Fintel, Kai. 1999. NPI licensing, Strawson entailment, and
context-dependency. Journal of Semantics 16. 97–148.

Francez, Itamar. 2011. Quantified possessives and direct com-
positionality. In Ed Cormany, Satoshi Ito & David Lutz (eds.),
Proceedings of Semantics and Linguistic Theory (SALT), vol. 19,
165–179. eLanguage.

Frege, Gottlob. 1892. Über Sinn und Bedeutung. Zeitschrift für
Philosophie und philosophische Kritik 100. 25–50. Translated
as On Sense and Reference by M. Black in Translations from the
Philosophical Writings of Gottlob Frege, P. Geach and M. Black
(eds. and trans.), Oxford: Blackwell, third edition, 1980.

Frege, Gottlob. 1892 [reprinted 1948]. Sense and reference. The
Philosophical Review 57(3). 209–230.

Gallin, Daniel. 1975. Intensional and higher order modal logic.
Amsterdam: North Holland Press.

Geach, Peter. 1962. Reference and generality. Ithaca, NY: Cornell
University Press.

Geurts, Bart & David Beaver. 2011. Discourse representation theory. In Edward Zalta, Uri Nodelman & Colin Allen (eds.), Stanford encyclopedia of philosophy. Stanford, CA: CSLI.

Giannakidou, Anastasia. 1999. Affective dependencies. Linguistics and Philosophy 22. 367–421.

Grice, H. P. 1975. Logic and conversation. In Peter Cole & Jerry Morgan (eds.), Syntax and semantics, vol. 3, 41–58. New York: Academic Press.

Groenendijk, Jeroen & Martin Stokhof. 1990. Dynamic Montague grammar. In Proceedings of the Second Symposion on Logic and Language, 3–48.

Groenendijk, Jeroen & Martin Stokhof. 1991. Dynamic predicate logic. Linguistics and Philosophy 14. 39–100.

Groenendijk, Jeroen, Theo Janssen & Martin Stokhof (eds.). 1984. Truth, interpretation and information: Selected papers from the Third Amsterdam Colloquium. Dordrecht, Netherlands: Foris.

Hackl, Martin. 2009. On the grammar and processing of proportional quantifiers: most vs. more than half. Natural Language Semantics 17. 63–98.

Haug, Dag. 2013. Partial dynamic semantics for anaphora: Compositionality without syntactic coindexation. Journal of Semantics (online first).

Heim, Irene. 1982. The semantics of definite and indefinite noun phrases: U. Mass Amherst dissertation.

Heim, Irene. 1983a. File change semantics and the familiarity theory of definiteness. In Rainer Bäuerle, Christoph Schwarze & Arnim von Stechow (eds.), Meaning, use and the interpretation of language, 164–189. Berlin: Walter de Gruyter.

Heim, Irene. 1983b. On the projection problem for presuppositions. In Daniel Flickinger, Michael Barlow & Michael Wescoat (eds.), Proceedings of the second west coast conference on formal linguistics, 114–125. Stanford, CA: Stanford University Press.

Heim, Irene. 1983c. On the projection problem for presuppositions. In M. Barlow, D. Flickinger & M. Wescoat (eds.), Proceedings of the second annual west coast conference on formal linguistics (WCCFL), 114–125. Stanford: Stanford University.

Heim, Irene. 1992. Presupposition projection and the semantics of attitude verbs. Journal of Semantics 9. 183–221.

Heim, Irene & Angelika Kratzer. 1998. Semantics in generative grammar. Oxford: Blackwell.

Hendriks, Herman. 1993. Studied flexibility: ILLC dissertation.

Higginbotham, James. 1983. The logic of perceptual reports: An extensional alternative to situation semantics. The Journal of Philosophy 80(2). 100–127. doi:10.2307/2026237.

Hovda, Paul. 2009. What is classical mereology? Journal of Philosophical Logic 38(1). 55–82. doi:10.1007/s10992-008-9092-4.

Hughes, G. E. & M. J. Cresswell. 1968. An introduction to modal logic. London: Methuen and Co Ltd.

Jacobson, Pauline. 1999. Towards a variable-free semantics. Linguistics and Philosophy 22. 117–184.

Jacobson, Pauline. 2000. Paycheck pronouns, Bach-Peters sentences, and variable-free semantics. Natural Language Semantics 8. 77–155.

Jacobson, Pauline. 2012. Direct compositionality. In The Oxford handbook of compositionality, 109–129. Oxford University Press.

Jespersen, Bjorn & Marie Duží. 2015. Introduction. Synthese 192(3). 525–534.

Kamp, Hans. 1971. Formal properties of ‘now’. Theoria 37(3). 227–273.

Kamp, Hans. 2001. The importance of presupposition. In Christian Rohrer, Antje Rossdeutscher & Hans Kamp (eds.), Linguistic form and its computation. Stanford: CSLI Publications.

Kamp, Hans & Uwe Reyle. 1993. From discourse to logic. Dordrecht: Kluwer Academic Publishers.

Kaplan, David. 1977. Demonstratives: An essay on the semantics, logic, metaphysics, and epistemology of demonstratives and other indexicals. In Joseph Almog, John Perry & Howard Wettstein (eds.), Themes from Kaplan. Oxford: Oxford University Press.

Kaplan, David. 1999. The meaning of ouch and oops. Lecture presented at the University of California at Berkeley.

Karttunen, Lauri. 1973. Presuppositions of compound sentences. Linguistic Inquiry 4(2). 169–193.

Karttunen, Lauri. 1974. Presuppositions and linguistic context. Theoretical Linguistics 1. 181–194.

Karttunen, Lauri. 1976. Discourse referents. In James D. McCawley (ed.), Notes from the linguistic underground (Syntax and Semantics 7), 363–385. New York: Academic Press.

Keenan, Edward. 1987. Semantic case theory. In Proceedings of the sixth Amsterdam Colloquium. University of Amsterdam.

Kipper-Schuler, Karin. 2005. VerbNet: A broad-coverage, comprehensive verb lexicon. Philadelphia, PA: University of Pennsylvania dissertation.

Kratzer, Angelika. 2000. The event argument and the semantics of verbs, chapter 2. Manuscript. Amherst: University of Massachusetts. http://semanticsarchive.net/Archive/GU1NWM4Z/.

Kratzer, Angelika. 2007. On the plurality of verbs. In Johannes Dölling, Tatjana Heyde-Zybatow & Martin Schäfer (eds.), Event structures in linguistic form and interpretation, 269–300. Berlin, Germany: de Gruyter.

Krifka, Manfred. 1992. Thematic relations as links between nominal reference and temporal constitution. In Ivan A. Sag & Anna Szabolcsi (eds.), Lexical matters, 29–53. Stanford, CA: CSLI Publications.

Krifka, Manfred. 1998. The origins of telicity. In Susan Rothstein (ed.), Events and grammar, vol. 70 Studies in Linguistics and Philosophy, 197–235. Dordrecht, Netherlands: Kluwer. doi:10.1007/978-94-011-3969-4_9.

Kripke, Saul. 1963. Semantical considerations on modal logic. Acta Philosophica Fennica 16. 83–94.

Kripke, Saul. 1980. Naming and necessity. Cambridge, MA: Harvard University Press.

Kroch, Anthony S. 1974. The semantics of scope in English. Cambridge, MA: Massachusetts Institute of Technology dissertation.

Ladusaw, William A. 1980. On the notion ‘affective’ in the analysis of negative polarity items. Journal of Linguistic Research 1. 1–16.

Landman, Fred. 1989. Groups, I. Linguistics and Philosophy 12. 559–605.

Landman, Fred. 1996. Plurality. In Shalom Lappin (ed.), The handbook of contemporary semantic theory, 425–457. Oxford, UK: Blackwell Publishing.

Landman, Fred. 2000. Events and plurality: The Jerusalem lectures, vol. 76 Studies in Linguistics and Philosophy. Dordrecht, Netherlands: Kluwer. doi:10.1007/978-94-011-4359-2.

Landman, Fred. 2004. Indefinites and the type of sets. Malden, MA: Blackwell.

LaPierre, Serge. 1992. A functional partial semantics for Intensional Logic. Notre Dame Journal of Formal Logic 33(4). 517–541.

Lasnik, Howard & Terje Lohndal. 2013. Brief overview of the history of generative syntax. In The Cambridge handbook of generative syntax, 26–60. Cambridge: Cambridge University Press.

Levin, Beth. 1993. English verb classes and alternations: A preliminary investigation. Chicago, IL: University of Chicago Press.

Lewis, David. 1968. Counterpart theory and quantified modal logic. Journal of Philosophy 65. 113–126.

Link, Godehard. 1983. The logical analysis of plurals and mass terms: A lattice-theoretical approach. In Reiner Bäuerle, Christoph Schwarze & Arnim von Stechow (eds.), Meaning, use and interpretation of language, 303–323. Berlin, Germany: de Gruyter.

Löbner, Sebastian. 2011. Concept types and determination. Journal of Semantics 28. 279–333.

Montague, Richard. 1973. The proper treatment of quantification in ordinary English. In Jaakko Hintikka, Julius Moravcsik & Patrick Suppes (eds.), Approaches to natural language: Proceedings of the 1970 Stanford workshop on grammar and semantics, vol. 49 Synthese Library, 221–242. Dordrecht, Netherlands: Reidel. doi:10.1007/978-94-010-2506-5_10.

Montague, Richard. 1974a. English as a formal language. In Richmond H. Thomason (ed.), Formal philosophy, 188–221. New Haven: Yale University Press.

Montague, Richard. 1974b. The proper treatment of quantification in ordinary English. In Richmond H. Thomason (ed.), Formal philosophy. New Haven: Yale University Press.

Montague, Richard. 1979. The proper treatment of mass terms in English. In Francis Jeffry Pelletier (ed.), Mass terms: Some philosophical problems, vol. 6, 173–178. Dordrecht, Netherlands: Reidel. doi:10.1007/978-1-4020-4110-5_12.

Muskens, Reinhard. 1995. Meaning and partiality. Stanford, CA: CSLI Publications.

Muskens, Reinhard. 1996. Combining Montague semantics and discourse representation. Linguistics and Philosophy 19. 143–186.

Muskens, Reinhard. 2005. Lambda grammars and hyperintensionality. Talk presented at NII, Tokyo, 19 February 2005.

Nunberg, Geoffrey. 1993. Indexicality and deixis. Linguistics and Philosophy 16(1). 1–43. http://www.jstor.org/stable/4178917.

Oliver, Alex & Timothy Smiley. 2013. Plural logic. Oxford University Press.

Parsons, Terence. 1990. Events in the semantics of English. Cambridge, MA: MIT Press.

Parsons, Terence. 1995. Thematic relations and arguments. Linguistic Inquiry 26(4). 635–662.

Partee, Barbara H. 1983/1997. Uniformity vs. versatility: The genitive, a case study. Appendix to T. Janssen (1997) “Compositionality”. In Johan van Benthem & Alice ter Meulen (eds.), The handbook of logic and language. Amsterdam: Elsevier.

Partee, Barbara H. 1986. Noun phrase interpretation and type-shifting principles. In Jeroen Groenendijk, Dick de Jongh & Martin Stokhof (eds.), Studies in Discourse Representation Theory and the theory of generalized quantifiers, 115–143. Dordrecht: Foris.

Partee, Barbara H. 1995. Lexical semantics and compositionality. In Lila Gleitman & Mark Lieberman (eds.), Invitation to cognitive science, vol. 1. Cambridge, MA: MIT Press.

Partee, Barbara H. & Mats Rooth. 1983a. Generalized conjunction and type ambiguity. In Rainer Bäuerle, Christoph Schwarze & Arnim von Stechow (eds.), Meaning, use and the interpretation of language, 361–383. Berlin: Walter de Gruyter.

Partee, Barbara H. & Mats Rooth. 1983b. Generalized conjunction and type ambiguity. In Rainer Bäuerle, Christoph Schwarze & Arnim von Stechow (eds.), Meaning, use and interpretation of language, 361–393. Berlin, Germany: de Gruyter. doi:10.1515/9783110852820.361.

Partee, Barbara H., Alice ter Meulen & Robert E. Wall. 1990. Mathematical methods in linguistics. Dordrecht: Kluwer.

Poesio, Massimo & Alessandro Zucchi. 1992. On telescoping. In Proceedings of SALT II, 347–366.

Pollard, Carl & Ivan A. Sag. 1994. Head-driven phrase structure grammar. Chicago: University of Chicago Press.

Quine, Willard. 1956. Quantifiers and propositional attitudes. Journal of Philosophy 53. 177–187.

Ross, John Robert. 1967. Constraints on variables in syntax: MIT dissertation. [Published in 1986 as Infinite Syntax! Norwood, N.J.: Ablex].

Russell, Bertrand. 1905. On denoting. Mind 14. 479–493.

Sauerland, Uli, Jan Anderssen & Kazuko Yatsushiro. 2005. The plural is semantically unmarked. In Stephan Kepser & Marga Reis (eds.), Linguistic evidence: Empirical, theoretical and computational perspectives, 413–434. Berlin, Germany: Mouton de Gruyter.

Scha, Remko. 1981. Distributive, collective and cumulative quantification. In Jeroen Groenendijk, Theo Janssen & Martin Stokhof (eds.), Formal methods in the study of language. Amsterdam, Netherlands: Mathematical Center Tracts. Reprinted in Groenendijk et al. (1984).

Schlenker, Philippe. 2003. A plea for monsters. Linguistics and Philosophy 26(1). 29–120.

Siegel, Muffy E. 1976. Capturing the adjective: University of Massachusetts, Amherst dissertation.

Spector, Benjamin. 2007. Aspects of the pragmatics of plural morphology: On higher-order implicatures. In Uli Sauerland & Penka Stateva (eds.), Presuppositions and implicature in compositional semantics, 243–281. London, UK: Palgrave. doi:10.1057/9780230210752.

Stalnaker, Robert. 1978. Assertion. In Syntax and semantics, vol. 9. New York: Academic Press.

Strawson, P. F. 1950. On referring. Mind 59(235). 320–344.

Szabolcsi, Anna. 1987. Bound variables in syntax: Are there any? In Proceedings of the sixth Amsterdam Colloquium. University of Amsterdam.

Vikner, Carl & Per Anker Jensen. 2002. A semantic analysis of the English genitive: Interaction of lexical and formal semantics. Studia Linguistica 56. 191–226.

von Fintel, Kai & Irene Heim. 2011. Intensional semantics. MIT lecture notes.

Wasow, Thomas A. 1972. Anaphoric relations in English: MIT dissertation.

Winter, Yoad. 2001a. Flexibility principles in Boolean semantics. Cambridge, MA: MIT Press.

Winter, Yoad. 2001b. Flexibility principles in Boolean semantics. Cambridge, MA: MIT Press.

Wunderlich, Dieter. 2012. Operations on argument structure. In Klaus von Heusinger, Claudia Maienborn & Paul Portner (eds.), Semantics: An international handbook of natural language meaning, vol. 3 Handbücher zur Sprach- und Kommunikationswissenschaft / Handbooks of Linguistics and Communication Science (HSK), chap. 84, 2224–2259. de Gruyter. doi:10.1515/9783110253382.2224.