Logic for Linguists

Christopher Potts
UMass Amherst

LSA Institute 2007, Stanford, July 1–3
June 14, 2007
2 Foundational concepts 5
2.1 Truth conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Compositionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Models and talking about models . . . . . . . . . . . . . . . . . . . . . . 10
2.4 Direct and indirect interpretation . . . . . . . . . . . . . . . . . . . . . . 10
3 Technical preliminaries 12
3.1 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.2 Ordered tuples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3 Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.4 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4 Propositional logic 21
4.1 The usual presentation of PL . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.3 PLf : A functional perspective on PL . . . . . . . . . . . . . . . . . . . . 22
4.4 Comparing PLf with PL . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.5 A linguistic theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.6 Assessment of the linguistic theory . . . . . . . . . . . . . . . . . . . . . 27
4.7 The intensionality of propositional logic . . . . . . . . . . . . . . . . . . 29
6 The axioms of the lambda calculus 41
6.1 A general note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.2 Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.3 Alpha conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6.4 Beta reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.5 Eta reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7 Intensions 44
7.1 The limits of extensional models . . . . . . . . . . . . . . . . . . . . . . 44
7.2 An intensional logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.3 Linguistic theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.4 Commentary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.5 Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
9 Quantifiers 52
9.1 The view from first-order logic . . . . . . . . . . . . . . . . . . . . . . . 52
9.2 The view from generalized quantifier theory . . . . . . . . . . . . . . . . 52
9.3 Conservativity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
9.4 Two other properties of determiners . . . . . . . . . . . . . . . . . . . . 55
9.5 The terrain not covered . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
10 Pragmatic connections 58
10.1 Indexicality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
10.2 Deictic pronouns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
10.3 Propositions and probabilities . . . . . . . . . . . . . . . . . . . . . . . . 62
References 68
A Problems 69
A.1 Relative truth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
A.2 Tarski’s hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
A.3 Idioms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
A.4 Nondeterministic translation . . . . . . . . . . . . . . . . . . . . . . . . 70
A.5 A subtlety of predicate notation . . . . . . . . . . . . . . . . . . . . . . . 70
A.6 Exclusive union . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
A.7 Is it a function? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
A.8 Characteristic sets and functions . . . . . . . . . . . . . . . . . . . . . . 72
A.9 Some counting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
A.10 Schönfinkelization and implications . . . . . . . . . . . . . . . . . . . . . 73
A.11 nor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
A.12 The type definition for PLf . . . . . . . . . . . . . . . . . . . . . . . . . 74
A.13 A more readable PLf . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
A.14 Interdefinability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
A.15 Relating PL and PLf . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
A.16 PLf and negation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
A.17 PLf and implication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
A.18 Exclusive disjunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
A.19 PLf and compositionality . . . . . . . . . . . . . . . . . . . . . . . . . . 77
A.20 Conjunctions and constituency . . . . . . . . . . . . . . . . . . . . . . . 77
A.21 Coordination and function composition . . . . . . . . . . . . . . . . . . 77
A.22 PL intensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
A.23 Alternative type definition . . . . . . . . . . . . . . . . . . . . . . . . . 78
A.24 Possible types given assumptions . . . . . . . . . . . . . . . . . . . . . . 79
A.25 Vacuous abstraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
A.26 Partiality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
A.27 Novel types and meanings . . . . . . . . . . . . . . . . . . . . . . . . . 80
A.28 Types, expressions, and domains . . . . . . . . . . . . . . . . . . . . . . 81
A.29 Recursive interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
A.30 An alternative mode of composition . . . . . . . . . . . . . . . . . . . . 81
A.31 Assignments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
A.32 Variable names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
A.33 Cross-categorial and . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
A.34 A relational reinterpretation . . . . . . . . . . . . . . . . . . . . . . . . . 83
A.35 What’s the source of the ill-formedness? . . . . . . . . . . . . . . . . . . 84
A.36 Building a fragment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
A.37 Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
A.38 Beta reductions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
A.39 Eta conversion and distinguishable meanings . . . . . . . . . . . . . . . 86
A.40 Extensional beliefs? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.41 Modals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.42 Hintikka’s believe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.43 Individual concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.44 How many worlds are there? . . . . . . . . . . . . . . . . . . . . . . . . 88
A.45 Finding common ground . . . . . . . . . . . . . . . . . . . . . . . . . . 88
A.46 Definites and semantic composition . . . . . . . . . . . . . . . . . . . . 88
A.47 Contradictory beliefs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A.48 Degree constructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A.49 Singular and plural . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A.50 Exactly 1 in first-order logic? . . . . . . . . . . . . . . . . . . . . . . . . 90
A.51 A closer look at the universal . . . . . . . . . . . . . . . . . . . . . . . . 90
A.52 All and only Lisa’s properties . . . . . . . . . . . . . . . . . . . . . . . . 90
A.53 Intensional quantifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
A.54 Nonconservative determiners? . . . . . . . . . . . . . . . . . . . . . . . 91
A.55 Coordination and monotonicity . . . . . . . . . . . . . . . . . . . . . . . 92
A.56 Indexicals as proper names? . . . . . . . . . . . . . . . . . . . . . . . . 92
A.57 Indexicals and constants: A crucial difference . . . . . . . . . . . . . . . 92
A.58 Denotations as sets of assignments . . . . . . . . . . . . . . . . . . . . . 92
A.59 Dynamic indefinites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
A.60 Probabilities and sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Logic for Linguists, LSA Institute 2007, Stanford (Christopher Potts)
“In mathematics you don’t understand things, you just get used to them.”
—John von Neumann
This is a deep insight into the way in which humans come to understand semantic theory as
well. It’s here as a kind of warning: if this is your first time through material like this, then
you can’t expect to fully understand all the hows and whys. It takes repeated exposure,
and it takes time for these things to sink in. Thus:
• Don’t give up if you feel like you don’t quite see what’s happening.
• Expect to feel confused at times, as we settle into this web of interdependent concepts.
• Raise your hand often — it’s the only way to get your needs met.
1 About this course
1.2.3 Translation
In formal linguistic semantics, analysis always involves translating from the descriptive
generalizations into a formal system. Papers vary in their explicitness about the pieces
(the natural language, the formal system, the bridge between), but they’re always present.
• Carpenter (1997): A wonderfully rich toolkit. If you pair it with Heim and Kratzer
(1998), you get a great introduction to analysis and the tools you need to do it.
• Portner and Partee (2002): This contains many classic articles in linguistic seman-
tics and pragmatics. Just one caveat: for some, the 1970s notation (adapted from
Montague’s original papers) makes the work somewhat inaccessible.
• Halvorsen and Ladusaw (1979): A great way into Montague (1974). A good next
step after Gamut (1991b) but before Montague semantics itself.
• van Benthem (1991) is a wonderful little guide to, among other things, the ways in
which we can interpret types and lambdas.
In the simple systems we’ll study initially, there are just two values for sentences, T and
F. This is a simplification; we’ll need more and different values before long (handout 7).
But, for now, it means that we can state truth conditions in a way that makes them more
obviously like definitions (equality statements):
(2.2) The sentence Lisa is a linguist is interpreted as T if and only if (iff) Lisa is a
linguist.
Foundational concepts
The phrase ‘if and only if’ has the same force as an equal sign. In logic and linguistics, it
is often abbreviated to ‘iff’. Some equivalent formulations are ‘just in case’ and ‘exactly
when’.
people to be in love, or married, or employed by someone else. These topics are clearly
nonlinguistic. So we compromise by writing informal things like ‘. . . is a linguist’ and ‘x
loves y’, on the assumption that the relevant expert could fill out the truth conditions.
This means that, in the end, linguists have very little to say about the meanings them-
selves. In fact, our theories are deliberately defined so as to make as few commitments
as possible about what meanings are. We are more interested in how meanings (whatever
they are) interact with each other to produce ever richer meanings.
A specific example helps to highlight the importance of this point. Assume that I speak
neither Italian nor Irish. Suppose that I learn that the Irish noun madra translates as cane
in Italian. Have I learned the meaning of madra or cane ? I have not. I’ve just learned a
bit about the translation function that takes Irish to Italian. To learn the meaning of madra,
I need to learn what conditions have to be like for a given object to count as having the
property named by madra.
The same is true internal to a language. I might learn that the English words woodchuck and groundhog are synonymous. But if I don’t know what it takes for d is a woodchuck or d is a groundhog to be true, then I don’t know the meaning of woodchuck or groundhog. It’s for this reason that dictionaries are not semantic theories. They provide (language-internal) translations, appealing always, at some point, to their readers’ knowledge of semantics.
2.2 Compositionality
Here’s a very broad, oft-repeated statement of the principle of compositionality in linguistic semantics:
(2.4) The meaning of an expression is a function of the meanings of its parts and the
way they are syntactically combined. (Partee 1984:153)
This seems simple enough. The definition harbors some technical notions behind common
language (‘is a function of’, ‘syntactically combined’), but the intuition behind it is clear:
our meaning for Chris smiled should be unique and it should be fully determined by the
meaning of Chris, the meaning of smiled, and some general principle or principles for
putting these two meanings together.
when kick the bucket is used to mean ‘die’, should we, as semanticists, be looking at the
syntactic subphrases of this expression, or should we just assign a meaning to the whole?
Ex. A.3
2.2.2 The importance of syntax
The other part of the empirical aspects of compositionality concerns the syntax. In a
properly-designed linguistic theory, the ‘parts’ mentioned in (2.4) should be given to us
by the syntactic theory. For these notes, we’ll say that the parts correspond perfectly to
the nodes in syntactic structures. This approach has much to recommend it, and it is also
a useful way to be precise about how compositional interpretation is controlled.
Constrained by the medium. If I want to tell you the meaning of Geoff Pullum, I cannot
very well drag him around with me. So I am forced to resort to symbols. Where possible,
I use pictures. But pictures are cumbersome. Thus, many authors rely on very subtle
typographic differences to distinguish the languages from the things the language talks
about. For instance, this is common:
(2.6) jog is interpreted as jog
On the left, we have a piece of language (some symbols). On the right, we are supposed
to imagine that we have the property of jogging. Semanticists must be typographically
aware!
Some sloppiness. A semanticist might say “Sam combines with jog ” when she means
to say “the individual named by Sam combines with the interpretation of jog ”.
Anyway, onward into the models; everything else is in the service of studying them.
Two examples:

(2.8) Bart ⟹interpretation [picture of Bart]

(2.9) Simpson ⟹interpretation {[pictures of the Simpsons]}

(2.11) Bart ⟹translation bart      bart ⟹interpretation [picture of Bart]

(2.12) Simpson ⟹translation simpson      simpson ⟹interpretation {[pictures of the Simpsons]}
2.4.3 Discussion
Many practitioners of indirect translation countenance a stopping-off point — the logical
formulae — merely for convenience. The logical formulae might have a more obviously
systematic structure than the natural-language expressions. Or the researcher might want
to stay clear of debates about syntactic phrasing, category-labels, and so forth. In these
systems, the assumption is usually that we have two regular, structure-preserving mappings: translation and interpretation. The composition (handout 3, section 3.4.4) of these
two operations is also a regular, structure-preserving mapping. That is, the intermediate
step is dispensable.
But it’s possible to imagine systems in which the intermediate logical language is not
dispensable. It might provide information that is not present in the natural-language syntax
but that is nonetheless crucial for interpretation. The most famous arguments for a theoretically robust meaning language come from Discourse Representation Theory (DRT;
Kamp and Reyle 1993).
Ex. A.4
3.1 Sets
A set is an abstract collection of objects. These can be real-world objects, concepts, other
sets, etc.
3.1.1 Notation
3.1.1.1 Curly braces
By convention, sets are specified using curly braces. Commas usually separate the members. For example, here is a depiction of the set containing Bart Simpson, the letter b, and
the number 47:
(3.1) {b, 47, Bart}
And here is a picture of the set whose members are Lisa Simpson and the set above:
(3.2) {Lisa, {b, 47, Bart}}
There are, by convention, two equivalent ways of specifying the empty (null) set: with ∅
(the empty-set symbol) and with { } (empty curly braces).
The empty set is simply the set with no members. There is only one empty set. It is a
subset (section 3.1.4.4) of every set.
Technical preliminaries
(3.4) {x | x is a natural number}
This is glossed as ‘the set of all x such that x is a natural number’. So the curly braces tell
us that we are talking about a set, and the vertical line (sometimes a colon) is read as ‘such
that’. It’s important to keep sight of the fact that this specification does not tell us about
any specific x. The choice of x as the symbol in this specification is arbitrary. All of the
following are equivalent to (3.4):
(3.5) a. {y | y is a natural number}
      b. {n | n is a natural number}
      c. {† | † is a natural number}

A note of caution, though: use the variable symbol systematically. The following is different from (3.4) and (3.5):

(3.6) {x | y is a natural number}
(3.9) Bart ∈ {a | a is a Simpson}
      ‘Bart is a member of the set of all a such that a is a Simpson.’

A slash through the set-membership symbol (or just about any other logical connective) is its negation. Thus, (3.10) asserts that Burns is not a member of the set of Simpsons.

(3.10) Burns ∉ {d | d is a Simpson}
       ‘Burns is not a member of the set of all d such that d is a Simpson.’
When we induce an ordering on a set, the result is an ordered tuple. These are discussed
in section 3.2 below.
3.1.3.2 No repetitions
When specifying a set, repetitions of the same object are meaningless. For example, each
of the following depicts the set containing only Bart Simpson:
{Bart} is the same as {Bart, Bart}
Here are two equivalent ways of specifying the intersection of the set containing Bart and Lisa with the set containing Lisa, Maggie, and the number 17.

(3.12) a. {Bart, Lisa} ∩ {Lisa, Maggie, 17}
       b. [Venn diagram omitted]

The intersection is the smallest circle, the one containing just Lisa.
3.1.4.2 Union
(3.13) The union of a set A with a set B is the set of all things that are in A or B. (Things
in both sets are included in the union.) In symbols, A ∪ B.
A ∪ B =def {x | x ∈ A or x ∈ B}
Here are two equivalent ways of specifying the union of the set containing Bart and Lisa
with the set containing Lisa, Maggie, and the number 17.
(3.14) a. {Bart, Lisa} ∪ {Lisa, Maggie, 17}
       b. [Venn diagram omitted]
Ex. A.6
3.1.4.4 Subset
The subset relation doesn’t return a new set (in contrast to ∪, ∩, and −). Rather, it returns
truth or falsity:
A special case to keep in mind Every set is a subset of itself; A ⊆ A, for any set A. If
you want to ensure that there are things in B but not in A (and that everything in A is in B),
write A ⊂ B.
3.1.4.5 Equality
Equality is defined in terms of the subset relation:
This states very clearly that there is nothing more to a set than its members.
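A quick way to get a feel for these operations is to experiment with them. Here is a small sketch in Python (not part of the notes’ own toolkit, just an illustration), whose built-in sets behave exactly as defined above; the strings stand in for the individuals.

```python
# Set operations from this section, illustrated with Python's built-in sets.
A = {"Bart", "Lisa"}
B = {"Lisa", "Maggie", 17}

print(A & B == {"Lisa"})                        # True: intersection
print(A | B == {"Bart", "Lisa", "Maggie", 17})  # True: union
print(A - B == {"Bart"})                        # True: difference

print(A <= A)        # True: every set is a subset of itself
print(set() <= A)    # True: the empty set is a subset of every set
print({"Lisa"} < B)  # True: proper subset (B has something more)

# Equality is mutual subsethood: there is nothing more to a set than
# its members, so repetitions are meaningless.
print({17, "Lisa"} == {"Lisa", 17, 17})         # True
```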
3.1.5 Powerset
The powerset of a set A, written ℘(A), is the set of all subsets of A.

℘({a, b, c}) =def {{a, b, c}, {a, b}, {a, c}, {b, c}, {a}, {b}, {c}, { }}
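The powerset can also be computed mechanically. A sketch in Python (illustrative only; frozenset is used because Python sets cannot contain ordinary mutable sets):

```python
from itertools import combinations

def powerset(A):
    """Return the set of all subsets of A, each as a frozenset."""
    elems = list(A)
    return {frozenset(c)
            for n in range(len(elems) + 1)
            for c in combinations(elems, n)}

P = powerset({"a", "b", "c"})
print(len(P))                            # 8, i.e., 2**3 subsets
print(frozenset() in P)                  # True: the empty set is a subset
print(frozenset({"a", "b", "c"}) in P)   # True: so is the set itself
```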
3.2 Ordered tuples

[Examples (3.19) and (3.20), ordered tuples depicted with pictures of the Simpsons, are omitted.]
Ordered tuples are the members of relations, which are another fundamental building block
of meaning. Relations are our next topic.
3.3 Relations
3.3.1 Basics
A relation is a set of n-tuples. For example:
(3.21) [a three-membered set of ordered pairs of Simpsons; pictures omitted]
Here is an example of the usual predicate-notation for relations:
(3.22) {⟨x, y⟩ | x teases y}
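Since a relation is just a set of tuples, it can be represented directly; a sketch in Python (the particular teasing facts are invented for illustration):

```python
# A binary relation as a set of ordered pairs, as in (3.21)/(3.22).
# The particular teasing facts here are invented for illustration.
teases = {("Bart", "Lisa"), ("Bart", "Maggie"), ("Lisa", "Bart")}

print(("Bart", "Lisa") in teases)    # True: Bart teases Lisa
print(("Maggie", "Bart") in teases)  # False: Maggie doesn't tease Bart

# Order matters in tuples, unlike in sets:
print(("Bart", "Lisa") == ("Lisa", "Bart"))  # False
```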
3.4 Functions
3.4.1 Technical specifications
Here’s a useful depiction of the function that maps Bart, Lisa, and Maggie to T and Burns
to F:
(3.25) Bart ↦ T, Lisa ↦ T, Maggie ↦ T, Burns ↦ F
The domain is the set of objects that can be inputs (on the left). The range (sometimes
called the co-domain) is the set of objects that can, but need not be, outputs for some input.
We gloss f : A → B as ‘the function f with domain A and range B’.
• A function f is total iff every element in the domain of f has a value in the range of
f . If f fails to meet this condition, it is called a partial function.
• A function f is onto iff every element in the range of f is the value of some element
in the domain of f .
• A function f is one-to-one iff no member of the range is assigned to more than one
member of the domain.
Ex. A.7
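For finite functions, all three properties can be checked by brute force. A sketch in Python, with a function encoded as a dict from its domain to its range (an encoding chosen here purely for illustration):

```python
def is_total(f, domain):
    """Every element of the domain has a value."""
    return all(d in f for d in domain)

def is_onto(f, rng):
    """Every element of the range is the value of some domain element."""
    return all(r in f.values() for r in rng)

def is_one_to_one(f):
    """No range member is assigned to more than one domain member."""
    vals = list(f.values())
    return len(vals) == len(set(vals))

# The function in (3.25): Bart, Lisa, Maggie map to T; Burns maps to F.
f = {"Bart": "T", "Lisa": "T", "Maggie": "T", "Burns": "F"}
print(is_total(f, {"Bart", "Lisa", "Maggie", "Burns"}))  # True
print(is_onto(f, {"T", "F"}))                            # True
print(is_one_to_one(f))   # False: T is the value of three inputs
```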
If f is a function into the domain {T, F}, then the characteristic set of f is the set of all
objects d such that f (d) = T.
If A is a set and U is the universe of objects in the same domain as A, then the characteristic function of A is the f such that f(d) = T if d ∈ A, and f(d) = F if d ∉ A, for all d ∈ U.
It’s crucial that we know what the universe of discourse is, so that we know which objects to map to F. (The objects that map to T are just those that are in A.)
Ex. A.8, A.9
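Both directions of the correspondence between sets and their characteristic functions are mechanical; a sketch in Python (an illustrative encoding, with the strings "T" and "F" as the truth values):

```python
def characteristic_set(f):
    """The set of all d such that f(d) = T."""
    return {d for d, v in f.items() if v == "T"}

def characteristic_function(A, universe):
    """Map members of A to T and everything else in the universe to F."""
    return {d: ("T" if d in A else "F") for d in universe}

universe = {"Bart", "Lisa", "Maggie", "Burns"}
f = characteristic_function({"Bart", "Lisa", "Maggie"}, universe)
print(f["Burns"])                                           # F
print(characteristic_set(f) == {"Bart", "Lisa", "Maggie"})  # True
# Round trip: the characteristic set of the characteristic
# function of A (relative to the universe) is A itself.
```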
Ex. A.11
Propositional logic
4.2 Functions
PL’s models can be presented as a class of functions. This doesn’t change the underlying
logic, but it has conceptual advantages:
i. Its compositionality is obvious.
ii. It permits us to come closer to natural language syntax.
iii. It reveals that this is a subsystem of the more complex logics we’ll use later.
The following presentation is adapted and expanded from the one in Gamut 1991a:§2.7.
4.3.1.2 Expressions
i. p, q, p′, q′, . . . are expressions of PLf, type t.
ii. ¬, I, ⊤, and ⊥ are expressions of PLf (one-place connectives), type ⟨t, t⟩.
iii. ∧, ∨, →, and ↔ are expressions of PLf (two-place connectives), type ⟨t, ⟨t, t⟩⟩.
iv. If ϕ is a PLf expression of type ⟨σ, τ⟩ and ψ is a PLf expression of type σ, then (ϕ(ψ)) is a PLf expression of type τ.
v. Nothing else is an expression of PLf .
The first three clauses are lexical. They divide up by type. The fourth clause handles all of
the combinatorics. With it, we can build expressions of arbitrary complexity by building
from simpler ones.
4.3.2.1 Domains
i. The interpretation of the type t is Dt = {T, F}.
ii. The interpretation of a type ⟨σ, τ⟩ is the set of all functions from Dσ to Dτ.
Since we have a finite basic domain Dt and a finite hierarchy of types, we can be concrete
about the functional domains specified in clause (ii):
i. D⟨t,t⟩, the four unary truth functions:

   [T ↦ F, F ↦ T]   [T ↦ T, F ↦ F]   [T ↦ T, F ↦ T]   [T ↦ F, F ↦ F]

ii. D⟨t,⟨t,t⟩⟩, the sixteen binary truth functions, each of which maps a truth value to a unary truth function; for example:

   [T ↦ [T ↦ T, F ↦ F], F ↦ [T ↦ F, F ↦ F]]
   [T ↦ [T ↦ T, F ↦ T], F ↦ [T ↦ T, F ↦ F]]
   ...
I’ve depicted all four unary functions. There are a total of 4² = 16 binary functions (which are functions from truth values into the domain of unary functions). Some of them are common in discussions of PL, whereas others are mostly neglected.
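That count is easy to verify mechanically. A sketch in Python (nested dicts stand in for the curried truth functions; this is just an illustration, not part of the notes’ formal apparatus):

```python
from itertools import product

T, F = True, False

# The four unary truth functions: all maps from {T, F} to {T, F}.
unary = [{T: a, F: b} for a, b in product([T, F], repeat=2)]
print(len(unary))   # 4

# A curried binary truth function maps a truth value to a unary truth
# function, so there are 4**2 = 16 of them.
binary = [{T: g, F: h} for g in unary for h in unary]
print(len(binary))  # 16

# Conjunction is one of the sixteen: T maps to the identity function,
# F maps to the constant-F function.
conj = {T: {T: T, F: F}, F: {T: F, F: F}}
print(conj in binary)  # True
```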
Interpreting constants  I emphasize that ‖·‖M interprets only the constants. We’ll handle the complex expressions momentarily.
We place some general conditions on ‖·‖M, so that it behaves like a propositional logic. The restrictions give us wiggle room only with the propositional letters.
(4.1) M = ⟨D, ‖·‖M⟩ is a model for PLf only if the following conditions hold of ‖·‖M:

      ‖p‖M ∈ Dt if p is a propositional letter

      ‖¬‖M = [T ↦ F, F ↦ T]        ‖I‖M = [T ↦ T, F ↦ F]
      ‖⊤‖M = [T ↦ T, F ↦ T]        ‖⊥‖M = [T ↦ F, F ↦ F]

      ‖∧‖M = [T ↦ [T ↦ T, F ↦ F], F ↦ [T ↦ F, F ↦ F]]
      ‖∨‖M = [T ↦ [T ↦ T, F ↦ T], F ↦ [T ↦ T, F ↦ F]]
      ‖→‖M = [T ↦ [T ↦ T, F ↦ F], F ↦ [T ↦ T, F ↦ T]]
      ‖↔‖M = [T ↦ [T ↦ T, F ↦ F], F ↦ [T ↦ F, F ↦ T]]
Interpreting complex formulae  The interpretation function ⟦·⟧M interprets complex expressions in the model M, via this recursion:

(4.3) a. ⟦ϕ⟧M = ‖ϕ‖M if ϕ is a constant.
      b. ⟦(ϕ(ψ))⟧M = ⟦ϕ⟧M(⟦ψ⟧M)

The domain of ⟦·⟧M is the set of all expressions of PLf. It maps those formulae to objects in the domains. In clause (b), we might immediately end up using ‖·‖M for both the subexpressions. But, if they are complex, then we will again break them down via clause (b) and interpret those parts as instructed.
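The recursion in (4.3) is short enough to run. Here is an illustrative sketch in Python, with PLf expressions encoded as nested tuples and the connective meanings from (4.1) as curried dicts; the values assigned to the letters p and q are stipulations, just as a particular model would stipulate them.

```python
T, F = True, False

# The valuation ||.||M: fixed meanings for some connectives, as in (4.1),
# plus stipulated values for the propositional letters p and q.
valuation = {
    "~": {T: F, F: T},                         # negation
    "&": {T: {T: T, F: F}, F: {T: F, F: F}},   # conjunction
    "v": {T: {T: T, F: T}, F: {T: T, F: F}},   # disjunction
    "p": T,
    "q": F,
}

def interpret(phi):
    """The recursion in (4.3): constants via the valuation,
    applications (phi(psi)) via function application."""
    if isinstance(phi, str):      # clause (a): phi is a constant
        return valuation[phi]
    func, arg = phi               # clause (b): phi is (func(arg))
    return interpret(func)[interpret(arg)]

print(interpret((("&", "p"), "q")))         # False
print(interpret(("~", (("v", "q"), "p"))))  # False
```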
(4.5) Suppose ‖p‖M = T and ‖q‖M = F. Then:

      ⟦p⟧M = ‖p‖M = T
      ⟦q⟧M = ‖q‖M = F
      ⟦∧⟧M = ‖∧‖M = [T ↦ [T ↦ T, F ↦ F], F ↦ [T ↦ F, F ↦ F]]
      ⟦(∧(p))⟧M = ⟦∧⟧M(⟦p⟧M) = ‖∧‖M(T) = [T ↦ T, F ↦ F]
      ⟦((∧(p))(q))⟧M = ⟦(∧(p))⟧M(⟦q⟧M) = [T ↦ T, F ↦ F](F) = F
(4.6) A tree for (¬((∨(q))(p))), again with ‖p‖M = T and ‖q‖M = F; each node is paired with its type and its value:

      (¬((∨(q))(p))) : t  =  F
         ¬ : ⟨t, t⟩  =  [T ↦ F, F ↦ T]
         ((∨(q))(p)) : t  =  T
            p : t  =  T
            (∨(q)) : ⟨t, t⟩  =  [T ↦ T, F ↦ F]
               ∨ : ⟨t, ⟨t, t⟩⟩  =  ‖∨‖M
               q : t  =  F
Categorematic A symbol is categorematic if it can exist on its own, i.e., it need not be
introduced by a rule.
In the representations we’ve been using so far, a symbol is categorematic iff it can be a terminal symbol. The major difference between the usual presentation of PL and the functional one is that only the functional one introduces the connectives categorematically.
4.4.3 Constituency
PLf and PL assign essentially the same structure to negations and the other unary con-
nectives (though PLf has many more brackets), but they differ in how they analyze binary
connectives:
PL:
   (p ∧ q)
      p
      q

PLf:
   ((∧(p))(q)) : t
      q : t
      (∧(p)) : ⟨t, t⟩
         ∧ : ⟨t, ⟨t, t⟩⟩
         p : t
4.4.4 Interdefinability
I defined a slightly different set of connectives for PLf than I did for PL. But this is merely a presentational difference. Any missing truth-functional connectives are definable in terms of the stock of connectives already given. Indeed, in general, one needs only a negation and one other connective to do the job, and certain binary connectives can do the job all on their own (see exercise A.11).
Ex. A.14, A.15
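To see how a single binary connective can do the job all on its own, consider NAND (the Sheffer stroke); a sketch in Python:

```python
# NAND (the Sheffer stroke) suffices to define the other connectives.
def nand(a, b):
    return not (a and b)

def neg(a):
    return nand(a, a)

def conj(a, b):
    return neg(nand(a, b))

def disj(a, b):
    return nand(neg(a), neg(b))

# Spot-check the definitions against the intended truth tables.
for a in (True, False):
    for b in (True, False):
        assert conj(a, b) == (a and b)
        assert disj(a, b) == (a or b)
print("definitions check out")
```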
To assess the linguistic theory, we check the entailments of the logic and see if they correspond to accurate or inaccurate predictions about the language. This means that we are simultaneously investigating both the logic and the language, checking for correspondences and divergences as we go.
One can be a great deal more rigorous than this, but I think these descriptions suffice.
The point is that our hypotheses (4.7) derive for us the properties of natural language that
we aimed to characterize.
4.6.2 Compositionality
PLf is of course a compositional theory of the truth functions — a paradigm case, in fact.
But we can still level charges of noncompositionality against it. The major problem comes
with hypothesis (4.7a), which simply stipulates that declarative sentences are letters. We
are therefore unable to state any intuitive interconnections between related sentences. For
instance, there can be no principled semantic connection between David laughed and Chris
laughed, nor between Robert taught and Robert laughed. And so forth.
Exercise A.19 asks you to push this argument still further.
4.6.3 Categorematicity
If we can look past the problem of declarative sentences, we find that we are not doing too
badly. Just as and is a constituent of natural language, so too does ∧ have its own place in
the logical lexicon and its own meaning.
4.6.4 Constituency
Hypothesis (4.7b) predicts the following:
The first two hypotheses are controversial in syntax. The third is of significant logical
interest. We will return to it in our discussion of extensional lambda calculi (handout 5). For now, suffice it to say that there is a way to reconcile our current hypotheses with the facts of nonsentential conjunction, but we need a richer logic for that.
Ex. A.20
4.6.5 Commutativity
In PLf , ∧ is commutative, which just means that we can reverse the order of its arguments
without any change to the truth value of the whole. (It is worth staring at the depiction
in section 4.3.2.2 until this makes sense to you.) It is easy to find data that call this into doubt:
Ex. A.21
4.6.6 Associativity
This property of ∧ allows for rebracketing. In PL, it is the equivalence between, e.g., ((p ∧ q) ∧ q′) and (p ∧ (q ∧ q′)). Our PLf connective ∧ also has this property, and thus it too might be called into doubt by examples like the following (which, tellingly, involve nonsentential coordination):
5.1 Background
The lambda calculus, in general (semi-historical) terms, is a theory of computation. There
are many lambda calculi. In linguistics, people generally work with typed versions.
The lambda calculus is enormously powerful. The chances are small that you will
run up against something you want to do technically but cannot do within its bounds.
Therefore, if we are going to use it to build linguistic theories, we will have to impose
extra conditions, and we will have to be careful to isolate just the functions that we want to
allow into our theory. This is welcome news, of course — it means we get to craft theories,
rather than just inheriting them from logicians.
5.2.1 Types
The only change from PLf is the new basic type e for entities.
i. e is a (basic) type.
5.2.2 Expressions
The first two clauses handle the primitives. The second two clauses build complex expres-
sions.
i. There are constant symbols of many different types. Constant symbols are expres-
sions.
ii. For every type τ, we have an infinite stock of variables of type τ. Variables are
expressions.
iii. If α is an expression of type ⟨σ, τ⟩ and β is an expression of type σ, then (α(β)) is an expression of type τ.
iv. If α is an expression of type τ and χ is a variable of type σ, then (λχ. α) is an expression of type ⟨σ, τ⟩.
Ex. A.24, A.25

5.2.3 Domains
Ex. A.26
5.2.4 Interpretation
5.2.4.1 Models
A model for the lambda calculus is a pair M = ⟨D, ‖·‖M⟩, where D is the infinite hierarchy of domains defined in section 5.2.3 above, and ‖·‖M is a valuation function interpreting the constants of the language.
The expression g[x ↦ d] names the assignment function that is just like the assignment g except that the variable x maps to the entity d.
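The modified-assignment notation has a direct computational analogue: copy the assignment and override a single variable. A sketch in Python (dicts stand in for assignment functions; just an illustration):

```python
def modify(g, var, d):
    """g[var -> d]: like g except that var maps to d."""
    h = dict(g)   # copy g, leaving the original untouched
    h[var] = d    # override just this one variable
    return h

g = {"x": "Bart", "y": "Lisa"}
h = modify(g, "y", "Burns")
print(h)   # {'x': 'Bart', 'y': 'Burns'}
print(g)   # unchanged: {'x': 'Bart', 'y': 'Lisa'}
```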
5.2.4.4 Interpretation
We’ve built up what we need to specify the interpretation function, the heart of our theory:

      ⟦·⟧M,g

This function provides the interpretation of all expressions (constants, variables, and complex expressions formed from them) in the model M, relative to assignment g:

iv. If (λχ. α) is of type ⟨σ, τ⟩, then ⟦(λχ. α)⟧M,g = the Φ ∈ D⟨σ,τ⟩ such that Φ(d) = ⟦α⟧M,g[χ↦d], for all d ∈ Dσ.
i. ‖bart‖M = Bart
ii. ‖smile‖M = the function Φ such that Φ(d) = T iff d smiles.
iii. ‖tease‖M = the function Ψ such that Ψ(d) = the function Φ such that Φ(d′) = T iff d′ teases d.
Other than this, you are likely to be able to use the general definitions above. But you
should also feel free to tailor them to your needs or the needs of your audience.
Ex. A.27
5.4 Commentary
5.4.1 Types: Your semantic workspace
The types help to establish a lawful connection between the way we organize our expres-
sions and the way the models are organized.
5.4.1.1 Syntax
Semantic types play much the same role as syntactic categories do in the realm of syntax:
they organize the expressions of the theory, thereby allowing us to control their interactions
and state broad generalizations.
(5.3) dog : N ‘the lexical item dog is of category N’
(5.4) dog : *e, t+ ‘the expression dog is of semantic type *e, t+
We can identify the category N with the set of all lexical items with that category specification, and we can identify the type ⟨e, t⟩ with the set of all logical expressions with that type specification.
5.4.1.2 Semantics
Types do more than just categorize expressions. They are also important in categorizing
denotations (meanings). In a typed, interpreted system, each type has a corresponding
denotation domain, as in section 5.2.3 above. This in turn leads to a natural constraint on
the sort of interpretation functions we are willing to consider: (5.1) and (5.2).
Ex. A.28
I(ϕ, M, g)
1 if ϕ is a constant
2    then return [[ϕ]]M
3 elseif ϕ is a variable
4    then return g(ϕ)
5 elseif ϕ is of the form (α(β))
6    then return I(α, M, g)(I(β, M, g))
7 elseif . . .
Ex. A.29
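The pseudocode above is close to executable. Here is a minimal sketch in Python; the term encoding, the toy model, and all the names in it are my own illustration, not the handout's:

```python
# Expressions are tuples: ('const', name), ('var', name), or ('app', alpha, beta).
DOMAIN = {'Bart', 'Lisa', 'Burns'}

MODEL = {  # the valuation [[.]]^M for the constants (all facts invented)
    'bart': 'Bart',
    'smile': lambda d: d in {'Lisa'},  # in this model, only Lisa smiles
    # [[tease]](d) = the property of teasing d; here Lisa teases Bart
    'tease': lambda d: (lambda d2: (d2, d) in {('Lisa', 'Bart')}),
}

def interpret(phi, model, g):
    """The recursive interpreter I(phi, M, g) from the pseudocode."""
    kind = phi[0]
    if kind == 'const':
        return model[phi[1]]          # constants via the valuation
    elif kind == 'var':
        return g[phi[1]]              # variables via the assignment g
    elif kind == 'app':               # (alpha(beta)): function application
        return interpret(phi[1], model, g)(interpret(phi[2], model, g))

g = {'x': 'Lisa'}
smile_bart = ('app', ('const', 'smile'), ('const', 'bart'))
tease_bart_x = ('app', ('app', ('const', 'tease'), ('const', 'bart')), ('var', 'x'))
print(interpret(smile_bart, MODEL, g))    # False: Bart does not smile here
print(interpret(tease_bart_x, MODEL, g))  # True: g(x) = Lisa teases Bart
```

The application clause is just Python function application, which is why the clause-by-clause correspondence with the pseudocode is so tight.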
In the definition of compositionality, this is usually the “function” that is used to combine
expressions to form new expressions.
Common abbreviations:
(5.6) ((α(β))(γ))
a. α(β)(γ) (VOS)
b. α(γ, β) (VSO)
In terms of context-free compositionality (handout 2, section 2.2), the daughters are the
function and the argument, and the mother is the value after the equal sign.
Ex. A.30
5.4.4 Abstraction
Abstraction is technically more challenging than application. If you deem it suspect, then
you might pursue a semantic theory without it (Jacobson 1999).
5.4.7.1 An example
Here’s a look at the assignment functions for a logic with just two variables, both type e,
and a domain of entities De consisting of Bart, Lisa and Burns.
[pictured in the original: the nine assignment functions, one for each way of mapping x and y into {Bart, Lisa, Burns}; 3² = 9 in total]
Then we have:
[pictured in the original: a worked example of modification, g[y ↦ d], the assignment that agrees with g on x but sends y to d]
• We can think of every expression as paired with a variable store holding all the
variables that are free in that expression (Cooper 1983).
• It’s worth paying particular attention to clauses (i) and (iv) of the definition. The
first introduces variables. The second discharges them.
5.6 Assessment
As a logic, the lambda calculus is of course quite useful. We will not assess its properties here. Rather, we will assess its application in section 5.5.
5.6.1 Compositionality
A major shortcoming of the PLf -based linguistic theory of handout 4 was that it did not
reach into sentences. Instead, it treated them all as semantically atomic. Of course, this is
wildly inaccurate.
We’re doing better now. The gains are the result of adding the domain of entities
and allowing ourselves to build functions involving them. We can now do a substantial
amount of decomposition of sentences into their constituent parts. In fact, the hierarchy
of functions specified in section 5.2.3 is so big and rich that the one you are looking for is
almost certainly in there somewhere.
5.6.2 Constituency
Let’s first address a major benefit of working with this particular class of functions: we
achieve a really nice subtree-to-meaning correspondence with the syntax.
Consider, for instance, VPs containing transitive verbs. If we define transitive-verb meanings as in section 5.2.2, then VPs denote in ⟨e, t⟩, and they have intuitively correct meanings. For instance, [[see(bart)]]M,g is the function Φ such that Φ(d) = T iff d sees Bart.
In fact, quite generally, our theory of unary functions squares extremely well with the
idea that all syntactic structures are binary branching.
Ex. A.33, A.34
5.6.3 Still not enough meanings
We’re still suffering from way too much meaning collapse. If Bart skateboards and Lisa studies, then [[skateboard(bart)]]M,g = [[study(lisa)]]M,g = T, and similarly for things that happen both to be false. The problem traces ultimately to the fact that our logic is geared towards getting us into the domain Dt, but that domain contains just two values, {T, F}. Thus we end up making only binary distinctions.
We will return to this point in more detail when we add intensionality (handout 7). For
now, let’s just examine the conceptual reason for this limitation: the heart of the exten-
sional viewpoint is that we specify everything about a single reality and evaluate things
relative to that single reality. This means that, at some point, we have to specify lexical
entries: happy picks out this function, see picks out that function, and so forth.
But we can and should consider alternative realities. Technically speaking, we can identify these alternative realities with different interpretation functions: [[·]]Mi,g might say that Lisa is happy, whereas [[·]]Mj,g might say that she is not. This is the guiding idea behind intensionality. Let’s try to keep sight of it from here on out.
ii. She follows through on her empirical claims about the mapping from that ill-formed
structure into the logical language.
Lambdas
iii. She observes that, in the logical language, one has an ill-formed expression — or, that the objects involved do not fit together by any admissible rule of semantic composition.
(5.11) *Smile run.
It’s perverse, in a sense: one works very hard to reach the point where one has created an ill-formed expression of the logic. But the force is clear: those things don’t go together linguistically because of something fundamental about their meanings.
Ex. A.35, A.36
6.2 Substitution
One operation on expressions is central to the statement of these axioms. It is the substi-
tution of one expression for another:
We have to be careful about how we perform substitution. The following clauses both
define and restrict the operation. We use x for any variable, c for any constant, and δ for
any expression. The overall effect is worth keeping in mind: we want to prevent accidental
binding, i.e., we want to prevent our substitution from resulting in a given λ binding more
variables than it did prior to substitution.
v. (λx. ϕ)[x := δ] is permitted, but it changes nothing: (λx. ϕ)[x := δ] = (λx. ϕ).
Ex. A.37
The axioms of the lambda calculus
Two terms that are relatable by this axiom are alphabetic variants of one another. Im-
portantly, in pairs of alphabetic variants, the lambdas always bind the same number of
variables.
Sound examples
(6.2) λx. happy(x) =⇒α λy. happy(y)
(6.3) λxλy. see(x)(y) =⇒α λzλy. see(z)(y) =⇒α λzλx. see(z)(x) =⇒α λyλx. see(y)(x)
Illicit conversions
(6.4) λx. see(x)(y) ⇏α λy. see(y)(y)
(6.5) λx. a(man)(λy. see(x)(y)) ⇏α λy. a(man)(λy. see(y)(y))
In extensional lambda calculi of the sort we are dealing with, the order in which one
does the reductions does not matter. This property is often called confluence, because
two different β-reduction paths can diverge, but they will always merge again. Another name is the diamond property: if one diagrams all possible paths for reduction, one gets diamond-shaped graphs wherever there is a choice about which thing to reduce.
(6.6) a. (λx. happy(x))(sam) =⇒β happy(sam)
b. (λxλy. (see(x))(y))(friend-of(y)) ⇏β (λy. ((see(friend-of(y)))(y)))
Alpha-conversion provides an important tool for getting out of jams like the one in (6.6b).
Ex. A.38
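The capture problem in (6.6b) is exactly what a careful implementation of substitution has to solve. Here is a sketch; the term representation and all names are mine, not the handout's:

```python
import itertools

# Terms: ('var', x), ('const', c), ('lam', x, body), ('app', f, a).

def free_vars(t):
    kind = t[0]
    if kind == 'var':
        return {t[1]}
    if kind == 'const':
        return set()
    if kind == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

fresh_names = ('v%d' % i for i in itertools.count())

def subst(t, x, delta):
    """t[x := delta], alpha-renaming bound variables to prevent accidental binding."""
    kind = t[0]
    if kind == 'var':
        return delta if t[1] == x else t
    if kind == 'const':
        return t
    if kind == 'app':
        return ('app', subst(t[1], x, delta), subst(t[2], x, delta))
    y, body = t[1], t[2]
    if y == x:                       # clause (v): this lambda binds x; nothing to do
        return t
    if y in free_vars(delta):        # alpha-convert first to avoid capture
        z = next(fresh_names)
        body = subst(body, y, ('var', z))
        y = z
    return ('lam', y, subst(body, x, delta))

def beta(t):
    """One beta step at the root: (lam x. body)(a) => body[x := a]."""
    if t[0] == 'app' and t[1][0] == 'lam':
        return subst(t[1][2], t[1][1], t[2])
    return t

# (6.6b): (lam x lam y. see(x)(y))(friend-of(y)) must NOT capture the free y.
see_x_y = ('app', ('app', ('const', 'see'), ('var', 'x')), ('var', 'y'))
term = ('app', ('lam', 'x', ('lam', 'y', see_x_y)),
        ('app', ('const', 'friend-of'), ('var', 'y')))
print(beta(term))  # the inner lambda is renamed, so the argument's y stays free
```

The renaming step is precisely the alpha-conversion escape hatch mentioned above: it fires only when the incoming argument contains a variable that the lambda would otherwise capture.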
In the light of this axiom, we can see that the following is not a meaning analysis (though things like this are commonly found):
dog ⇝ λx. dog(x)
If we have a convention for the type of x, and we know what kind of things dog is supposed
to be, then the lambda on the right tells us something about the type of this function. But it
doesn’t get us any closer to the function itself than we were when we started. η-reduction
tells us this right away.
Ex. A.39
Handout 7: Intensions
7.1 The limits of extensional models
In handout 5, section 5.6.3, we began building the case for the claim that the extensional
lambda calculus doesn’t make available to us enough meanings. The root of the problem is
that, as with PLf , everything is geared towards the domain Dt of truth values. This section
discusses this problem, and those that relate to it, in more depth than we did before.
([[believe]]M,g(F))(Lisa) = T
But now we have attributed to Lisa a belief in every falsehood! This might be wildly out of sync with her behavior. We are predicting that she will endorse the claim that the earth is flat, for example.
Exercise A.40 asks you to make matters much worse for an extensional view of believe.
Ex. A.40, A.41
Intensions
i. [[bart]]M = Bart (the entity, pictured in the original)
ii. [[smile]]M is the function Φ such that, for any entity d ∈ De, Φ(d) = the function π such that, for any world w ∈ Ds, π(w) = T iff d smiles at w.
iii. [[tease]]M is the function Γ such that, for any entity d ∈ De, Γ(d) = the function Φ such that, for any entity d′ ∈ De, Φ(d′) = the function π such that, for any world w ∈ Ds, π(w) = T iff d′ teases d at w.
iv. [[believe]]M = the function Φ such that, for any proposition π ∈ D⟨s,t⟩, Φ(π) is the function Ψ such that, for any entity d ∈ De, Ψ(d) = the proposition π′ ∈ D⟨s,t⟩ such that π′(w) = T iff the set of belief worlds for d in w is a subset of the set of worlds in which π is true.
Ex. A.42
(7.1) Hypotheses
7.4 Commentary
Intensional logics are amazingly rich. I can’t touch upon all aspects of the moves we have
made. But this section suggests a few things, and the exercises can help guide you deeper
into these structures.
To see this, consider a model in which De contains just Bart and Lisa and suppose we
have just one property, say, the property of being skeptical. Now let’s study this function
a bit:
[[skeptical]]M =
  w1 ↦ [ Bart ↦ T, Lisa ↦ T ]
  w2 ↦ [ Bart ↦ T, Lisa ↦ F ]
  w3 ↦ [ Bart ↦ F, Lisa ↦ F ]
  w4 ↦ [ Bart ↦ F, Lisa ↦ T ]
(The entities were pictured in the original; each world is paired with the pattern of skepticism it realizes.)
We have distributed over the worlds {w1, . . . , w4} all the ways in which the world could be (for this single predicate). So we have four distinct possible worlds (given just this two-entity domain and the one property to talk about). In world w1 they are both skeptical, but in world w3, neither of them is. And so forth. (This strongly recalls the logic of truth tables explored throughout handout 4, especially section 4.7.)
Here’s the point of interest at present: if we fix a world, then we are looking at a
function from entities to truth values — we have our extensional property back!
Ex. A.44
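The fix-a-world observation can be sketched concretely. In the sketch below, the dict layout and the T/F pattern follow the discussion above, but all names are my own illustration:

```python
# The intension of 'skeptical', laid out world-by-world: each world maps to an
# ordinary extensional property (a function from entities to truth values).
skeptical = {
    'w1': {'Bart': True,  'Lisa': True},   # both skeptical
    'w2': {'Bart': True,  'Lisa': False},
    'w3': {'Bart': False, 'Lisa': False},  # neither skeptical
    'w4': {'Bart': False, 'Lisa': True},
}

# Fixing a world hands back an extensional property in D<e,t>:
extension_at_w1 = skeptical['w1']
print(extension_at_w1['Lisa'])  # True
```

Applying the intension to a world is all it takes to recover the extensional system of handout 5.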
7.4.3 World variables in or out
The system defined above treats its worlds just like regular entities. We can have variables
and constants that pick them out. We can abstract over them. And so forth.
Not all the intensional logics encountered in semantics are so kind to their world vari-
ables. It is common to find them pushed out into the interpretation scheme itself. The
resulting interpretation function is likely to look like this:
[[·]]M,g,w
where w is the world of interpretation. Thus, if we use the constants defined above, we can form expressions like bald(bart). For us, this picks out a function from worlds to truth values. In a system of interpretation like the one suggested by [[·]]M,g,w, we would instead have a value of T or F for this expression, depending on the value of [[bald(bart)]]M,g(w).
(7.3) λx. bald(x)(@) : ⟨e, t⟩
(7.4) λf. the(f)(@) : ⟨⟨e, ⟨s, t⟩⟩, e⟩
One could even specify this as part of the denotations:
(7.5) [[bald]]M = the function Φ such that Φ(d) = T iff d is bald at [[@]]M,g
7.5 Assessment
7.5.1 Entities or individual concepts?
We left our proper names out of the intensional sphere, so to speak, by giving them denotations that are independent of the world we are in. It is wise to question this move. Exercise A.43 pushes you in that direction.
Ex. A.43
Very often, one constructs a model only to find that it is not expressive enough to make
certain distinctions. Here are some examples, along with responses.
Individuals
• Problem: Basic propositional logic has only expressions of type t. There is no way
to talk about Bob or Carol or Ted or Alice.
• Response: Add a domain of entities and define functions that take members of that domain (and functions built from them) as arguments. (Handout 5.)
Worlds
• Problem: Even D⟨e,t⟩ isn’t sufficient. We cannot, for instance, give a semantics for belief statements in these terms.
• Response: Add a new set of entities, Ds = W, the set of possible worlds. Sentence meanings are now functions from worlds into truth values; VP meanings are now functions from entities to sentence meanings; and so forth. (Handout 7.)
Times
• Problem: A model with no elements for representing time cannot give a semantics
for any explicitly temporal expressions.
• Response: Add a new set of entities, Dj = R, the class of times. Sentence meanings are now functions from times into something else.
Ex. A.48, A.49
Handout 9: Quantifiers
This handout is a brief introduction to quantifiers. I can’t hope to be compre-
hensive, but I can try to impart a sense for the deep results of this subfield of
semantics. (For a comprehensive, technical, but nonetheless accessible review
of the field today, I recommend Peters and Westerståhl (2006).) My treatment
is purely extensional; see the exercises for tips on intensionalization.
The instructions for the universal tell us to interpret an open sentence under every possible interpretation of its free variable. The universal force comes from our exhaustive search through the domain. If we syntacticized this, we would see a close connection with conjunction (though not an exact one; Boolos et al. 2002:§10):
Here, the existential force comes from the fact that we are looking for some entities d
such that if we interpret the scope formula with d as the value of x, then we get T. If we
syntacticize, we end up with a disjunctive statement:
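Over a finite domain, both syntacticizations can be sketched directly; the domain and the predicate below are invented for illustration:

```python
# Over a finite domain, the universal behaves like an exhaustive conjunction
# and the existential like a disjunction over the domain's members.
domain = {'Bart', 'Lisa', 'Burns'}
smile = lambda d: d in {'Lisa', 'Bart'}  # invented facts

# forall x. smile(x)  ~  smile(d1) and smile(d2) and ...
universal = all(smile(d) for d in domain)
# exists x. smile(x)  ~  smile(d1) or smile(d2) or ...
existential = any(smile(d) for d in domain)
print(universal, existential)  # False True
```

The caveat from Boolos et al. still applies: the conjunction/disjunction picture works only because the domain here is finite.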
Quantifiers
notation. This is the sense in which the theory is ‘generalized’: anything in this functional
domain is quantificational (and there are a lot of functions in this domain!).
Barwise and Cooper (1981) is a classic of GQ theory (of semantics!). Additional pivotal work in the primary literature: Keenan and Stavi (1986), Keenan and Faltz (1985), Keenan (2002), and Peters and Westerståhl (2006).
        S
      /   \
    DP     VP
   /  \
 Det   NP
(9.7) no =def λf λg. ∀x( f (x) → ¬g(x))    {⟨X, Y⟩ | X ∩ Y = ∅}
(9.8) most =def λf λg. |{x | f (x) ∧ g(x)}| > |{x | f (x) ∧ ¬g(x)}|    {⟨X, Y⟩ | |X ∩ Y| > |X ∩ (U − Y)|}
Ex. A.51, A.52, A.53
9.3 Conservativity
GQ theory would seem to be open to the following challenge: there are excessively many objects in ⟨⟨e, t⟩, t⟩, to say nothing of the number of quantificational determiner meanings. The richness of the space seems out of step with the phrases we actually encounter in language.
But theorists have a remarkable, nontrivial response to this objection: they propose the
following universal claim:
(9.12) A determiner Det is conservative iff the following equivalence holds (feel free
to change singular to plural):
Det linguists smoke ⇔ Det linguists are linguists that smoke.
Ex. A.54
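The conservativity equivalence in (9.12) can be checked mechanically on a small universe. A sketch, with determiner meanings given in their relational form (the choice of determiners, including the non-conservative only, is mine):

```python
from itertools import chain, combinations

U = {1, 2, 3}

def subsets(s):
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

every = lambda X, Y: X <= Y
no = lambda X, Y: not (X & Y)
# 'only' (read as a determiner) says Y is a subset of X; it is NOT conservative.
only = lambda X, Y: Y <= X

def conservative(det):
    # Det(X)(Y) must always agree with Det(X)(X ∩ Y).
    return all(det(X, Y) == det(X, X & Y)
               for X in subsets(U) for Y in subsets(U))

print(conservative(every), conservative(no), conservative(only))  # True True False
```

The brute-force check makes the universal claim vivid: restricting the second argument to X ∩ Y changes nothing for the natural-language determiners, but it trivializes only.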
9.4.1 Intersectivity
The property of intersectivity gets at the notion of ‘indefinite’, but it picks out a broader class than the (historically) morphological classification suggests.
(9.13) A determiner meaning D is intersective iff the value of D(X)(Y) is determined entirely by X ∩ Y (so if X ∩ Y = X′ ∩ Y′, then D(X)(Y) = D(X′)(Y′)).
Some examples:
(9.14) a. Some cyclists are bald. ⇔ Some bald cyclists exist.
b. Exactly three cyclists are bald. ⇔ Exactly three bald cyclists exist.
c. No cyclists are bald. ⇔ No bald cyclists exist.
d. Fewer than ten cyclists are bald. ⇔ Fewer than ten bald cyclists exist.
e. Every cyclist is bald. !⇒ Every bald cyclist exists.
(In a situation containing three cyclists, all of whom are hairy, the right
side is trivially true but the left side is false.)
f. Most cyclists are bald. !⇒ Most bald cyclists exist.
Keenan (1996) calls upon intersectivity to formulate a generalization about which deter-
miners can appear as pivots in the English existential construction:
(9.15) There is/are
some bald cyclists in the race.
exactly seven bald cyclists in the race.
fewer than ten bald cyclists in the race.
*every bald cyclist in the race.
*most bald cyclists in the race.
The hypothesis is that only intersective determiners can be pivots. However, only is not
intersective, and thus it can be regarded as a challenge to this generalization (but see ex-
ercise A.54). Keenan (2003) takes up the challenge and proposes that all and only the
9.4.2 Monotonicity
Monotonicity properties tell us important things about inference patterns for quantifiers. Here are the basic definitions:
• A determiner D is right (left) non-monotone iff it is neither right (left) upward mono-
tone nor right (left) downward monotone.
It is important to distinguish right and left monotonicity. For instance, every is left down-
ward but right upward. Here are some simple tests:
(9.16) a. Det is left downward monotone iff Det linguists smoke ⇒ Det phonolo-
gists smoke.
b. Det is left upward monotone iff Det phonologists smoke ⇒ Det linguists
smoke.
c. Det is right downward monotone iff Det linguists smoke ⇒ Det linguists smoke cigars.
d. Det is right upward monotone iff Det linguists smoke cigars ⇒ Det linguists smoke.
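These properties can also be verified exhaustively over a small universe rather than by entailment tests on sentences. A sketch, with relational determiner meanings and invented names:

```python
from itertools import chain, combinations

U = {1, 2, 3}

def subsets(s):
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

every = lambda X, Y: X <= Y
some = lambda X, Y: bool(X & Y)

def right_upward(det):
    # det(X)(Y) and Y ⊆ Y2 must give det(X)(Y2).
    return all((not det(X, Y)) or det(X, Y2)
               for X in subsets(U) for Y in subsets(U)
               for Y2 in subsets(U) if Y <= Y2)

def left_downward(det):
    # det(X)(Y) and X2 ⊆ X must give det(X2)(Y).
    return all((not det(X, Y)) or det(X2, Y)
               for X in subsets(U) for X2 in subsets(U)
               for Y in subsets(U) if X2 <= X)

print(right_upward(every), left_downward(every))  # True True
print(left_downward(some))                        # False: some is left upward
```

This confirms the claim in the text that every is left downward but right upward.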
9.4.2.1 DP coordination
Barwise and Cooper (1981) propose that monotonicity is at the heart of our intuitions
about whether to use and or but to coordinate DPs.
(9.17) a. no linguists {but/?? and} many topologists
b. no linguist {?? but/and} no topologists
c. no linguist {but/?? and} every topologist
Exercise A.55 asks you to explore the full range of data (testing left and right) to try to
determine the precise generalization.
Ex. A.55
9.4.2.2 Polarity sensitive items
The most famous application of monotonicity properties is in the area of polarity sensi-
tivity. Negative polarity items prefer ‘negative’ environments, and positive polarity items
prefer ‘positive’ ones.
Ladusaw (1980) proposed that negative polarity items were licensed in downward en-
tailing environments, and his hypothesis has since been refined, expanded, rejected and
resurrected numerous times. Why do people find it so compelling? Because it works in
such a surprising array of cases. For instance, as noted above, every is left downward and right upward. True to Ladusaw’s generalization, a polarity item like ever is happy in every’s restriction but not in its nuclear scope.
(9.18) a. Every linguist who has ever taken a model theory class knows about com-
pactness.
b. *Every linguist has ever taken a model theory class.
I recommend van der Wouden 1997 for very fine-grained analysis of various polarity items,
using a variety of different strengths of negation. I also urge readers who find these gener-
alizations compelling to read Giannakidou 1999, which reviews the major problems with
these hypotheses and offers a compelling alternative.
10.1 Indexicality
Kaplan (1989) is a pioneering work in formal pragmatics. It is the first detailed set of
arguments and theoretical proposals for a robust theory of indexicals like here, now, us,
and me. The paper is big and wide-ranging. It presents a few different formalizations of
the central insight, which is that indexicals are inherently tied to the utterance context and
thus immune to manipulation by any and all operators. (This position has turned out to
be too rigid; see section 10.1.3 below.) The present handout seeks to convey the central
insight with a simple formal system.
i. cS is the speaker of c.
ii. cH is the hearer of c.
iii. cT is the time of c.
Just to be clear A context is not a tuple of symbols, but rather a tuple of entities. An example context:
[pictured in the original: a triple consisting of a speaker, a hearer, and a time]
Types We have the usual types for entities and truth values (handout 5) as well as a new
type j for times. (There is no type for contexts!)
Pragmatics
Constants The only twist here is that our intensional parameter is a time rather than a
world. The items to watch are the indexicals:
Variables We have variables over expressions of any type. (Since there is no type for
contexts, there are no variables for contexts.)
The interpretation function The interpretation function is now dependent upon a model,
an assignment, and a context:
[[·]]M,g,c
A sampler of interpretations
i. [[burp]]M = the function Φ ∈ D⟨e,⟨j,t⟩⟩ such that Φ(d) = the function θ ∈ D⟨j,t⟩ such that θ(t) = T iff d burps at time t.
ii. [[me]]M,g,c = cS
iii. [[you]]M,g,c = cH
iv. [[now]]M,g,c = cT
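This sampler can be sketched directly; the context representation and all names below are my own illustration:

```python
from collections import namedtuple

# A context as a triple of entities: speaker (cS), hearer (cH), time (cT).
Context = namedtuple('Context', ['speaker', 'hearer', 'time'])

def interp_in_context(expr, c):
    """[[me]] = cS, [[you]] = cH, [[now]] = cT: indexicals consult only c."""
    indexicals = {'me': c.speaker, 'you': c.hearer, 'now': c.time}
    return indexicals[expr]

c1 = Context('Bart', 'Lisa', 'noon')
c2 = Context('Lisa', 'Bart', 'midnight')
print(interp_in_context('me', c1), interp_in_context('me', c2))  # Bart Lisa
```

The same word picks out different entities as the context shifts, and nothing inside the logic can intervene: there is no variable or type for contexts that an operator could bind.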
λc. [[burp(you)(now)]]M,g,c
[pictured in the original: a function mapping each context triple ⟨speaker, hearer, time⟩ to the value of [[burp(you)(now)]]M,g,c in that context]
Without a context, we can’t get at the semantic meaning. Thus, sentences containing
indexicals are inherently tied to the context of utterance.
✓ Interpreted in context c, this says that there are (presumably different) contexts in which cS is short.
✗ In some contexts c, cS is short.
The second meaning is the one we would get if we could shift indexicals around. It would
predict that we can interpret (10.1) as the trivially true assertion that we can find contexts
in which the speaker in that context is short; this is the meaning we perceive for (10.2).
But this reading is absent from (10.1), and Kaplan’s system tells us why: there cannot be an operator of the sort In some contexts c, because we would require at least a variable over contexts for it to work.
Ex. A.56, A.57
(10.4) [[happy(x)]]M
I’ve deliberately left the assignment function off. One could respond to this by saying,
“Well, I can’t evaluate this. Without a tool for interpreting the variable x, I am stuck at
that point”. But we could also view this as asking us to interpret the formula under every
possible variable assignment, keeping only those that make the formula true. Suppose the
denotation of happy is as in (10.5), and assume the set of assignments is the one from
section 5.4.7.1.
(10.5) [pictured in the original: [[happy]]M maps one of the three entities to T and the other two to F]
Then
(10.6) [[happy(x)]]M = [pictured in the original: the set of the three assignments that send x to the happy entity, with y free to take any of the three values]
Thus, we have created more meaning distinctions: two sentences with free variables can have radically different meanings even if they agree on some gs. Moreover, we have an interestingly different view of deictic pronouns than the one we started with. Now, I need not know exactly which entity you are referring to when you use a pronoun. I might just be learning things about a given variable — a discourse referent (Karttunen 1976).
This treatment also launches us on our way to exploring dynamic systems like those of Heim (1983), Kamp and Reyle (1993), and Groenendijk and Stokhof (1991), since the fundamental shift in those theories is the meanings-as-assignment-sets perspective.
Ex. A.58, A.59
Clause (i) mimics nonmembership, and clause (ii) ensures that the probabilities are evenly
distributed across the worlds in the proposition (maximum entropy) — these distributions,
like the sets they come from, treat all their “members” (things with positive probability)
alike. An example:
(10.9) W = {w1, w2, w3}
p = {w1, w2}    [w1 ↦ .5, w2 ↦ .5, w3 ↦ 0]
Of course, probability distributions need not be so uniform, but we’ll concentrate on the
ones that are, so that we keep a tight fit with the semantics.
Ex. A.60
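The maximum-entropy construction in (10.9) can be sketched in a line; the function name and world labels are mine:

```python
def maxent(p, W):
    """Flat distribution mimicking p: worlds in p share the mass evenly,
    worlds outside p get probability 0 (the analogue of nonmembership)."""
    return {w: (1 / len(p) if w in p else 0) for w in W}

W = {'w1', 'w2', 'w3'}
p = {'w1', 'w2'}
dist = maxent(p, W)
print(dist['w1'], dist['w3'])  # 0.5 0
```

Because the distribution treats all its positive-probability worlds alike, it carries exactly the information the set p carries, which is what keeps the fit with the semantics tight.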
10.3.2 Degrees of belief
The theory of belief statements of handout 7 is very good at getting at perfect belief and perfect disbelief. We can of course mimic these extremes with probability distributions:
(10.10) Where Ba is the belief state for the individual a and Pa is the probability distri-
bution that mimics Ba (given some set of worlds W):
a. a believes p: Ba ⊆ p    Pa(p) = 1
b. a disbelieves p: Ba ∩ p = ∅    Pa(p) = 0
But there is a great deal of middle ground between these two. In set terms, we can capture this by saying that Ba is consistent with the content of p (Ba ∩ p ≠ ∅). But what about more
subtle degrees of belief like suspicion, strong suspicion, doubt, and so forth? It’s hard to
see how to define these in terms of sets alone, but probability distributions provide all the
intermediate ground we could want (assuming a large and rich enough W):
The conditional probability of p given q can be significantly higher than the probability of p alone. If we are presented with four possibilities and some information eliminates two of them, then we have made a real gain. And this seems to be what is happening in (10.12). The questioner has little idea of where Barbara lives. The first utterance eliminates many, many possibilities. The other answers eliminate some possibilities, at least conditionally. This is likely true of “It is cloudy”, but its contribution will pale in comparison to the others.
Here is a proposed measure of relevance:
(10.14) The relevance of q to p is given by P(p|q) − P(p).
We can use this mostly for comparative judgments: for saying that q is more relevant to p than q′ is, and so forth.
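The measure in (10.14) is easy to experiment with under a flat distribution; all the sets below are invented for illustration:

```python
def prob(p, W):
    """P(p) under the flat distribution over the finite world set W."""
    return len(p & W) / len(W)

def cond_prob(p, q, W):
    """P(p | q): renormalize within the worlds that survive q."""
    return len(p & q & W) / len(q & W)

def relevance(q, p, W):
    """Relevance of q to p, per (10.14): P(p|q) - P(p)."""
    return cond_prob(p, q, W) - prob(p, W)

W = set(range(8))
p = {0, 1}            # the proposition the questioner cares about
q = {0, 1, 2, 3}      # an answer that eliminates half the worlds
print(relevance(q, p, W))  # 0.25: q raises the probability of p
```

An answer that eliminates no p-relevant possibilities (for instance, q = W) comes out with relevance 0, matching the intuition about “It is cloudy”.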
Bibliography
Barker, Chris. 2007. Direct compositionality on demand. In Barker and Jacobson (2007),
102–131.
Barker, Chris and Pauline Jacobson, eds. 2007. Direct Compositionality. Oxford: Oxford
University Press.
Barwise, Jon and Robin Cooper. 1981. Generalized quantifiers and natural language. Lin-
guistics and Philosophy 4(4):159–219.
Barwise, Jon and John Perry. 1983. Situations and Attitudes. Cambridge, MA: MIT Press.
Beaver, David Ian. 1997. Presupposition. In van Benthem and ter Meulen (1997), 939–
1008.
van Benthem, Johan. 1991. Language in Action: Categories, Lambdas, and Dynamic
Logic. Amsterdam: North-Holland.
Boolos, George S., John P. Burgess, and Richard C. Jeffrey. 2002. Computability and
Logic. Cambridge: Cambridge University Press, 4 ed.
Curry, Haskell B. and Robert Feys. 1958. Combinatory Logic, Volume 1. Amsterdam:
North-Holland.
Ginzburg, Jonathan and Ivan A. Sag. 2001. Interrogative Investigations: The Form, Mean-
ing, and Use of English Interrogatives. Stanford, CA: CSLI.
Groenendijk, Jeroen and Martin Stokhof. 1991. Dynamic predicate logic. Linguistics and
Philosophy 14(1):39–100.
Halvorsen, Per-Kristian and William A. Ladusaw. 1979. Montague’s ‘Universal grammar’:
An introduction for the linguist. Linguistics and Philosophy 3(2):185–223.
Heim, Irene. 1983. On the projection problem for presuppositions. In Michael Barlow, Daniel P. Flickinger, and Michael T. Wescoat, eds., Proceedings of the 2nd West Coast Conference on Formal Linguistics, 114–125. Stanford, CA: Stanford Linguistics Association.
Heim, Irene and Angelika Kratzer. 1998. Semantics in Generative Grammar. Oxford:
Blackwell Publishers.
Hintikka, Jaakko. 1969. Reference and modality. In Leonard Linsky, ed., Philosophical
Logic, 145–167. Oxford: Oxford University Press.
Jackendoff, Ray. 1996. Semantics and cognition. In Shalom Lappin, ed., The Handbook
of Contemporary Semantic Theory, 539–559. Oxford: Blackwell Publishers.
Jacobson, Pauline. 1999. Towards a variable-free semantics. Linguistics and Philosophy
22(2):117–184.
Janssen, Theo M. V. 1997. Compositionality. In Johan van Benthem and Alice ter Meulen,
eds., Handbook of Logic and Language, 417–473. Amsterdam: Elsevier.
Kamp, Hans and Uwe Reyle. 1993. From Discourse to Logic. Introduction to Modeltheo-
retic Semantics of Natural Language, Formal Logic and Discourse Representation The-
ory. Dordrecht: Kluwer.
Kaplan, David. 1989. Demonstratives: An essay on the semantics, logic, metaphysics, and
epistemology of demonstratives and other indexicals. In Joseph Almog, John Perry, and
Howard Wettstein, eds., Themes from Kaplan, 481–614. New York: Oxford University
Press. [Versions of this paper began circulating in 1971].
Karttunen, Lauri. 1976. Discourse referents. In James D. McCawley, ed., Syntax and
Semantics, Volume 7: Notes from the Linguistic Underground, 363–385. New York:
Academic Press.
Keenan, Edward L. 1993. Natural language, sortal reduction, and generalized quantifiers.
Journal of Symbolic Logic 58(1):314–325.
Keenan, Edward L. 1996. The semantics of determiners. In Shalom Lappin, ed., The
Handbook of Contemporary Semantic Theory, 41–63. Oxford: Blackwell.
Partee, Barbara H., Alice ter Meulen, and Robert E. Wall. 1993. Mathematical Methods
in Linguistics. Corrected 1st edition. Dordrecht: Kluwer.
Peters, Stanley and Dag Westerståhl. 2006. Quantifiers in Language and Logic. Oxford:
Blackwell.
Portner, Paul and Barbara H. Partee, eds. 2002. Formal Semantics: The Essential Read-
ings. Oxford: Blackwell Publishing.
van der Wouden, Ton. 1997. Negative Contexts: Collocation, Polarity and Multiple Nega-
tion. London and New York: Routledge.
van Benthem, Johan and Alice ter Meulen, eds. 1997. Handbook of Logic and Language.
Cambridge, MA and Amsterdam: MIT Press and North-Holland.
Handout A: Problems
Problems marked PRACTICE will push you deeper into the relevant concepts
from the handouts. Problems marked HARD are designed to be challenging.
Problems marked OPEN might not have neat resolutions. (And, in turn, prob-
lems not marked OPEN should be solvable in a reasonable amount of time.)
Your task Find two situations in which truth is evaluated, not with respect to our reality,
but rather with respect to a (potentially) different one. Might the people in these situations
show some awareness that their reality isn’t the only (or true) one? Might your situations
have an impact on how we design our semantic theory? If so, how? If not, why not?
Your task How would you respond to the skeptic who claimed that he needed a seman-
tics for set theory, to feel confident that its interpretation was well defined? Would giving
a semantic theory of ∈, ⊆ and the like satisfy the skeptic? (Why not?)
A.3 Idioms
PRACTICE, OPEN
Background Complex idioms are obvious challenges for compositionality in any of its forms. It seems that, on their idiomatic uses, none of the expressions in (A.1) has a meaning that is predictable from the meanings of its parts:
Problems
Your task Articulate why these facts are challenging for compositionality, and outline a
possible resolution (or a few of them).
Your task Suppose that the translation process can be one-to-many. That is, suppose a single expression E translates to distinct logical symbols L and L′, and suppose that L and L′ denote different model-theoretic objects. What would this mean for the status of translation? How could a motivated argument for this nondeterministic translation inform the debate about whether interpretation is direct or indirect?
Your task Describe the following sets in a way that is less obscure:
i. {n | n is a natural number and 3 < 4}
ii. {n | n is a natural number and 3 > 4}
What role does the second conjunct play in each case?
Your task
i. Use ⊆ to specify the relation that lawfully holds between A ∩ B and A ∪ B.
ii. Define an exclusive union operator — symbol of your choosing — that excludes
A ∩ B.
A.7 Is it a function?
PRACTICE
Background Functions are defined in section 3.4 of handout 3. They are everywhere in
linguistics, so it is essential that you be able to spot them in the wild.
Your task For each of (A.3)–(A.9), say whether or not it is a function. If it is a function,
say also whether it is an onto function and whether it is a total function.
(A.3)–(A.7) [pictured in the original: diagrams of relations between entities and the values 1 and 0]
(A.8) the relation R from nodes to nodes in tree structures that maps each node to its
daughter(s)
(A.9) the relation R−1 from nodes to nodes in tree structures that maps each node to its
mother(s)
Your task Specify the characteristic set for the function depicted here:
[pictured in the original: a function from entities into {1, 0}]
|Dτ|^|Dσ|
where |A| is the cardinality of the set A and the superscript is an exponent.
ii. How many objects are in the powerset of A? How does your result help us understand why the powerset of A is often given as 2^A?
iii. How many objects are in A × B? And what is the general method for calculating the number of n-tuples in X1 × · · · × Xn?
iv. How many objects are in the set of all functions from A × B into A? In general, how many objects are in the set of all functions from X into the set of all functions from Y into Z (i.e., X → (Y → Z))?
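The counting claims above can be confirmed by brute force for small sets. Here is a sketch (the function name is my own) that enumerates all total functions from one finite set into another and checks the count against the exponent formula:

```python
from itertools import product

def all_functions(domain, codomain):
    """Enumerate every total function from domain into codomain,
    each represented as a dict from inputs to outputs."""
    domain = list(domain)
    return [dict(zip(domain, values))
            for values in product(codomain, repeat=len(domain))]

A = {0, 1}
B = {'a', 'b', 'c'}

# There are |B|^|A| = 3^2 = 9 total functions from A into B.
assert len(all_functions(A, B)) == len(B) ** len(A)
```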
Your task Articulate the intuitive connection between the data in (S) and Schönfinkel’s
trick.
Your task Create PL or PLf translations of the sentences in (S) and give their truth table.
What do you see?
A.11 nor
PRACTICE
Background There are many more definable connectives than appear on handout 4. A
couple of them have the magical property of being truth functionally complete, and one
might even be a reasonable translation of an English word. Let’s look.
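To preview the sense in which a single connective can be truth-functionally complete, here is a sketch (with Python booleans standing in for T and F) showing that negation and disjunction are both definable from a nor-like joint-denial connective:

```python
def nor(p, q):
    """Joint denial: true iff neither argument is true."""
    return not (p or q)

def neg(p):
    # ¬p defined as (p nor p)
    return nor(p, p)

def disj(p, q):
    # p ∨ q defined as ¬(p nor q), i.e., (p nor q) nor (p nor q)
    return nor(nor(p, q), nor(p, q))

# Check the definitions against the usual truth tables,
# over every valuation of p and q:
for p in (True, False):
    for q in (True, False):
        assert neg(p) == (not p)
        assert disj(p, q) == (p or q)
```

Since {¬, ∨} suffices to define the remaining connectives (see A.14), this is the core of the completeness result.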
Your task Define a PL connective that seems suitable for the English expression nei-
ther. . . nor. (You can imagine that it’s just nor you’re defining, so that you have a binary
operator akin to ∨.)
i. State the translation hypothesis in a form comparable to that of hypothesis (4.7b).
(You can make up your own symbol.)
ii. Give the type for your connective, using the system of section 4.3.1.1.
iii. Provide the interpretation for your connective, in the manner of section 4.3.2.2.
Your task Generalize the type definition so that it specifies infinitely many types, but
maintain the restriction that inputs are always t.
i. ((∧(p))(q)) (q ∧ p)
The expressions are just tools for helping people understand what is happening with the
denotations (functions). At present, they are not particularly illuminating.
Your task Devise some new syntactic rules, replacements for those in (4.3.1.2), that
determine expressions of the sort at right but maintain the virtue of the current system that
∧, ∧p, and the like are well formed.
A.14 Interdefinability
PRACTICE
Background It is possible to make do entirely with just one binary connective and a
negation. All others are definable in terms of combinations of them. For instance, it is
common to treat (ϕ → ψ) as an abbreviation for (¬ϕ ∨ ψ).
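The truth-table comparison the task below asks for can also be scripted. This sketch defines the arrow directly from its truth table and separately as the abbreviation (¬ϕ ∨ ψ), then checks that the two agree on every valuation:

```python
def arrow(p, q):
    """Material implication, read off its truth table:
    false only when p = T and q = F."""
    return not (p and not q)

def arrow_abbrev(p, q):
    """The same connective treated as an abbreviation for (¬ϕ ∨ ψ)."""
    return (not p) or q

# The two definitions coincide on all four valuations:
for p in (True, False):
    for q in (True, False):
        assert arrow(p, q) == arrow_abbrev(p, q)
```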
Your task Using truth tables, show that treating (ϕ → ψ) as an abbreviation for (¬ϕ ∨ ψ)
gives us the arrow defined on handout 4. Then show how this definition works for the PLf
functions as well. If you want to push still further by combining this answer with exercise
A.11, then see how much mileage you can get out of a nor-like connective.
Your task
(The reverse direction is harder because not all PLf expressions correspond to well-formed
formulae of PL. One must concentrate on the truth-valued expressions.)
Your task How does this hypothesis fare in light of the natural language data you know
about? (The best way to answer this is to create a list of properties of → and check them
against the linguistic facts.)
Your task
i. The PLf operator ∨ is defined so that [[p ∨ q]]M = T if [[p]]M = [[q]]M = T. Define
a corresponding exclusive disjunction operator that excludes this case (symbol of
your choosing).
ii. Draw a truth table for a formula consisting of two exclusive disjunctions. What is
odd about the values this turns up?
iii. Suppose that your exclusive disjunction provides the meaning for English or. What
prediction would this make about the sentence Sam is at the store, or Sam is on his
cell phone, or Sam is inspecting broccoli ?
Your task Amass as many arguments as you can think of for why this is a hopelessly
bad hypothesis.
[tree: S immediately dominating S, and, S (a flat, ternary coordination structure)]
Your task How might we devise a semantics that does justice to these structures and predicts the same truth conditions as our binary version? (It might be useful to think about currying in this case; see section 3.4.5 of handout 3.)
If you’re looking for an additional challenge: what would it take to generalize your
definition of and to n-ary conjunction, for any finite n? You might try to write down a
lambda term. Be sure to confront, in prose if not in symbols, the fact that we can’t fix n
ahead of time.
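As a starting point, here is a sketch of the currying idea applied to conjunction, together with one way (names are my own, and this is only one of several reasonable designs) to let the conjunction take any finite number of arguments without fixing n ahead of time:

```python
from functools import reduce

def curried_and(p):
    """Schönfinkel-style curried conjunction: ∧ takes its two
    arguments one at a time, as in ((∧(p))(q))."""
    return lambda q: p and q

def and_n(*conjuncts):
    """n-ary conjunction for any finite n, obtained by folding the
    binary connective across the conjuncts. The empty conjunction
    is stipulated to be true."""
    return reduce(lambda p, q: p and q, conjuncts, True)

assert curried_and(True)(True) is True
assert curried_and(True)(False) is False
assert and_n(True, True, True) is True
assert and_n(True, False, True) is False
```

Note that `and_n` sidesteps the can't-fix-n-in-advance problem by quantifying over argument lists rather than assigning the connective a single fixed type; whether that move is available in the typed object language is exactly what the exercise asks you to confront.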
Your task Is function composition commutative? If it is, then prove that claim. If it is not, then find two functions f and g for which (f ◦ g) and (g ◦ f) are well defined, but (f ◦ g) ≠ (g ◦ f). What would happen if we modeled coordination in these terms?
A.22 PL intensions
PRACTICE
HARD
Background This course is building quickly to an intensional perspective on meanings.
It is important to see that the roots of this idea are present in PL and PLf as well. This
exercise asks you to draw that perspective out, by redefining the logical constants so that
they are less about truth than about possibilities.
Your task Suppose we wanted to take more seriously the metalogical observations about
intensionality in PL, as summarized in section 4.7. Suppose we wanted to do interpretation
in terms of the sets of indices at the bottom of that truth table.
ii. What would be appropriate denotations for the following connectives in light of your
reformulation of Dt ?
a. ¬
b. ∧
c. ∨
d. →
e. ↔
For each of the following, say whether it is in the above type space:
i. ⟨†, ◦⟩
ii. ⟨◦, †⟩
iii. ⟨◦, •⟩
iv. ⟨†, †, ⟨†, ◦⟩⟩
Your task What are the two possible types for α if the mode of composition is functional
application? What is the model-theoretic reason for this limitation?
[tree: γ : ⟨e, t⟩ immediately dominating α and β : ⟨e, ⟨e, t⟩⟩]
Your task Try to articulate what aspects of the system allow vacuous abstraction, and
try also to find evidence for or against allowing it in our linguistic theory as well. The
following examples might be useful in this regard.
(V1) People would call all the time and ask for Ali and we would say, “it’s not our
company,” she said. Usually, the calls were complaints, Ms. Tams said, adding:
“It’s been one of those things where we were going to go to him and talk to him
about having him change his fictitious name, but it’s something we never got
around to doing. And I wish we did.”1
1. “Officials Puzzled About Motive of Airport Gunman Who Killed 2,” by Rick Lyman and Nick Madigan, New York Times, July 6, 2002, National section.
(V2) “Johnny, believe me. We may be dealing with something neither one of us
should get near, something way up in the clouds that we — I — don’t have the
knowledge to make a proper decision.”2
A.26 Partiality
OPEN
HARD
Background It is very common to find that an author is implicitly or explicitly depending on some functions in the functional domains being partial, rather than total (handout
3, section 3.4). Such functions can provide an elegant formal basis for a theory of pre-
suppositions (Beaver 1997). But it has logical consequences that we should be aware of
(Muskens 1989, 1995).
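As a concrete warm-up (the encoding is my own, and returning None is only a crude stand-in for genuine undefinedness), here is a sketch of a partial, uniqueness-demanding determiner meaning over a finite domain, with properties coded as characteristic functions:

```python
def the(f, domain):
    """A partial 'the': defined only when the property f is true of
    exactly one entity in the domain (uniqueness). None flags
    undefinedness, a rough stand-in for presupposition failure."""
    witnesses = [x for x in domain if f(x)]
    if len(witnesses) == 1:
        return witnesses[0]
    return None  # undefined: the uniqueness condition fails

entities = ['fido', 'rex', 'felix']
dog = lambda x: x in {'fido'}            # true of exactly one entity
furry = lambda x: x in {'rex', 'felix'}  # true of two entities

assert the(dog, entities) == 'fido'
assert the(furry, entities) is None  # not unique, so undefined
```

The awkwardness of smuggling undefinedness in as a special value is itself instructive: it is one symptom of the logical consequences of partiality that Muskens discusses.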
Your task Suppose that the denotes a partial function from properties into entities (type ⟨⟨e, t⟩, e⟩), one that is defined for f iff f is true of just one entity in De (uniqueness). What are the consequences of this for expressions like the(dog), where we assume that dog is of type ⟨e, t⟩? What consequences does this have for our link between the types of expressions and their domains (see exercise A.28)?
Your task Fill out the following semantic analysis with types and logical expressions,
and then provide a denotation for bet that makes it a reasonable hypothesis for the trans-
lation of English bet.
Your task Try to articulate the nature of the connection established by (4.2). If we write down (ϕ(ψ)) where ϕ is of type ⟨σ, τ⟩ and ψ is of type ρ, where ρ ≠ σ, what happens when we try to interpret that formula? In what sense is our type-theoretic problem also a semantic (model-theoretic) problem?
Your task Provide the missing clause for interpreting lambda abstracts, i.e., the case in
which ϕ is of the form (λχ. ψ). Please feel free also to write an actual program for parsing
and interpreting lambda terms!
Your task
i. In what sense, if any, does predicate modification expand what we can do with the
logic? (Could we define this rule using just functional application?)
ii. What predictions does this rule make about the dependencies between α and β? Does it allow that the interpretation of one might be conditioned by the interpretation of the other? (This aspect of the problem is ‘HARD’.)
iii. The rule is stated so as to allow any type ending in t. How might we characterize this class of domains?
iv. Could we generalize this rule to any type ⟨σ, τ⟩?
A.31 Assignments
PRACTICE
Background The act of changing an assignment according to an instruction can seem
complicated, but in fact it is quite minimal and easy to visualize with a little practice.
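One way to visualize the operation is as a sketch in code (the function name is my own): the modified assignment g[x → d] is just like g except that x now goes to d, and g itself is left untouched:

```python
def modify(g, var, value):
    """Return the assignment g[var -> value]: just like g except
    that var is mapped to value. g itself is not mutated."""
    h = dict(g)   # copy g, so the original assignment survives
    h[var] = value
    return h

g = {'x': 'ali', 'y': 'sam'}
h = modify(g, 'x', 'kim')

assert h == {'x': 'kim', 'y': 'sam'}
assert g == {'x': 'ali', 'y': 'sam'}  # g is unchanged
```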
Your task Fill out these equality expressions (I’ve not relativized to a model to keep things simple). In the original, each assignment is displayed as a table mapping x and y to pictured entities; the pictures are not reproduced here, so the assignments are shown schematically, with g for the base assignment and g[x → d] for its modification:

i. [[x]]^g =
ii. [[y]]^g =
iii. [[x]]^{g[x → d]} =
iv. [[happy(y)]]^{g[x → d]} =
v. [[λy. happy(y)]]^{g[x → d]} =
Your task For each pair, say whether its members can differ model-theoretically. If they
can, exemplify the difference. If they can’t, try to articulate why they can’t.
X a. happy(x)
b. happy(y)
Y a. λx. happy(x)
b. λy. happy(y)
Your task Define a cross-categorial and that is built up from our ∧ from PLf but can take any pair of arguments in ⟨σ, t⟩.
This is all very well, until we realise that we have coded binary relations be-
tween ternary relations as functions from functions from individuals to func-
tions from individuals to functions from individuals to truth values to func-
tions from functions from individuals to functions from individuals to truth
values. In other words, we have replaced objects that we have some intu-
itive grasp on by monsters that we can reason about only in an abstract way.
(Muskens 1995:12)
It’s a point well taken. One might object that these monsters are necessary if we want a the-
ory that assigns a meaning to each syntactic phrase. But Muskens answers that objection.
See if you can do so as well.
Your task Formulate an operation on relational meanings that essentially abstracts over
one of the coordinates in the tuples it contains, so that we can maintain our usual theory of
composition with these (arguably) simpler relational objects.
Your task The task is to provide explanations for the deviance of each of the examples
in (a)–(e).
(D) a. ∗Ed devoured. (but what about Ed ate?)
b. ∗I saw Sue and that it was raining.
c. ∗Ed glimpsed the dog the printer.
d. #It’s not raining, but Sue realizes it’s raining.
e. #The A-train suffered an existential crisis.
(cf. I dreamed that the A-train suffered an existential crisis.)
Two things to keep in mind:
• We are not (necessarily) after a unified theory of the deviance seen in (D).
• If you’re unsure of how to analyze a constituent semantically and it isn’t important to your argument how it is analyzed, then translate it into a single predicate. For example, The A-train ⇝ the-train.
Your task Your goal for this part is to construct a fragment that handles all the intransi-
tive verb constructions in (F). (Ignore all issues relating to tense.)
iv. An interpretation function that takes the logical expressions to objects in the domains
for the types (in a way that respects typing).
To show readers how your fragment works, you should provide a derivation of some kind
for one of the sentences in (F).
Strive for generality. If your fragment works for the examples in (F), it will also work
for lots of other intransitive sentences. Either sketch how your fragment could be gen-
eralized to new intransitive sentences with proper-name subjects or (better) define your
fragment so that it has this level of generality built into it.
A.37 Substitution
PRACTICE
Background Substitution is an apparently simple operation on formulae that is nonetheless complicated by the conditions geared towards ensuring that no accidental binding takes place. It’s worth practicing a bit.
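The side conditions on substitution are easiest to appreciate when they are made mechanical. Here is a minimal sketch over a toy term representation (the encoding and names are my own): variables are strings, `('lam', v, body)` is λv. body, and `('app', f, a)` is application. The substitution raises an error instead of allowing accidental capture:

```python
def free_vars(t):
    """The free variables of a term in the toy encoding."""
    if isinstance(t, str):
        return {t}
    if t[0] == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, var, s):
    """Compute t[var -> s], blocking the operation whenever it
    would accidentally bind a free variable of s."""
    if isinstance(t, str):
        return s if t == var else t
    if t[0] == 'lam':
        if t[1] == var:
            return t  # var is bound here, so there is nothing to do
        if t[1] in free_vars(s):
            raise ValueError('blocked: would capture ' + t[1])
        return ('lam', t[1], subst(t[2], var, s))
    return ('app', subst(t[1], var, s), subst(t[2], var, s))

# (cyclist(x))[x -> y] goes through:
assert subst(('app', 'cyclist', 'x'), 'x', 'y') == ('app', 'cyclist', 'y')
# (λy. like(x, y))[x -> y] is blocked: the free y would be captured.
```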
Your task For each of the following perform the substitution operation if it is permitted,
else indicate what blocks the substitution.
i. y[y ⇝ (cyclist(x))]
ii. (cyclist(x))[x ⇝ y]
i. (λx. like(x))(y)
Your task Try to articulate the model-theoretic grounding for η-conversion. Why is it guaranteed to work, given the formulation of functional application and functional abstraction? (It might help to think about what happens when you abstract over x in an expression like happy(x).)
Your task See if you can make matters worse for “extensional believe”, by perhaps
deriving meanings that run directly counter to our intuitions. Can you make Lisa both
believe and disbelieve every truth?
A.41 Modals
PRACTICE
Background Modal verbs seem to say something about propositional content. But,
within an extensional model, our ‘propositions’ are just truth values.
Your task Why can’t we just use one of the functions from Dt into Dt to analyze modals?
Your task Formulate Dox precisely, but make sure that you take into account, some-
how, that an individual’s beliefs can vary from world to world. Rework the definition
of [[believe]]M,g using this new Dox, and explain your decisions about how to handle the
world arguments throughout.
Your task Where do you come down on this issue? Why? What impact, if any, does your position have on the treatment of proper names? (You might widen your scope enough to include fictional names and the like; it depends on how daring you are.)
Your task Suppose there are n individuals. How many different properties can there be? How many worlds should we have to ensure that we can make all the distinctions among meanings that we want to be able to make?
Your task Define a notion of truth relative to a common ground, so that we can still have
truth/assertion after giving up on @.
Your task Propose a solution to this problem. You might change our assumptions about
predicates. You might sneak in a free variable over worlds. You might define a new rule
of semantic composition. Feel free to think freely, but do try to motivate the choice you
make.
Your task Suppose I believe something impossible. What does my belief state look like
then? Is this realistic? If no, how might we do better?
Your task Extend an extensional or intensional lambda calculus with a type for degrees
and an associated domain, then use this enriched set of tools to give a meaning for tall.
Strive for a meaning that will work in a broad range of cases. (It might seem easiest to
start with examples like That mouse is tall, but in fact it is easier to start with comparative
data.)
Your task How should we account for this pattern? What changes to the logical system
does your answer require?
Your task Give a semantics for expressions like ∃!x ϕ that makes them true iff there is
exactly one entity with the property ϕ. And then describe how you would generalize this
to exactly n, for any natural number n.
Your task
• What if the restriction and nuclear scope have the same extension?
Your task Suppose that Lisa is young, intelligent, and literate. She is not angry, and she
is not tall. Assume that there are no other properties besides these. Using these facts, draw
a picture of the function specified in (A.12).
(A.12) λ f. f (lisa)
Your task Provide a type for intensionalized quantificational determiners, and provide
the meanings for every and most in these new terms. What did you decide to do with the
world arguments? Why?
Your task
i. Run the conservativity test on each sentence in (C), and indicate which if any of the
entailments go through and which don’t.
ii. Articulate why your results seem problematic for the conservativity generalization.
iii. Propose a resolution (reject the generalization, follow Keenan’s advice in the quota-
tion, something else entirely).
Your task Provide the data needed to obtain a generalization concerning the choice of coordinating element, and then try to formulate a suitable generalization.
Your task Suppose, then, that we tried to analyze indexicals as proper names. What (if
anything) would this analysis get right, and what (if anything) would it get wrong?
Your task Explain what this means in the context of the theory described in section 10.1
of handout 10.
Your task Provide denotations of the following in terms of sets of assignments, keeping
in mind that an assignment g is in the denotation of an expression ϕ iff interpreting ϕ
relative to g produces T.
i. [[happy(sam) ∨ ¬happy(sam)]]M
Your task Devise an assignment-based theory of indefinites that captures the fact that
they introduce new information. A first step might be to assume that assignments are
partial functions on infinite lists of variables. When one uses an indefinite, one adds new
variable–entity mappings in some uniform fashion.
Your task What are the analogues of set intersection, set difference, and subset in the realm of probabilities? Can you find important ways in which the pairs you propose differ?