
Text for

Logic 2003A: Symbolic Logic I

Kent A. Peacock

Department of Philosophy,

University of Lethbridge.

April 22, 2006


Copyright © 2005, 2006 Kent A. Peacock

Preface

The purpose of this text is to provide a correct but ﬂexible command of symbolic logic up to

but not including the point at which one can begin to study metalogic (the “logic of logic”).

As such, it covers most of the material usually covered in a typical second-year university

course on symbolic logic, including such time-honoured topics as truth tables, truth trees,

and natural deduction for propositional and ﬁrst-order predicate logic (without and with

identity). A working command of these techniques is simply assumed in the literature on

analytic philosophy, mathematical logic, foundations of mathematics, computer science, and

much of physical science. For enrichment there are also chapters on widely-used properties

of relations, the parallels between circuit logic and propositional logic, elementary set theory,

and a short treatment of the categorical syllogism using the tools of predicate logic. (The

chapter on circuit logic may especially be of interest to those many students who come

to logic and philosophy from computer science.) There is more here than can usually be

covered in a single course, but these extra topics give students and instructors plenty to

choose from.

A concluding chapter introduces the reader brieﬂy and qualitatively to some of the riches

that lie beyond, including modal and deviant logics, Gödel, Turing, artificial intelligence,

and quantum computing. Not everyone will wish to explore such advanced topics in detail,

but every student of philosophy and logic should be aware that such things exist, and have

at least a passing acquaintance with them.

The aim of this text is not to present a rigorous development of propositional and

predicate logic from the simplest and most general ﬁrst principles available, but to oﬀer a

workaday (but of course valid!) system of logic that will be as useful as possible, both for

beginning logicians who may later wish to go on to a more foundational treatment, and also

for those many others who are interested in logic more as a tool than for its own sake.

One might naturally ask why the world needs to be blessed with yet another logic

text, when there are so many excellent texts on the market now. Like almost everyone

who has taught logic, I could not avoid working out my own approach to the subject. In

order to teach the subject to myself and others, I had to reconstruct it in my own way,

and I ended up with an approach that I believe is a useful complement to other methods

of teaching this sort of logic. And there is room for new approaches, for the fact remains

that elementary logic, unlike certain parts of elementary mathematics, is still a developing

subject. I modestly believe that the approach I present here has an advantage over many

other texts at a similar level because of its emphasis on ﬂexibility and ease of use.

The choice of topics in this text was designed in particular to meet the needs of the

second-year symbolic logic course at the University of Lethbridge. We have a first-year

course that is mostly informal logic and critical thinking (with a smattering of “baby”


deductive logic), a second-year course called Symbolic Logic I, which introduces the formal

methods of classical two-valued Boolean logic (primarily propositional and predicate calculus) up to the point at which one can begin to do metatheory, and then a third-year course,

Symbolic Logic II, in which those who are so inclined can ﬁnally dip into metatheory.

The notation for natural deduction I used here is adapted with some simpliﬁcations

from that used in the superb but now largely outdated texts by Mates and Lemmon [18, 17],

supplemented with the highly eﬃcient truth tree techniques pioneered by Jeﬀrey [14]. I

build my natural deduction system on basic rules similar to those of Lemmon, but I have

streamlined the presentation of natural deduction techniques in such a way as to make them

as easy to use as possible. I also use Lemmon’s eﬃcient way of setting up truth tables, with

a few shortcuts of my own thrown in. In the end, I have written the logic text that I think

would have beneﬁted me the most as a student, and I hope that it will be useful to others

as well. Although the book is aimed in the ﬁrst instance at college and university students,

I hope that it will also be useful for professionals in disciplines such as physics or computer

science, or indeed anyone at all who may wish to refresh their knowledge of the techniques

of logic.

The advantage of the Mates/Lemmon notation for natural deduction over the popular

and graphically explicit Fitch system (as used in, e.g., [2, 13]) is that it is more compact,

more ﬂexible, and that it allows one to introduce a theorem at any point in the deduction.

(This will all be explained below.) Logic does not yet have one notation that has become

universally accepted (unlike most branches of mathematics such as calculus), but once you

have learned one notation, it is not very hard to learn another. Some students may in

the end prefer the Fitch system over the slightly more abstract Mates/Lemmon system

because Fitch makes it easy to see whenever we are in a subproof. On the other hand, the

concision and ﬂexibility of Mates/Lemmon certainly help to make symbolic logic useful —

which places it squarely in the spirit of this text.

Although I tremendously admire Lemmon's book, I base my teaching of logic on a different pedagogical philosophy. Lemmon, and indeed numerous other authors of logic texts,

believe that the student should learn how to derive every result from ﬁrst principles, using

as few rules as possible. This “logical boot camp” approach certainly develops technical

adroitness in those students who are willing to tough it out. But it has the disadvantage that

one often requires derivations of great length and complexity in order to prove results that

should be very simple and obvious. My view is that the subject is diﬃcult enough as it is.

I think that (especially for beginning students) the most useful approach is to do just what

one does in mathematics, which is to use, as creatively and as early as possible, previously

derived results to get new results. The ultimate purpose is to streamline derivations and

get useful or interesting formulas as quickly and painlessly as possible, just as one would

in (say) a trigonometry text. Trig would be nearly useless if one had to prove an identity

such as cos²θ + sin²θ = 1 every time one needed to use it. I therefore encourage students

to freely use earlier theorems and sequents, and I provide a formalism that allows one to

do this in an eﬃcient and clear manner. (Lemmon allows the introduction of sequents and

theorems, too, but he does not emphasize it nearly as much as I think would be useful.) Again, the aim is to develop a system of logic that could actually be useful.

Another respect in which this book diﬀers from some other treatments of symbolic

logic is in its views about the purpose of learning natural deduction. Once the student has

mastered syntactic methods of verifying a given deduction, such as truth trees, she may


well wonder what is the point of struggling through the process of constructing a syntactic

derivation of a result that can be quickly checked with a tree or table.

The experienced logician will know that one reason for learning syntactic methods is

that not all logics are complete, so that once we get beyond ﬁrst-order predicate logic,

syntactic methods are not merely a redundant gloss on semantic methods.

Another reason for learning natural deduction is that it is a formalization of the way

we do actually tend to think about things: presented with a collection of premisses, we do

indeed go through something like natural deduction (although usually sloppily, and with a

free admixture of inductive steps) to arrive at a conclusion. Like Molière's M. Jourdain,

who was astonished to learn that he had been speaking prose his whole life, we may be

surprised to learn that we use natural deduction whether we know it or not; the methods

of logic simply make it more explicit and precise.

There is yet another reason that natural deduction is worth learning, however, a reason

that I do not think has received suﬃcient attention. What often happens in real life is that

we are presented with a collection of givens (assumptions or known facts) and we then try to

deduce what may follow from those givens. We often do not have a pre-cooked conclusion

any more than we know the sum total of a list of expenses before we add it up. There

are a number of exercises included here which are meant to develop the skill of using the

rules of natural deduction to work from a given collection of premisses to a conclusion that

may have certain desired or known properties but which has not yet been found. Once we

have a complete chain of reasoning from premisses to conclusion we can, if we wish, use

semantic methods to check the argument structure we have constructed. The relation of

natural deduction to truth trees and tables is something like the relation of long division

to multiplication. In long division we have to guess our trial divisors, but once we have

an answer we can check it in a purely mechanical way using multiplication. Similarly, in

natural deduction we have to guess which rules will help us uncover a conclusion with the

desired properties, but once we have a conclusion we can check the deduction mechanically

with semantic methods. In some of the exercises in this book the student is presented with

a collection of premisses without a conclusion, and invited to explore the consequences of

these premisses using the rules of natural deduction.
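The kind of exploration these exercises call for can also be pictured mechanically. As a sketch only (the premisses and candidate conclusions below are invented for illustration, and Python plays no role in this text), one can enumerate all truth assignments and see which candidate conclusions follow from a collection of givens:

```python
from itertools import product

# Hypothetical givens (not from the text): p -> q, q -> r, and p.
premisses = [
    lambda p, q, r: (not p) or q,   # p -> q
    lambda p, q, r: (not q) or r,   # q -> r
    lambda p, q, r: p,              # p
]

# A few candidate conclusions to explore.
candidates = [
    ("q",       lambda p, q, r: q),
    ("r",       lambda p, q, r: r),
    ("p and r", lambda p, q, r: p and r),
    ("not p",   lambda p, q, r: not p),
]

def follows(premisses, conclusion):
    """True iff the conclusion holds in every truth assignment
    in which all of the premisses hold."""
    return all(conclusion(*row)
               for row in product([True, False], repeat=3)
               if all(prem(*row) for prem in premisses))

for name, f in candidates:
    print(name, "->", "follows" if follows(premisses, f) else "does not follow")
```

Natural deduction supplies the creative search for such consequences; the enumeration above is only the mechanical check one can run afterward, just as multiplication checks a long division.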

A number of rules and other results are used in this course that cannot be proven by

means of the tools introduced in the course. (An example is the Deduction Theorem.) Rest

assured that all of the rules I use are valid and can be rigorously established in a more

advanced approach.

Logic resembles gymnastics or language study in that it can be learned only by practice.

To this end, I have included a generous selection of exercises. I follow the practice of many

old-fashioned math texts and divide these exercises into A, B, and C levels, together with

occasional Questions for Discussion. The A exercises are “ﬁnger exercises” that test your

understanding of the deﬁnitions and techniques introduced in the chapter. If you cannot do

the A exercises then you have missed something basic in the chapter. The B exercises are

a bit more complex than the A’s and usually combine several concepts and techniques, but

they should be doable by anyone who has worked through the A exercises diligently. The

B exercises are therefore tests of basic competency in the material of the chapter or section.

The C exercises are advanced and may require quite a bit of thought, or even additional

research beyond the material in this text. The solutions to about half of the exercises are

presented at the end of the book, and the rest are available in the Instructors’ Handbook,


except for some of the more open-ended C problems.

The author of any text on logic must at some point decide how to spell the word

“premiss” (plural “premisses”). I have decided to follow Lemmon’s convention and use the

version with double s’s because it avoids confusion with the legalistic sense of the term (as

in, for instance, “keep oﬀ the premises!”) and because it is probably more widely used in

logic.

Although the book is elementary, its pace is brisk. I assume that my readers are like

me in that they do not enjoy being patronized; this is not Logic for the Complete Dummy.

I would be very grateful for any suggestions as to how this document can be improved.

Acknowledgements

I am grateful to the University of Lethbridge and the Social Sciences and Humanities Research Council of Canada for essential material and financial support. Many thanks for discussions, advice, suggestions, shorter proofs, or error corrections to Bryson Brown, Sheldon Chow, Dawn Collins, Mark Ebner, Herb Korté, Laura Smith, Mark Smith, Sam Woodruff,

and John Woods. A special thanks to Ardis Anderson for her help in tracking down M.

Jourdain. Any errors that remain (and a quick induction suggests that there must be some!)

are entirely my own responsibility.

Contents

Preface iii

1 Introduction to Logic: Getting Your Feet Wet 1

1.1 Logic, Logical Form, and Validity . . . . . . . . . . . . . . . . . . . . . . . . 1

1.2 How to Teach Yourself Logic . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.2.1 How To Use This Text . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.3 Questions for Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 Propositional Logic: Symbolization and Translation 7

2.1 Notation in Propositional Logic . . . . . . . . . . . . . . . . . . . . . . . . . 7

2.2 Common English Equivalents for Propositional Connectives . . . . . . . . . 8

2.2.1 Negation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2.2.2 Conjunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.2.3 Disjunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.2.4 Material Implication . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.2.5 The Biconditional . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2.2.6 Commutativity and Associativity . . . . . . . . . . . . . . . . . . . . 11

2.2.7 “Unless” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2.2.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.3 Truth Table Deﬁnitions of the Propositional Connectives . . . . . . . . . . . 13

2.3.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.4 Interpreting the Material Conditional . . . . . . . . . . . . . . . . . . . . . 14

2.4.1 “Ifs” versus “Only ifs” . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.4.2 The “Paradoxes” of Material Implication . . . . . . . . . . . . . . . 15

2.5 Syntactic Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.5.1 Scope of a Connective . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.5.2 Main Connective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.5.3 Bracketing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.6 Combinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.6.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

2.7 Negated Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

2.7.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

2.8 Intensional Contexts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

2.9 Alternative Notation for Propositional Connectives . . . . . . . . . . . . . . 19

2.10 Exercises on Chapter 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21


3 Introduction to Natural Deduction 23

3.1 Basic Rules of Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

3.1.1 Alternative Names for the Basic Rules. . . . . . . . . . . . . . . . . . 26

3.2 Comments on Basic Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3.3 Simple Examples of the Basic Rules . . . . . . . . . . . . . . . . . . . . . . 28

3.3.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.4 Some Useful Sequents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.4.1 One-Way Sequents . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3.4.2 Interderivability Results . . . . . . . . . . . . . . . . . . . . . . . . . 34

3.4.3 Form of Justiﬁcation When Using Derived Rules . . . . . . . . . . . 42

3.5 Summary of Basic Sequents and Rules . . . . . . . . . . . . . . . . . . . . . 42

3.5.1 Basic Rules and Sequents . . . . . . . . . . . . . . . . . . . . . . . . 43

3.5.2 Summary of Derived Sequents . . . . . . . . . . . . . . . . . . . . . . 44

3.6 Fallacy Alert! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

3.6.1 A Common Misuse of vE . . . . . . . . . . . . . . . . . . . . . . . . 46

3.6.2 Aﬃrming the Consequent . . . . . . . . . . . . . . . . . . . . . . . . 46

3.6.3 Denying the Antecedent . . . . . . . . . . . . . . . . . . . . . . . . . 46

3.6.4 Avoid Circularity! . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

3.7 Exercises on Chapter 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

4 Techniques and Applications 51

4.1 Recognition of Logical Form . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

4.1.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

4.2 Combining Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

4.2.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

4.3 Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

4.3.1 The “Laws” of Thought . . . . . . . . . . . . . . . . . . . . . . . . . 55

4.4 Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

4.4.1 Uniform Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . 55

4.4.2 Nonuniform Substitution . . . . . . . . . . . . . . . . . . . . . . . . 55

4.5 Introduction Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

4.5.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

4.6 The Deduction Metatheorem . . . . . . . . . . . . . . . . . . . . . . . . . . 57

4.6.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

4.6.2 A Turnstile is not an Arrow! . . . . . . . . . . . . . . . . . . . . . . 59

4.7 Boolean Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

4.7.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

4.8 Filling in the Gaps in an Argument . . . . . . . . . . . . . . . . . . . . . . . 60

5 Propositional Semantics: Truth Tables 63

5.1 Syntax versus Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

5.2 Truth Table Deﬁnitions of the Propositional Connectives . . . . . . . . . . . 64

5.2.1 How Many Possible Truth Tables Are There? . . . . . . . . . . . . . 64

5.2.2 Expressive Completeness . . . . . . . . . . . . . . . . . . . . . . . . 64

5.2.3 Evaluating Complex Wﬀs . . . . . . . . . . . . . . . . . . . . . . . . 65

5.3 Semantic Classiﬁcation of Single Wﬀs . . . . . . . . . . . . . . . . . . . . . 66

5.3.1 Tautology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66


5.3.2 Inconsistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

5.3.3 One-to-One Relation Between Inconsistencies and Tautologies . . . . 66

5.3.4 Contingency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

5.3.5 Consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

5.4 Relation of Syntax to Semantics . . . . . . . . . . . . . . . . . . . . . . . . 67

5.5 Semantic Classiﬁcation of Pairs of Wﬀs . . . . . . . . . . . . . . . . . . . . 67

5.5.1 Implication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

5.5.2 Equivalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

5.5.3 Contrariety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

5.5.4 Subcontrariety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

5.5.5 Inconsistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

5.5.6 Consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

5.5.7 Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

5.6 Relations Between Semantic Properties of Wﬀs . . . . . . . . . . . . . . . . 68

5.7 Semantic Classiﬁcation of Sets of Wﬀs . . . . . . . . . . . . . . . . . . . . . 69

5.7.1 Equivalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

5.7.2 Inconsistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

5.7.3 Consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

5.7.4 Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

5.8 Testing Validity of Sequents Using Truth Tables . . . . . . . . . . . . . . . 69

5.8.1 Deduction Theorem from the Semantic Viewpoint . . . . . . . . . . 71

5.8.2 Short-cuts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

5.9 Using Short-Cuts in Testing for Other Semantic Properties . . . . . . . . . 72

5.10 Evaluating Long Conjunctions or Disjunctions . . . . . . . . . . . . . . . . . 73

5.11 Sheﬀer and Nicod Strokes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

5.12 Review Exercises on Chapter 5 . . . . . . . . . . . . . . . . . . . . . . . . . 75

6 Propositional Semantics: Truth Trees 77

6.1 Branches and Stacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

6.2 Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

6.3 Stacking and Branching Rules . . . . . . . . . . . . . . . . . . . . . . . . . . 79

6.3.1 Substitution of Equivalents . . . . . . . . . . . . . . . . . . . . . . . 80

6.4 Checking Semantic Properties of Formulas with Trees . . . . . . . . . . . . 80

6.4.1 Tautology, Contingency, and Consistency . . . . . . . . . . . . . . . 80

6.4.2 Equivalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

6.5 Checking Validity of Sequents . . . . . . . . . . . . . . . . . . . . . . . . . . 82

6.6 Trees or Tables? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

6.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

6.8 Questions for Further Research . . . . . . . . . . . . . . . . . . . . . . . . . 84

7 Predicate Logic: Symbolization and Translation 85

7.1 Why We Need Predicate Logic . . . . . . . . . . . . . . . . . . . . . . . . . 85

7.2 Predicate Logic Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

7.2.1 Universe of Discourse . . . . . . . . . . . . . . . . . . . . . . . . . . 86

7.2.2 Terms and Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

7.2.3 Monadic Predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

7.2.4 Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87


7.2.5 Propositions and Propositional Functions . . . . . . . . . . . . . . . 88

7.2.6 Quantiﬁers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

7.2.7 Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

7.2.8 Categorical Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

7.2.9 Don’t Be Fooled. . . ! . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

7.2.10 Combinations of Properties . . . . . . . . . . . . . . . . . . . . . . . 92

7.2.11 Use of “Except” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

7.2.12 Use of “Only” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

7.2.13 Variants of Categorical Forms Using Relations . . . . . . . . . . . . 93

7.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

8 Predicate Logic: Natural Deduction 95

8.1 Duality Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

8.2 Introduction and Elimination Rules . . . . . . . . . . . . . . . . . . . . . . . 95

8.2.1 Universal Elimination (UE) . . . . . . . . . . . . . . . . . . . . . . . 95

8.2.2 Universal Introduction (UI) . . . . . . . . . . . . . . . . . . . . . . . 96

8.2.3 Fallacy Alert! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

8.2.4 Existential Introduction (EI) . . . . . . . . . . . . . . . . . . . . . . 97

8.2.5 Existential Elimination (EE) . . . . . . . . . . . . . . . . . . . . . . 97

8.2.6 Fallacy Alert! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

8.3 Summary of Introduction and Elimination Rules for Quantiﬁers . . . . . . . 99

8.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

8.5 Distribution Rules for Quantiﬁers . . . . . . . . . . . . . . . . . . . . . . . . 102

8.6 Proofs of Duality Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

8.7 Rules of Passage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

8.8 Interderivability ⇔ Equivalence Theorem . . . . . . . . . . . . . . . . . . . 104

8.9 What to Memorize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

8.10 A Shortcut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

8.11 Fallacy Alert! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

8.12 Fallacy Alert! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

8.13 Summary of Rules for Predicate Natural Deduction . . . . . . . . . . . . . . 106

8.13.1 Basic Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

8.13.2 Duality Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

8.13.3 Rules of Passage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

8.13.4 Distribution Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

8.13.5 Direct Manipulation of Propositional Functions . . . . . . . . . . . . 108

8.14 Review Exercises for Chapter 8 . . . . . . . . . . . . . . . . . . . . . . . . . 109

9 Predicate Logic With Identity 111

9.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

9.2 Fallacy Alert! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

9.3 Deﬁnite Descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

9.3.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

9.3.2 Expressing Numerical Relations . . . . . . . . . . . . . . . . . . . . . 116

9.3.3 Superlatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

9.4 Review Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117


Solutions to Selected Exercises 119

Solutions for Chapter 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

Solutions for Chapter 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

Solutions for Chapter 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

Solutions for Chapter 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Solutions for Chapter 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

Solutions for Chapter 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

Solutions for Chapter 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

Solutions for Chapter 9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143


Chapter 1

Introduction to Logic: Getting

Your Feet Wet

1.1 Logic, Logical Form, and Validity

What is logic? There is much disagreement among professionals about this question. I will

oﬀer two deﬁnitions that, while no doubt debatable, give us something to work with:

• Logic is the art and science of reasoning.

• Logic (especially symbolic logic) is the mathematics of language.

As the very word “logic” suggests (its root is the Greek term “logos,” meaning word),

logic has a lot to do with language. But it also points to something beyond language,

for logic instantiates deep laws of form that appear in many guises in many subjects and

disciplines [5]. One can get into interesting philosophical debates about the nature of these

mathematical forms. (The Platonists, who believe in the independent reality of mathematical objects, have been debating for millennia with the nominalists, who insist that

the “truths” of logic and mathematics are nothing more than linguistic conventions.) We

could also ask which comes ﬁrst, logic or mathematics? The school of logicism, headed by

Bertrand Russell and Gottlob Frege (who themselves contributed in important ways to the

discipline that we now call modern symbolic logic) argued that all of mathematics is based

on laws of pure logic. But it is now generally felt that this ambitious project did not work,

and that it is more likely the other way around; i.e., that logic is a sort of applied mathematics. Unfortunately, we cannot get into such fascinating disputes in an introductory

work like this. For more on the philosophy and foundations of mathematics and logic, see

[1, 5].

We should begin by noting the important distinction between inductive and deductive

logic. In deductive logic we extract information already contained, or implicit, in a set

of premisses (statements that are accepted as given in the context of an argument). We

might accept the premisses for several reasons: they may be known facts whose implications

we are trying to uncover, for instance, or they might be hypotheses or possibilities whose

consequences we wish to explore. But in deductive reasoning we never go beyond the

information contained in the premisses; rather, we rearrange that implicit information into

forms that may be more interesting or useful to us.


Inductive reasoning is said to be ampliative in that it expands upon the information

given in the premisses. Inductive reasoning is usually implicitly or explicitly probabilistic,

and it becomes quantitative in many scientiﬁc contexts. Induction, especially as it is used

in science and engineering, also involves factors that are very diﬃcult to deﬁne formally,

such as creativity, intuition, and the grasp of aesthetic relationships.

Most applications of reasoning in real-life situations involve a mixture of inductive and

deductive methods. However, it is very useful to study deductive methods in isolation,

because of its importance as the foundation for an understanding of all other methods of

reasoning. This course is almost entirely concerned with deductive reasoning.

Now, why do we do symbolic logic? The advantages of symbols in logic are much the

same as their advantages in mathematics: precision, concision, and their ability to express

complex relationships that could not be expressed eﬃciently or at all in words. Also, symbols

have the advantage of greater universality: arguments in different languages that share the same logical form can be expressed in the same symbols.

A lot of what logicians do (though not all) is analyze arguments. For the logician,

an argument is not a dispute or a shouting match; rather, it is an attempt to establish a

conclusion on the basis of given premisses (singular, premiss). The premisses are statements

that are assumed to be given; as logicians we don’t know where they come from.

Warning! Please spell the word “argument” correctly! One mark oﬀ any

assignment or test any time you write “arguement”!

Here’s a simple deductive argument:

If Sundin stays healthy, the Leafs have a chance.

But if Sundin takes his vitamins, he will stay healthy.

∴ If Sundin takes his vitamins, the Leafs have a chance.

(There’s our ﬁrst symbol: “∴”, which means “therefore”.)

Notice that we don’t worry too much about tense in the examples used here; there are,

however, “tensed” logics for which the point in time at which a proposition was uttered is

important.

Now, we need to distinguish between validity and soundness. Consider the following

argument:

If you are in Lethbridge, then you are in Alberta.

If you are in Alberta, then you are in Canada.

∴ If you are in Lethbridge, you are in Canada.

Both premisses and the conclusion happen to be true statements. Substitute the word

“France” for “Alberta” and we would have an argument with false premisses; but the

conclusion would still seem to follow from the premisses. Therefore, there are arguments

that intuitively seem to be valid in the sense that the conclusions somehow follow from the

premisses, but which still have something missing.

To clarify what is missing, we make a crucial distinction between validity and soundness:

Validity: a deductive argument (or argument form) is valid if and only if (or iﬀ ) it is

impossible for its conclusion to be false when its premisses are true.


Soundness: A deductive argument is sound iﬀ it is valid and has true premisses.

This is the modern usage of the terms “valid” and “sound” as they are applied to arguments

or argument forms. Some older books, such as Lemmon’s Beginning Logic [17], use the word

“sound” where we will use “valid;” that terminology is now out of date.

The following rough-and-ready concept of validity is also useful:

An argument is valid if it follows from its premisses by correct application of

certain rules of deduction.

Eventually, we have to ask whether or not the rules of deduction are themselves valid in

the deeper sense given above. In fact, they are, so that these two deﬁnitions of validity

turn out to be the same thing in the long run. Later in the course we will learn other

ways of expressing the notion of validity, all of which are equivalent to the ﬁrst deﬁnition

given above. In a sense, this course is simply about all the ways of deﬁning and testing for

deductive validity, and for validly deriving conclusions from premisses.

The practical value of valid deductive reasoning stems from the fact that if a valid

argument has true premisses, the conclusion is guaranteed to be true as well. Validity

transmits truth from premisses to conclusions.

Surprisingly, there are some arguments that can be valid, for formal reasons, but which

can never be sound. Later on we will see some examples of rather suspicious-looking argu-

ments that are, in fact, valid.

Common parlance often confuses the terms “valid” and “true.” Be careful to avoid

this confusion when you are doing formal logic. Arguments can be valid or invalid, and

propositions can be true or untrue, but there are no such things as “valid” statements or

“true” arguments. As we shall see, however, valid arguments can be transformed by the

techniques of natural deduction into certain kinds of true statements called logical truths.

As logicians, we do not worry about where the premisses came from or whether or not

they are true; we are only concerned about the validity of arguments, not their soundness.

In this respect, logicians might be compared to automotive mechanics, who ensure that your

car runs properly but do not ask where you drive it. As Bertrand Russell once quipped,

“Pure mathematics [which Russell saw as an extension of logic] can be deﬁned as that

subject in which we do not know what we are talking about, nor whether what we are

saying is true.” [20].

Now consider the following argument:

If you are in Hobbiton, then you are in the Shire.

If you are in the Shire, then you are in Middle Earth.

∴ If you are in Hobbiton, you are in Middle Earth.

Compare this to the argument given above about being in Alberta. What is common

to these two arguments is their form, which we could represent in a more or less obvious

way, without even deﬁning the symbols, as follows:

P → Q

Q → R

∴ P → R.
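This form can even be checked mechanically. As a sketch in Python (not part of the text's own apparatus), we can enumerate every assignment of truth values to P, Q, and R and confirm that no assignment makes both premisses true and the conclusion false:

```python
from itertools import product

def implies(p, q):
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

# Check the form: P -> Q, Q -> R, therefore P -> R.
# The form is valid iff, on every row, premisses-true forces conclusion-true.
valid = all(
    implies(implies(p, q) and implies(q, r), implies(p, r))
    for p, q, r in product([True, False], repeat=3)
)
print(valid)  # True: no assignment makes the premisses true and the conclusion false
```

Brute-force checks like this anticipate the truth-table test for validity developed later in the text.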


So we can distinguish between particular arguments in a given natural language (such

as English, French, Russian, etc.), and their forms, which are more universal. One of the

main skills that you will develop in learning logic is the ability to perceive logical form.

Our main business as logicians is to analyze the validity of arguments by determining the

validity of argument forms.

There are diﬀerent kinds of logical form that capture diﬀerent ways in which an argu-

ment can be valid. We can intuitively recognize some kinds of validity by means of what

I shall call “linguistic common sense;” but the powerful tools of logic allow us to test the

validity of arbitrarily complex argument forms.

A deductive form is very much like a computer program which accepts input (the

premisses) and outputs a conclusion. In fact, computers can easily be programmed to

follow the steps of a deductive argument. The validity of a deductive form can be compared

to the reliability of a piece of software. A reliable accounting program, for instance, will

correctly tell you whether you have a proﬁt or a loss in your business so long as you correctly

enter all income and expenses. This is like a valid argument form that can be guaranteed

to return a true conclusion if the premisses are true. Of course, if some of the premisses

are false then there is no telling whether the conclusion will be true or false, just as if you

put incorrect data into your accounting program you might, or might not, get the right

answer back from it. Just as the job of software engineers and programmers is to write

reliable code, the job of logicians (so far as deductive logic is concerned) is to devise valid

arguments.

Another helpful analogy between logic and computing is that we can think of logic as

being like a computer game with levels, where we have to master each level before we can

go on to the next. In this course we cover three main levels, each one of which gives us

more machinery to capture validity. They are:

1. Propositional logic — also known as sentential logic, sentential calculus, or proposi-

tional calculus.

2. Elementary ﬁrst-order predicate logic.

3. Elementary ﬁrst-order predicate logic with identity.

Each level has extra machinery that allows us to represent a kind of validity that cannot

be represented by the lower level.

We begin with propositional logic, which is an absolutely basic discipline that appears

in a hundred useful contexts, and serves as the foundation for all higher-level work in logic.

1.2 How to Teach Yourself Logic

I entitled this section “how to teach yourself logic”, not “how to learn logic” or “how to

be taught logic”, because I want to emphasize that logic, like any other complex set of

inter-related skills, is ultimately self-taught no matter what courses you take or what books

you read.

The secret of learning any complex set of skills is attentive repetition, together with the

cultivation of a calm intensity that does not lose its focus on the desired goal, but which at

the same time awaits improvement with almost inﬁnite patience.


I once had the temerity to take a course called Introduction to Theoretical Physics,

given at the University of Toronto by a great teacher and scholar named Professor Lynn

Trainor. The very ﬁrst equation he presented to us will be the ﬁrst formula of this text as

well:

Understanding = Familiarity.

What Professor Trainor meant was that in learning a complex technical subject such as

logic or physics, one must accept the fact that fully conﬁdent understanding may not come

immediately; rather, it follows from becoming so familiar with the operations of the subject

that they become second nature. In fact, learning logic is very much like learning a language;

in the beginning, you have to work very hard simply to familiarize yourself with the basic

deﬁnitions and rules, some of which at ﬁrst may seem arbitrary and strange. After a while

the structure of the language becomes so obvious that you cannot quite ﬁgure out why you

once could not understand it. Yes, in order to learn logic you have to memorize things,

and then diligently practice their use. Learning a subject involves putting the details of the

subject into your head, and becoming adept in their use by dint of long practice.

There are, however, also some important diﬀerences between learning logic and learning

a language. Although there are things that must be memorized in learning logic, there are

far fewer of them than in learning a language, where one must memorize thousands of words

as a basis for full ﬂuency. Furthermore, as one becomes more experienced and familiar with

the use of logic, the inter-relations between its concepts become more and more obvious,

and mere memorization becomes less important, until it ﬁnally dawns on you that all of

logic is in a deep sense just many diﬀerent ways of saying the same thing.

1.2.1 How To Use This Text

• Work through on paper the demonstrations given in the text.

• There are a generous number of practice exercises, and solutions to about half of them

are given in the back. Try to work as many of these as you can. Try the A’s ﬁrst, and

if you cannot do all of them you have missed something important in the previous

section of the text. When you can do the A’s, move on to the B’s.

• If you do all of the exercises in this text — do them again! — and perhaps again, to

the point at which they become as obvious to you as the streets and alleys of your

home town. I repeat, repetition is the secret of learning.

• You may also ﬁnd it very helpful to try and make up your own problems.

• It is very important to be neat and tidy when writing out your proofs, truth tables,

and so forth. A certain amount of doing logic correctly is nothing more than clerical

accuracy. Writing things out completely, correctly, and legibly will facilitate careful

and accurate thought — and will also encourage our over-worked teaching assistant

to give you better marks.

1.3 Questions for Discussion

1. If all deductive logic is simply a process of reworking information given in the pre-

misses, why aren’t all deductive arguments circular? Or are they?


2. Is logic just a branch of mathematics? Or the other way around? Or neither?

3. Are logic and mathematics discovered or invented?

Chapter 2

Propositional Logic: Symbolization

and Translation

2.1 Notation in Propositional Logic

As explained in Chapter 1, propositional logic is the ﬁrst “level” of logic. In propositional

logic we study the logical behavior of propositions, which are linguistic expressions or ut-

terances that are intended by their user to indicate a fact or state of aﬀairs. Propositions

(also sometimes called statements) are distinct from questions, requests, commands, or ex-

clamations in that they can have a truth value (though as we shall see, it may be sometimes

diﬃcult to say just what that value is). Examples of propositions are “This is an example of

a proposition,” “2 is a number,” “D-Day was June 6, 1944,” and “E = mc².” Propositions

are so-called because in eﬀect the speaker “proposes” that something is the case.

We will represent propositions by letters such as P, Q, R, . . . , P₁, P₂, . . . .

Some logicians will say that “It is raining” and “Il pleut” represent the same proposition

expressed in diﬀerent languages; others (perhaps more commonsensically) will say that these

represent diﬀerent propositions that happen to point to the same state of aﬀairs. We need

not concern ourselves with this subtle distinction in an introductory course.

Propositions can be joined together by certain symbols called propositional connec-

tives. These are also called truth functional connectives because, as we shall eventually see,

the truth value of compound propositions formed by linking simpler propositions together

with these particular connectives is strictly a function of the truth values of the simpler

propositions. Not all ways of forming compound propositions from simpler propositions are

truth-functional. However, truth-functional combinations of propositions are the main ones

we are concerned with here because all that propositional logic is really concerned with are

the truth values of propositions; it is really a calculus of truth values.

We shall be mainly concerned with the following truth-functional connectives: − (not,

or negation); & (and), ∨ (inclusive or), → (the material conditional), and ↔ (the bicon-

ditional). Common translations of these symbols are given in the next section.

This is by no means the smallest set of connectives with which one can do propositional

logic. These connectives have been chosen because they are fairly close to the less formal

logical connective concepts that are used in many natural languages. It is possible to

deﬁne other truth-functional connectives, and we will brieﬂy discuss this in Chapter 5.

Combinations of propositional letters and connectives will be called formulas. Those


strings of symbols that can represent propositions are called well-formed formulas or wﬀs.

It is possible to give a precise recursive deﬁnition of the set of all possible wﬀs, but we will

not do that in this course. (See, for instance, Lemmon’s Beginning Logic [17].) Instead,

we will simply rely on our knowledge of English to tell us which formulas can stand for

complete sentences and which cannot.
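For the curious, the flavor of such a recursive definition can be sketched in code. The representation below (nested tuples rather than strings of symbols) is a hypothetical illustration of the idea, not the precise definition found in Lemmon:

```python
# A toy recursive wff checker for a fragment of propositional notation.
# A formula is either an atomic letter, a negation ('-', f), or a binary
# compound (connective, f, g).
ATOMS = set("PQRS")
BINARY = {"&", "v", "->", "<->"}

def is_wff(f):
    if isinstance(f, str):                    # base clause: atomic letters are wffs
        return f in ATOMS
    if isinstance(f, tuple) and len(f) == 2:  # recursive clause: negation of a wff
        return f[0] == "-" and is_wff(f[1])
    if isinstance(f, tuple) and len(f) == 3:  # recursive clause: binary compounds
        return f[0] in BINARY and is_wff(f[1]) and is_wff(f[2])
    return False                              # nothing else counts as a wff

print(is_wff(("->", ("-", "P"), ("&", "Q", "R"))))  # True
print(is_wff(("&", "P")))                           # False: & needs two conjuncts
```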

Propositions can be atomic or molecular. Atomic propositions are those that cannot

be broken down into truth functional combinations of other propositions, while molecular

propositions are those that can. An example of an atomic proposition is “Today is Tuesday,”

while an example of a molecular proposition is “Today is Tuesday and I have to go to a

meeting.”

2.2 Common English Equivalents for Propositional Connec-

tives

Below I list some of the most commonly encountered English equivalents for the standard

propositional connectives. It is impossible to list all of the possible English expressions that

could be correctly translated into the standard symbols such as &. We have to rely on

our knowledge of the language, what might be called “linguistic common sense,” in order

to work out a speaker or author’s implicit logical sense. Expressions in natural languages

such as English can be ambiguous, imprecise, or capable of multiple meanings and subtle

rhetorical nuances, and may not always have a unique logical translation.

The rules for translation that we present here certainly do not capture all of the con-

ceivable ways that logical structure can be expressed in a natural language. However, many

years of practical experience shows that the rules we present here (or ones very similar to

them) allow us to deﬁne a system of logic that is very useful and which can form a basis

for more advanced logics.

In the examples in this chapter we’ll use the following propositional symbols:

B := “Bob goes to the store”

A := “Alice goes to the store”

C := “Cheryl goes to work”

T := “Tad stays home”

The symbol := means “is deﬁned as . . . ”, and we will use the double-arrow ⇔ to mean “can

be translated as . . . ”.

2.2.1 Negation

To negate a proposition is to deny that it is true or that it is the case. Thus, −P can be

read as “not P,” or “it is not the case that P,” “P is not true,” or “P is false.”

Negation is a unary connective, in that it only applies to the proposition written imme-

diately to its right. This proposition could be a combination of other propositions, however.

Example: Bob does not go to the store ⇔ −B.


2.2.2 Conjunction

To conjoin two propositions is to assert or to hold them jointly. We will write conjunction

in the form P &Q. The two conjoined propositions P and Q are called the conjuncts.

Although conjunction is usually taken to be binary, meaning that it applies to only two

propositions, and while we will usually consider cases where only two propositions are

conjoined, there is no reason in principle why any number of conjuncts cannot be linked in

the same conjunction (although we will not prove that fact formally in this course). The

most general conjunction can be written in the form P₁ & . . . & Pₙ.

Here are some common expressions that would all be translated into symbols as P &Q:

P and Q.

Both P and Q.

P but Q.

P although Q.

P despite Q.

P even though Q.

P whereas Q.

P in spite of Q.

In addition to Q,P.

etc.!

Note that the subtle rhetorical distinctions between “but,” “and,” “whereas,” etc., are all

lost in moving to propositional notation. If these distinctions make a diﬀerence to the logic

of an argument, then they must somehow be expressed clearly enough that they can be

translated into propositional notation.

Example 1: Cheryl goes to work and Tad stays home ⇔ C &T.

Example 2: Alice goes to the store while Tad stays home ⇔ A&T.

Example 3: Bob and Alice both go to the store ⇔ B&A.

Example 4: Bob and Alice go to the store while Cheryl goes to work ⇔ B&A&C.

Note that order does not matter in any of these examples, or, indeed, in any correct

usage of &. Thus, Example 2 could be translated just as well as T &A.

2.2.3 Disjunction

To disjoin two propositions is to say that one or the other, or possibly both, is or are the

case. We write disjunction in the form P ∨ Q. The two disjoined propositions P and Q are

called the disjuncts. Like conjunction, disjunction is usually taken to be binary, but also

like conjunction it is perfectly admissible to write disjunctions of any number of disjuncts.

The most general disjunction can be written in the form P

1

∨ . . . ∨ P

n

.

Here are common expressions that would be translated as P ∨ Q:

P or Q.

Either P or Q.

P or Q or both.


In propositional logic we will almost always take “or” in the inclusive sense, meaning

that if we accept “P or Q” as true, we are willing to grant that either P or Q is true on

their own, or that P and Q may be true together. This is how “or” is usually used in

English and many other natural languages. Here’s a typical example: “Either negligence or

incompetence accounted for the sinking of the Titanic.” Clearly, it might have been both.

Occasionally, it will be useful to express the concept of exclusive disjunction, also known

as exclusive or, or in circuit theory as XOR. Later on we will see how to construct exclusive

“or” in terms of the usual connectives. Here’s an example in which the fact that a disjunction

is exclusive might make a logical diﬀerence: “Every natural number is either even or odd.”

Again, we can usually take “or” to be inclusive unless clearly stated otherwise.

In Latin there are two distinct words for the two senses of “or”: vel for the inclusive
sense and aut for the exclusive sense, so there is less chance of confusing the two
concepts. The usual symbol for inclusive “or,” ∨, was probably adapted from the Latin vel .

Example 1: Bob or Alice goes to the store ⇔ B ∨ A.

Example 2: Either Cheryl goes to work or Bob goes to the store ⇔ C ∨ B.

Again, as with &, order does not matter. Thus, Example 2 could just as correctly have

been written as B ∨ C.

Example 3: Either Bob goes to the store, Alice goes to the store, or Cheryl goes to

work ⇔ B ∨ A ∨ C.

Example 3 shows that, as with &, any number of propositions can be disjoined.
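Anticipating the construction of exclusive “or” mentioned above, here is a sketch in Python (not part of the text's formal apparatus) comparing the two senses; they agree everywhere except the row where both disjuncts are true:

```python
from itertools import product

def inclusive_or(p, q):
    return p or q

def exclusive_or(p, q):
    # One standard construction: (P v Q) & -(P & Q)
    return (p or q) and not (p and q)

# The two senses disagree only on the row where p and q are both True:
for p, q in product([True, False], repeat=2):
    agree = inclusive_or(p, q) == exclusive_or(p, q)
    print(p, q, agree)  # False appears only on the True, True row
```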

2.2.4 Material Implication

It is probably fair to say that all or virtually all types of logic, formal or informal, involve

some notion of logical consequence — the idea that one thing may be taken to follow from

certain others. In propositional logic the consequence relation is called material implication,

and it is crucial to understand its properties, which are sometimes counterintuitive. Material

implication is a strictly binary connective, and is symbolized in this course by →.

In a statement of the form P → Q the proposition P is the antecedent, and Q is the

consequent. The form P → Q is often called the conditional .

Here are common translations of P → Q:

If P then Q.

P only if Q.

Q if P.

If P, Q.

P is a suﬃcient condition for Q.

Q is a necessary condition for P.

P implies Q.

Q is implied by P.

P entails Q.

Q is entailed by P.

Q while P.

While P, Q.

Given P, Q.


Innumerable minor linguistic variations of these forms are possible.

It is very important not to confuse “P only if Q” with “P if Q”. In these two forms

the implication goes in opposite directions.

Example 1: If Bob goes to the store then Tad stays home ⇔ B → T.

Example 2: Bob goes to the store only if Tad stays home ⇔ B → T.

Example 3: Bob goes to the store if Tad stays home ⇔ T → B.

Example 4: In order for Alice to go to the store, it is necessary for Cheryl to go to work

⇔ A → C.

Note carefully that if P → Q, it does not always follow that Q → P. Also, unlike ∨

and &, implication is strictly binary.

2.2.5 The Biconditional

The symbol ↔ is called the biconditional , and is deﬁned such that P ↔ Q if and only if both

P implies Q and Q implies P. It is, of course, not generally true of any two propositions

chosen at random that they entail each other.

Here are the most common translations of P ↔ Q.

P if and only if Q. (This is often abbreviated as “P iﬀ Q.”)

P is a necessary and suﬃcient condition for Q.

P is equivalent to Q.

P implies Q and Q implies P.

Again, there are many possible ways of expressing this notion in natural languages.

Example: Bob goes to the store if and only if Tad stays home ⇔ B ↔ T.

2.2.6 Commutativity and Associativity

Commutativity: An operation is commutative if it returns the same result regardless

of the order in which it is applied. For example, P &Q has the same truth value as

Q&P.

Associativity: An operation is associative if it returns the same result regardless of how

we group the terms to which it is applied. For instance, if we take bracketing to

indicate that the bracketed term is evaluated ﬁrst, it can be shown that (P ∨ Q) ∨ R

has the same truth value as P ∨ (Q ∨ R).

Conjunction, disjunction, and the biconditional are commutative and associative, while

material implication is not.
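These claims can be verified exhaustively, since each connective has only finitely many input rows. A minimal sketch in Python (names of my own choosing):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

bools = [True, False]

# Commutativity: P & Q agrees with Q & P on every row; P -> Q does not agree
# with Q -> P (try p = True, q = False).
and_commutes = all((p and q) == (q and p) for p, q in product(bools, repeat=2))
imp_commutes = all(implies(p, q) == implies(q, p) for p, q in product(bools, repeat=2))

# Associativity: (P v Q) v R agrees with P v (Q v R) on every row.
or_associates = all(((p or q) or r) == (p or (q or r))
                    for p, q, r in product(bools, repeat=3))

print(and_commutes, or_associates, imp_commutes)  # True True False
```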

2.2.7 “Unless”

“Unless,” like “or,” is a word that is used in more than one way in the English language, and

some confusion has grown up over the right way to translate it into propositional symbols.

Like “or,” “unless” can be used in an inclusive or exclusive sense.

Here is an example of the inclusive sense of “unless:”


Johnny will fail the exam unless he studies.

Sadly, we know that Johnny studying for the exam does not exclude the possibility of

Johnny failing the exam.

Here is an example of the exclusive use of “unless:”

We will have a picnic unless it rains.

The very thing we wish to deny is that we are going to have a picnic in the rain, while at

the same time we aﬃrm that we will have a picnic if it does not rain. This is most easily

translated as exclusive “or”.

In the chapter on propositional semantics we will examine the truth table behavior of

the various senses of “or” and “unless”.

It would be very rare that the logic of an argument would depend on whether or not

a usage of “unless” was exclusive. Therefore, in the large majority of cases, you can follow

the advice of most logic books, which is to translate “P unless Q” as P ∨ Q. But check

the context carefully.
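A quick sketch (hypothetical Python, reading P as “we have a picnic” and Q as “it rains”) makes the difference between the two readings of “unless” concrete:

```python
def unless_inclusive(p, q):
    # "P unless Q" read as inclusive disjunction: P v Q
    return p or q

def unless_exclusive(p, q):
    # Exclusive reading: exactly one of P, Q is the case
    return (p or q) and not (p and q)

# The readings disagree only when P and Q are both true
# (a picnic in the rain), which the exclusive reading denies:
print(unless_inclusive(True, True))  # True
print(unless_exclusive(True, True))  # False
```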

2.2.8 Exercises

A

1. Translate the following into symbols:

(a) Tad stays home while Cheryl goes to work.

(b) Tad stays home but Cheryl does not go to work.

(c) If Tad stays home then Cheryl does not go to work.

(d) Either Bob goes to the store or Tad stays home.

(e) Alice going to the store is implied by Bob not going to the store.

(f) Bob does not go to the store only if Alice does.

(g) Cheryl goes to work if Bob goes to the store.

(h) For Cheryl to go to work it is suﬃcient that Tad stays home.

(i) For Cheryl not to go to work it is necessary that Tad not stay home.

(j) Cheryl going to work implies Alice going to the store, and Alice going to the

store implies Cheryl going to work.

(k) Bob does not go to the store if Alice does.

(l) Bob going to the store is equivalent to Tad not staying home.

(m) Tad stays home unless Cheryl does not go to work.

2. Translate the following into clear, grammatically correct, idiomatic English:

(a) B → −A.

(b) B&C.

(c) −B ∨ C.

(d) −C ↔ T.


(e) −A → B.

(f) −A& −T.

(g) −A ∨ −T.

(h) B → T.

(i) T → B.

(j) −T → −B.

2.3 Truth Table Deﬁnitions of the Propositional Connectives

All of the propositional connectives introduced in this chapter can be defined in

terms of their characteristic truth tables. We introduce truth tables now because they will

help us to understand why some of the connectives are translated in certain ways.

Here is the simplest truth table, that of a single proposition P:

P

T

F

This merely expresses the assumption that any proposition is taken to be either deﬁnitely

T or deﬁnitely F. The table is constructed simply by listing all the possible truth values of

the proposition in a column below it.

Tables for propositions containing two variables have four lines, in order to represent

all possible combinations of truth values of the atomic propositions. We can call these the

“input” values. The truth value of the whole expression is written in a column underneath

the connective; call these values the “outputs”.

Here are the truth-table deﬁnitions of the ﬁve most commonly used propositional con-

nectives:

Negation:

− P
F T
T F

Conjunction:

(P & Q)
T T T
T F F
F F T
F F F

Disjunction:

(P ∨ Q)
T T T
T T F
F T T
F F F

Material Implication:

(P → Q)
T T T
T F F
F T T
F T F

Biconditional:

(P ↔ Q)
T T T
T F F
F F T
F T F

It is absolutely necessary for beginners in logic to memorize these basic tables, because

they are the basis for a large part of what we do in this text. We will study truth tables in

much more detail in Chapter Four, where we will see that they oﬀer a powerful method for

the analysis of the validity of arguments.
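These five tables can also be regenerated mechanically, which is a good way to drill them. The sketch below (Python, with names of my own choosing) encodes each connective as a truth function and prints its table:

```python
from itertools import product

# The five connectives as truth functions (True = T, False = F).
connectives = {
    "-":   lambda p: not p,
    "&":   lambda p, q: p and q,
    "v":   lambda p, q: p or q,
    "->":  lambda p, q: (not p) or q,
    "<->": lambda p, q: p == q,
}

def show(name):
    """Print the truth table of the named connective, one row per line."""
    fn = connectives[name]
    rows = [(p,) for p in (True, False)] if name == "-" \
        else list(product((True, False), repeat=2))
    for row in rows:
        cells = ["T" if v else "F" for v in row]
        result = "T" if fn(*row) else "F"
        print(name, *cells, "=>", result)

show("->")  # reproduces the four lines of the conditional's table
```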


2.3.1 Exercises

A

1. If it is true that Bob goes to the store and false that Cheryl goes to work, then what

are the truth-values of the following propositions?

(a) Bob goes to the store and Cheryl goes to work.

(b) If Bob goes to the store then Cheryl goes to work.

(c) Either Bob goes to the store or Cheryl goes to work.

(d) Bob goes to the store if and only if Cheryl does not go to work.

(e) For Bob to go to the store it is suﬃcient that Cheryl not go to work.

2. If P is T and (P &Q) is F, then state the truth values of

(a) P ∨ Q

(b) P → Q

(c) −P ∨ Q

(d) P ↔ Q

(e) Q → P

2.4 Interpreting the Material Conditional

2.4.1 “Ifs” versus “Only ifs”

As we learn more about how deduction works, it will become apparent that the truth table

for → is the backbone of deductive logic. However, this table has some counterintuitive

features which you may only become fully comfortable with after a lot of experience with

logic. We can note right away, however, that the table will help us to understand the various

ways in which → is translated into English. The reading “if P then Q” should be clear:

from the table, we can see that on all the lines where P → Q is T, Q is true whenever P is

T. In other words, for Q to be T, it is suﬃcient for P to be true.

The reading “P only if Q” sometimes confuses students (and a few professors from time

to time as well). However, it makes perfect sense if you notice that on all the lines of the

table where P → Q is T, P is T only on the lines where Q is T as well. Hence, also,

the reading “Q is a necessary condition for P”: given P → Q, P cannot be T unless Q is

T.

The concepts of necessity and suﬃciency are easy to mix up. Concrete examples may

help to make them clear. For a person to be adequately nourished, it is suﬃcient that they

eat East Indian food. However, there are many other kinds of nourishing food, and it is

not necessary to eat East Indian food in order to be adequately nourished. (Lovers of curry might disagree with me

about this.) On the other hand, it is necessary to ingest a certain amount of protein in

order to be adequately nourished.

Here is a nice example (thanks to a student!) which illustrates the proper reading of

P → Q. Consider the following statements:

(1) You will win the lottery only if you buy a ticket.


(2) You will win the lottery if you buy a ticket.

These are two very diﬀerent statements. The ﬁrst is the generally true claim that you have

to buy a ticket in order to have a chance to win; but this, of course, does not guarantee

that you will win the lottery. The second says that you will win if you buy a ticket, which

is certainly false for most lotteries!

Statement (1) above is equivalent to the following:

To win the lottery it is necessary that you buy a ticket.

On the other hand, statement (2) is equivalent to this:

To win the lottery it is suﬃcient that you buy a ticket.

Again, the truth conditions for these two statements are clearly quite diﬀerent.
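The lottery example can be checked against the truth table directly. In the sketch below (hypothetical Python), W stands for “you win the lottery” and T for “you buy a ticket”; the usual outcome of a ticket bought with no win separates the two readings:

```python
def implies(p, q):
    return (not p) or q

# (1) "You will win only if you buy a ticket"  translates as  W -> T.
# (2) "You will win if you buy a ticket"       translates as  T -> W.
w, t = False, True  # the usual outcome: ticket bought, no win
print(implies(w, t))  # True:  reading (1) is consistent with buying and losing
print(implies(t, w))  # False: reading (2) is falsified by buying and losing
```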

2.4.2 The “Paradoxes” of Material Implication

Consider the following garden-variety implication:

If you change your oil regularly then your engine will last longer.

We recognize this as true because we know about the causal relevance of regular oil changes

to engine longevity. Here’s another example:

If a party in a Parliamentary system wins the most seats in an election, it gets

to form the government.

Here the relevance of antecedent to consequent is not causal, but legal. Still, anyone familiar

with the way Parliamentary governments work will recognize the statement as true. But

now consider the following:

If Ottawa is the capital of Canada then Caesar crossed the Rubicon.

Both antecedent and consequent are true, and so by the truth table for “if-then” the whole

statement is true. And yet, there seems to be little if any relevance of antecedent to

consequent. This illustrates a fact that is often called the Paradox of Material Implication:

The truth value of a material conditional has absolutely nothing to do with the

meaning of the antecedent and consequent, beyond their truth values.

This is a “paradox” only because it is an aﬀront to our linguistic common sense. In proposi-

tional logic the only “meaning” the propositional letters P, Q, . . . have is their truth values;

propositional logic is purely an algebra of truth values, and any formula is true or false

depending only on the truth values of the input variables and the logical relations between

them.

The challenge to common sense becomes even more acute when we consider cases in

which the antecedent is false, for the table says that such conditionals are true regardless

of the truth value of the consequent. Thus, both of the following are true statements:

If 2 + 2 = 17π, then Canada is in North America.

If 1 = 0 then 1 = 2.


These examples are an illustration of the following principle of classical two-valued logic:

Ex Falso Quodlibet: from a falsehood anything follows.

A free translation of the Latin phrase is, “from a falsehood, conclude whatever you like.”

This principle is sometimes abbreviated Ex Falso, and we shall encounter it again in the

next chapter.
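The truth-table behaviour of the material conditional described above is easy to check mechanically. Here is a minimal sketch in Python (the function name `implies` is our own label, not standard notation):

```python
def implies(p, q):
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

# True antecedent, true consequent: true (the Ottawa/Rubicon case).
print(implies(True, True))    # True
# A false antecedent makes the conditional true regardless of the consequent.
print(implies(False, True))   # True
print(implies(False, False))  # True
# The only false case:
print(implies(True, False))   # False
```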

There is an advanced branch of logic called relevance (or relevant) logic in which logi-

cians attempt to construct notions of implication which can express relevance of antecedent

to consequent in a way that is closer to ordinary language. However, long experience has

shown that the simple, stripped-down notion of material consequence presented here is on

the whole the most useful notion of implication we have, since it can be adapted in so many

ways.

2.5 Syntactic Conventions

2.5.1 Scope of a Connective

The scope of a propositional connective is the set of formulas that it operates on. For

instance, in −P, the scope of the negation sign is just P; in −P &Q, the scope of the

sign of conjunction is −P and Q; and in −(P &Q) the scope of the negation sign is all of

(P &Q).

2.5.2 Main Connective

The main connective of a propositional formula is the connective whose scope is the whole

formula. For instance, in (P ∨ Q) → (P &Q), the → is the main connective; while in

−(P &Q) the main connective is the negation sign. As these examples indicate, scope can

be ﬁxed by brackets if there is any danger of ambiguity.
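The effect of scope on evaluation can be seen directly if we let Python's `not`, `and`, and `or` stand in for −, &, and ∨ (a sketch; the variable names are arbitrary):

```python
P, Q = True, False

# Scope of the negation is only P:
narrow = (not P) and Q        # corresponds to −P & Q
# Scope of the negation is all of (P & Q):
wide = not (P and Q)          # corresponds to −(P & Q)

print(narrow)  # False: −P is false, so the conjunction is false
print(wide)    # True: P & Q is false, so its negation is true
```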

Is there a main connective in a conjunction or disjunction with more than two terms?

Since conjunction and disjunction are fully associative, it would seem that no one connective

in (P₁ & . . . & Pₙ) or (P₁ ∨ . . . ∨ Pₙ) should be privileged. This question illustrates the

drawback of the usual notation we use for disjunction and conjunction, which (as George

Spencer Brown pointed out [5]) are usually represented as if they were binary when in

fact they are not restricted to two variables. The highly ﬂexible notation used by Spencer

Brown in his calculus of indications [5] gets around this problem elegantly. It is also possible,

although we will not rely on it here, to represent conjunctions and disjunctions respectively

by writing something like this:

&(P₁, P₂, . . . , Pₙ) = ⋀ᵢ Pᵢ

and

∨(P₁, P₂, . . . , Pₙ) = ⋁ᵢ Pᵢ,

where the connectives in front operate on the whole list of propositions given in brackets

and thus, in eﬀect, are the main connectives for the whole conjunction or disjunction.

In Chapter 5 we will introduce the highly ﬂexible Reverse Polish Notation (RPN), and

show how it can be used to streamline the evaluation of complex formulas, especially those

containing conjunctions or disjunctions with more than two terms.
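Programming languages face the same notational issue, and many adopt exactly the list-prefix solution sketched above: in Python, for example, the built-in functions `all` and `any` act as main connectives over a whole list of truth values rather than as binary operators. A brief illustration:

```python
values = [True, True, False, True]

# all() behaves as an n-ary conjunction: true only if every value is true.
print(all(values))   # False

# any() behaves as an n-ary disjunction: true if at least one value is true.
print(any(values))   # True

# Degenerate cases: the empty conjunction is true, the empty disjunction false.
print(all([]), any([]))  # True False
```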

2.5.3 Bracketing

There are almost as many bracketing conventions as there are logic books. In this book we

will use a very simple convention whose sole purpose is to avoid ambiguity.

Brackets indicate the scope of a connective. Bracket oﬀ a sub-formula if there is any

possibility of confusion about which connective applies to which part of the formula. For

instance, in (P &Q) → R, the & connects P and Q, and the → connects the whole

subformula (P &Q) with R. Brackets can be nested, and the general rule is that one

evaluates inner brackets ﬁrst, and then works outwards. Putting brackets around a whole

formula is optional.

Example: In P &(Q ∨ R), the subformula Q ∨ R is considered as a unit, and this

formula would be translated as “P and either Q or R.” By comparison, (P &Q) ∨ R would

be translated as “Either P and Q or R;” or, perhaps more clearly as “R or both P and Q.”
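That the two bracketings genuinely differ can be confirmed by evaluating both under one assignment of truth values (a sketch in Python, with `and` and `or` standing in for & and ∨):

```python
P, Q, R = False, True, True

a = P and (Q or R)   # P & (Q ∨ R)
b = (P and Q) or R   # (P & Q) ∨ R

# Same letters, different bracketing, different truth value.
print(a, b)  # False True
```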

Be careful! In English and many other natural languages, the scope of logical con-

nectives is ambiguous and must be understood from context and a knowledge of common

usage.

Example: “If Kerry wins then Bush loses and Cheney has to retire” would usually

be understood to be bracketed as follows: “Kerry wins → (Bush loses & Cheney has to

retire).” Strictly speaking, it could also be read as “(Kerry wins → Bush loses) & Cheney

has to retire.” However, few experienced speakers of English would read it this way. In

Chapter Four we will learn how to show that these two versions of the sentence are not

logically equivalent. I discuss further examples like this in the section on Combinations

below.
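We can anticipate that result by brute force: enumerate every assignment of truth values to the three atomic sentences and compare the two readings. A sketch in Python (`K`, `B`, `C` are our abbreviations for "Kerry wins," "Bush loses," and "Cheney has to retire"):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when antecedent true, consequent false.
    return (not p) or q

# Assignments on which the two readings disagree:
differ = [
    (K, B, C)
    for K, B, C in product([True, False], repeat=3)
    if implies(K, B and C) != (implies(K, B) and C)
]

# They disagree exactly when K is false and C is false: the first reading
# is then vacuously true, while the second is false because C is false.
print(differ)
```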

Remember that we allow unlimited associativity for conjunction, disjunction, and the

biconditional. This means that we can write conjunctions of n formulas as (P₁ & . . . & Pₙ),

disjunctions of n formulas as (P₁ ∨ . . . ∨ Pₙ), and equivalences of n formulas as

(P₁ ↔ . . . ↔ Pₙ), without having to bracket off pairs of conjuncts or disjuncts. (There is an exception

to this when we do truth tables, which will be explained in Chapter 5.)

2.6 Combinations

Truth-functional combinations of propositions can be created in literally an inﬁnite variety

of ways. As indicated in the discussion of bracketing above, a sensitivity to context and

conventional usage is sometimes required to translate combined expressions either from a

natural language to a formula or the reverse.

Example 1: If Bob and Alice go to the store then Tad doesn’t stay home ⇔ (B&A) →

−T.

Example 2: Either Bob goes to the store or Tad stays home and either Alice goes to

the store or Cheryl goes to work ⇔ (B ∨ T) &(A ∨ C).

Example 3: Either Bob goes to the store only if Alice goes to the store or else Cheryl

goes to work only if Tad stays home ⇔ (B → A) ∨ (C → T).

Propositional logic can express relationships that would be far too complicated to state

conveniently in ordinary language.

Example: The expression (P₁ & . . . & Pₙ) → (Q₁ ∨ . . . ∨ Qₘ) is a perfectly legitimate

formula, and yet few would try to say anything like this in words if n and m were much

larger than, say, 2 or 3.

2.6.1 Exercises

A

1. Using the propositional letters deﬁned at the beginning of this chapter, translate the

following English sentences into formulas:

(a) If Bob goes to the store and Tad stays home then Alice goes to the store only if

Cheryl goes to work.

(b) Bob and Alice don’t go to the store if and only if Tad doesn’t stay home.

(c) If Tad stays home then Cheryl goes to work but Cheryl goes to work only if Tad

stays home.

(d) When Bob goes to the store, either Alice also goes to the store or Cheryl doesn’t

go to work.

(e) Either Bob or Alice goes to the store if Tad doesn’t stay home.

2. Translate the following formulas into clear, grammatically correct, idiomatic English:

(a) (A&B&C) → −T

(b) (A&B) ↔ (C ∨ −T)

(c) A → (B → C)

(d) (A → B) → C

(e) (A → B) → (C → T)

2.7 Negated Forms

It is not always obvious how to translate English expressions involving negations that op-

erate over or within conjunction, disjunction, or elimination. Here are a few of the more

frequently encountered negated forms:

Neither P nor Q: −(P ∨ Q).

Either −P or −Q: −P ∨ −Q

Not both P and Q: −(P &Q).

Both not P and not Q: −P & −Q.

P does not imply Q: −(P → Q).

P implies not-Q: P → −Q.

P is not equivalent to Q: −(P ↔ Q).

P is equivalent to not-Q: P ↔ −Q.
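The first four forms above pair off into De Morgan's laws: "neither P nor Q" is equivalent to "both not P and not Q," and "not both P and Q" is equivalent to "either −P or −Q"; but "neither" and "not both" are distinct claims. A brute-force check in Python:

```python
from itertools import product

rows = list(product([True, False], repeat=2))  # all four truth assignments

# De Morgan: "neither P nor Q" agrees with "both not P and not Q" everywhere.
assert all((not (p or q)) == ((not p) and (not q)) for p, q in rows)
# Likewise "not both P and Q" agrees with "either not P or not Q".
assert all((not (p and q)) == ((not p) or (not q)) for p, q in rows)
# But "neither" and "not both" come apart (e.g. at P true, Q false).
assert any((not (p or q)) != (not (p and q)) for p, q in rows)
print("all checks passed")
```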

2.7.1 Exercises

A

1. Using the propositional letters deﬁned at the beginning of this chapter, translate the

following English sentences into formulas:

(a) Neither Bob nor Alice goes to the store.

(b) Either Cheryl doesn’t go to work or Tad doesn’t stay home.

(c) If Tad stays home it does not necessarily mean that Alice goes to the store.

(d) Cheryl going to work is insuﬃcient to guarantee that Tad stays home.

(e) Cheryl goes to work only if Tad doesn’t stay home.

2. Translate the following formulas into clear, grammatically correct, idiomatic English:

(a) −(−A ∨ B)

(b) −(B → (−C ∨ T))

(c) −(A ↔ B)

(d) A ↔ −B

(e) −(A ∨ B)

2.8 Intensional Contexts

Intensional contexts are statements that report on how a proposition is represented to or in

a person or thing capable of representation. For instance, they might report on someone’s

state of mind or utterances. Here are typical examples:

Joe believes that Santa is real.

Joe wishes that Santa could be real.

“Santa came last night,” said Joe.

Science proves that Santa does not exist.

How should we translate these? The key is to see that the truth value of the whole statement

— for instance, whether or not it is true that Joe said that Santa came last night — is

entirely independent of the truth value of the statement itself that Joe made. Therefore, the

connection between Joe’s statement itself and the claim that he made a certain statement is

not truth-functional. If we are given any statements like the four examples above to translate

into propositional symbols, we have no choice but to treat them as atomic propositions, and

give them simply a single propositional letter such as P.

2.9 Alternative Notation for Propositional Connectives

Virtually every elementary calculus text published anywhere in the world uses nearly the

same notation for the basic operations of the theory. Logic, however, has not achieved the

same standardization, which is a reﬂection of the fact that logic is still a subject that is

very much under development.

Here are some other notations that one might encounter for the symbols we use in this

text.

−P is often written as ∼P, ¬P, or P̄. (The symbol ∼ is called the tilde.)

P → Q is frequently written as P ⊃ Q. (The symbol ⊃ is pronounced “horseshoe.”)

P &Q is often written as P ∧ Q and sometimes as P · Q, or occasionally as PQ.

P ∨ Q is often written as P +Q.

P ↔ Q is sometimes written as P ≡ Q.

2.10 Exercises on Chapter 2

B

[Under construction]

Chapter 3

Propositional Logic: Introduction

to Natural Deduction

We begin by setting forth rules and procedures for what logicians call natural deduction.

Some students may well ﬁnd it to be decidedly unnatural at ﬁrst (though familiarity will

bring ease of use). However, this system of deductive reasoning is, in fact, a very useful

distillation of the most important deductive principles that we use in ordinary reasoning.

Moreover, these principles represent more than merely a sort of anthropology of reasoning: they are

normative, in the sense that they are an attempt to codify, clarify, and make more precise

our day-to-day reasoning. (A normative rule is one that says what we should do; for

instance, “Honour thy father and mother,” or “Change your oil every 3000 km.”) The sort

of logic we set forth here by no means exhausts all possible useful logics. The justiﬁcation

for the approach we take, out of the many approaches that are mathematically possible, is

pragmatic, in that extensive experience shows that what we set forth here can be applied

to a great deal of the day-to-day reasoning that we actually carry out, and can also serve

as a very useful foundation for further developments.

Logicians call this way of reasoning natural deduction because it is meant to come as

close as one reasonably can to the way people tend to reason deductively in ordinary natural

languages (such as English). There are also numerous axiomatic systems of logic that are

focused on logical economy and generality. Such axiomatic systems play an important role

in research into the basis of mathematics and logic, and in discovering new systems of logic

that might be useful. However, a natural deductive approach designed more for ease of

use and closeness to intuitive methods of reasoning is much more helpful for beginners in

logic, especially those who might want to learn logic mainly because of its numerous useful

applications in other ﬁelds such as philosophy, mathematics, computing, and science.

We shall symbolize the logical forms of arguments by sequents, which are expressions

of the general form

P₁, P₂, . . . , Pₙ ⊢ Q, (3.1)

where the propositions Pᵢ are the premisses (the statements that are taken for granted in

the context of the problem), and Q is the conclusion (the proposition to be arrived at by

valid deductive steps from the premisses).

The symbol ⊢ (called the turnstile) can be read as “therefore,” “proves,” or (more

fussily) as “syntactically entails.” If P ⊢ Q we can also say that Q is derivable from P.

It amounts to the claim that the stated conclusion can be derived validly from the given

premisses by accepted rules of deduction.

Since the premisses of a sequent are taken to be held conjointly, a sequent of the form

3.1 above is just the same thing as saying

(P₁ & . . . & Pₙ) ⊢ Q. (3.2)

Note carefully that in general deductive entailment goes only one way; that is, given

that we can show P ⊢ R, we cannot automatically conclude that R ⊢ P. There is an

important class of formulas, however, for which the entailment goes both ways — each

formula entails the other. In such a case, we will write a sequent of the form

P ⊣⊢ Q. (3.3)

Such a formula expresses an interderivability between P and Q. To establish interderivabil-

ity we generally have to carry out two proofs, one showing P ⊢ Q, and the other showing

Q ⊢ P, and two such proofs are often quite different in form.

It will be shown that some formulas can be derived without any premisses left over.

The sequents for such formulas can be written in the form

⊢ P, (3.4)

which can be read “Provable P,” or “P is a theorem.” Theorems are propositions that can

be asserted without the qualiﬁcation of any premisses at all. Such theorems are sometimes

referred to by the grand name of logical truths.

There is no reason we can’t concatenate sequents together to get something like this:

P, Q ⊢ R ⊢ S ⊢ T, (3.5)

in which a sequence of conclusions is derived one after the other. This reﬂects common

linguistic practice, where it would not be uncommon to make an argument of the form “P

and Q, therefore R; therefore S, etc.” However, in this book we will be almost entirely

concerned with sequents with only one turnstile.

The proof or derivation of a sequent will consist of a sequence of lines, beginning with

the statement of the premisses as assumptions, and ending (one hopes) with the desired

conclusion. Each line has to show four things: the dependencies (if any), the line number,

the proposition asserted on the line, and the justiﬁcation for asserting that proposition.

The dependencies are the line numbers of the assumptions on which the line is based;

if the line itself is merely an assumption, the dependency is the number of the line itself. If

the line is a theorem there will be no dependencies. We shall say that a proposition depends

upon the assumptions listed to the left of it; or equivalently, that the assumptions to the

left of a proposition in a line govern that proposition.

The justiﬁcation is a statement of the rule used to get the line, together with the

numbers of the previous lines (if any were needed) from which the line was derived by

means of the indicated rule.

The meaning of these terms will be clearer when we consider examples of proofs.

3.1 Basic Rules of Derivation

The following list of rules is very similar to that in Lemmon’s Beginning Logic [17], except

that we will derive MT instead of taking it as basic. Also, I have made more explicit

certain assumptions that Lemmon uses.

In the following, the proposition letters P, Q, etc., may denote either atomic or molec-

ular formulas.

1. Rule of Assumption (A)

We may introduce any proposition anywhere in a proof that we wish — so long as we

never forget that it is an unproven assumption!

Dependency: The assumption so introduced depends only on itself, and the line num-

ber at which it is introduced appears in the assumption column to its left.

Form of justiﬁcation: A

2. Modus Ponens (MP)

From P and P → Q we can derive Q.

Dependency: The conclusion Q depends upon whatever assumptions P and P → Q

depend upon.

Form of justiﬁcation: m, n MP, where m and n are the lines for the premisses.

3. Double Negation (DN)

From P we can derive −−P, or from −−P we can derive P.

Dependency: The double negation of a line in a proof always depends upon the same

assumption as the line double-negated.

Form of justiﬁcation: m DN, where m is the line double-negated.

4. &-Introduction (&I)

Given P and Q separately, we can derive P &Q or Q&P.

Dependency: the conclusion depends on whatever assumptions P and Q depended

upon.

Form of justiﬁcation: m, n &I, where m and n are the lines for the premisses.

5. &-Elimination (&E)

Given the conjunction P &Q, we can derive either of the conjuncts P or Q separately.

Dependency: the conclusion depends upon whatever assumptions the original con-

junction depended upon.

Form of justiﬁcation: m &E, where m is the line containing the conjunction used.

6. v-Introduction (vI)

Given P, we may derive P ∨ Q or Q ∨ P, where Q is any proposition whatever.

Dependency: The conclusion depends upon whatever assumptions the premiss de-

pended upon.

Form of justiﬁcation: m vI, where m is the line of the proposition “or-ed” onto.

7. v-Elimination (vE)

From a disjunction P₁ ∨ . . . ∨ Pₙ, we may conclude Q, so long as we can find a

derivation of Q from all n of the disjuncts separately.

Dependency: the conclusion Q depends upon any assumptions that the starting dis-

junct depended upon, together with any additional assumptions that had to be in-

troduced (but which could not be again discharged) in the derivations of Q from the

individual disjuncts.

General form of justification: k, l₁, m₁, . . . , lₙ, mₙ vE, where k is the line of the starting

disjunction, lᵢ is the first line of the i-th subproof, and mᵢ is the last line of the

i-th subproof. See examples below.

8. Conditional Proof (CP)

If Q follows from the assumption P, then we can conclude P → Q.

Dependency: The conclusion P → Q depends upon whatever assumptions P depended

upon, together with any additional assumptions that had to be introduced (but which

could not again be discharged) in the derivation of Q from P.

Form of justiﬁcation: m, n CP, where m is the line number of the antecedent of the

conditional to be proven, and n is the line number of the consequent.

9. Reductio ad Absurdum (RAA)

If from the assumption P (or −P) we can derive a contradiction of the form Q& −Q,

we can conclude −P (or P respectively).

Dependency: The conclusion depends upon any assumptions that governed any lines

used in the derivation of the contradiction, but which were not themselves discharged

in that derivation.

Form of justiﬁcation: m, n RAA, where m is the line at which we assume the nega-

tion of the conclusion to be established, and n is the line at which we arrive at the

contradiction that negates the initial assumption.

10. Reﬂection (Rﬂ)

A very basic assumption of the sort of natural deduction we use here, which Lemmon

does not explicitly state himself but which he certainly uses, is that any proposition

may be taken to follow from itself. (If it is true that my name is Kent, then it would

be an odd sort of logic in which I could not immediately though perhaps redundantly

infer that my name is Kent.) This allows us to cite a single proposition as both the

beginning and end of certain kinds of sub-proofs in vE, RAA, and CP, and enormously

streamlines certain derivations. I’ll call this the principle of Reﬂection, just to give it

a name; though we need not cite it explicitly when it is used.

11. Reiteration (R)

Any line may be reiterated or repeated anywhere it is needed in a proof.

Dependency: The reiterated line has the same dependencies as the line it was copied

from.

Form of justiﬁcation: m R, where m is the number of the line being reiterated.

12. Deﬁnition Introduction (DfIntro)

If P := Q, then P ⊣⊢ Q.

3.1.1 Alternative Names for the Basic Rules.

MP is also known as Detachment, Modus Ponendo Ponens, or Conditional Elimination.

&I is also sometimes called Conjunction.

&E is also sometimes called Simplification.

vI is also sometimes called Addition.

RAA is sometimes called Indirect Proof or Negation Introduction.

CP is sometimes called Conditional Introduction, → Intro, or ⊃ Intro.

3.2 Comments on Basic Rules

Some students may feel that it is a little ﬁshy to be allowed to assume anything we want at

any time. The fact is that an assumption is nothing more than that — an unjustiﬁed

adoption of a proposition, perhaps merely for the sake of argument. There is nothing

logically illegitimate about doing this, so long as you remember that it was just an

assumption. Adopting an assumption is rather like borrowing money on a credit card

that has no credit limit, but on which all debts must ultimately be paid. Once we

have adopted an assumption and use it in the subsequent derivation, everything from

that point on is dependent on the assumption and cannot be asserted unconditionally.

Assumptions can be discharged by CP, RAA, and vE; this is analogous to the process

of paying back borrowed money.

MP is the pattern for all deductive inferences. It expresses the fundamental notion of

logical consequence, that one proposition may follow from another.

If you cannot think of any other way to get a letter into the proof, vI may be the best way

to do it. To some people vI seems a bit like cheating, since it allows us to introduce any

proposition whatsoever into a derivation whenever we need it. You should be able to

see that nothing illegitimate is going on, however, if you can see that the proposition

added to or “or-ed” onto P is not being asserted categorically. An example might

help: if it is true that my name is Kent, it is also safe (although perhaps odd) to say

that either my name is Kent or the moon is made of green cheese.

vE allows us to derive conclusions from disjunctions. Although the general statement of

the rule given above allows us to derive conclusions from disjunctions containing any

number of terms, usually in the propositional logic section of this course it will be

applied to disjunctions of the form P ∨ Q containing only two terms. When we do

predicate logic we will use a rule called Existential Elimination (EE) which is, in

eﬀect, a form of vE over arbitrary numbers of disjuncts. Students often ﬁnd vE to be

diﬃcult at ﬁrst, but the diﬃculties are mostly notational. Practice will make perfect.

RAA is one of the most powerful and useful rules. If you cannot think of any other way

of getting to the desired conclusion, introduce as an assumption the negation of the

desired conclusion and then try to get a contradiction. Sometimes even very well

educated people are surprisingly reluctant to use RAA, since they are uncomfortable

assuming for the sake of argument a proposition that they know or wish to believe is

false. This is unfortunate, since the ability to freely entertain hypotheses, no matter

how outrageous they may seem, is one of the most useful tools of thought. RAA would

not be valid in a logic that was not bivalent (i.e., a logic in which not all propositions

were either deﬁnitely true or deﬁnitely false).

Reiteration is, of course, a consequence of Reﬂection. Strictly speaking, Reiteration is

not needed in a system like ours because one can always cite a line that has already

been written (so long as you don’t forget to also cite the assumptions upon which

it depends) anywhere else in the proof. Despite this, beginners will sometimes ﬁnd

Reiteration to be helpful, and so I have allowed it as a rule even though it is redundant.

Think of it as analogous to training wheels on a bicycle, which you will come to do

without as your skill increases.

It will become increasingly evident as the course progresses that these rules of deduction

are by no means the simplest rules upon which a valid system of propositional logic

could be based. These rules are chosen because they are fairly close to the way people

actually reason in ordinary natural languages; they are therefore designed mainly for

ease of use, not logical economy. It should also become evident that other rules could

just as easily be taken as basic. For instance, the reader might ﬁnd it a good exercise

to show that we could have taken Disjunctive Syllogism (DS, deﬁned below) as a basic

rule instead of MP, introduced → as a deﬁned symbol (taking the rule of Implication,

also given below, as a deﬁnition instead of a derived result), and then derived MP

using DS.

3.3 Simple Examples of the Basic Rules

In this section I state and prove several sequents which demonstrate the use of the Basic

Rules in especially simple and clear ways.

1 (P &Q) → R, P, −−Q ⊢ R (This uses MP, DN, and &I.)

1 (1) (P &Q) → R A

2 (2) P A

3 (3) −−Q A

3 (4) Q 3 DN

2,3 (5) P &Q 2,4 &I

1,2,3 (6) R 1,5 MP

This derivation illustrates the basic pattern for all natural deductions. The centre

column lists the propositions that lead one after the other to the conclusion; the numbers

in brackets are the line numbers. The left-hand column lists the dependencies, that is,

the assumptions that the proposition in the centre column depends upon. The right hand

column lists the rule that justiﬁes the proposition written in that line, together with the

lines to which the rule was applied.

Those who have some familiarity with accountancy can think of the left column as

analogous to a tabulation of debit, and the centre column as credit. Making an assumption

is like borrowing money; the assumptions we make allow us (by means of the rules) to

assert other propositions, but we must keep track of the fact that we are relying on certain

assumptions just as we must keep track of money we owe. Debits and credits must balance

in the end. If we discharge all the assumptions and still have a proposition left over, that

proposition counts as a theorem, that is, a proposition that can be held unconditionally —

like a property with no mortgage on it!

2 P &Q, P → R ⊢ R (This uses &E and MP.)

1 (1) P &Q A

2 (2) P → R A

1 (3) P 1 &E

1,2 (4) R 2,3 MP

3 P ⊢ P ∨ (Q&R) (This uses vI.)

1 (1) P A

1 (2) P ∨ (Q&R) 1 vI

4 P ∨ Q, P → R, Q → R ⊢ R (This uses vE and MP.)

1 (1) P ∨ Q A

2 (2) P → R A

3 (3) Q → R A

4 (4) P A (vE)

2,4 (5) R 2,4 MP

6 (6) Q A (vE)

3,6 (7) R 3,6 MP

1,2,3 (8) R 1,4,5,6,7 vE

This is one of the simplest examples I have been able to think of that demonstrates the

use of vE. In some treatments of propositional logic, this sequent is actually called “v-

Elimination.” Here we will call it inference pattern Convergence, since variations on it will

occur often enough that it will be useful to give it a name.

The whole idea of vE is that we use it to establish the claim that from either P or Q

we can get to the desired conclusion. It is like a claim that we can get to Calgary from

Lethbridge via either Highway 2 or the Vulcan Highway. To demonstrate the claim, you

have to test both routes — because the claim is that either route works. This is why we use a

subproof for each disjunct in the starting disjunction. Each subproof starts by temporarily

assuming the disjunct to be tested; when the subproof is done, that temporary assumption

is discharged.

Notice also that in lines 4 and 6 I annotate the assumptions that start the subproofs

with an indication of the rule that I am trying to use in the subproof. This is an entirely

optional step, and whether or not you do it does not aﬀect the validity of the proof. However,

it can help you keep track of which assumption is part of which subproof; this is especially

helpful when you use nested subproofs. (See the proof of DS below.) I have borrowed this

device from Jacquette [13].

5 P ⊢ Q → P (This demonstrates CP and R.)

1 (1) P A

2 (2) Q A (CP)

1 (3) P 1 R

1 (4) Q → P 2,3 CP

As in sequent 4, I have marked the assumption in line 2 with the rule that I intend to

use in the subproof.

6 P → Q, P ⊢ Q (This demonstrates RAA.)

1 (1) P → Q A

2 (2) P A

3 (3) −Q A (RAA)

1,2 (4) Q 1,2 MP

1,2,3 (5) Q& −Q 3,4 &I

1,2 (6) Q 3,5 RAA

Of course, this sequent is merely the statement of MP. But it demonstrates in a very

clear way how to use RAA.

3.3.1 Exercises

A

1. Find natural deduction proofs for the following sequents.

(a) P, P → −Q ⊢ −Q

(b) −−P, P → Q ⊢ Q

(c) P &Q, P → R ⊢ R

(d) P, P → R, P → S ⊢ R&S

(e) −−P ⊢ P ∨ R

B

1. Construct proofs of the following sequents.

(a) P → (Q&R), P ⊢ R ∨ S

(b) P → Q, R → Q, P ∨ R ⊢ Q ∨ S

(c) P ∨ Q, Q → R ⊢ P ∨ R

(d) P, (P &Q) → R ⊢ Q → R

(e) P → Q, Q → R, −R ⊢ −P (Hint: try RAA)

3.4 Some Useful Sequents

In this section we derive from the Basic Rules a series of sequents that are very widely used

throughout logic, and which will be found to be very useful in our future propositional and

predicate natural deductions. Many of these sequents have common names, and you should

learn these names and acquire the ability to recognize these forms in their innumerable

variations.

3.4.1 One-Way Sequents

7 P → Q, −Q ⊢ −P Modus Tollens (MT)

1 (1) P → Q A

2 (2) −Q A

3 (3) P A (RAA)

1,3 (4) Q 1,3 MP

1,2,3 (5) Q& −Q 2,4 &I

1,2 (6) −P 3,5 RAA

MT is one of the most widely used rules in deductive logic.

Verbal example: If Bob goes to work then Alice stays home. But Alice did not stay

home. Therefore, Bob did not go to work.
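MT can also be checked semantically, anticipating the truth-table methods of Chapter 5: a sequent is valid just in case no assignment of truth values makes all the premisses true and the conclusion false. A brute-force sketch in Python (the helper names `implies` and `valid` are ours):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when antecedent true, consequent false.
    return (not p) or q

def valid(premisses, conclusion, n):
    """True if every assignment making all premisses true makes the conclusion true."""
    return all(
        conclusion(*row)
        for row in product([True, False], repeat=n)
        if all(prem(*row) for prem in premisses)
    )

# Modus Tollens: P → Q, −Q ⊢ −P
print(valid([lambda p, q: implies(p, q), lambda p, q: not q],
            lambda p, q: not p, 2))          # True

# The converse error ("affirming the consequent") is not valid:
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p, 2))              # False
```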

8 P ∨ Q, −Q ⊢ P Disjunctive Syllogism (DS)

1 (1) P ∨ Q A

2 (2) −Q A

3 (3) P A (vE)

4 (4) Q A (vE)

5 (5) −P A (RAA)

2,4 (6) Q& −Q 2,4 &I

2,4 (7) P 5,6 RAA

1,2 (8) P 1,3,3,4,7 vE

Example in Words: Either Bob or Alice paid the phone bill; however, it was not Bob.

Hence, it was Alice who paid the phone bill.

The proof of DS will repay careful study, since it illustrates the use of vE and RAA. The

object is to derive P from P ∨ Q given the condition −Q. The ﬁrst disjunct is simply the

desired conclusion; from the assumption of the second disjunct we can derive a contradiction

and thereby prove P — or, indeed, anything we want. The conclusion follows by vE. In

the justiﬁcation for the conclusion at line (8), 1 is the line at which the premiss P ∨ Q

was introduced; 3 is the line at which the working assumption P was introduced and also

the line at which it is discharged (since P is the very conclusion we are aiming at); 4 is the

line at which the second working assumption was introduced and 7 the line at which it was

discharged. If we were working with a disjunction with more than two terms, there would

have to be two line numbers for each working assumption introduced and then discharged;

but we shall rarely use vE on disjunctions with more than two terms.

Note the use of Reﬂection in line 3, in which P is both assumption and conclusion of

a sub-proof. Reﬂection also allows us, for example, to establish the theorem ⊢ P → P as

follows:

9 ⊢ P → P Law of Identity (Id)

1 (1) P A

(2) P → P 1,1 CP

Example in Words: Paul Martin is Prime Minister of Canada only if Paul Martin is

Prime Minister of Canada. More generally, every proposition implies itself; this theorem

could be just as well called “self-implication”.

This example illustrates the fact that the utter triviality of many of the basic rules of

logic becomes apparent when they are put into words. It is fascinating how powerful these

apparently “trivial” inference rules can be, however.

Here is another simple sequent that can sometimes come in handy:

10 −P ⊢ −(P &Q) Denial (Den)

1 (1) −P A

2 (2) P &Q A (RAA)

2 (3) P 2 &E

1,2 (4) P & −P 1,3 &I

1 (5) −(P &Q) 2,4 RAA

The reader will notice that the peculiar way in which we used RAA in the proof of

DS would apparently, in principle, allow us to deduce any proposition whatsoever from

a contradiction. This is, indeed, the case, and this phenomenon, which we have already

encountered in the last chapter, is important enough to deserve a name of its own: Ex falso

quodlibet, or more brieﬂy Ex Falso. This Latin phrase literally means “from a contradiction

(or falsehood), whatever. . . ”, or “from a contradiction anything follows.”

11 P, −P ⊢ Q Ex Falso Quodlibet (ExF)

1 (1) P A

2 (2) −P A

3 (3) −Q A (RAA)

1,2 (4) P & −P 1,2 &I

1,2 (5) Q 3,4 RAA

Example in Words: If 2 + 2 = 4 and 2 + 2 ≠ 4, then Captain Kangaroo is the Prime

Minister of Australia.
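Semantically, ExF is valid for a vacuous reason: no row of a truth table makes both P and −P true, so no row can make the premisses true and the conclusion false. A quick Python sketch (my own illustration) makes this concrete:

```python
# Ex Falso Quodlibet: P, -P |- Q is valid because a counterexample
# would need a row in which both premisses are true -- and none exists.
from itertools import product

rows = [(p, q) for p, q in product([True, False], repeat=2)
        if p and not p]  # rows satisfying both P and -P
print(rows)  # [] -- the premisses are jointly unsatisfiable
```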

Here is a slightly diﬀerent version of ExF that helps to illustrate how the methods work:

12 P & −P ⊢ Q

There are at least two ways to prove this. A “learner’s permit” method uses Reiteration:

1 (1) P & −P A

2 (2) −Q A (RAA)

1 (3) P & −P 1 R

1 (4) Q 2,3 RAA

It can be done more compactly as follows:

3.4. SOME USEFUL SEQUENTS 33

1 (1) P & −P A

2 (2) −Q A (RAA)

1 (3) Q 2,1 RAA

Note that line 1 is only “derived” from line 2 in the sense that it can be asserted when

2 is asserted simply because it has already been asserted. But that is enough to make RAA

work. Ex Falso will seem less odd when we come to truth table semantics in Chapter 5.

13 P → Q ⊢ P → (P &Q) Absorption (Abs)

1 (1) P → Q A

2 (2) P A

1,2 (3) Q 1,2 MP

1,2 (4) P &Q 2,3 &I

1 (5) P → (P &Q) 2,4 CP

Example in Words: Given that Bob works only if Alice stays home, then if Bob works

then Bob works and Alice stays home.

This proof is a typical usage of CP. Note how the assumption on line 2 is discharged

when we arrive at the conclusion.

14 P → Q, Q → R ⊢ P → R Hypothetical Syllogism (HS)

1 (1) P → Q A

2 (2) Q → R A

3 (3) P A (CP)

1,3 (4) Q 1,3 MP

1,2,3 (5) R 2,4 MP

1,2 (6) P → R 3,5 CP

Example in Words: If Bob works only if Alice stays home, and if Alice stays home only

if she is oﬀ shift, then Bob works only if Alice is oﬀ shift.

The next derivation is a good illustration of the use of vE:

15 P → Q, R → S, P ∨ R ⊢ Q ∨ S Constructive Dilemma (CD)

1 (1) P → Q A

2 (2) R → S A

3 (3) P ∨ R A

4 (4) P A (vE)

1,4 (5) Q 1,4 MP

1,4 (6) Q ∨ S 5 vI

7 (7) R A (vE)

2,7 (8) S 2,7 MP

2,7 (9) Q ∨ S 8 vI

1,2,3 (10) Q ∨ S 3,4,6,7,9 vE


Example in Words: If Bob works only if Alice stays home, and if Tad works only if

Cheryl stays home, and if either Bob or Tad works, then either Alice or Cheryl stays home.

The following variant on Constructive Dilemma is easily established:

16 P → Q, R → S, −Q ∨ −S ⊢ −P ∨ −R Destructive Dilemma (DD)

The proof is left as a B exercise.

3.4.2 Interderivability Results

In this section we state and prove a number of interderivability results of the general form

P ⊣⊢ Q. (3.6)

It can be shown that for every interderivability of the form (3.6), there is a theorem of the

form

⊢ P ↔ Q. (3.7)

We shall say that any wffs P and Q for which ⊢ P ↔ Q holds are provably equivalent. In

Chapter 5 we will see that expressions are provably equivalent iﬀ they have the same truth

tables.

In syntactic terms (that is, in terms of the use to which this fact can be put in derivations),

this implies that any occurrence of a formula of the form P can be replaced by

a formula of the form Q wherever it occurs, or vice versa. We will call this metarule

Equivalence Introduction, and this rule, along with some other introduction rules, will be

discussed in greater detail in the next chapter. For now, we merely note the fact that because

of Equivalence Introduction these interderivabilities are very powerful tools of deduction.

We will use Equivalence Introduction in some of the proofs further on in this chapter.

De Morgan’s Laws

The following rules, called de Morgan’s Laws, are very important and widely used in logic:

17 −(P &Q) ⊣⊢ −P ∨ −Q de Morgan’s Laws (deM)

(a) −(P &Q) ⊢ −P ∨ −Q

1 (1) −(P &Q) A

2 (2) −(−P ∨ −Q) A (RAA)

3 (3) −P A (RAA)

3 (4) −P ∨ −Q 3 vI

2,3 (5) (−P ∨ −Q) & −(−P ∨ −Q) 2,4 &I

2 (6) P 3,5 RAA

7 (7) −Q A (RAA)

7 (8) −P ∨ −Q 7 vI

2,7 (9) (−P ∨ −Q) & −(−P ∨ −Q) 2,8 &I

2 (10) Q 7,9 RAA

2 (11) P &Q 6,10 &I


1,2 (12) (P &Q) & −(P &Q) 1,11 &I

1 (13) −P ∨ −Q 2,12 RAA

This is an adaptation of the ingenious proof given by Lemmon for his Sequent 36(a). It

shows that RAAs can be nested.

(b) −P ∨ −Q ⊢ −(P &Q)

1 (1) −P ∨ −Q A

2 (2) −P A (vE)

3 (3) P &Q A (RAA)

3 (4) P 3 &E

2,3 (5) −P &P 2,4 &I

2 (6) −(P &Q) 3,5 RAA

7 (7) −Q A (vE)

8 (8) P &Q A (RAA)

8 (9) Q 8 &E

7,8 (10) −Q&Q 7,9 &I

7 (11) −(P &Q) 8,10 RAA

1 (12) −(P &Q) 1,2,6,7,11 vE

18 −(P ∨ Q) ⊣⊢ −P & −Q de Morgan (deM)

(a) −(P ∨ Q) ⊢ −P & −Q

1 (1) −(P ∨ Q) A

2 (2) P A (RAA)

2 (3) P ∨ Q 2 vI

1,2 (4) (P ∨ Q) & −(P ∨ Q) 1,3 &I

1 (5) −P 2,4 RAA

6 (6) Q A (RAA)

6 (7) P ∨ Q 6 vI

1,6 (8) (P ∨ Q) & −(P ∨ Q) 1,7 &I

1 (9) −Q 6,8 RAA

1 (10) −P & −Q 5,9 &I

(b) −P & −Q ⊢ −(P ∨ Q)

1 (1) −P & −Q A

2 (2) P ∨ Q A (RAA)

1 (3) −P 1 &E

1,2 (4) Q 2,3 DS

1 (5) −Q 1 &E

1,2 (6) Q& −Q 4,5 &I

1 (7) −(P ∨ Q) 2,6 RAA

This can, of course, be proven without the use of the derived rule DS by simply, in eﬀect,

ﬁlling in the proof of DS between lines (3) and (4).


The de Morgan Laws amount to rules for the distribution of the negation sign; they

show that when we distribute negation over a conjunction it becomes a disjunction of the

negated terms, and when we distribute negation over a disjunction it becomes a conjunction

of the negated terms.
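Both laws can be confirmed row by row; the following Python sketch (an illustration anticipating Chapter 5, not part of the proof system) checks them over all four assignments:

```python
# Truth-table verification of both de Morgan Laws.
from itertools import product

for p, q in product([True, False], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))   # -(P & Q) -||- -P v -Q
    assert (not (p or q)) == ((not p) and (not q))   # -(P v Q) -||- -P & -Q
print("both de Morgan Laws hold in every row")
```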

Implication Rules

The following two rules about the implication relation → are frequently used. They can

both be cited the same way.

19 P → Q ⊣⊢ −P ∨ Q Implication (Imp)

(a) P → Q ⊢ −P ∨ Q

1 (1) P → Q A

2 (2) −(−P ∨ Q) A (RAA)

2 (3) −−P & −Q 2 deM

2 (4) P & −Q 3 DN

2 (5) P 4 &E

1,2 (6) Q 1,5 MP

2 (7) −Q 4 &E

1,2 (8) Q& −Q 6,7 & I

1 (9) −P ∨ Q 2,8 RAA

The converse is left as a B exercise.

Note the RAA subproof nested inside the CP subproof.

20 P → Q ⊣⊢ −(P & −Q) Implication (Imp)

The proofs of this version of Imp use Sequent 18, and they are left as a B exercise.

Example: Here is an example of a valid use of Imp and deM. It is justiﬁable by the

metarule of Equivalence Introduction, although you do not have to cite that explicitly.

1 (1) (P → Q) &(R → S) A

1 (2) (−P ∨ Q) &(R → S) 1 Imp

1 (3) −(P & −Q) &(R → S) 2 deM

This example illustrates the fact that an interderivability rule such as deM allows us to

substitute an interderivable formula for a subformula in a larger formula.
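The two Implication rules can likewise be confirmed semantically. In this Python sketch (my own illustration) the conditional is modelled truth-functionally:

```python
# Both versions of Imp: P -> Q -||- -P v Q and P -> Q -||- -(P & -Q).
from itertools import product

def implies(a, b):
    return b if a else True  # truth table of the material conditional

for p, q in product([True, False], repeat=2):
    assert implies(p, q) == ((not p) or q)
    assert implies(p, q) == (not (p and not q))
print("both Implication rules hold in every row")
```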

Transposition is also very useful:

21 P → Q ⊣⊢ −Q → −P Transposition (Trans)

P → Q ⊢ −Q → −P

1 (1) P → Q A

2 (2) −Q A (CP)

1,2 (3) −P 1,2 MT

1 (4) −Q → −P 2,3 CP

The converse is left as a B exercise.


Idempotency

22 P &P ⊣⊢ P Idempotency (Idem)

These easy proofs are left as A exercises.

23 P ∨ P ⊣⊢ P Idempotency (Idem)

P ∨ P ⊢ P

1 (1) P ∨ P A

2 (2) P A (vE)

3 (3) P A (vE)

1 (4) P 1,2,2,3,3 vE

This is the shortest possible application of vE. The converse is left as an A exercise.

Commutative Laws

The commutative laws are really just a restatement of the Basic Rules of &I and vI, which

allow us to “and” and “or” two propositions together in any order. Their proofs illustrate

that something which is “obvious” still may require a little work in order to be correctly

demonstrated from the Basic Rules.

24 P &Q ⊣⊢ Q&P Commutativity (Comm)

P &Q ⊢ Q&P

1 (1) P &Q A

1 (2) P 1 &E

1 (3) Q 1 &E

1 (4) Q&P 2,3 &I

25 P ∨ Q ⊣⊢ Q ∨ P Commutativity (Comm)

P ∨ Q ⊢ Q ∨ P

1 (1) P ∨ Q A

2 (2) P A (vE)

2 (3) Q ∨ P 2 vI

4 (4) Q A (vE)

4 (5) Q ∨ P 4 vI

1 (6) Q ∨ P 1,2,3,4,5 vE

Associative Laws

26 P &(Q&R) ⊣⊢ (P &Q) &R Associativity (Assoc)

This proof is left as a B exercise.


27 P ∨ (Q ∨ R) ⊣⊢ (P ∨ Q) ∨ R Associativity (Assoc)

This proof is a little tricky, since it requires nested vEs. It is left as a C exercise.

As noted previously, Associativity can be extended to conjunctions and disjunctions

of arbitrary numbers of terms, but the proof of this fact requires mathematical induction,

which is beyond (though not far beyond) the scope of this course.

Rules of Dominance

The Rules of Dominance may seem a little strange, but they have a very natural interpre-

tation from the semantic point of view. We will review them in Chapter 5. These rules

simplify the derivation of Equiv (below).

The proofs of the Rules of Dominance are simple but quite instructive. I give some

here, and leave the rest as exercises.

28 P &(Q ∨ −Q) ⊣⊢ P Dominance (Dom)

P ⊢ P &(Q ∨ −Q)

1 (1) P A

2 (2) −(P &(Q ∨ −Q)) A (RAA)

2 (3) −P ∨ −(Q ∨ −Q) 2 deM

2 (4) −P ∨ (−Q& −−Q) 3 deM

2 (5) P → (−Q& −−Q) 4 Imp

1,2 (6) (−Q& −−Q) 1,5 MP

1 (7) P &(Q ∨ −Q) 2,6 RAA

The proof of the converse is left as an A exercise.

29 P &(Q& −Q) ⊣⊢ Q& −Q Dominance (Dom)

30 P ∨ (Q ∨ −Q) ⊣⊢ Q ∨ −Q Dominance (Dom)

31 P ∨ (Q& −Q) ⊣⊢ P Dominance (Dom)

P ∨ (Q& −Q) ⊢ P

1 (1) P ∨ (Q& −Q) A

2 (2) P A (vE)

3 (3) Q& −Q A (vE)

3 (4) P 3 ExF

1 (5) P 1,2,2,3,4 vE

The converse is left as an A exercise.
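Semantically, the Rules of Dominance say that a tautology is absorbed by conjunction and dominates disjunction, while a contradiction dominates conjunction and is absorbed by disjunction. A Python sketch (my own illustration) verifies all four:

```python
# The four Rules of Dominance, checked over every truth-value row.
from itertools import product

for p, q in product([True, False], repeat=2):
    taut = q or not q       # Q v -Q: always true
    contra = q and not q    # Q & -Q: always false
    assert (p and taut) == p         # P & (Q v -Q) -||- P
    assert (p and contra) == contra  # P & (Q & -Q) -||- Q & -Q
    assert (p or taut) == taut       # P v (Q v -Q) -||- Q v -Q
    assert (p or contra) == p        # P v (Q & -Q) -||- P
print("all four Dominance rules hold")
```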

Distributive Laws

32 P &(Q ∨ R) ⊣⊢ (P &Q) ∨ (P &R) Distributive Laws (Dist)

(a) P &(Q ∨ R) ⊢ (P &Q) ∨ (P &R)


1 (1) P &(Q ∨ R) A

2 (2) −((P &Q) ∨ (P &R)) A (RAA)

2 (3) −(P &Q) & −(P &R) 2 deM

2 (4) (−P ∨ −Q) &(−P ∨ −R) 3 deM

2 (5) (P → −Q) &(P → −R) 4 Imp

2 (6) P → −Q 5 &E

2 (7) P → −R 5 &E

1 (8) P 1 &E

1 (9) Q ∨ R 1 &E

1,2 (10) −Q 6,8 MP

1,2 (11) −R 7,8 MP

1,2 (12) −Q& −R 10,11 &I

1,2 (13) −(Q ∨ R) 12 deM

1,2 (14) −(Q ∨ R) &(Q ∨ R) 9,13 &I

1 (15) (P &Q) ∨ (P &R) 2,14 RAA

Note the use of RAA.

(b) (P &Q) ∨ (P &R) ⊢ P &(Q ∨ R)

1 (1) (P &Q) ∨ (P &R) A

2 (2) P &Q A (vE)

2 (3) Q 2 &E

2 (4) P 2 &E

2 (5) Q ∨ R 3 vI

2 (6) P &(Q ∨ R) 4,5 &I

7 (7) P &R A (vE)

7 (8) P 7 &E

7 (9) R 7 &E

7 (10) Q ∨ R 9 vI

7 (11) P &(Q ∨ R) 8,10 &I

1 (12) P &(Q ∨ R) 1,2,6,7,11 vE

This is a straightforward application of vE.

33 P ∨ (Q&R) ⊣⊢ (P ∨ Q) &(P ∨ R) Distributive Laws (Dist)

(P ∨ Q) &(P ∨ R) ⊢ P ∨ (Q&R)

1 (1) (P ∨ Q) &(P ∨ R) A

1 (2) (−P → Q) &(−P → R) 1 Imp

1 (3) −P → Q 2 &E

1 (4) −P → R 2 &E

5 (5) −P A (CP)

1,5 (6) Q 3,5 MP

1,5 (7) R 4,5 MP

1,5 (8) Q&R 6,7 &I

1 (9) −P → (Q&R) 5,8 CP

1 (10) P ∨ (Q&R) 9 Imp


The converse is left as a B exercise.

The Distributive Laws show how “ORs” can be distributed over “ANDs,” and vice

versa. There is an important branch of advanced logic called quantum logic in which the

seemingly “obvious” distributive laws break down. (For a nicely accessible introduction to

this intriguing subject, see [12].)
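In classical propositional logic, at least, both Distributive Laws can be confirmed over all eight rows; this Python sketch (my own illustration) does so:

```python
# Both Distributive Laws, checked over all eight truth-value rows.
from itertools import product

for p, q, r in product([True, False], repeat=3):
    assert (p and (q or r)) == ((p and q) or (p and r))  # & over v
    assert (p or (q and r)) == ((p or q) and (p or r))   # v over &
print("both Distributive Laws hold in all 8 rows")
```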

Equivalence Rules

There are two Equivalence rules that are widely used.

34 P ↔ Q ⊣⊢ (P → Q) &(Q → P) Equivalence (Def↔)

This follows by Definition Introduction, since P ↔ Q := (P → Q) &(Q → P).

35 P ↔ Q ⊣⊢ (P &Q) ∨ (−P & −Q) Equivalence (Equiv)

I give the proof of both directions here, since they are simple but instructive.

(a) P ↔ Q ⊢ (P &Q) ∨ (−P & −Q)

1 (1) P ↔ Q A

1 (2) (P → Q) &(Q → P) 1 Def↔

1 (3) (−P ∨ Q) &(−Q ∨ P) 2 Imp

1 (4) (−P &(−Q ∨ P)) ∨

(Q&(−Q ∨ P)) 3 Dist (2)

1 (5) ((−P & −Q) ∨ (−P &P)) ∨

((Q& −Q) ∨ (Q&P)) 4 Dist (2)

1 (6) (−P & −Q) ∨ (Q&P) 5 Dom

1 (7) (P &Q) ∨ (−P & −Q) 6 Comm (2)

Note how Dom allows us to “cancel” out contradictory expressions such as −P &P.

(b) (P &Q) ∨ (−P & −Q) ⊢ P ↔ Q

1 (1) (P &Q) ∨ (−P & −Q) A

2 (2) (P &Q) A (vE)

2 (3) P 2 &E

2 (4) Q 2 &E

2 (5) −Q ∨ P 3 vI

2 (6) −P ∨ Q 4 vI

2 (7) Q → P 5 Imp

2 (8) P → Q 6 Imp

2 (9) (P → Q) &(Q → P) 7,8 &I

10 (10) (−P & −Q) A (vE)

10 (11) −P 10 &E

10 (12) −Q 10 &E

10 (13) −P ∨ Q 11 vI

10 (14) −Q ∨ P 12 vI

10 (15) P → Q 13 Imp

10 (16) Q → P 14 Imp


10 (17) (P → Q) &(Q → P) 15,16 &I

1 (18) (P → Q) &(Q → P) 1,2,9,10,17 vE

1 (19) P ↔ Q 18 Def↔

Negated Arrows

The following are easy, given the rules we have now developed.

36 −(P → Q) ⊣⊢ P & −Q Negation of Implication (Neg→)

−(P → Q) ⊢ P & −Q

1 (1) −(P → Q) A

1 (2) −(−P ∨ Q) 1 Imp

1 (3) −−P & −Q 2 deM

1 (4) P & −Q 3 DN

We get the converse essentially by reversing these steps.

37 −(P ↔ Q) ⊣⊢ (P & −Q) ∨ (−P &Q) Negation of Equivalence (Neg↔)

−(P ↔ Q) ⊢ (P & −Q) ∨ (−P &Q)

1 (1) −(P ↔ Q) A

1 (2) −((P → Q) &(Q → P)) 1 Def↔

1 (3) −(P → Q) ∨ −(Q → P) 2 deM

1 (4) (P & −Q) ∨ (Q& −P) 3 Neg→

1 (5) (P & −Q) ∨ (−P &Q) 4 Comm

Again, we obtain the converse by reversing the steps. The derivation of this sequent by

using only the Basic Rules would run to many more than ﬁve lines.
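Both negated-arrow rules can also be confirmed semantically; in this Python sketch (my own illustration) the biconditional is modelled as identity of truth values:

```python
# Neg-> and Neg<->, checked over every truth-value row.
from itertools import product

for p, q in product([True, False], repeat=2):
    arrow = q if p else True   # P -> Q
    iff = (p == q)             # P <-> Q
    assert (not arrow) == (p and not q)                      # Neg->
    assert (not iff) == ((p and not q) or ((not p) and q))   # Neg<->
print("Neg-> and Neg<-> hold in every row")
```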

Distributive Laws for →

It is possible to prove interderivability relations that act, in eﬀect, like distributive laws for

implication. (Thanks to Mark Smith for drawing this to my attention.)

38 P → (Q&R) ⊣⊢ (P → Q) &(P → R) Distributive Law for → (Dist→)

The proofs are left as B exercises.

39 P → (Q ∨ R) ⊣⊢ (P → Q) ∨ (P → R) Distributive Law for → (Dist→)

P → (Q ∨ R) ⊢ (P → Q) ∨ (P → R)

1 (1) P → (Q ∨ R) A

1 (2) −P ∨ (Q ∨ R) 1 Imp

1 (3) (−P ∨ Q) ∨ R 2 Assoc

1 (4) ((−P ∨ Q) ∨ R) ∨ −P 3 vI

1 (5) (−P ∨ Q) ∨ (R ∨ −P) 4 Assoc

1 (6) (−P ∨ Q) ∨ (−P ∨ R) 5 Comm

1 (7) (P → Q) ∨ (P → R) 6 Imp


(In the next chapter we will show that it is possible to combine steps in proofs like this.)

The converse is left as a B exercise.

Finally, here is a rule that looks almost like a distributive law for →, but isn’t quite.

40 (P &Q) → R ⊣⊢ P → (Q → R) Exportation (Exp)

(P &Q) → R ⊢ P → (Q → R)

1 (1) (P &Q) → R A

2 (2) P A (CP)

3 (3) Q A (CP)

2,3 (4) P &Q 2,3 &I

1,2,3 (5) R 1,4 MP

1,2 (6) Q → R 3,5 CP

1 (7) P → (Q → R) 2,6 CP

This proof of Exp demonstrates nested CPs. The converse is left as a B exercise.
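Exportation corresponds to the familiar "currying" of a two-premiss conditional; the following Python sketch (my own illustration) confirms the equivalence over all eight rows:

```python
# Exportation: (P & Q) -> R -||- P -> (Q -> R).
from itertools import product

def implies(a, b):
    return b if a else True  # material conditional

for p, q, r in product([True, False], repeat=3):
    assert implies(p and q, r) == implies(p, implies(q, r))
print("Exp holds in all 8 rows")
```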

3.4.3 Form of Justiﬁcation When Using Derived Rules

The general structure of a justiﬁcation notation in a proof line is that we state the rule

we used to get the line, and give the numbers of the lines we used. RAA, CP, and vE are

slightly diﬀerent, in that we only have to give the beginning and end lines of the subproofs;

but they are the only rules for which the usual pattern does not hold.

If you are using a Derived Rule such as MT, HS, or CD, the same basic pattern holds,

except that you need to mention the line numbers of all of the propositions that were used

as inputs to the use of the rule. For instance, a line that uses a rule that uses three lines

will need to have those three lines mentioned in its justiﬁcation.

Here is an example to make this clear:

41 P ∨ Q, −P ∨ R, −Q ∨ R ⊢ R

1 (1) P ∨ Q A

2 (2) −P ∨ R A

3 (3) −Q ∨ R A

2 (4) P → R 2 Imp

3 (5) Q → R 3 Imp

1,2,3 (6) R ∨ R 1,4,5 CD

1,2,3 (7) R 6 Idem

Since CD requires three premisses, three lines must be cited in the justiﬁcation.

3.5 Summary of Basic Sequents and Rules

In this section, I present for convenience a concise summary of the Basic and Derived rules

and sequents we have covered in this chapter. These elementary sequents can occur as

fragments of longer derivations, and it is very important that you learn to recognize them

when they appear. One of the most important skills in logic is the ability to recognize


logical form; for instance, to spot that a very complicated-looking argument is really just a

case of a simple form such as MP, MT, or HS.

Don’t forget that in the following statement of the rules, the propositional variables P,

Q, R, etc., could be either atomic or molecular propositions.

3.5.1 Basic Rules and Sequents

Some of the Basic Rules are really metarules, meaning that they are statements about what

sort of inferences we can make, given that certain other inferences are possible. Other Basic

Rules can be expressed as sequents.

Basic Metarules

Assumption (A):

Any proposition whatever may be assumed, so long as we do not forget that it was

an assumption.

∨-Elimination (vE):

If P ⊢ R and Q ⊢ R are provable separately, then we can conclude P ∨ Q ⊢ R.

Conditional Proof (CP):

From P, Q ⊢ R we can conclude P ⊢ Q → R.

Reductio ad Absurdum (RAA):

From P ⊢ R& −R we can conclude −P.

Reiteration (R):

Any line may be repeated anywhere in a proof, so long as we carry its dependencies

with it.

Deﬁnition Introduction (DefIntro):

If a set of symbols is deﬁned as a certain truth-functional combination of propositional

variables, then one may be substituted for the other wherever they occur.

Basic Sequents

Modus Ponens (MP):

P, P → Q ⊢ Q

&-Introduction (&I):

P, Q ⊢ P &Q

P, Q ⊢ Q&P.

&-Elimination (&E):

P &Q ⊢ P

P &Q ⊢ Q

v-Introduction (vI):

P ⊢ P ∨ Q

P ⊢ Q ∨ P.


Double Negation (DN):

−−P ⊣⊢ P

Note that DN is the only Basic Rule that is an interderivability result.

3.5.2 Summary of Derived Sequents

I summarize here the sequents stated and (mostly) derived earlier in this chapter. There is

a denumerable inﬁnity of propositional sequents that can be derived from the Basic Rules.

The rules in this list are simply those that are used often enough to be given names.

One-Way Sequents

Modus Tollens (MT):

P → Q, −Q ⊢ −P

Disjunctive Syllogism (DS):

P ∨ Q, −Q ⊢ P

Denial (Den):

−P ⊢ −(P &Q)

Ex Falso Quodlibet (ExF):

P, −P ⊢ Q

Hypothetical Syllogism (HS):

P → Q, Q → R ⊢ P → R

Constructive Dilemma (CD):

P → Q, R → S, P ∨ R ⊢ Q ∨ S

Destructive Dilemma (DD):

P → Q, R → S, −Q ∨ −S ⊢ −P ∨ −R

Convergence (Conv):

P ∨ Q, P → R, Q → R ⊢ R

Absorption (Abs):

P → Q ⊢ P → (P &Q)

Interderivability Sequents

de Morgan’s Laws (deM):

−(P &Q) ⊣⊢ −P ∨ −Q

−(P ∨ Q) ⊣⊢ −P & −Q.

Implication Rules (Imp):

P → Q ⊣⊢ −P ∨ Q

P → Q ⊣⊢ −(P & −Q)

Transposition (Trans):

P → Q ⊣⊢ −Q → −P


Idempotency (Idem):

P &P ⊣⊢ P

P ∨ P ⊣⊢ P.

Commutativity (Comm):

P &Q ⊣⊢ Q&P

P ∨ Q ⊣⊢ Q ∨ P

Associativity (Assoc):

P &(Q&R) ⊣⊢ (P &Q) &R

P ∨ (Q ∨ R) ⊣⊢ (P ∨ Q) ∨ R

Dominance (Dom):

P &(Q ∨ −Q) ⊣⊢ P

P &(Q& −Q) ⊣⊢ Q& −Q

P ∨ (Q ∨ −Q) ⊣⊢ Q ∨ −Q

P ∨ (Q& −Q) ⊣⊢ P

Distributivity (Dist):

P &(Q ∨ R) ⊣⊢ (P &Q) ∨ (P &R)

P ∨ (Q&R) ⊣⊢ (P ∨ Q) &(P ∨ R)

Definition of Equivalence (Def↔):

P ↔ Q ⊣⊢ (P → Q) &(Q → P).

Equivalence (Equiv):

P ↔ Q ⊣⊢ (P &Q) ∨ (−P & −Q).

Negation of Implication (Neg→):

−(P → Q) ⊣⊢ P & −Q

Negation of Equivalence (Neg↔):

−(P ↔ Q) ⊣⊢ (P & −Q) ∨ (−P &Q)

Exportation (Exp):

(P &Q) → R ⊣⊢ P → (Q → R)

Distribution for → (Dist→):

P → (Q&R) ⊣⊢ (P → Q) &(P → R)

P → (Q ∨ R) ⊣⊢ (P → Q) ∨ (P → R)

3.6 Fallacy Alert!

A fallacy, loosely speaking, is simply some sort of error in reasoning. We fallible humans

commit certain characteristic errors so often and so spontaneously that it has been found

useful to give them names; they are almost like optical illusions of the mind. They are com-

mon precisely because, like optical illusions, they look believable if they are not examined

closely. Many of these fallacies belong to informal logic or inductive logic, and they are


discussed in books on rhetoric and critical thinking. (See [6, 16, 8] for good introductions

to critical thinking and fallacy theory.) There are a few fallacies of deductive reasoning that

occur rather often as well, and I will mention a few of them here. In Chapter 5 we will learn

systematic techniques for diagnosing fallacious deductive reasoning.

3.6.1 A Common Misuse of vE

It is very common for students to attempt the following move:

1 (1) P ∨ Q A

1 (2) P 1 vE (???)

This is a glaring error. You can convince yourself that this pattern of reasoning is invalid

by considering the following example:

The number 3 is either even or odd. (This is true, of course.)

Therefore the number 3 is even. (False!)

The rule of vE does not work like &E!

This example also illustrates the fact that we can check the validity of a sequent by

trying to ﬁnd speciﬁc examples of it in which obviously true premisses lead to obviously

false conclusions. If you can do this, the sequent must be invalid.

If you cannot ﬁnd an example that invalidates a sequent, that doesn’t mean that the

sequent is valid! It just means that you can’t think of an example to show that it is not. In

Chapter 5 we learn general and reliable methods for checking the validity of propositional

logic sequents and arguments.

3.6.2 Aﬃrming the Consequent

Another very common fallacy is Aﬃrming the Consequent:

P → Q, Q ⊢ P (????) (3.8)

To see that this is invalid, consider the following example:

If today is Tuesday, we have a logic class.

We are having a logic class.

Therefore, today must be Tuesday.

Even if the premisses are correct, the conclusion could still be false, simply because logic

might be held on other days of the week besides Tuesday. This error is seductive, however,

since it looks so much like the valid rule MP.

3.6.3 Denying the Antecedent

This error is very similar to Affirming the Consequent, except that it resembles a

misuse of MT rather than of MP:

P → Q, −P ⊢ −Q. (3.9)

A simple example demonstrates invalidity:


If today is Tuesday, we have a logic class.

Today is not Tuesday.

Therefore we don’t have logic.

Again, the problem is just that we might have logic on other days of the week.
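Both fallacies can be exposed mechanically by searching the truth table for a row in which the premisses are true and the conclusion false. This Python sketch (my own illustration) finds the offending row for each:

```python
# Counterexample rows for Affirming the Consequent (P -> Q, Q |- P ?)
# and Denying the Antecedent (P -> Q, -P |- -Q ?).
from itertools import product

def implies(a, b):
    return b if a else True

ac = [(p, q) for p, q in product([True, False], repeat=2)
      if implies(p, q) and q and not p]   # premisses true, conclusion P false
da = [(p, q) for p, q in product([True, False], repeat=2)
      if implies(p, q) and not p and q]   # premisses true, conclusion -Q false
print(ac, da)  # [(False, True)] [(False, True)]: the "logic on a non-Tuesday" row
```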

3.6.4 Avoid Circularity!

From this point onward the student can take any of the derived sequents above and use

them in the proof of further sequents. But you should not forget that these were proven in

a certain order. For instance, de Morgan’s Laws are proven ﬁrst here, and then used in the

proof of the Implication rules. It is possible to prove Implication using only Lemmon’s Basic

Rules as well. (See Sequent 35 in Lemmon.) It would also be possible to use Implication in

the derivation of the de Morgan Laws — but you must not do that if you used de Morgan

to get Implication! That would be a circular procedure, or “begging the question,” as it

is sometimes called in informal logic. A circular proof is no proof at all, since it merely

amounts (though perhaps in a roundabout way) to a restatement of its assumptions.

It would also be quite legitimate to prove Implication ﬁrst from the Basic Rules and

then use it in the derivation of de Morgan. (You might want to try this as an exercise.) But

once you have chosen an order of proof you should stick to it, in order to avoid circularity.


3.7 Exercises on Chapter 3

A

1. Find natural deduction proofs for the following named sequents:

(a) P &P ⊢ P (Idem)

(b) P ⊢ P &P (Idem)

(c) P ⊢ P ∨ P (Idem)

(d) P &(Q ∨ −Q) ⊢ P (Dom)

(e) P ⊢ P ∨ (Q& −Q) (Dom)

(f) Q& −Q ⊢ P &(Q& −Q) (Dom)

(g) P &(Q& −Q) ⊢ Q& −Q (Dom)

(h) Q ∨ −Q ⊢ P ∨ (Q ∨ −Q) (Dom)

2. Find natural deduction proofs for the following miscellaneous sequents:

(a) P, Q ⊢ P → Q

(b) Prove the following without the use of R. Your proof should be only three lines
long.

P ⊢ Q → P

(c) P → (Q ∨ R), −(Q ∨ R) ⊢ −P

(d) P ∨ −Q, Q ⊢ P

(e) (Q → R), −(Q → R) ⊢ R

(f) (M ∨ N) → Q ⊢ (M ∨ N) → (Q&(M ∨ N))

(g) P → (Q → R), (Q → R) → T ⊢ P → T

(h) −(P & −Q) ⊢ −P ∨ Q

(i) −(P ∨ −(R → S)) ⊢ −P &(R → S)

(j) (X ∨ Y ) → Q ⊢ (−X & −Y ) ∨ Q

(k) −M ∨ N, M ⊢ N

(l) −(X → Y ) → −P, P ⊢ (X → Y )

(m) P → (Q ∨ Q) ⊢ −(P & −Q)

(n) P &(Q → R) ⊢ (P & −Q) ∨ (P &R)

(o) X → (P ↔ Q), (P & −Q) ∨ (−P &Q) ⊢ −X

B

1. Find natural deduction proofs for the following named sequents:

(a) P → Q, R → S, −Q ∨ −S ⊢ −P ∨ −R (DD)

(b) −P ∨ Q ⊢ P → Q (Imp) (Hint: use DS)

(c) P → Q ⊢ −(P & −Q) (Imp)


(d) −(P & −Q) ⊢ P → Q (Imp)

(e) −Q → −P ⊢ P → Q (Trans)

(f) P &(Q&R) ⊢ (P &Q) &R (Assoc)

(g) P ∨ (Q&R) ⊢ (P ∨ Q) &(P ∨ R) (Dist)

(h) (P ∨ Q) &(P ∨ R) ⊢ P ∨ (Q&R) (Dist)

This sequent is proven in the text using CP. Try it using RAA.

(i) P → (Q → R) ⊢ (P &Q) → R (Exp)

(j) P → (Q&R) ⊢ (P → Q) &(P → R) (Dist→)

(k) (P → Q) &(P → R) ⊢ P → (Q&R) (Dist→)

(l) P ∨ (Q ∨ −Q) ⊢ Q ∨ −Q (Dom)

(m) −(P & −Q) ⊢ P → Q (Neg→)

(n) (P → Q) ∨ (P → R) ⊢ P → (Q ∨ R) (Dist→)

2. The following questions require you to ﬁnd natural deduction proofs for a variety of

sequents:

(a) P, Q ⊢ P ↔ Q

(b) P ↔ Q, Q ↔ R ⊢ P ↔ R

(c) (P → R), (Q → R) ⊢ (P &Q) → R

(d) −P ∨ R, −(Q& −S), −P → Q, −R ⊢ S

(e) R, R ↔ P, −Q → −P ⊢ Q

(f) P → Q, Q → R, R → S, −S ∨ −R ⊢ −P

(g) P ↔ S, P ∨ Q, P ∨ R, −Q ∨ −R ⊢ S

(h) P → (Q ∨ R), Q → S, R → T, −(S ∨ T) ⊢ −P

(i) P → (Q ∨ R), (P → Q) → X, (P → R) → Y ⊢ X ∨ Y

(j) P → R, −Q → S, −(R ∨ S) ⊢ −(Q → P)

C

1. MT is a basic pattern of inference which is very important in scientiﬁc reasoning.

Suppose we have a new scientiﬁc theory, and suppose this entire theory is expressed as

a single proposition T. Suppose also the theory deductively entails a certain prediction

P. Finally, suppose that (to our great disappointment) experimentation shows that P

is false. What does MT imply about T? (Philosophers of science refer to this process

as falsiﬁcation.) Does this mean that we have to junk all of T and start all over again?

If not, why not?

2. Find a natural deduction proof for the following sequent:

(a) P ∨ (Q ∨ R) ⊢ (P ∨ Q) ∨ R


3. Suppose we had a system of Natural Deduction that was exactly the same as the

one we use in this book, except that, in the place of Modus Ponens (MP), it used

Disjunctive Syllogism (DS) (P ∨ Q, −Q ⊢ P) as a Basic Deductive Rule.

(a) Show that in this new system we can derive MP using only DS and other Basic

Rules, so long as we deﬁne P → Q as −P ∨ Q. (If we don’t have MP as a Basic

Rule, we have to treat → as a deﬁned symbol, since in our usual system → is

deﬁned implicitly by the way it behaves in MP.)

(b) Now suppose, instead, that we deﬁne P → Q as −(P & − Q), as is done in

numerous logic texts. (See, e.g., Copi and Cohen [6].) This deﬁnition captures

the intuition that to say P implies Q is to say that we never simultaneously have

P and −Q. Show that in this system we can, as well, derive MP using Basic

Rules. Hint: try RAA.

Chapter 4

Techniques and Applications of

Propositional Natural Deduction

In this chapter we introduce several techniques that greatly simplify propositional natural

deductions, and indicate some of its possible applications.

Don’t forget that in this chapter the propositional variables P, Q, R, etc., can be either

atomic or molecular unless stated otherwise.

4.1 Recognition of Logical Form

One of the most important skills to develop in symbolic logic is the ability to recognize

logical form. Suppose, for instance, you were asked to deduce a conclusion from the following

seemingly-complicated premisses: ((R ↔ S) &(S ↔ T)) ∨ (R → −S) and R&S. A little

thought will convince you that this is merely an instance of DS, so that an immediate

conclusion is (R ↔ S) &(S ↔ T). Formally, we could write this out as follows:

1 (1) ((R ↔ S) &(S ↔ T)) ∨ (R → −S) A

2 (2) R&S A

2 (3) −(R → −S) 2 Neg→

1,2 (4) (R ↔ S) &(S ↔ T) 1,3 DS

Something that threatened to be forbiddingly complicated in fact turns out to be very

simple.

Try to “chunk” complicated formulas together in your mind and see if they start to

look like instances of simpler rules before you start blindly calculating.

4.1.1 Exercises

A

1. Identify the simple rule used in each of the following deductions:

(a) (X ∨ Y ) → (Y ∨ Z), (X ∨ Y ) ⊢ (Y ∨ Z)

(b) ((X ∨ Y ) → Q) → (P &Q), −(P &Q) ⊢ −(((X ∨ Y ) → Q) → (P &Q))

(c) P ⊢ ((M ↔ N) ∨ (M ↔ (R&S))) ∨ P



(d) P → (Q&X), (Q&X) → T ⊢ P → T

(e) ((P → Q) → R) → (P ↔ (Q&R)) ⊢ −((P → Q) → R) ∨ (P ↔ (Q&R))

4.2 Combining Steps

As you become more experienced in doing proofs you will naturally want to combine steps

when they seem to be obvious. For instance, you might do a fragment of a larger proof like

this:

1,2 (16) −(−P ∨ Q) (from previous lines)

1,2 (17) P & −Q 16 deM,DN

This is perfectly okay, so long as you don’t get ahead of yourself and make a mistake.

Just as you can use more than one rule to get a given line, you can use the same rule

more than once. For instance, you could use Imp twice on the same line as follows:

1,2 (1) P → (P → Q) (from previous lines)

1,2 (2) −P ∨ (−P ∨ Q) 1 Imp (2)

If you have the slightest doubt about the steps you are taking, it is better painstakingly

to write them all out one by one and thereby reduce the chance that you will make a

mistake. Remember, too, that on tests and assignments the onus is on you to convince your

instructor that you understand how the proof works!

4.2.1 Exercises

A

1. State the justiﬁcations for each of the following steps:

(a) (n) P → (Q → P) TI

(n+1) −P ∨ (−Q ∨ P) ?

(b) (n) P → (Q&R)

(n+1) (−P ∨ Q) &(−P ∨ R) ?

(c) (n) −P → (Q&R)

(n+1) P ∨ (Q&R) ?

(d) (n) (P → Q) &(P → R)

(n+1) −P ∨ (Q&R) ?

(e) (n) −M → N

(n+1) −N

(n+2) M ?

(f) (n) (P ∨ −P) ∨ (Q ∨ −Q)

(n+1) (−P ∨ Q) ∨ (−Q ∨ P) ?


4.3 Theorems

In the last chapter we introduced the concept of the theorem, meaning a wﬀ that can be

proven with no assumptions left over. If P is a theorem, we assert

⊢ P, (4.1)

(read as “Provable P” or “P is a theorem”), which means that P can stand on its own,

without the support of any premisses. In the next chapter we will see that the theorems of

propositional logic are just the same as its tautologies, which are the formulas that are T

in all lines of their truth tables.
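The correspondence between theorems and tautologies suggests a brute-force check: a wff is a tautology iff it comes out true under every assignment of truth values. A Python sketch (my own illustration, anticipating Chapter 5):

```python
# Brute-force tautology test for some simple theorems.
from itertools import product

def implies(a, b):
    return b if a else True

def tautology(f, nvars):
    """True iff f is true in every row of its truth table."""
    return all(f(*row) for row in product([True, False], repeat=nvars))

print(tautology(lambda p: implies(p, p), 1))                 # |- P -> P
print(tautology(lambda p: p or not p, 1))                    # |- P v -P
print(tautology(lambda p, q: implies(p, implies(q, p)), 2))  # |- P -> (Q -> P)
# each line prints True
```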

For now, I will give a few simple examples of theorems, to demonstrate typical tech-

niques that can be used to prove them.

42 ⊢ P → P

Here are two proofs of this elementary theorem. The ﬁrst uses CP and R:

1 (1) P A (CP)

1 (2) P 1 R

(3) P → P 1,2 CP

We can also do this without using R:

1 (1) P A

(2) P → P 1,1 CP

This is no doubt the shortest possible application of CP!

If we want to derive a result that depends on no premisses from a set of premisses, we

have to have a way of discharging assumptions, and that means that we are going to use

either CP or RAA. (The rule vE also discharges assumptions, but only in a sub-proof.)

Here is an important theorem that we can prove using RAA:

43 ⊢ P ∨ −P Law of Excluded Middle (ExM)

1 (1) −(P ∨ −P) A (RAA)

1 (2) −P & −−P 1 deM

(3) (P ∨ −P) 1,2 RAA

This illustrates the power of RAA: if you can’t think of any other way to get a result,

assume its negation and try to derive a contradiction.

Here’s another way to get ExM:

1 (1) P A (CP)

(2) P → P 1,1 CP

(3) P ∨ −P 2 Imp,Comm

This theorem is called the Law of Excluded Middle because it says that every proposition

is either true or false, with nothing in between. This is sometimes expressed in Latin as

tertium non datur — “a third is not given”.


Here is another example: Show that any given proposition P is implied by any other

proposition Q. I am going to give ﬁve ways of proving this theorem, in order to demonstrate

the variety of proof strategies that can be used for theorems.

44 ⊢ P → (Q → P)

1 (1) P A (CP)

2 (2) Q A (CP)

1 (3) P 1 R

1 (4) Q → P 2,3 CP

(5) P → (Q → P) 1,4 CP

This is a straightforward use of CP. The use of R in line 3 is redundant; see if you can do

it without using R.

1 (1) P A (CP)

1 (2) −Q ∨ P 1 vI

1 (3) Q → P 2 Imp

(4) P → (Q → P) 1,3 CP

The above proof reminds us of how we can use vI to bring new propositional letters into a

proof.

1 (1) −(Q → P) A (CP)

1 (2) Q& −P 1 Neg→

1 (3) −P 2 &E

(4) −(Q → P) → −P 1,3 CP

(5) P → (Q → P) 4 Trans

The above proof illustrates the idea that if we want to prove a conditional, we might try

to prove the negation of its consequent ﬁrst and then use Transposition to get the desired

conditional.

1 (1) −(P → (Q → P)) A (RAA)

1 (2) P & −(Q → P) 1 Neg→

1 (3) P 2 &E

1 (4) −(Q → P) 2 &E

1 (5) Q& −P 4 Neg→

1 (6) −P 5 &E

1 (7) P & −P 3,6 &I

(8) P → (Q → P) 1,7 RAA

The above proof is a bit longer than the others, but it illustrates the fact that if all else fails

we can use RAA; that is, assume the negation of what you want to prove, and then derive

a contradiction by brute force.


(1) P ∨ −P ExM

(2) (P ∨ −P) ∨ −Q 1 vI

(3) −P ∨ (P ∨ −Q) 2 Comm,Assoc

(4) P → (P ∨ −Q) 3 Imp

(5) P → (Q → P) 4 Comm,Imp

In line 1 of the above proof we introduce a theorem, the Law of Excluded Middle. Theorem

Introduction is discussed further below; the basic idea is that a theorem can be stated at

any point in a proof without justiﬁcation. We then expand line 1 using vI, rearrange the

results, and apply Imp to get what we wanted. This is the deepest proof, since it builds

directly from ExM. In fact, it would be possible to derive all theorems, and thereby all of

deductive logic, directly from ExM if we wanted to.

4.3.1 The “Laws” of Thought

There is no limit to the number of propositional theorems that could be proved. However,

three simple theorems are important enough that they have come to be known as the “laws

of thought.”

The Law of Excluded Middle (ExM):

⊢ (P ∨ −P)

The Law of Identity (Id):

⊢ (P → P)

The Law of Non-Contradiction (NC):

⊢ −(P & −P).

We have seen the proofs of the ﬁrst two; the proof of NC is left as an A exercise.
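Readers who program may enjoy confirming these laws by brute force. The following Python sketch (the function and variable names are my own, invented for this illustration) checks each law on every assignment of truth values:

```python
from itertools import product

def is_tautology(f, n):
    """True iff the n-place truth function f comes out True on every row."""
    return all(f(*row) for row in product([True, False], repeat=n))

# The three "laws of thought" as Python truth functions,
# with -> rendered as (not p) or q.
excluded_middle   = lambda p: p or (not p)          # P v -P
identity_law      = lambda p: (not p) or p          # P -> P
non_contradiction = lambda p: not (p and (not p))   # -(P & -P)

assert is_tautology(excluded_middle, 1)
assert is_tautology(identity_law, 1)
assert is_tautology(non_contradiction, 1)
```

The same helper will report that a merely contingent wff, such as the bare letter P, is not a tautology.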

4.4 Substitution

Under certain circumstances, it is valid to substitute one wﬀ for the whole or part of another

wﬀ. There are two kinds of substitution, uniform and nonuniform.

4.4.1 Uniform Substitution

We will say that a wﬀ P is substituted uniformly for a wﬀ Q if every instance of Q in a

larger formula is replaced by P. Example: Consider the formula (P &R) → (Q&R). A

uniform substitution of (M ∨ N) for R would yield (P &(M ∨ N)) → (Q&(M ∨ N)).

4.4.2 Nonuniform Substitution

We will say that a wﬀ P is substituted nonuniformly for a wﬀ Q if some but not all instances

of Q in a larger formula are replaced by P. Example: Consider again (P &R) → (Q&R).

A nonuniform substitution of (M ∨ N) for R would yield either (P &(M ∨ N)) → (Q&R)

or else (P &R) → (Q&(M ∨ N)).
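The distinction can be illustrated with a small Python sketch. Note that this operates naively on strings rather than on the structure of wffs, so it is only a toy illustration, and the helper names are invented for this example:

```python
def substitute_uniform(formula, target, replacement):
    """Replace every occurrence of target in formula: uniform substitution."""
    return formula.replace(target, replacement)

def substitute_at(formula, target, replacement, which):
    """Replace only the `which`-th occurrence (0-based): nonuniform substitution."""
    parts = formula.split(target)
    out = parts[0]
    # Rejoin the pieces, swapping in the replacement at the chosen spot only.
    for i, part in enumerate(parts[1:]):
        out += (replacement if i == which else target) + part
    return out

# The text's example: substituting (M v N) for R in (P & R) -> (Q & R).
assert substitute_uniform("(P & R) -> (Q & R)", "R", "(M v N)") \
       == "(P & (M v N)) -> (Q & (M v N))"
assert substitute_at("(P & R) -> (Q & R)", "R", "(M v N)", 0) \
       == "(P & (M v N)) -> (Q & R)"
```

A serious implementation would operate on the parse tree of the wff, so that a substitution could never capture part of some other symbol.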


4.5 Introduction Rules

Introduction Rules are metarules that allow us to introduce previously accepted results into

a derivation in order to simplify it. In this section we state without formal justiﬁcation

(since that would be beyond the scope of this course) a number of Introduction Rules that

will greatly facilitate proofs of sequents and theorems. We have already used all of these

rules except Theorem Introduction in Chapter 3.

1. Deﬁnition Introduction (DefIntro): Any deﬁned symbol can be replaced by its deﬁni-

tion, or vice versa.

So far, the only symbol we are taking as deﬁned in terms of the basic set of symbols is

↔. But there is no reason that we could not deﬁne other symbols as combinations of

basic symbols or other previously deﬁned symbols. For instance, suppose we stipulate

that P ∗Q := (P ∨ Q) → (P &Q). Then any occurrence of P ∗Q could be substituted

for by its deﬁnition, and any wﬀ that was interderivable with its deﬁnition could be

substituted for by P ∗ Q. Any such usage can be justiﬁed by citing the line number

that was operated on and then putting down either DefIntro or Def∗.

2. Sequent Introduction (SI): Any uniform substitution instance of a proven sequent can

be introduced in a proof.

We have already used SI many times in Chapter 3, when we applied sequents such as

MT, HS, DS, and so forth in the proofs of further sequents. Do not forget that the

substitution has to be uniform; only uniform substitution instances of sequents can

be guaranteed to be valid. If the sequent you are using has a special name (such as

MP or HS) then just cite the name in the justiﬁcation column; otherwise, write SI

and give a reference to where the sequent was previously proven (such as in an earlier

exercise).

3. Equivalence Introduction (EqI): Given an equivalence theorem ⊢ P ↔ Q, we may

substitute P for any occurrence of Q (or Q for any occurrence of P) in a valid

sequent; this kind of substitution need not be done uniformly.

If the equivalence rule you are using has a special name (such as deM or Dist) then just

cite the name in the justiﬁcation column; otherwise, write EqI and give a reference to

where the rule was previously proven (such as in an earlier exercise).

4. Theorem Introduction (TI): Any uniform substitution instance of a theorem may be

written as a line anywhere in a proof. No assumptions will be listed in the assumption

column (because theorems do not depend on assumptions) and the notation TI should

be put in the justiﬁcation column (together with the name of the theorem if it has one

in brackets).

A theorem is a proposition that we have already shown can be proven with no as-

sumptions left over. Hence, we could introduce it, or a substitution instance of it, at

any point in a proof; in eﬀect, doing so is just shorthand for a statement of the entire

proof of the theorem. If the theorem you are using has a special name (such as ExM

or Id) then just cite the name in the justiﬁcation column; otherwise, write TI and

give a reference to where the theorem was previously proven (such as in an earlier

exercise).


Example of TI:

45 P → Q, −P → R ⊢ Q ∨ R

1 (1) P → Q A

2 (2) −P → R A

(3) P ∨ −P TI (ExM)

1,2 (4) Q ∨ R 1,2,3 CD

4.5.1 Exercises

A

1. Prove the Law of Non-Contradiction, ⊢ −(P & −P).

B

1. Prove the following theorems:

(a) ⊢ (P & −P) → Q

(b) ⊢ P → (−P → Q)

(c) ⊢ P ∨ (P → Q)

(d) ⊢ (P → Q) ∨ (Q → P)

(e) ⊢ (P → Q) ∨ (Q → R)

(f) ⊢ (−P → P) → P

C

1. Prove ExM using only Basic Rules.

4.6 The Deduction Metatheorem

The following metatheorem is one of the most important results in deductive logic. It

cannot be proven in all generality with the tools we use in this course. However, we will

work out proofs for a few simple cases, and they should make apparent the plausibility of

this centrally important theorem.

Deduction Theorem:

The sequent P1, P2, . . . , Pn ⊢ Q is valid iff ⊢ (P1 & P2 & . . . & Pn) → Q.

Example: Given P ⊢ R, show ⊢ P → R; and given ⊢ P → R, show P ⊢ R. The proofs

of these results are very simple, but they nicely illustrate the application of SI and TI.

(a) Given P ⊢ R, show ⊢ P → R.

1 (1) P A (CP)

1 (2) R 1 SI

(3) P → R 1,2 CP


In line 2 we apply the given sequent P ⊢ R. It doesn't have a special name (like MT, HS,

etc.), but we can substitute it into the proof as required. Note that we have to cite line 1,

since this is the line that the introduced sequent is applied to.

(b) Given ⊢ P → R, show P ⊢ R.

1 (1) P A

(2) P → R TI

1 (3) R 1,2 MP

On the assumption that P → R is a theorem, we can substitute it in wherever we want;

and we do not have to cite any previous assumptions or lines since a theorem holds without

qualiﬁcation.

The Deduction Theorem gives us a better understanding of a very useful result that we

used in the previous chapter:

P ⊣⊢ Q if and only if ⊢ P ↔ Q.

Here is an informal argument to establish this result:

(a) Given P ⊣⊢ Q, show ⊢ P ↔ Q.

The expression P ⊣⊢ Q simply means P ⊢ Q and Q ⊢ P. By the Deduction Theorem,

this means that we have ⊢ P → Q and ⊢ Q → P. We can then prove ⊢ P ↔ Q as

follows:

(1) P → Q TI

(2) Q → P TI

(3) (P → Q) &(Q → P) 1,2 &I

(4) P ↔ Q 3 Def↔

(b) Given ⊢ P ↔ Q, show P ⊣⊢ Q.

From ⊢ P ↔ Q we get ⊢ P → Q and ⊢ Q → P separately; by the Deduction Theorem

this gives us P ⊢ Q and Q ⊢ P, which is just the same as P ⊣⊢ Q.

The main use of the Deduction Theorem, for the purposes of this introductory course,

is simply the fact that whenever we have an interderivability result, we have an equivalence;

and such equivalences can be used freely in transforming expressions into equivalent forms

which may help us get to where we want to go in a deduction.

4.6.1 Exercises

B

1. Given P, Q ⊢ R, show ⊢ (P &Q) → R.

2. Given ⊢ (P &Q) → R, show P, Q ⊢ R. (These two questions in combination show the

Deduction Theorem for the case of two premisses.)


4.6.2 A Turnstile is not an Arrow!

There is a natural tendency to think of P → Q as saying just the same thing as P ⊢ Q.

This is a mistake, one which can sometimes be glossed over in an elementary course such

as this, but which in a course involving metatheory would be serious. The symbol ⊢ is a

metalogical symbol: it makes a claim about what can be derived. By contrast, → is a

logical symbol, which (as far as propositional logic is concerned) denotes a relation

between truth values.

Now, the Deduction Theorem assures us that given P ⊢ Q, it is a theorem that P → Q.

But P ⊢ Q and P → Q are different claims. The first says that Q can be derived from P

by the rules of natural deduction; and the Deduction Theorem says that this is the same

thing as saying that P → Q can be derived with no assumptions left over. But P → Q

only says that given P, Q is true. It so happens that elementary propositional logic has the

property of completeness, meaning that every logically true statement can be proven within

the system. Because of completeness, we can sometimes in propositional logic gloss over

the fine distinction between “true” and “provable”. But (as Gödel showed) there are logical

systems that are incomplete; therefore, once we move beyond elementary propositional logic

the distinctions between what can be proven from a given set of rules, and what is true,

have to be kept carefully in mind.

4.7 Boolean Algebra

We can greatly simplify many deductions that involve the use of equivalence relations by

means of Boolean algebra. For our purposes, we will mean by “Boolean algebra” the

manipulation of propositional formulas in a manner very similar to ordinary algebraic ma-

nipulations.

If all we are doing is trying to go from one expression to another which is equivalent

to it, we can write our calculations in a much simpler and more direct way that does not

require the full apparatus of natural deduction. This will seem very familiar to those who

have some experience with the manipulation of algebraic identities in ordinary mathematics.

Here is a simple example:

−(P → Q) ↔ −(−P ∨ Q) Imp

↔ −−P & −Q deM

↔ P & −Q DN

This establishes the theorem ⊢ −(P → Q) ↔ (P & −Q).

As with regular natural deduction, we can combine steps:

−(P → Q) ↔ −(−P ∨ Q) Imp

↔ P & −Q deM,DN

Again, be careful that you don’t get ahead of yourself. If you aren’t quite sure of what you

are doing, don’t combine steps.

We can substitute equivalent expressions for subformulas within a larger formula either

uniformly or nonuniformly:

(−P ∨ Q) &(P → R) ↔ (P → Q) &(P → R) Imp

↔ P → (Q&R) Dist→
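Equivalences like these can be double-checked mechanically by comparing truth functions, anticipating the truth-table semantics of the next chapter. Here is a Python sketch (the helper names are my own) that verifies both worked examples above:

```python
from itertools import product

def implies(a, b):
    """Material implication: -A v B."""
    return (not a) or b

def equivalent(f, g, n):
    """Two n-place truth functions are equivalent iff they agree on every valuation."""
    return all(f(*v) == g(*v) for v in product([True, False], repeat=n))

# -(P -> Q) is equivalent to P & -Q   (Imp, deM, DN)
assert equivalent(lambda p, q: not implies(p, q),
                  lambda p, q: p and not q, 2)

# (-P v Q) & (P -> R) is equivalent to P -> (Q & R)   (Imp, Dist->)
assert equivalent(lambda p, q, r: ((not p) or q) and implies(p, r),
                  lambda p, q, r: implies(p, q and r), 3)
```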


4.7.1 Exercises

B

1. Use Boolean algebra to demonstrate the following equivalence theorems:

(a) ⊢ (P ↔ Q) ↔ (−(P → −Q) ∨ −(−Q → P))

(b) ⊢ ((Q ∨ R) → P) ↔ ((Q → P) &(R → P))

(c) ⊢ (P → (Q ∨ R)) ↔ ((P → Q) ∨ (P → R))

(d) ⊢ (P → (Q&R)) ↔ ((P → Q) &(P → R))

(e) ⊢ (P ∨ (Q → R)) ↔ ((Q → P) ∨ (−P → R))

4.8 Filling in the Gaps in an Argument

It is unavoidable that most of this text has to be devoted to explaining the basic techniques

of symbolic logic, and we can’t devote as much space as we should to its possible applications.

This could lead an attentive student to wonder what is the point of going through all the

pain of ﬁnding a deductive proof of a conclusion that has already been worked out —

especially when the semantic techniques of the next two chapters allow us to check the

validity of an argument quickly and eﬃciently without having to construct a proof at all.

The fact remains that when we use natural deduction in “real life” (and despite all

appearances to the contrary, there actually are some people who attempt to reason their

way through real-life problems) we do not have a conclusion worked out in advance. Rather,

we are given some evidence (which we express in premisses) and we try to work out what

that evidence implies. We may not know the conclusion in advance any more than we know

the sum of a column of ﬁgures before we add them up. Often, we do have some idea of

what requirements the conclusion should satisfy; for instance, a detective trying to solve a

murder mystery wants to ﬁnd out who dunnit. Or else we may have a fairly deﬁnite idea of

the conclusion, but we may not have all of the premisses required to establish it deductively;

we might have to work backwards to ﬁll in the gaps in the argument.

In this section I will introduce a few exercises in which we have to work out natural

deduction proofs in which part of the argument is missing.

Example: Suppose we know the following: either Bob or Alice paid the phone bill; if

Bob paid the bill he would have the receipt; if Alice paid the bill then she would have the

receipt; Bob does not have the receipt. Who paid the phone bill?

It is pretty obvious that Alice got stuck with the bill. But let’s go through the symbol-

ization and proof to illustrate the general method of attack.

First, let’s deﬁne our symbols:

A := Alice pays the bill.

B := Bob pays the bill.

R := Bob has the receipt.

S := Alice has the receipt.

We can put down all of the premisses we have on hand, and try a few steps to see where

they lead:


1 (1) B ∨ A A

2 (2) B → R A

3 (3) A → S A

4 (4) −R A

2,4 (5) −B 2,4 MT

1,2,4 (6) A 1,5 DS

We quickly come to the conclusion that it was Alice who paid the bill, and we stop there

because this answers the question. We have therefore established the sequent B ∨ A, B →

R, A → S, −R ⊢ A. Remember, in this sort of problem we usually don't know in advance

what sequent we are going to be proving; rather, we explore a set of premisses to see where

they will lead. Note, also, that one of the premisses (line 3) turned out to be irrelevant; and

that, of course, is just like real life, where we often have to sort out the useful information

from the noise.
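For readers with some programming background, the same puzzle can be attacked semantically by searching for every assignment of truth values that satisfies the premisses; this anticipates the truth-table methods of the next chapter. The variable names follow the symbolization above:

```python
from itertools import product

def implies(x, y):
    """Material implication: -X v Y."""
    return (not x) or y

# Collect every assignment of truth values consistent with the premisses.
solutions = []
for A, B, R, S in product([True, False], repeat=4):
    premisses = [B or A,           # either Bob or Alice paid the bill
                 implies(B, R),    # if Bob paid, he has the receipt
                 implies(A, S),    # if Alice paid, she has the receipt
                 not R]            # Bob does not have the receipt
    if all(premisses):
        solutions.append((A, B, R, S))

# In every model of the premisses, Alice paid and Bob did not.
assert solutions and all(A and not B for (A, B, R, S) in solutions)
```

The search turns up exactly one model, the one in which Alice paid; and just as in the deduction, the premiss about Alice's receipt does no work in ruling models out for A itself.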

Example: We are given the following facts: If either Paul or Peter did it then either

Randa helped or Michael wasn’t entirely innocent. If the weapon was in Paul’s oﬃce he did

it. On the other hand, if the weapon wasn’t in Paul’s oﬃce, then Michael wasn’t entirely

innocent. Can we determine whether Randa was involved?

Symbolization:

P := Paul did it

Q := Peter did it

R := Randa helped

M := Michael was innocent

N := The weapon was found in Paul’s oﬃce

Write down the premisses:

1 (1) (P ∨ Q) → (R ∨ −M) A

2 (2) N → P A

3 (3) −N → −M A

There is no way we can draw a deﬁnite conclusion about the truth or falsity of R from this,

since there is no way to extract it from its conditional. But now suppose that independent

evidence shows that Michael has an alibi — he was out of town for a meeting at the time

of the incident. Then we can continue:

4 (4) M A

3 (5) M → N 3 Trans

2,3 (6) M → P 2,5 HS

2,3,4 (7) P 4,6 MP

1 (8) (P → (R ∨ −M)) &(Q → (R ∨ −M)) 1 SI (Exercise 4.7.1, 1(b))

1 (9) P → (R ∨ −M) 8 &E

1,2,3,4 (10) R ∨ −M 7,9 MP

1,2,3,4 (11) R 4,10 DS


Chapter 5

Propositional Semantics: Truth Tables

5.1 Syntax versus Semantics

The methods of natural deduction that we have learned so far are instances of what are

called syntactic procedures. These are procedures deﬁned by rules that tell us the allowable

combinations of symbols that may be written down starting from given wﬀs. For instance,

MT says that given P → Q and −Q, we may (if it suits our purposes) then correctly write

down −P. Such rules are concerned only with syntax. We commonly interpret the Ps and

Qs of propositional logic as meaningful statements that can bear truth values, but these

symbols could be given other interpretations as well — or no interpretation at all.

If you make a mistake when typing in commands to a computer program, it may tell

you that you have committed a “syntax error.” This simply means that you have broken

the rules that say which combinations of symbols are allowed. The computer has no notion

whatsoever of the possible meanings of the symbols being used by the program.

In this chapter we introduce truth tables, the ﬁrst of two main semantic techniques we

will learn in this course. When we do semantics, we are concerned with the interpretation

or meaning of the symbols we write down. In propositional logic, the most common inter-

pretation of the propositional symbols is that they represent truth values of propositions;

propositional logic, in eﬀect, is a calculus of truth values.

In propositional logic we make the simplifying assumptions that there are only two

possible truth values, which we may represent as T (true) and F (false); this is called

bivalence. (These are often symbolized as ⊤ and ⊥, or 1 and 0.) Bivalence is an idealization,

since we know that in real life there are often various shades, modes, and degrees of truth.

We also, at this stage at least, assume that every proposition has one and only one of these

two possible truth values; i.e., that every proposition is either deﬁnitely T or deﬁnitely F.

We will see later that this assumption also breaks down under certain interesting conditions.

Nevertheless, the logic based on these two somewhat idealized assumptions turns out to be

extremely powerful and useful, and is the basis for most or all further and more nuanced

elaborations of logic.

Bivalence should not be confused with the theorem derived in the previous chapter

called the Law of Excluded Middle, ⊢ P ∨ −P. See [7] for an illuminating discussion of

this point.


5.2 Truth Table Deﬁnitions of the Propositional Connectives

I reproduce here the basic truth tables from Chapter Two, for convenience:

Atomic

Proposition:

P

T

F

Negation:

− P

F T

T F

Conjunction:

(P & Q)

T T T

T F F

F F T

F F F

Disjunction:

(P ∨ Q)

T T T

T T F

F T T

F F F

Material Implication:

(P → Q)

T T T

T F F

F T T

F T F

Biconditional:

(P ↔ Q)

T T T

T F F

F F T

F T F

5.2.1 How Many Possible Truth Tables Are There?

The ﬁve familiar truth tables above are not the only possible truth tables for binary con-

nectives. Consider the general truth table for any binary connective ∗:

(P ∗ Q)

T ? T

T ? F

F ? T

F ? F

There are two ways to ﬁll each space in the centre column and four lines per table,

meaning that the total number of ways to fill in the centre column will be just

2 × 2 × 2 × 2 = 2^4 = 16. In other words, there are 16 possible truth functions of two propositional variables.

While the ones we deﬁne above are the ones most commonly used in elementary logic, others

are possible, and some (such as exclusive OR) have practical applications in circuit theory.
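This counting argument is easy to verify mechanically. The sketch below (in Python, with names invented for the illustration) enumerates the sixteen possible output columns and locates the familiar binary connectives among them:

```python
from itertools import product

rows = list(product([True, False], repeat=2))   # the four (P, Q) input rows

# A binary connective is fixed by its output column: one truth value per row,
# so there are 2**4 = 16 possible columns, i.e. 16 binary truth functions.
all_columns = set(product([True, False], repeat=4))
assert len(all_columns) == 16

def column(f):
    """The output column of a binary truth function, row by row."""
    return tuple(f(p, q) for p, q in rows)

# The familiar connectives pick out four distinct columns among the sixteen.
familiar = {
    "&":   lambda p, q: p and q,
    "v":   lambda p, q: p or q,
    "->":  lambda p, q: (not p) or q,
    "<->": lambda p, q: p == q,
}
columns = {name: column(f) for name, f in familiar.items()}
assert len(set(columns.values())) == 4
```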

5.2.2 Expressive Completeness

Clearly, some of the truth functions we have deﬁned are redundant, in the sense that they

can be written as combinations of other symbols. For instance, we can show that P → Q is

equivalent to −P ∨ Q; hence our use of the symbol → is largely for linguistic convenience.

Some sets of symbols would not be suﬃcient to produce in combination all possible truth

tables. However, it can be shown that combinations of {−, ∨} and {−, &} are sufficient

to generate all possible tables. It is also possible to deﬁne a single connective that is

expressively complete in this sense, and for this purpose one can use either the Sheﬀer

stroke P | Q (defined as −P ∨ −Q) or the Nicod stroke P ↓ Q (defined as −P & −Q). We

will examine some of the properties of these symbols later on.
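The expressive completeness of the Sheffer stroke can be checked directly by building −, & and ∨ from it and comparing truth functions, as in this Python sketch:

```python
from itertools import product

def sheffer(p, q):
    """Sheffer stroke: -P v -Q, i.e. 'not both'."""
    return (not p) or (not q)

def agree(f, g, n):
    """Two n-place truth functions agree iff they match on every valuation."""
    return all(f(*v) == g(*v) for v in product([True, False], repeat=n))

# Negation, conjunction and disjunction built from the stroke alone:
assert agree(lambda p: sheffer(p, p), lambda p: not p, 1)
assert agree(lambda p, q: sheffer(sheffer(p, q), sheffer(p, q)),
             lambda p, q: p and q, 2)
assert agree(lambda p, q: sheffer(sheffer(p, p), sheffer(q, q)),
             lambda p, q: p or q, 2)
```

Since {−, &} already generates all sixteen tables, a connective that yields both − and & by itself is expressively complete.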


5.2.3 Evaluating Complex Wﬀs

Evaluate wﬀs as follows:

1. Be sure you have written the input values for all the occurrences of each variable in

the same order.

2. Evaluate all negations of atomic formulas ﬁrst; then,

3. starting with the innermost brackets, work outward until you come to the main con-

nective.

Arbitrarily complicated wﬀs can be evaluated mechanically by truth tables. Truth

tables therefore constitute what computer scientists call an eﬀective procedure — meaning

simply a recipe that can be followed in a completely unambiguous manner that is guaranteed

to produce a unique result. The practical drawback of truth tables is that the number of

lines in a table for a formula containing n propositional variables is 2^n. This is a rapidly

increasing function; recall the old fable about the wise man who demanded Σ_{i=1}^{64} 2^i grains

of rice from his king in return for a favour. (See [3]. A quick estimation will show why

the king may have been tempted to cut oﬀ the wise man’s head.) Large truth tables may

therefore take impractically long times to evaluate.

Many of the truth table problems encountered in elementary propositional logic can be

solved by means of short-cuts, and we will consider these below. There is no guarantee that

a short cut exists for a given problem. On the other hand, if in an introductory course you

ﬁnd yourself faced with the task of evaluating a formula that would contain 8, 16, 32, or

more lines if fully written out, take a careful look at it before beginning blindly to ﬁll in

the whole table. There is probably a short-cut!

Example: do the full truth table for the wﬀ −(P → (−Q ∨ R)) &P.

Write out the formula as neatly as possible, with fairly wide, even spacing between the

symbols. Then enter all the possible combinations of truth values for the three letters in

the formula, P, Q, and R. (You can think of these as “inputs”.) There is nothing sacred

about the order in which I have written the inputs below, but it is a standard order that

is commonly used in logic, and it would be helpful to stick to this order. Since P appears

twice in the formula you will have to enter its input values twice. Whenever a letter appears

more than once in a table, it is essential that you write the values in its column of input

values in the same order for each instance of the letter; otherwise, the truth table will be

meaningless. There will be 8 = 2^3 lines in the table since there are three variables. Then

proceed to evaluate the various subformulas in the order indicated above, ending with the

main connective, which in this case is the & symbol.

− (P → (− Q ∨ R)) & P

F T T F T T T F T

T T F F T F F T T

F T T T F T T F T

F T T T F T F F T

F F T F T T T F F

F F T F T F F F F

F F T T F T T F F

F F T T F T F F F


We see that the formula is T only in the case in which P and Q are T and R is F.
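The same evaluation can be carried out mechanically. The following Python sketch builds the full eight-row table for the example wff and confirms the result just stated:

```python
from itertools import product

def implies(p, q):
    """Material implication: -P v Q."""
    return (not p) or q

def wff(p, q, r):
    """-(P -> (-Q v R)) & P, the example formula."""
    return (not implies(p, (not q) or r)) and p

# One entry per row, in the standard input order used in the text.
table = {(p, q, r): wff(p, q, r)
         for p, q, r in product([True, False], repeat=3)}

# The formula is T on exactly one of the eight rows: P = T, Q = T, R = F.
true_rows = [row for row, value in table.items() if value]
assert true_rows == [(True, True, False)]
```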

5.3 Semantic Classiﬁcation of Single Wﬀs

Well-formed formulas of propositional logic can be classiﬁed according to their semantic

properties:

5.3.1 Tautology

A wﬀ is said to be tautologous, or to be a tautology, if and only if it comes out T under all

possible combinations of input values. Tautologies can also be referred to as logical truths

or necessary truths.

Example: Law of Excluded Middle

In the previous chapter we proved the Law of Excluded Middle as a theorem. The table

shows that it is a tautology as well, as it should be.

P ∨ − P

T T F T

F T T F

Example: (P → (Q → P))

In the previous chapter we gave several proofs of this formula as a theorem. One can easily

see that it is a tautology as well. The only way it could be F would be if P were T and

(Q → P) were F. But if P is T, then (Q → P) is T and the whole wﬀ is T. Hence, it is not

possible for this wﬀ to be false.

5.3.2 Inconsistency

A wﬀ is said to be an inconsistency, or to be inconsistent, if it comes out F for all possible

combinations of input values. Such a wﬀ is sometimes loosely called a contradiction, but

that term is properly reserved for an inconsistent wﬀ such as P & −P containing only one

propositional variable.

Example: Law of Non-Contradiction

The Law of Non-Contradiction states that it is never the case that a proposition is both

true and false; i.e., that (P & −P) is always false. This is supported by the truth table:

P & − P

T F F T

F F T F

5.3.3 One-to-One Relation Between Inconsistencies and Tautologies

It is easily seen by de Morgan’s Law that the Law of Non-Contradiction is simply the

negation of the Law of Excluded Middle. It can be shown in general that for every tautology

there is an inconsistency, generated by taking the negation of the tautology, and vice versa.


5.3.4 Contingency

A wﬀ is said to be contingent or to be a contingency iﬀ it has occurrences of both T and

F in its truth table. Its truth or falsity in a given situation therefore depends on the facts,

not on logical form. The simplest example of a contingency is just P; other examples are

P &Q, P ∨ Q, and P → Q.

5.3.5 Consistency

A wﬀ is said to be consistent if it is T for at least some of its input values; in other words, a

wﬀ is consistent, or is a consistency, iﬀ it is either a tautology or a contingency. Examples:

P, P → Q, P → P.
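These categories are easy to mechanize: scan the truth table and see whether T, F, or both appear. A Python sketch (the function name classify is my own):

```python
from itertools import product

def classify(f, n):
    """Classify an n-place truth function as the text does."""
    values = [f(*v) for v in product([True, False], repeat=n)]
    if all(values):
        return "tautology"       # T on every row
    if not any(values):
        return "inconsistency"   # F on every row
    return "contingency"         # both T and F occur

assert classify(lambda p: p or not p, 1) == "tautology"        # ExM
assert classify(lambda p: p and not p, 1) == "inconsistency"   # P & -P
assert classify(lambda p, q: p and q, 2) == "contingency"
```

A wff is consistent, in the text's sense, exactly when classify does not return "inconsistency".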

5.4 Relation of Syntax to Semantics

A logical system is said to be sound iﬀ every theorem that it can generate is a tautology. A

logical system is said to be complete iﬀ every tautology can be proven as a theorem. One

of the most important things done in metalogic (the logic of logic) is to investigate whether

various possible logical systems are sound or complete.

We do not attempt to prove the soundness or completeness of propositional logic in

this course. However, it can be shown that propositional logic is, in fact, both sound and

complete. (See Lemmon [17], Sect. 2.4 and 2.5, for one way of doing this.) This fact has

practical signiﬁcance for us because it tells us that there is a nice one-to-one correspondence

between theorems and tautologies: every theorem we can prove using natural deduction will

turn out to be a tautology if we do truth table analysis on it, while every tautology can be

proven as a theorem.

It may seem too “obvious” to bear mentioning that propositional logic is both complete

and sound. In fact, it turns out that many important systems of logic that are more powerful

than propositional logic (in the sense that they can express and derive results about logical

or mathematical structure that is invisible to propositional logic) are incomplete. In 1930,

Kurt Gödel shocked the mathematical world by proving that any system of logic that

contains enough machinery to deﬁne the natural numbers is, if it is consistent, incomplete.

(See [10, 19].) This tells us that there is no ﬁnite set of formal rules from which one can

derive all the possible truths about the natural numbers; or to put it another way, there

could never be a computer program that could by itself generate all possible truths about

the numbers.

5.5 Semantic Classiﬁcation of Pairs of Wﬀs

There are useful semantic properties that can be identiﬁed for pairs of wﬀs.

5.5.1 Implication

Proposition P is said to imply Q iﬀ P → Q is a tautology.


5.5.2 Equivalence

One proposition P is equivalent to another, Q, iﬀ P ↔ Q is a tautology.

5.5.3 Contrariety

A proposition P is contrary to Q if they are never both T. They could, however, both be

false. Two propositions P and Q are contraries iﬀ −(P &Q) is a tautology.

Examples: The propositions P &Q and P & −Q are contraries, since they are both F

when P is F, but never both T. It is easily seen that the negation of their conjunction is a

tautology.

5.5.4 Subcontrariety

Two propositions P and Q are subcontraries if they are never both false. They could,

however, both be true. Formally, P and Q are subcontrary iﬀ P ∨ Q is a tautology. The

concept of subcontrariety is not widely used these days, though it was important in medieval

logic.

Examples: The propositions P ∨ Q and P ∨ −Q are subcontraries since one of them

is T even if the other is F; however, they are both T if P is T. It is also easily seen that

their disjunction is a tautology.

5.5.5 Inconsistency

Two propositions P and Q are inconsistent iﬀ they never agree in truth value. This is the

same thing as saying that −(P ↔ Q) is a tautology.

5.5.6 Consistency

Two propositions P and Q are consistent iﬀ their conjunction P &Q is consistent.

5.5.7 Independence

Two propositions are said to be independent iﬀ none of the above relations apply to them.

The simplest example of independent propositions are two atomic propositions, say P and

Q.
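Several of these pairwise relations can be tested mechanically. A Python sketch covering contrariety and subcontrariety, using the text's own examples:

```python
from itertools import product

def rows(n):
    return list(product([True, False], repeat=n))

def contraries(f, g, n):
    """Contraries: never both true on any row."""
    return all(not (f(*v) and g(*v)) for v in rows(n))

def subcontraries(f, g, n):
    """Subcontraries: never both false on any row."""
    return all(f(*v) or g(*v) for v in rows(n))

# P & Q and P & -Q are contraries; P v Q and P v -Q are subcontraries.
assert contraries(lambda p, q: p and q, lambda p, q: p and not q, 2)
assert subcontraries(lambda p, q: p or q, lambda p, q: p or not q, 2)
```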

5.6 Relations Between Semantic Properties of Wﬀs

It is possible to use natural deduction to establish relationships between the semantic prop-

erties of pairs of wﬀs.

Example: Show that if P and Q are contrary then one implies the negation of the other.

The given propositions are contrary if and only if −(P &Q) is a tautology. This suggests

that to test the implications of contrariety, we assume that −(P &Q) is true and carry out

a simple natural deduction:


1 (1) −(P &Q) A

1 (2) −P ∨ −Q 1 deM

1 (3) P → −Q 2 Imp

1 (4) −Q ∨ −P 2 Comm

1 (5) Q → −P 4 Imp

Lines 3 and 5 express the required properties of P and Q.

5.7 Semantic Classiﬁcation of Sets of Wﬀs

Some of the above properties of pairs of wﬀs can be usefully extended to sets of any number

of wﬀs.

5.7.1 Equivalence

A set of wﬀs is said to be equivalent iﬀ all its members are pair-wise equivalent.

5.7.2 Inconsistency

A set of wﬀs is said to be inconsistent iﬀ its conjunction is always false; or, equivalently, iﬀ

the negation of its conjunction is a tautology. E.g.: {P, P → Q, −Q}.

5.7.3 Consistency

A set of wﬀs is said to be consistent iﬀ it is not inconsistent; this is the same thing as saying

that its conjunction is consistent; i.e., either a tautology or a contingency.

5.7.4 Independence

A set of two or more propositions are independent of each other iﬀ they are independent

pairwise.

5.8 Testing Validity of Sequents Using Truth Tables

Suppose you are asked to find a proof of a sequent such as P, Q, R ⊢ S. If you can't come

up with a proof, that doesn’t mean that the sequent is invalid; it may simply mean that

you are unable to think of a route from premisses to conclusion. If we can ﬁnd a proof that

makes correct use of the rules, then the sequent is valid; but if we can’t ﬁnd a proof, that

doesn’t mean that it is not valid. We need a general procedure for testing the validity of

sequents.

One of the most important uses of truth tables is in testing the validity of argument

forms. A sequent is said to be valid if and only if the conclusion is never false when the

premisses are true. To check the validity of a sequent, therefore, one thing we can do is set

up the truth table for the sequent, with separate columns for premisses and conclusion, and

see if there is any line in the truth table in which the premisses are all T and the conclusion

is F. If this is not the case, then the sequent is valid.

Here is an example of a valid sequent (MP):


P P → Q Q

T T T T T

T T F F F

F F T T T

F F T F F

(I use a double vertical line to separate the conclusion from the premisses.) In each line

in which the conclusion is F, at least one of the premisses is F as well. This is enough to

show that the sequent is valid.

By contrast, here is the table for Aﬃrming the Consequent:

Q P → Q P

T T T T T

F T F F T

T F T T F

F F T F F

In the third line, both premisses are T while the conclusion is F. This proves that

Aﬃrming the Consequent is invalid. You need only ﬁnd one line in which this happens in

order to show invalidity. Note very carefully that we cannot conclude that the argument

is valid just because there are some lines (such as the ﬁrst) in which everything comes up

true. That is just like an invalid computer programme that occasionally outputs the correct

answer. The program is valid only if it never makes a mistake, not if it is sometimes but

not always right.
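The validity test just described is straightforward to mechanize: scan every row, and reject the sequent if some row makes all the premisses T and the conclusion F. A Python sketch (the helper names are my own):

```python
from itertools import product

def implies(p, q):
    """Material implication: -P v Q."""
    return (not p) or q

def valid(premisses, conclusion, n):
    """A sequent is valid iff no row makes all premisses T and the conclusion F."""
    return all(conclusion(*v)
               for v in product([True, False], repeat=n)
               if all(pr(*v) for pr in premisses))

# Modus ponens: P, P -> Q entail Q.
assert valid([lambda p, q: p, implies], lambda p, q: q, 2)

# Affirming the Consequent: Q, P -> Q do NOT entail P
# (the row P = F, Q = T is the counterexample found in the table above).
assert not valid([lambda p, q: q, implies], lambda p, q: p, 2)
```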

Here is an example that may seem counter-intuitive. Is P, −P ⊢ Q valid or invalid?

This is just ExF, which we already know how to prove using RAA. But the truth table is

instructive:

P   −P ‖ Q
T   F T ‖ T
T   F T ‖ F
F   T F ‖ T
F   T F ‖ F

The sequent is valid since there is no line in which the conclusion is false and all the

premisses are T. But the sequent can never be sound, meaning valid with all premisses true.

Suppose now we have a sequent in which the conclusion is a tautology, such as P → Q ⊢ P → P. Here is the table:

P → Q ‖ P → P
T T T ‖ T T T
T F F ‖ T T T
F T T ‖ F T F
F T F ‖ F T F

As before, there is no line with the premiss T and the conclusion F, and the sequent is

valid. But we need hardly have done the whole table to see this, since the conclusion is a


tautology and must always come out T.

We can summarize this as follows: suppose F is a contradictory proposition, T is a

tautology, and X is any proposition whatsoever, true or false. Then any sequents of the

following form are valid:

F ⊢ X (5.1)

X ⊢ T (5.2)

5.8.1 Deduction Theorem from the Semantic Viewpoint

From the semantic viewpoint, the Deduction Metatheorem can be expressed as follows:

The sequent P1, P2, . . . , Pn ⊢ Q is valid iff the conditional (P1 & P2 & . . . & Pn) → Q is a tautology.

Our method for checking for validity is designed to take advantage of this fact.
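As a quick illustrative check of this fact (again in Python, as a sketch rather than part of the text's official machinery), we can compare a direct validity test of P, P → Q ⊢ Q with a tautology test of the corresponding conditional (P & (P → Q)) → Q; the two tests agree, just as the metatheorem says.

```python
from itertools import product

def valid(premisses, conclusion, n):
    """No line with all premisses true and the conclusion false."""
    return all(conclusion(*line)
               for line in product([True, False], repeat=n)
               if all(p(*line) for p in premisses))

def tautology(wff, n):
    """True on every line of the table."""
    return all(wff(*line) for line in product([True, False], repeat=n))

prem = [lambda p, q: p, lambda p, q: (not p) or q]   # P, P -> Q
concl = lambda p, q: q                               # Q
# (P & (P -> Q)) -> Q, with -> encoded as (not A) or B:
conditional = lambda p, q: (not (p and ((not p) or q))) or q

print(valid(prem, concl, 2))       # True
print(tautology(conditional, 2))   # True: the two tests agree
```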

5.8.2 Short-cuts

The truth-table check of validity is what logicians call an eﬀective procedure, meaning that

it is always guaranteed to produce a deﬁnite answer. However, as noted, the size of a truth

table increases very rapidly as a function of the number of propositional variables involved;

the table for a problem with 6 letters, for instance, will have 2⁶ = 64 lines. It would be

very good, therefore, to have short-cuts that would make it unnecessary for us to write out

every line of the table in order to check the validity of a sequent.

The key idea of the short-cut method is to work backwards from the conclusion to the

premisses. Recall that to show that a sequent is valid we have to show that there are no

lines in the truth table where all the premisses are T and the conclusion F; conversely,

to show that a sequent is invalid it is suﬃcient to ﬁnd just one line in its table with all

premisses T and the conclusion F. What we do, therefore, is set up the truth table for the

sequent in the manner suggested above, with the conclusion set oﬀ to the right. Then set

the conclusion to be F, and see whether this forces any or all of the premisses to be T. Some

sequents can be checked in one line; some require more than one.

There can be no guarantee that a short-cut can be found for every problem; some may

be irreducibly complex and will require a full or nearly full truth table for their analysis.

But it is always a good idea to see if a short-cut can be found before starting to write out

a huge truth table.

Here are a few examples to illustrate the method.

Example: show that MP is valid by the short-cut method.

P   P → Q ‖ Q
T   T F F ‖ F
F   F T F ‖ F

The only lines that need concern us are the two lines in which the conclusion is F.

However, we immediately see that in both of these lines one of the premisses is F; the

sequent is therefore valid. Writing what I have shown here constitutes a complete solution

to the problem of testing the validity of MP; it is not necessary to write out the whole table.


Example: show that Aﬃrming the Consequent is invalid by the short-cut method.

Q   P → Q ‖ P
T   F T T ‖ F

Set the purported conclusion P to be F; this forces the premiss P → Q to be T

regardless of the value of Q; and this means that we can choose the other premiss Q to

be T. Therefore, there is a line with all premisses T and the conclusion F; therefore, the

sequent is invalid.

Example: let’s check Constructive Dilemma, which is more complicated. This sequent

has 4 propositional variables, and so its full table would have 2⁴ = 16 lines.

P ∨ Q   P → R   Q → S ‖ R ∨ S
F F F   F T F   F T F ‖ F F F

This can be checked with just one line. Set the conclusion to be F. That forces both

R and S to be F. The values of P and Q remain to be determined. But any possible

invalidating line has to have P → R and Q → S come out T, which means that P and

Q would have to be F. However, this makes the ﬁrst premiss P ∨ Q F as well. Hence, there

is no invalidating line, and the argument is valid.

There will sometimes be sequents in which you have to check more than one line.

Example: check (−P ∨ −Q) → R, −R ⊢ P &Q.

(−P ∨ −Q) → R   −R ‖ P & Q
 F T T T F F F   T F ‖ T F F
 T F T F T F F   T F ‖ F F T
 T F T T F F F   T F ‖ F F F

It is only necessary to consider lines on which −R is T, which makes R false. However,

there are three ways in which P &Q can be false. Plugging R = F into the ﬁrst premiss, we

see that all possible combinations of the truth values of P and Q which make the conclusion

F also make the ﬁrst premiss F. Therefore, the argument is valid.

5.9 Using Short-Cuts in Testing for Other Semantic Proper-

ties

Short-cut truth tables can be used sometimes to test for semantic properties other than

validity as well.

It is impossible to run through all of the possible situations that might arise. Here is

an example which will suggest the sort of approaches that one could take.

Example: Determine whether (P &−P) → (P → (Q∨ R)) is contingent, contradictory,

or tautologous.

The antecedent (P & − P) is always F, and therefore by the truth table for → the

formula is always T — i.e., it is a tautology. The table need only show enough information

to express this fact:


(P & −P) → (P → (Q ∨ R))
   F      T

The general approach to this sort of problem is to see what it would take to make the

wﬀ T and F. If it can be both T and F, then it is contingent; if it can only be F, it is

contradictory; and if it can only be T, it is a tautology.
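This three-way test is also easy to mechanize. Here is a small Python sketch (my own illustration, not official notation) that classifies a wff by collecting the truth values it takes across its whole table:

```python
from itertools import product

def classify(wff, num_vars):
    """Tautologous if only T appears in the table, contradictory if
    only F appears, contingent if both appear."""
    seen = {wff(*line) for line in product([True, False], repeat=num_vars)}
    if seen == {True}:
        return "tautologous"
    if seen == {False}:
        return "contradictory"
    return "contingent"

# (P & -P) -> (P -> (Q v R)): the antecedent is always F,
# so the whole conditional is always T.
print(classify(lambda p, q, r:
               (not (p and not p)) or ((not p) or (q or r)), 3))
# tautologous
print(classify(lambda p, q: p and not p, 2))    # contradictory
print(classify(lambda p, q: (not p) or q, 2))   # contingent
```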

5.10 Evaluating Long Conjunctions or Disjunctions

By a “long” conjunction or disjunction I simply mean one containing more than two formu-

las. We accept conjunctions and disjunctions of more than two formulas as wﬀs. However,

truth table analysis as we have set it up here (and as it is usually done in most elementary

texts) deals only with & and ∨ as binary connectives. In order to evaluate a formula

containing a long conjunction or disjunction, therefore, we have to bracket oﬀ the terms

two at a time, and arbitrarily pick one of the connectives as the main connective. In fact,

which one we pick makes no diﬀerence to the outcome, although we cannot prove that fact

in all generality in this course.

Example: Evaluate P ∨ Q ∨ R when P and Q are T and R is F, and show that we get

the same result regardless of what order we associate the terms.

P ∨ (Q ∨ R)     (P ∨ Q) ∨ R
T T  T T F       T T T  T F

(I’ve only shown two of the possible ways to associate this formula; the others can

be found by applying commutation.) In the left-hand formula, the ﬁrst ∨ is the main

connective; in the right-hand formula, the last ∨ is the main connective.

In this course you will rarely if ever have to evaluate a conjunction or disjunction with

more than two terms, so this is not something that you will frequently need to worry about.
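We cannot prove the general irrelevance of grouping in this course, but for any particular number of letters a machine check is easy. This Python snippet (illustrative only) confirms that the two bracketings of P ∨ Q ∨ R agree on every line of the table:

```python
from itertools import product

# Both ways of bracketing a three-way disjunction, line by line:
for p, q, r in product([True, False], repeat=3):
    assert (p or (q or r)) == ((p or q) or r)

print("both groupings of P v Q v R agree on all 8 lines")
```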

5.11 Sheﬀer and Nicod Strokes

At the beginning of this chapter I mentioned that there are other possible truth tables for

two propositional variables than the familiar ones we use in elementary logic. The latter are

chosen (i) because they are expressively complete, and (ii) because they come reasonably

close to capturing the sense of the logical connectives that we use in ordinary language.

It is, however, possible to do the logic of propositions with only one symbol, and for this

purpose one can use either the Sheffer or the Nicod stroke. As the practice examples at

the end of the chapter will convince you, these connectives are almost entirely useless for

practical calculations in logic. But their existence does tell us something about how logic

works, and they have important applications in circuit theory.

The Sheﬀer stroke is deﬁned as “not both P and Q”, and has the following truth table:


P | Q
T F T
T T F
F T T
F T F

In circuit theory, this is called a NAND gate (“not-and”).

The Nicod stroke is deﬁned as “neither P nor Q,” and has the following table:

P ↓ Q
T F T
T F F
F F T
F T F

In circuit theory, this is called a NOR gate (“not-or”).
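To see why a single stroke can suffice, it helps to rebuild the familiar connectives out of NAND alone; computer science students will recognize this as the standard fact that NAND gates are universal. The Python below is my own sketch: the definitions of −, &, and ∨ in terms of | are the usual ones, and the loop confirms them on every line of the table.

```python
from itertools import product

def nand(p, q):          # Sheffer stroke: "not both P and Q"
    return not (p and q)

def nor(p, q):           # Nicod stroke: "neither P nor Q"
    return not (p or q)

# The usual connectives rebuilt from NAND alone:
neg  = lambda p:    nand(p, p)
conj = lambda p, q: nand(nand(p, q), nand(p, q))
disj = lambda p, q: nand(nand(p, p), nand(q, q))

for p, q in product([True, False], repeat=2):
    assert neg(p)     == (not p)
    assert conj(p, q) == (p and q)
    assert disj(p, q) == (p or q)
print("NAND definitions agree with -, &, and v on every line")
```

An analogous set of definitions works for NOR, which is one way to do the practice examples at the end of the chapter.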


5.12 Review Exercises on Chapter 5

A

1. Write the complete truth table for each of the following wﬀs:

(a) −(P ∨ Q) & −(P &Q)

(b) −(−(P → Q) & −(Q → P))

(c) −(P &(Q&R))

(d) P → (Q → R)

2. Evaluate the following wﬀs, when P = T and Q = F.

(a) −(P → Q)

(b) −(P ↔ Q)

(c) (−P ↔ Q)

(d) (−P & −Q) ∨ (P &Q)

3. Use truth tables to characterize the following wﬀs as tautologous, inconsistent, or

contingent.

(a) P → P

(b) −P → P

(c) P → (−P → P)

(d) (P ∨ −P) → (P & −P)

4. Check the following sequents for validity. Use short-cuts where they are suitable.

(a) P ∨ Q, −Q ⊢ P (DS)

(b) P → Q, −Q ⊢ −P (MT)

(c) P → Q ⊢ P → (P &Q) (Abs)

(d) P → Q, −P ⊢ −Q (Denying the Antecedent)

B

1. Use truth tables to classify the following wﬀs as either tautologous, inconsistent, or

contingent:

(a) P ∨ (Q&R)

(b) P → (R&S)

(c) P → (Q → P)

(d) (P → R) → (P → (P &R))

(e) (P → (Q → P)) → R

(f) (P ↔ Q) ↔ (Q ↔ −P)

(g) P → (R& −R)


(h) (P → R) → −P

(i) (P → (Q&R)) &(P → (Q& −R))

(j) −(P → Q) & −(Q → P)

2. Use truth tables to check the following sequents for validity:

(a) P → Q, Q → R ⊢ P → R (HS)

(b) P ↔ Q, Q ↔ R ⊢ P ↔ R

(c) P → Q, R → Q ⊢ P → R

(d) P → Q ⊢ −Q → P

(e) P → Q, R → S, −Q ∨ −S ⊢ −P ∨ −R (DD)

(f) P ∨ Q, P → R, Q → R ⊢ R (Conv)

(g) P → Q ⊢ P → (P ∨ Q)

(h) P → Q ⊢ P → (P &Q)

(i) P ∨ R, P → (R ∨ S), −S ⊢ −P

(j) P → (M &N), M → R, N → S, −(R ∨ S) ⊢ −P ∨ M

3. The following questions deal with the analysis of semantic properties.

(a) Show that if P and Q are contrary then −P and −Q are subcontrary.

(b) Show that P and Q are equivalent if and only if P and −Q are contradictory.

(c) Show that −P and −Q are equivalent if and only if P and Q are equivalent.

(d) Show that two wﬀs are inconsistent iﬀ they are both contraries and subcontraries.

(e) If P, Q ⊢ R is a valid sequent, is it possible for R to be false? Explain your

answer.

(f) If P, Q ⊢ R is invalid, is it impossible for P, Q, and R to all be true? Explain

your answer.

C

1. (a) Show how to write −P, P ∨ Q, P &Q, and P → Q in terms of the Sheﬀer stroke.

(b) Do the same for the Nicod stroke.

(c) Check the following sequent for validity by constructing its full truth table:

(P|P)|(Q|Q), P|(R|R) ⊢ R.

(d) Translate the sequent in part (c) into ordinary propositional notation.

Chapter 6

Propositional Semantics: Truth Trees

Truth trees are one of the most eﬃcient ways of checking the semantic properties of propo-

sitional formulas. In particular, they give a very easy way of checking the validity of sequents.

The basic idea of truth trees is that they give a graphic way of displaying whether or

not a set of formulas is inconsistent.

6.1 Branches and Stacks

Truth trees are graphic objects built up out of combinations of two basic structures, stacks

and branches. Stacks show the decomposition of conjunctions into their separate conjuncts,

and branches show the decomposition of disjunctions into their separate disjuncts.

P & Q
  P
  Q

Basic Stack

 P ∨ Q
 /   \
P     Q

Basic Branch

We place a checkmark beside every formula that has been decomposed.

Because any number of propositions can be conjoined, a stack can contain any number

of propositions. For instance, we could start with the formula (P &Q) & −R & (P ∨ Q), and break it down as shown in Figure 6.1:


(P & Q) & −R & (P ∨ Q)
       P & Q
        −R
         P
         Q
       P ∨ Q
       /    \
      P      Q

Figure 6.1: A Typical Stack

Similarly, because any number of propositions can be disjoined, we can have branches

with any number of stems.

 P ∨ Q ∨ R
 /   |   \
P    Q    R

Figure 6.2: A Triple Branch

However, we will rarely need branches with more than two stems in this book.

Stacks and branches can be linked together into trees of arbitrary size. For instance,

the wﬀ (P &Q) ∨ R can be decomposed as follows:

 (P & Q) ∨ R
   /      \
P & Q      R
  P
  Q

Figure 6.3: A Typical Tree

Decomposition has gone as far as it can when all molecular formulas are broken down

into atomic formulas or their negations (which are collectively called literals).

6.2 Paths

A route going from the initial set of formulas at the top down to one of the atomic formulas

at the bottom, while choosing only one side of each branch as one goes, will be called here

a path. (In some logic books these are called branches, but this invites confusion with the

sense of branch as a forked structure splitting oﬀ from a disjunction, as noted above.) As

each formula is broken down into its corresponding branch or stack, it is checked oﬀ as

shown so that we don’t forget that we have broken it down.

Here is another simple example:

 P & Q
P ∨ −Q
   P
   Q
  /  \
 P    −Q


To arrive at this we ﬁrst broke down the conjunction (P &Q) into a stack, and then

broke down the disjunction (P ∨ −Q) into a branch. It would have been perfectly valid

to do the branch ﬁrst, and then the stack. (Try it.) As a rule, though, it is usually more

eﬃcient to do as many stacks as one can ﬁrst, and then do the branches, since this will

minimize the number of branches to be drawn.

Truth trees are a kind of partially ordered set or poset. These terms will be explained

later in this book.

My use of the term path here is analogous to the way it is used in DOS and UNIX,

which possess hierarchical ﬁle structures. In the last tree above, the two paths could be

notated in a form similar to the way we note paths in UNIX. That is, the two paths in

the above tree could be written more abstractly as ((P &Q) &(P ∨ −Q))/(P &Q)/P and ((P &Q) &(P ∨ −Q))/(P &Q)/Q. However, it will not be especially useful to do

it this way, since the main purpose of truth trees is to make the implications of a stack of

propositions graphically explicit.

Notice that the path ending in −Q contains contradictory formulas, namely −Q itself

and Q. Any path that contains a contradiction is marked off with an × at the end, and

declared to be closed. Any path that is not closed is open.

If all paths on a tree are closed, then the tree itself is said to be closed. If just one path remains open after decomposition has gone as far as it can go, the tree itself is open.

Although we will usually have to break formulas all the way down to the literal level,

we can stop decomposing formulas along a path as soon as a contradiction is found. For

example, in the above tree, replace Q with (R → S). (Try it.) You will obviously get a

closed path without having to break down (R → S) into its literal components, and you

can stop working on that path at that point. In general, once a path is closed, we do not

need to go any farther along it.

What we are doing, in eﬀect, is treating all the formulas along a given path as if they

were true, and seeing whether or not this will lead to a contradiction. Testing the branches

that sprout from a disjunction is similar to vE in that we go exploring down each branch

of the disjunction, one at a time, to see where it will lead. We can start from any number

of initial formulas and break them down as far as we can. If all paths of the resulting

tree are closed, the set of formulas is inconsistent.

6.3 Stacking and Branching Rules

We need to know how to decompose other formulas besides simple conjunctions and dis-

junctions. The rules for doing this can be divided into stacking rules and branching rules.

P & Q     −−P     −(P ∨ Q)     −(P → Q)
  P        P        −P            P
  Q                 −Q           −Q

Stacking Rules

 P ∨ Q     P → Q     −(P & Q)       P ↔ Q        −(P ↔ Q)
 /   \     /   \      /    \        /   \         /    \
P     Q  −P     Q   −P      −Q     P     −P      P      −P
                                   Q     −Q     −Q       Q

Branching Rules


These rules follow from equivalence formulas (such as ⊢ (P → Q) ↔ (−P ∨ Q)) which

can be easily established by natural deduction, and checked by truth tables.
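For readers who like to see such rule systems mechanized, here is a minimal Python sketch of a tree checker (my own illustration, not part of the text's apparatus). It handles −, &, and ∨ only; a formula containing → would first be rewritten using the Implication equivalence. Stacking rules extend the current path, branching rules split it, and a path of literals closes when it contains an atom together with its negation.

```python
def is_literal(f):
    """Atoms are strings; a literal is an atom or a negated atom."""
    return isinstance(f, str) or (f[0] == 'not' and isinstance(f[1], str))

def tree_closes(formulas):
    """True iff every path of the tree for this stack closes,
    i.e. the stacked formulas are jointly inconsistent."""
    for i, f in enumerate(formulas):
        if is_literal(f):
            continue
        rest = formulas[:i] + formulas[i + 1:]
        if f[0] == 'and':                       # stacking rule
            return tree_closes(rest + [f[1], f[2]])
        if f[0] == 'or':                        # branching rule
            return (tree_closes(rest + [f[1]])
                    and tree_closes(rest + [f[2]]))
        g = f[1]                                # f is a non-literal negation
        if g[0] == 'not':                       # --P stacks to P
            return tree_closes(rest + [g[1]])
        if g[0] == 'and':                       # -(P & Q) branches
            return (tree_closes(rest + [('not', g[1])])
                    and tree_closes(rest + [('not', g[2])]))
        if g[0] == 'or':                        # -(P v Q) stacks
            return tree_closes(rest + [('not', g[1]), ('not', g[2])])
    # only literals left: closed iff some atom appears with its negation
    return any(('not', a) in formulas for a in formulas if isinstance(a, str))

# MP as a tree: stack P -> Q (rewritten as -P v Q), P, and -Q.
# The whole tree closes, so the sequent is valid.
print(tree_closes([('or', ('not', 'P'), 'Q'), 'P', ('not', 'Q')]))  # True
```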

6.3.1 Substitution of Equivalents

To further simplify the use of trees, we can also make it a rule that we can replace formulas

with their equivalents, either piecewise or as a whole, in stacks. Thus, a valid stack could

look like this:

(P → Q)

−P ∨ Q

Be sure to check off the formula you have replaced. It would also be a courtesy to our beleaguered marker to jot down, off to the side, the rule you are using (in this case Implication)

that justiﬁed the replacement.

The stacking rules cover all possible cases you could encounter in propositional logic,

so strictly speaking replacement of equivalents like this is not necessary. However, in the

spirit of the approach in this text we will allow any method that makes the work easier —

which might be the case if, say, you have forgotten a stacking or branching rule!

6.4 Checking Semantic Properties of Formulas with Trees

Although the main purpose of truth trees is to reveal inconsistency, all of the major semantic

properties of sets of propositional formulas can be checked using truth trees.

6.4.1 Tautology, Contingency, and Consistency

To check whether or not a wﬀ is a tautology, we run the tree for its negation. The formula

P is a tautology if and only if the tree for −P closes. For example, to show that (P →

(Q → P)) is a tautology, we can run the following tree:

−(P → (Q → P))
       P
  −(Q → P)
       Q
      −P
       ×

There is only one path, and since it is closed we see that the negation of the formula is

inconsistent, meaning that the formula itself is tautologous.

If we ran a tree on the formula (P → (Q → P)) itself, we would get this:

 (P → (Q → P))
   /       \
 −P      (Q → P)
          /    \
        −Q      P


All paths on this tree are open. However, this does not show that the formula is a

tautology, only that it is consistent; and we already know that if all we know about a

formula is that it is consistent then it could be either a tautology or a contingency.

A set of formulas {P1, . . . , Pn} is inconsistent if and only if (P1 & . . . & Pn) is always false, which is the same thing as saying that −(P1 & . . . & Pn) is a tautology. To check whether a set of formulas is inconsistent, stack them up and run a tree. The set is inconsistent iff the tree closes.

To check whether a formula P is contingent, we have to run trees for both P and −P.

The formula is contingent if and only if neither tree closes. For instance, the following trees

show that (P → Q) is contingent:

P → Q            −(P → Q)
 /   \               P
−P    Q             −Q

In summary: a set of formulas is inconsistent iﬀ its stack has a closed tree, and is

consistent iff its stack has an open tree. A formula is contingent iff the trees for both the formula and its negation are open.

6.4.2 Equivalence

To check whether or not two formulas P and Q are equivalent, we use the fact that P and

Q are equivalent iﬀ (P ↔ Q) is a tautology, and this is the same thing as saying that the

tree for −(P ↔ Q) must close. For instance, to demonstrate that −(P &Q) ↔ (−P ∨ −Q)

(one of de Morgan’s Laws), we run the following tree:

             −(−(P & Q) ↔ (−P ∨ −Q))
            /                        \
      −(P & Q)                      P & Q
    −(−P ∨ −Q)                   −P ∨ −Q
          P                           P
          Q                           Q
        /    \                      /    \
      −P      −Q                  −P      −Q
       ×       ×                   ×       ×

Since all paths close, the formula is a tautology. This tree looks relatively complex,

but it follows straightforwardly by the application of the branching and stacking rules given

above.

Of course, we could have also demonstrated the same result by doing two trees, one for

−(P &Q) ⊢ (−P ∨ −Q), and one for (−P ∨ −Q) ⊢ −(P &Q). (Try it.)

Checking a set of more than two formulas for equivalence can be a little more compli-

cated, and is in fact not something that one often has to do. But suppose we have to check

whether all formulas in the set ¦P

1

, P

2

, P

3

¦ are equivalent. We could check each possible

pair for equivalence. However, we should keep in mind that equivalence is transitive, that

is, P

1

↔ P

2

, P

2

↔ P

3

¬ P

1

↔ P

3

. (This sequent can be proven by an easy application of

82 CHAPTER 6. PROPOSITIONAL SEMANTICS: TRUTH TREES

HS; make sure you try it!) Therefore, to check three formulas for equivalence, we only need

to check two formulas for inconsistency: −(P

1

↔ P

2

) and −(P

2

↔ P

3

).

In many cases the formulas will be simple enough in structure that it will be obvious

whether some pair of them is not equivalent. If that is the case, you are done, since to show that a whole set of formulas is not equivalent it is sufficient to show that some two members of the set are inequivalent.

6.5 Checking Validity of Sequents

The most important application of truth trees is in checking the validity of sequents.

A sequent P1, . . . , Pn ⊢ Q is valid if and only if (P1 & . . . & Pn & −Q) is inconsistent. This

simply expresses the notion of validity, which is that if the argument (or the sequent that

formally represents its propositional structure) is valid, then the premisses cannot lead to

the negation of the conclusion.

We can check the validity of any propositional sequent P1, . . . , Pn ⊢ Q simply by making

a stack consisting of the premisses and the negation of the conclusion, and breaking down

all of the formulas to see if we arrive at closed paths. The sequent is valid if and only if the

whole tree is closed. In other words, if even one path is open, the sequent is invalid; since

that would express the fact that it is possible to get from the premisses to the negation of

the purported conclusion without contradiction.

To check the validity of an interderivability result such as (P → Q) ⊣⊢ (−P ∨ Q) we

have to run two trees, one in each direction.

Here are some examples of trees for familiar valid sequents:

P → Q
   P
  −Q
  /  \
−P    Q
 ×    ×

MP

   P → Q
   Q → R
 −(P → R)
     P
    −R
   /    \
 −Q      R
 /  \    ×
−P    Q
 ×    ×

HS

These sequents are shown to be valid by the fact that all paths close.

Don’t forget that the order in which we choose to break down the formulas in our initial

stack does not make any diﬀerence to the validity of the result. However, one often arrives

at a simpler tree structure by doing as much stacking as possible ﬁrst. Also, don’t forget

that as shown in the tree for HS, once a path is closed, no further breakdown is needed.

On the next page, for comparison, are the trees for some invalid sequents:


P → Q
   Q
  −P
  /  \
−P    Q

   P ∨ Q
   Q ∨ R
    −R
   /     \
  P       Q
 / \     / \
Q   R   Q   R
    ×       ×

In the first example (which is just Affirming the Consequent), neither path closes. In

the second example, two paths close, but two remain open, showing that the sequent is

invalid. Also, in the second example, you will see that if you decompose Q ∨ R ﬁrst you

will get a slightly simpler tree structure.

To check whether or not a formula is a theorem is (by the soundness and completeness

of propositional logic) the same thing as checking whether it is a tautology. Thus, for

instance, the tree above that checks whether (P → (Q → P)) is a tautology shows that

⊢ (P → (Q → P)).

6.6 Trees or Tables?

Truth trees are usually quicker and simpler to use than truth tables, especially when eval-

uating very complex arguments with several premisses. But there are cases in which truth

tables are simpler to use. For example, let us check the sequent P → Q, Q → R ¬ R → P.

It should be apparent that this sequent is invalid. To demonstrate this fact by means of a

truth tree, we would have to construct the following tree:

   P → Q
   Q → R
 −(R → P)
     R
    −P
   /     \
 −Q       R
 /  \    /  \
−P    Q −P    Q

Since the tree is open, the sequent is invalid. Strictly speaking, one could stop drawing

the tree as soon as even one open path was revealed since that would be suﬃcient to

demonstrate invalidity, but it is still a moderately complex tree.

We could also demonstrate that the above sequent is invalid by using a short-cut truth

table, as follows:


P → Q   Q → R ‖ R → P
F T       T T ‖ T F F

While the full table for this sequent has eight lines, only one partial line is needed to

demonstrate invalidity: assume the conclusion is false, and work backwards. We can see

that it is possible to make both the premisses T when the conclusion is F, regardless of

the value of Q. This is clearly less work than the tree. The lesson is, therefore, that both

trees and tables should be in the logician’s toolkit, and with experience the logician will get

to know which is likely to be easier to try. If there are no more than (say) three or four

premisses, and if the tree will involve a lot of branching, it may be easier to use a table.

6.7 Exercises

B

1. Use truth trees to check the following sequents for validity:

(a) P → Q, −P ⊢ −Q (Denying the Antecedent)

(b) P ∨ Q, −P ⊢ Q (Disjunctive Syllogism)

(c) P ∨ Q, P → R, Q → S ⊢ R ∨ S (Constructive Dilemma)

(d) P → R, Q → S, −R ∨ −S ⊢ −P ∨ −Q (Destructive Dilemma)

(e) P → Q ⊢ P → (P &Q) (Absorption)

(f) ⊢ (P → Q) ∨ (Q → R)

(g) P ⊢ (P → Q)

(h) ⊢ −(P ∨ Q) ↔ (−P & −Q) (de Morgan)

(i) ⊢ P ∨ −P (Excluded Middle)

(j) P → (Q&R), −Q ⊢ −P

6.8 Questions for Further Research

1. Set up a general rule for checking the equivalence of a set of any number of wffs.

Chapter 7

Predicate Logic: Symbolization and Translation

7.1 Why We Need Predicate Logic

Logic can be compared to a computer game in which there is a hierarchy of levels, where

once you have conquered a level you move onto the next higher level. In this chapter we

move to a higher level. It builds upon what we have learned before but gives us tools with

which to study the validity of a very large class of widely-used arguments that so far we do

not know how to analyze.

Consider the following elementary arguments:

All philosophers are absent-minded.

Kent is a philosopher.

∴ Kent is absent-minded.

Intuitively, this argument seems valid. (Whether or not it is sound is another question.

I have, in my roughly twenty-year career, known two or three philosophers who were not

absent-minded, so I think that in fairness to the profession I would have to say that the ﬁrst

premiss is false.) However, we have no way of representing the validity of this argument

using the tools of propositional logic we have learned so far, although it does seem as if it

might have something to do with MP.

Now consider a slight generalization of this argument:

All philosophers are absent-minded.

There are some philosophers.

∴ There are some who are absent-minded.

This again seems obviously valid, but, again, we have no tools with which to express its

validity in any precise way. Arguments like these seem to have a structure suggesting

some sort of generalization of propositional structures with which we are already familiar.

However, all that propositional logic as such could do for us would be to translate the

sentences in these arguments into logically independent symbols, and so both arguments

would have the generally invalid form P, Q ⊢ R.

To represent the logical structure of these arguments we therefore need the following

tools:


1. A way to symbolize properties of individual persons or things (so that we can say

something like “Kent is a philosopher” or “Mars is a planet”);

2. Ways to express the logic of quantiﬁers such as “all” and “some.”

7.2 Predicate Logic Notation

In the following I introduce the main types of logical structures that occur in predicate

translation.

7.2.1 Universe of Discourse

Predicate notation is always assumed to refer to a domain or universe of discourse U. This

could be the set of all entities whatsoever, the set of all people, the set of all students in

Logic 2003A, the set of all natural numbers, or any other well-deﬁned collection of objects,

persons, or entities. Sometimes the universe of discourse will be understood implicitly;

sometimes it will need to be speciﬁed explicitly; and sometimes it does not need to be

mentioned at all. Usually we will assume that U is not empty.

7.2.2 Terms and Variables

We ﬁrst need to introduce symbols to denote elements of the universe of discourse.

Proper names are letters which serve the same function as proper names such as “Susan”

or “the planet Mars” in a natural language; that is, they denote individual entities within

|. By convention, proper names are usually written using lower-case letters chosen from

the beginning or middle of the alphabet. For instance, we could set k := “Kent” and

m := “the planet Mars”. If there are a large number of proper names, it may be useful to

subscript them, as n1, n2, . . ..

Arbitrary constants, also called arbitrary names, are letters that behave much like

proper names, or if you like as if they were proper names; but they are supposed to refer

to individuals picked from U at random, and about which we know nothing to distinguish them from any other member of U. By contrast, if we use a letter as a proper name, it is assumed that we know something that distinguishes the individual to which the letter refers from all other members of U. Letters for arbitrary constants are, like proper names, usually taken from the beginning to the middle of the alphabet, and, again like proper names, can be subscripted as a1, a2, . . . if it is convenient to do so. Arbitrary constants

are very important in predicate natural deduction, and, as we shall see in the next chapter,

their use requires some care.

Proper names and arbitrary constants are collectively called terms.

Variables are letters which serve as place-holders where proper names or arbitrary

constants can be inserted in a formula. They serve a function very similar to that served by

variables in algebra. By convention, variables are usually written using lower-case letters

from the end of the alphabet, such as x, y, z, t . . .. As with constants and names, they can

be subscripted if it is convenient to do so. For instance, we could have a set of variables of

the form x1, x2, . . .. A variable is not a term, but rather a place-holder where a term can

be inserted.


7.2.3 Monadic Predicates

Suppose we want to translate “Mars is a planet” into predicate notation. We now know

that we can treat “Mars” as a proper name, and choose some letter for it such as, say, m.

We also need to know how to say that a term has a property, such as the property of being a

planet. To say things like this, we introduce predicate letters. These will usually be written

as upper-case letters like A, B, . . . , F, G, . . . , P and so forth; and they can, if necessary, be

subscripted as well. We deﬁne predicate letters by expressions such as

Px := “x is a planet”.

The terminology for properties that apply to single terms is diverse: they can be called

predicates, monadic predicates, qualities, properties, attributes, adjectives, unary relations,

or 1-place relations.

7.2.4 Relations

Predicates that apply to two or more terms, also called relations, polyadic relations, n-ary

relations, or n-place relations, can easily be accommodated in the notation of predicate

calculus.

Here are some examples:

Txy := “x is taller than y”

Bxyz := “y is between x and z”

Proper names or arbitrary constants can be substituted for variables in relation-expressions.

For instance, if we take (as before) k := “Kent”, and p := “Paul”, then we can say Tpk,

meaning “Paul is taller than Kent”.

Note that the order in which the variables are expressed in a relation is usually crucial to

the meaning of the proposition. For example, while Tpk happens to be true, the proposition

Tkp is false.
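For students who come to logic from computer science, the point about argument order can be checked mechanically by modelling a relation as a set of ordered pairs. Here is a small Python sketch; the names and the extension of T are invented for illustration:

```python
# A two-place relation modelled as a set of ordered pairs; the names
# and the extension of T are invented for illustration.
T = {("paul", "kent")}  # Txy := "x is taller than y"

def taller(x, y):
    """Check whether the ordered pair (x, y) is in the relation."""
    return (x, y) in T

# Argument order is crucial, just as with Tpk versus Tkp:
assert taller("paul", "kent")        # Tpk is true
assert not taller("kent", "paul")    # Tkp is false
```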

In principle the notation can allow for relations with any number of arguments, although

in this course we will rarely consider relations with more than two arguments. Predicate

logic thus allows us to express relational structure with a precision and generality that
traditional syllogistic logic could not match. This way of expressing relational concepts
seems obvious now, but its introduction was a major step in the history of logic.

There is a detailed theory of relations, which studies the properties of special classes of

relations, such as symmetric relations, equivalence relations, and so forth. We will not do

relation theory in this course (except in some C exercises); Lemmon’s Beginning Logic [17]

contains an excellent introduction to relation theory.

Alternate Notation

Some books write two-place relations Rxy as xRy, which conforms more closely to standard

usage in English. (Unless we are Yoda of Star Wars, we would be more likely to say “x is

taller than y” than “Taller is x than y.”) However, we will stick with the notation in which

the variables are to the right of the relation letter (call this Yoda-notation, if you like), since

it is more easily generalized.


7.2.5 Propositions and Propositional Functions

Expressions such as Px or Txy are called open sentences or (less correctly) propositional

functions. In themselves they have no truth value since they are, in eﬀect, incomplete

sentences. We can’t evaluate “x is a philosopher” until we know who or what x is supposed

to be. When we substitute proper names or arbitrary constants for the variables in open

sentences then we get propositions which can be presumed to be either T or F, and which

can be combined using the familiar connectives that we use in propositional logic. For

example, if Gx := “x is a game-theorist”, Px := “x is a philosopher”, k := “Kent”, and p

:= “Paul”, then we can write sentences such as Pk &Gp (“Kent is a philosopher and Paul is

a game-theorist”), Gk → −Pp (“Kent is a game-theorist only if Paul is not a philosopher”),

and so forth.

Note carefully that an expression like Txa is an open sentence, not a proposition, since it

contains one unassigned variable. Expressions like Txa or Px are often called propositional

functions, especially in older literature in analytical philosophy. Strictly speaking, this is a

misnomer, since a function is not an expression but a mapping from a domain to a range

such that every input value (or tuple of input values) from the domain maps into one and only one element of

the range. Propositional functions therefore map elements of U into the set of all possible
propositions; since propositions, in turn, map into truth values, propositional functions in
effect map elements of U into truth values. Despite this complication, it is often convenient

to refer to expressions like Px or Txy as propositional functions, just as it is convenient in

ordinary mathematics to refer to an expression such as x^2 as a function of x, even though
properly speaking it is just part of the function that takes real numbers into their squares.
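The function-like behaviour of open sentences can be made concrete by modelling predicates as Python functions. This is a sketch; the predicate extensions are invented:

```python
# Open sentences modelled as one-place Python functions; the predicate
# extensions here are invented for illustration.
P = lambda x: x in {"kent", "paul"}   # Px := "x is a philosopher"
G = lambda x: x == "paul"             # Gx := "x is a game-theorist"

# P by itself is a function, not a truth value -- like an open sentence:
assert callable(P)

# Substituting a name closes the sentence and yields a proposition,
# which can then be combined with the usual connectives:
assert P("kent") and G("paul")            # Pk & Gp
assert not (G("kent") and not P("paul"))  # -(Gk & -Pp)
```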

7.2.6 Quantiﬁers

Quantiﬁers are powerful symbolic tools that allow us to refer to collections where we do

not or cannot know all the members in advance. These could be large ﬁnite collections

where it would be impractical to name all members even if we knew them, or (especially

in mathematics) inﬁnite collections where it would be impossible to name all the members

even if we knew what they were.

The introduction of quantiﬁers (principally by Gottlob Frege) toward the end of the

19th century was one of the major innovations in the history of logic.

The Universal Quantiﬁer

We always quantify over a domain or universe of discourse U. Suppose that U is the set of
students in Logic 2003A, in the Spring Term of 2004. Define Sx := “x studies logic”. If we
had the class list, then we could express the proposition “Everyone (in U) studies logic” as

S(Lindsey) &S(Ben) & . . . &S(Craig) (7.1)

for all of the approximately 55 students in the class. A universal statement, which expresses

the concept that everything in the domain of discourse has a certain property, is therefore

equivalent to a conjunction over all members of the domain of discourse.
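This equivalence between a universal claim and a finite conjunction can be checked directly. Here is a Python sketch with a made-up three-member class standing in for U:

```python
# A universal claim over a finite domain is just a long conjunction.
# The three-member "class list" is a made-up stand-in for U.
U = ["Lindsey", "Ben", "Craig"]
S = lambda x: True   # Sx := "x studies logic" (true of everyone here)

# Writing out the conjunction S(Lindsey) & S(Ben) & S(Craig):
explicit = S("Lindsey") and S("Ben") and S("Craig")

# The same claim with a quantifier, AxSx, as a conjunction over U:
quantified = all(S(x) for x in U)
assert explicit == quantified
```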

For a large class it is quite inconvenient to write a universal claim this way, and impossible
if we do not know everyone's name, if the membership of the class is unknown or
undetermined, or if it is infinite. We can get around these difficulties by introducing the

universal quantiﬁer ∀x. . ., and write

∀xSx. (7.2)

This can be read in several ways, the most common of which are the following:

Everyone studies logic.

For all x, Sx.

For any x, Sx.

It is important to remember that the meaning of a quantiﬁcational statement is always

relative to a universe of discourse, although U does not always need to be mentioned. If we
know in advance that U is a certain logic class, then we can write “Everyone studies logic”
as in 7.2 above. If U were a larger set, such as the set of all students at a certain university,

or the set of all people, then to express the same fact — that everyone in a certain class

studies logic — we would have to write

∀x(Lx → Sx), (7.3)

where Lx := “x is in Logic 2003A”. The latter would be read “Everyone in Logic 2003A

studies logic,” or “If anyone is in Logic 2003A, then that person studies logic.”

The Existential Quantiﬁer

Suppose now we wanted to express the claim that someone in Logic 2003A is wearing blue

jeans. Let Bx := “x wears blue jeans”. If we know in advance that U is the membership of

that particular class, then we could write

B(Lindsey) ∨ B(Ben) ∨ . . . ∨ B(Craig) (7.4)

As before, if the class is large it would be quite inconvenient to write this all out, and

impossible if we do not know everyone's name, or if the membership of the class is
unknown, undetermined, or infinite. We get around these difficulties by introducing the

existential quantiﬁer ∃x. . ., and write simply

∃xBx. (7.5)

This can be read in various ways:

There exists something such that it wears blue jeans.

Someone wears blue jeans.

At least one person wears blue jeans.

Note carefully: In logic, the word “some” means nothing more than “at least one” or

(in the case of continuous quantities such as liquids) “an amount greater than zero.” The

statement “Some mammals are warm-blooded” is therefore consistent with the possibilities

either that there is only one warm-blooded mammal or that all mammals are warm-blooded

(and in fact the latter statement happens to be true, although as logicians we do not concern

ourselves with the mere facts of biology).

If U were a larger set than the membership of Logic 2003A, then we would have to

write

∃x(Lx&Bx), (7.6)

which can be read as “Someone in Logic 2003A wears blue jeans.”
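Over a finite domain an existential claim is likewise just a finite disjunction, which Python's any() captures. A sketch; the names and predicate extensions are invented:

```python
# An existential claim over a finite domain is just a long disjunction.
# Names and predicate extensions are invented for illustration.
U = ["Lindsey", "Ben", "Craig", "Dana"]          # a larger universe
L = lambda x: x in {"Lindsey", "Ben", "Craig"}   # Lx: x is in Logic 2003A
B = lambda x: x in {"Ben", "Dana"}               # Bx: x wears blue jeans

# "Someone in Logic 2003A wears blue jeans": Ex(Lx & Bx)
assert any(L(x) and B(x) for x in U)

# "Some" means only "at least one": a single witness (Ben) suffices.
assert sum(1 for x in U if L(x) and B(x)) == 1
```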


Alternate Notation

Quantiﬁers are often written in enclosing brackets, such as (∀x)Fx. Many older logic books

write the universal quantiﬁer without the ∀ symbol, so that (x)Fx would be read as “for

all x, Fx”.

A few books (such as the widely-read Metalogic by G. Hunter [11]) use so-called Stanford

notation, wherein universals are represented by a big wedge ⋀ and existentials by a big
vee ⋁. This is inspired
by the fact that universals and existentials are generalized “ands” and “ors” respectively.

Bound and Free Variables

Variables in quantificational formulas are either bound or free. Bound variables are those
that are linked to a quantifier, while free variables are those not bound by any
quantifier. If a quantificational expression contains a free variable, it is an open sentence;
it is a proposition only if all its variables are bound.

Here are some examples:

In Px, the variable x is free, and the expression is an open sentence.

In ∀xPx and ∃yTy the variables are bound, and the expressions are propositions

(which say, respectively, “Everything is P” and “Something is T”).

In ∃yBayx, a is an arbitrary constant or name and y is bound; but x is unbound,

so that the expression is an open sentence with no truth value.

Scope of Quantiﬁers

It is possible to give a formal recursive deﬁnition of a well-formed formula of predicate logic

(see Lemmon [17], for instance), but we will not bother to do so here since it is generally

obvious which formulas are properly formed and which are not.

Here are the main things to keep in mind: quantiﬁers, like negation signs, apply to the

shortest possible string that follows them. Thus, for instance, in −(Pa &Qb), the whole

expression (Pa &Qb) is negated, while in −Pa &Qb, only the sentence Pa is negated.

Similarly, in ∃xFx&Gx, the quantiﬁer binds only the x in Fx; this expression, therefore,

is an open sentence. However, in ∃x(Fx&Gx), the quantifier applies to all of (Fx&Gx).

Also, a quantiﬁer only binds variables within its scope, and variables can be used over

again if there is no ambiguity about which quantiﬁer they link with. Thus, ∃xFx&∃xGx

means exactly the same thing as ∃xFx&∃yGy.

7.2.7 Order

The predicate logic we study in this book is said to be ﬁrst-order. This means that the

variables in the quantifiers apply only to elements of the universe of discourse U. In second-

order logic we quantify over predicates and relations. This introduces further complications

that are well beyond the scope of this course.

7.2.8 Categorical Forms

Universal aﬃrmative (A): All A are B.

∀x(Ax → Bx) (7.7)


Universal negative (E): No A are B.

∀x(Ax → −Bx) (7.8)

Particular aﬃrmative (I): Some A are B.

∃x(Ax&Bx) (7.9)

Particular negative (O): Some A are not B.

∃x(Ax& −Bx) (7.10)

Note carefully that a universal statement does not involve an existence claim. That

is, ∀x(Ax → Bx) does not claim that there are any A which are B. It says merely that

if there are any As, then they are B. (Thus, we can truly say “All orcs are nefarious”

without, fortunately, there being any orcs in the real world.) By the truth table for the

material conditional →, the quantified conditional is true even if there are no As.

Such universals are said to be vacuously true.

Existential claims such as “Some animals are mammals” do assert existence. The latter

statement can also be read “There are (or exist) some animals which are mammals.”

If we speciﬁcally wanted to express the true statement that “Some but not all animals

are mammals,” then we would have to write

∃x(Ax&Mx) & −∀x(Ax → Mx), (7.11)

or equivalently

∃x(Ax&Mx) &∃x(Ax& −Mx), (7.12)

where Mx := “x is a mammal” and Ax := “x is an animal”.

A negative existential such as “Some animals are not reptiles” also asserts existence,

for it is translated as

∃x(Ax& −Rx). (7.13)
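All four categorical forms can be evaluated mechanically over a finite domain. Here is a Python sketch with an invented domain; note how a universal comes out vacuously true when nothing satisfies its antecedent:

```python
# The four categorical forms evaluated over a small invented domain.
U = ["rover", "tweety", "goldie"]
animal = lambda x: True              # Ax: everything in U is an animal
mammal = lambda x: x == "rover"      # Mx: only rover is a mammal
orc = lambda x: False                # there are no orcs in U

A_form = all(mammal(x) for x in U if animal(x))        # All A are M
E_form = all(not mammal(x) for x in U if animal(x))    # No A are M
I_form = any(animal(x) and mammal(x) for x in U)       # Some A are M
O_form = any(animal(x) and not mammal(x) for x in U)   # Some A are not M
assert (A_form, E_form, I_form, O_form) == (False, False, True, True)

# Vacuous truth: "All orcs are nefarious" holds because nothing in U
# satisfies the antecedent -- all() over an empty selection is True.
nefarious = lambda x: False
assert all(nefarious(x) for x in U if orc(x))
```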

7.2.9 Don’t Be Fooled. . . !

Translation from natural languages such as English into predicate notation requires some

skill and attention. The translations we have described here so far are not inﬂexible rules

to be followed blindly, but rather guidelines to be used with good judgement. You have to

be aware of the nuances of your language and of the intended logic of the statements you

are translating.

Here is an example where “and” should not be translated as &: “All dogs and cats

are carnivorous.” Let Mx := “x is carnivorous”. We might be tempted to translate this as

∀x((Dx&Cx) → Mx). (7.14)

But this cannot be right, since it literally says, “If anything is both a dog and a cat, then

it is carnivorous.” And that is not quite what we wanted to say. In fact, we wanted to say

“All dogs are carnivorous and all cats are carnivorous.” This can be written as

∀x(Dx → Mx) &∀x(Cx → Mx). (7.15)


In the next chapter we will learn techniques that will enable us to show that this is equivalent

to the following:

∀x((Dx ∨ Cx) → Mx). (7.16)

This can be read as “If anything is either a dog or a cat, then it is carnivorous”; and that

is what we actually wanted to say.
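The claimed equivalence of 7.15 and 7.16 can be spot-checked semantically by brute force over a small domain. A two-element domain already covers the pattern, though such a check is no substitute for a formal proof:

```python
# Brute-force semantic check that (7.15) and (7.16) agree. A two-element
# domain and all 2^6 = 64 extensions of D, C, M suffice to illustrate
# the pattern (a sketch, not a substitute for a formal proof).
from itertools import product

U = [0, 1]
checked = 0
for bits in product([False, True], repeat=3 * len(U)):
    D = dict(zip(U, bits[0:2]))   # Dx: x is a dog
    C = dict(zip(U, bits[2:4]))   # Cx: x is a cat
    M = dict(zip(U, bits[4:6]))   # Mx: x is carnivorous
    # (7.15): Ax(Dx -> Mx) & Ax(Cx -> Mx)
    f15 = all(not D[x] or M[x] for x in U) and all(not C[x] or M[x] for x in U)
    # (7.16): Ax((Dx v Cx) -> Mx)
    f16 = all(not (D[x] or C[x]) or M[x] for x in U)
    assert f15 == f16
    checked += 1
print(checked, "assignments checked: (7.15) and (7.16) always agree")
```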

7.2.10 Combinations of Properties

The basic categorical forms can be easily extended to describe situations involving combi-

nations of properties. It is impossible to anticipate all possible cases, but here are a few

typical examples that will serve as guides:

1. All philosophers at the University of Lethbridge are either brilliant or witty.

∀x((Px&Lx) → (Bx ∨ Wx)).

2. Some philosophers at the University of Lethbridge are brilliant but not witty.

∃x(Px&Lx&Bx& −Wx).

3. No philosopher at the University of Lethbridge is neither brilliant nor witty.

∀x((Px&Lx) → −−(Bx ∨ Wx))

(By Double Negation, this is of course equivalent to

∀x((Px&Lx) → (Bx ∨ Wx)).)

7.2.11 Use of “Except”

Propositions of the general form “all A except B are C” are called exceptives. Not all logic

texts agree on how they should be translated.

Suppose we want to translate into symbols the sentence “All dogs except chihuahuas

like the cold,” to borrow Lemmon’s example (see his Chapter 3, '1). Using obvious notation

for the predicates, he expresses this as

∀x((Dx& −Cx) → Lx) (7.17)

Literally, this says that if anything is a dog but not a chihuahua, then it likes the cold.

Unfortunately, this does not tell us what is the case if a dog is a chihuahua. Lemmon must

have been aware of this, of course, and so I assume that his translation reﬂected a preference

for logical minimalism — that is, translating into symbols in a way that requires as few

commitments as possible. This may not be a bad practice in general. However, most logic

texts now translate exceptives in such a way as to make the intention as explicit as possible.

For instance, Lemmon’s sentence above would now be translated as

∀x((Dx& −Cx) → Lx) &∀x((Dx&Cx) → −Lx). (7.18)

It can be shown that this is equivalent to

∀x(Dx → (Cx ↔ −Lx)). (7.19)

In words, to say that all dogs except chihuahuas like the cold is the same thing as saying

that if anything is a dog, then it is a chihuahua if and only if it does not like the cold.
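The equivalence of 7.18 and 7.19 can likewise be spot-checked by brute force over a small domain, checking every possible extension of the three predicates:

```python
# Brute-force check that (7.18) and (7.19) agree on every extension of
# D (dog), C (chihuahua), L (likes the cold) over a two-element domain.
from itertools import product

U = [0, 1]
checked = 0
for bits in product([False, True], repeat=3 * len(U)):
    D = dict(zip(U, bits[0:2]))
    C = dict(zip(U, bits[2:4]))
    L = dict(zip(U, bits[4:6]))
    # (7.18): Ax((Dx & -Cx) -> Lx) & Ax((Dx & Cx) -> -Lx)
    f18 = (all(not (D[x] and not C[x]) or L[x] for x in U) and
           all(not (D[x] and C[x]) or not L[x] for x in U))
    # (7.19): Ax(Dx -> (Cx <-> -Lx)); p <-> q is p == q for Booleans
    f19 = all(not D[x] or (C[x] == (not L[x])) for x in U)
    assert f18 == f19
    checked += 1
print(checked, "assignments checked: (7.18) and (7.19) always agree")
```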


Negative exceptives can be translated in a similar way. “No animal except the mongoose

likes to eat cobras” could be translated as

∀x(Ax → (Mx ↔ Cx)). (7.20)

Examples such as exceptives illustrate the fact that the aim of symbolization is not

to translate the natural language expression literally into symbols, but to analyze and

express the implicit logical intention of the natural language expression — if one can be

found (since in natural language we are often not very logically precise at all). In order to

translate a sentence correctly, we have to ﬁgure out its logic, not merely transliterate words

into symbols without thinking about their meanings.

7.2.12 Use of “Only”

The key to these is to recall that P → Q can be translated not only as “If P then Q,”

but also as “P only if Q” or “Only if Q, P.” So we would therefore translate “Only the

mongoose likes eating cobras” as

∀x(Cx → Mx). (7.21)

where Cx := “x likes eating cobras” and Mx := “x is a mongoose”.

7.2.13 Variants of Categorical Forms Using Relations

We can construct innumerable variations of the basic categorical forms using relations.

These will often involve multiple quantiﬁcation.

Here are a few illustrative examples, which are, again, in no way meant to exhaust all

the possibilities.

1. Every A has relation R to some B:

∀x(Ax → ∃y(By &Rxy)).

2. Any cat is smarter than any dog.

∀x∀y((Cx&Dy) → Sxy).

3. Some dogs are smarter than some people.

∃x∃y(Dx&Py &Sxy).

Note that order of quantifiers is crucial to the meaning. For instance, let U be the set of

all people, and Pxy := “x is the parent of y”. Then ∀x∃yPxy says that everyone is the

parent of someone, while ∃x∀yPxy says that at least one person is the parent of everyone

— obviously a very diﬀerent claim.
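The difference between the two quantifier orders shows up clearly in a brute-force check. In this Python sketch, the three-person domain and the parent pairs are invented purely to make the two readings come apart:

```python
# AxEyPxy versus ExAyPxy over a made-up three-person domain. The parent
# pairs are invented purely to make the two readings come apart.
U = ["ann", "bob", "cam"]
parent = {("ann", "bob"), ("bob", "cam"), ("cam", "ann")}  # Pxy pairs

# "Everyone is the parent of someone": AxEyPxy
forall_exists = all(any((x, y) in parent for y in U) for x in U)
# "Someone is the parent of everyone": ExAyPxy
exists_forall = any(all((x, y) in parent for y in U) for x in U)

assert forall_exists        # true in this domain
assert not exists_forall    # false: no one is parent of all three
```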

7.3 Exercises

In these exercises, use the following symbolizations:

Ax := “x is an aardvark”

Vx := “x is from the Transvaal”

Fx := “x is an ant”


Sx := “x likes to sleep in the sun”

Cx := “x is a cobra”

Dx := “x has a strange diet”

Tx := “x is a termite”

Lx := “x is an animal”

Mx := “x is a mongoose”

Exy := “x likes to eat y”

Txy := “x wins the Lou Marsh trophy in y”

m := “Mike Weir”

n := “2003”

1. Translate the following sentences into predicate notation:

(a) All aardvarks like eating any ants.

(b) Only mongooses like to eat cobras.

(c) All aardvarks except those from the Transvaal like to sleep in the sun.

(d) No animals except aardvarks like to eat termites.

(e) No mongoose is from the Transvaal.

(f) Mike Weir won the Lou Marsh Trophy in 2003.

2. Translate the following formulas into sensible English:

(a) ∀x((Mx&Vx) → Dx)
(b) ∃x∃y(Cx&My &Sx&Sy)
(c) ∀x((Mx ∨ Ax) → ∀y(Fy → Exy))
(d) ∀x((Cx ∨ Mx) → −Vx)

Chapter 8

Predicate Logic: Natural Deduction

Natural deduction for ﬁrst-order predicate logic consists of ordinary natural deduction plus

four special rules that allow us to take oﬀ quantiﬁers and put them back on again.

8.1 Duality Relations

Before stating the introduction and elimination rules for quantiﬁers, however, it is useful to

state the so-called Duality Relations which relate existentials and universals. Later in this

chapter we will see how formally to prove these relations, but they can be seen intuitively

to follow from the basic meanings of the quantiﬁers.

∀xFx ↔ −∃x −Fx

∃xFx ↔ −∀x −Fx

∀x −Fx ↔ −∃xFx

∃x −Fx ↔ −∀xFx

When used to justify a move in a deduction, an instance of any one of these rules

can be referred to simply as Duality. Examples will be given below, once we review the

introduction and elimination rules for quantiﬁers.
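Over a finite domain the Duality Relations reduce to De Morgan's laws applied pointwise, and they can be confirmed by brute force. A Python sketch over a three-element domain:

```python
# Brute-force verification of the four Duality Relations over a
# three-element domain: for each of the 2^3 = 8 possible extensions
# of F, every biconditional holds.
from itertools import product

U = [0, 1, 2]
checked = 0
for ext in product([False, True], repeat=len(U)):
    F = dict(zip(U, ext))
    assert all(F[x] for x in U) == (not any(not F[x] for x in U))   # AxFx <-> -Ex-Fx
    assert any(F[x] for x in U) == (not all(not F[x] for x in U))   # ExFx <-> -Ax-Fx
    assert all(not F[x] for x in U) == (not any(F[x] for x in U))   # Ax-Fx <-> -ExFx
    assert any(not F[x] for x in U) == (not all(F[x] for x in U))   # Ex-Fx <-> -AxFx
    checked += 1
print(checked, "extensions of F checked: all four dualities hold")
```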

8.2 Introduction and Elimination Rules

We will introduce the rules for the introduction and elimination of quantiﬁer symbols in

order of their diﬃculty.

8.2.1 Universal Elimination (UE)

From∀xFx we may infer Fa, where Fx is any propositional function of x (possibly including

other bound quantiﬁers) and a is any term. This rule simply expresses what is naturally

implied by a universal claim, which is that it applies to anything in the universe of discourse.

(UE is sometimes called “Universal Instantiation,” because we use it to write an instance

of a universal claim.)


Example: From the statement “All philosophers are witty” and the statement “Paul is

a philosopher”, we can validly conclude that Paul is witty:

1 (1) ∀x(Px → Wx) A

2 (2) Pp A

1 (3) Pp → Wp 1 UE

1,2 (4) Wp 2,3 MP

8.2.2 Universal Introduction (UI)

This rule allows us to move from a proposition of the form Fa, where a is an arbitrary

constant, to a universal ∀xFx. This is valid, however, only if a is genuinely arbitrary. We

can assure ourselves of that if we make sure that a was not mentioned in the premisses.

(Remember, in deductive logic we have no access to any information that is not explicitly

given in the premisses.) UI is called “Universal Generalization” in many logic texts.

Here is a valid use of UI.

Example: Symbolize the following argument, and show that it is valid: All horses are

mammals, all mammals are warm-blooded; therefore, all horses are warm-blooded.

Using obvious deﬁnitions of the predicate symbols, we can symbolize this as follows:

46 ∀x(Hx → Mx), ∀x(Mx → Wx) ⊢ ∀x(Hx → Wx)

1 (1) ∀x(Hx → Mx) A

2 (2) ∀x(Mx → Wx) A

1 (3) Ha → Ma 1 UE

2 (4) Ma → Wa 2 UE

1,2 (5) Ha → Wa 3,4 HS

1,2 (6) ∀x(Hx → Wx) 5 UI

The arbitrary constant a introduced in line 3 does not appear in the premisses. The

use of UI in moving from line 5 to 6 is therefore valid.

For those who have studied categorical logic, this sequent is the familiar syllogism in

Barbara: all A’s are B’s, all B’s are C’s; therefore, all A’s are C’s.

Restriction on UI

We cannot use UI on a term that was introduced in an assumption.

8.2.3 Fallacy Alert!

If we attempt to universalize on a premiss that contains the arbitrary constant, we risk

committing the fallacy of Hasty Generalization. For instance, from the fact that Paul is

egregiously bald, we cannot validly infer that everyone is egregiously bald:

1 (1) Bp A

1 (2) ∀yBy 1 UI (?!)

This is invalid because the term p appears in an assumption.


8.2.4 Existential Introduction (EI)

Existential Introduction, like Universal Elimination, is a very safe rule to use, in that we

do not have to worry about any special restrictions on the terms involved.

EI states that from Fa, where a is any term (proper name or arbitrary constant), we

may infer ∃xFx.

For example, from the statement that Paul is a witty philosopher, we can validly con-

clude that something is a witty philosopher (or, in other words, that there exists a witty

philosopher):

1 (1) Pp &Wp A

1 (2) ∃x(Px&Wx) 1 EI

There are no restrictions on EI.

8.2.5 Existential Elimination (EE)

The general pattern of EE is that we infer a conclusion P from an existential claim ∃xFx

(possibly together with other premisses). Just as an existentially quantiﬁed statement is a

generalization of an OR statement, Existential Elimination is a generalization of vE. Like

vE, it is a bit tricky to use, but most of the difficulties are matters of bookkeeping.
In vE we examine each possible inferential path by temporarily assuming each disjunct
in turn and seeing whether it leads to the desired conclusion; in EE, we examine a typical

inferential path deﬁned by introducing an arbitrary constant.

A typical example of the use of EE will make this clearer. Suppose we want to es-

tablish the validity of the following argument: All philosophers are wise; there are some

philosophers; therefore, something is wise. We proceed as follows:

47 ∀x(Px → Wx), ∃xPx ⊢ ∃xWx

1 (1) ∀x(Px → Wx) A

2 (2) ∃xPx A

3 (3) Pa A (EE)

1 (4) Pa → Wa 1 UE

1,3 (5) Wa 3,4 MP

1,3 (6) ∃xWx 5 EI

1,2 (7) ∃xWx 2,3,6 EE

The object is to prove the desired conclusion from the existential in line 2, in combina-

tion with the assumption at line 1. At line 3 we start a subproof that is similar in function

to the subproofs used in vE. In 3 we introduce an arbitrary constant a. It is crucial (as

we shall see) that the arbitrary constant introduced here be a new constant that has not

appeared in any earlier lines (because that would, in eﬀect, remove its arbitrariness). In

eﬀect, we are saying that we will suppose that a is the name of the entity referred to in line

2.

In line 4 we can instantiate on line 1 with UE in terms of a because UE can be applied

to any term whatsoever. We then do a little elementary propositional logic, and then at

line 6 we get the desired conclusion by using EI to go from line 5. However, we can’t stop

at this point, because line 6 still depends upon the temporary or working assumption on


line 3. We therefore have to discharge line 3. EE allows us to do this; on the next line, we

ﬁnally arrive at the conclusion dependent only on lines 1 and 2, as desired.

Note carefully the form of the justiﬁcation given at the end of line 7. The ﬁrst number,

2, is the line number of the existential statement on which we run the EE. The next number,

3, is the line number of the working assumption; this assumption is discharged at this point.

The ﬁnal number, 6, is the line number at which we arrive at the desired conclusion. This

is the general pattern of EE.

This notation may seem a little tricky, but what we are doing is actually very simple,

and is a pattern of reasoning that we often use in day-to-day discussions. Here is the same

argument cast into informal language:

Suppose we accept that all philosophers are wise and that there are some philoso-

phers. Suppose also that a is a typical philosopher. (Note that this is an as-

sumption; we are picking an arbitrary name and supposing that it is attached

to one of the philosophers whose existence is claimed in the premisses.) Then a

must be wise; therefore, someone is wise.

Restrictions on EE

There are two restrictions on the use of EE.

1. The arbitrary constant cannot appear in a premiss of the argument, or in any previous
undischarged assumption.

2. The arbitrary constant cannot appear in the conclusion of the EE subproof.

8.2.6 Fallacy Alert!

There are several types of fallacies that we can fall into if we do not obey the restrictions

on the use of the arbitrary constant.

Fallacies from Constants in a Premiss

Here is a fallacy that results from running an EE with a constant that appears in a premiss:

1 (1) ∃xFx A

2 (2) Ga A

3 (3) Fa A (EE)

2,3 (4) Fa &Ga 2,3 &I

2,3 (5) ∃x(Fx&Gx) 4 EI

1,2 (6) ∃x(Fx&Gx) (?!) 1,3,5 EE (?)

The mistake here is to use the arbitrary constant a from line 2 in the assumption in line 3.

The use of EI in going from 4 to 5 is in itself correct.

Intuitively, this “argument” says that from the fact that something is F, and that a is

G, we can conclude that the same thing is both F and G. This is as if we said that from

the fact that there exists a man, and from the fact that our friend Susan is a woman, that

Susan is both woman and man.

By contrast, here is a slightly diﬀerent sequent that is, in fact, valid:


48 ∃xFx, Ga ⊢ ∃xFx&∃xGx

1 (1) ∃xFx A

2 (2) Ga A

2 (3) ∃xGx 2 EI

1,2 (4) ∃xFx&∃xGx 1,3 &I

Here is an instance of this sequent in words: from the fact that there exists a man, and

from the fact that Susan is a woman, we can validly conclude that there exists a man and

that there exists a woman.

Fallacies from Constants in the Conclusion of the Sub-Proof

Here is a fallacious example which shows that the arbitrary constant cannot appear in the

conclusion of the EE subproof:

1 (1) ∃xFx A

2 (2) Fa A (EE)

1 (3) Fa 1,2,2 EE

1 (4) ∀xFx 3 UI

The ﬁnal move, from line 3 to 4 using UI, is valid. The mistake was to use a in the conclusion

of the subproof at line 3. This “argument” states that from the fact that something is F

we can conclude that everything is F, and this is clearly invalid.

Fallacies Combining EE with UI

The example given in the “Fallacy Alert!” for UI, involving Hasty Generalization, is obvi-

ously fallacious. But slightly more subtle fallacies are possible if we are not careful to obey

the prohibition against using UI on constants that appeared in an assumption. Consider

the following “derivation”:

1 (1) ∃xFx A

2 (2) Fa A (EE)

2 (3) ∀xFx 2 UI (?)

1 (4) ∀xFx (?!) 1,2,3 EE

From the assumption that something is F we seem again to have arrived at the conclusion

that everything is F! In fact, the use of EE at line 4 is correct. The fallacy here lies in having

universalized on the assumption in line 2. This shows that the restriction on universalizing

on constants that appear in a premiss applies to the assumptions that start a subproof as

well as the main premisses of a sequent.

8.3 Summary of Introduction and Elimination Rules for Quantifiers

Universal Elimination (UE):

From ∀xFx, we can infer Fa, where a is any term.

Restrictions: none.


Universal Introduction (UI):

From Fa, where a is an arbitrary constant, we can infer ∀xFx.

Restrictions: the arbitrary constant a cannot have been mentioned in a premiss.

Existential Introduction (EI):

From Fa, where a is a term, we can infer ∃xFx.

Restrictions: none.

Existential Elimination (EE):

We derive a conclusion P from ∃xFx and possibly other premisses, using a subproof

in which we make an assumption of the form Fa.

Restrictions:

• The arbitrary constant a cannot appear in a premiss or in any previous undischarged assumption.

• The arbitrary constant a cannot appear in the conclusion of the EE subproof.

8.4 Examples

In this section I develop a number of typical quantiﬁcational arguments to show the appli-

cation of the Introduction and Elimination Rules.

49 ∀x(Hx → Mx), ∀x(Mx → −Cx) ⊢ ∀x(Hx → −Cx)

1 (1) ∀x(Hx → Mx) A

2 (2) ∀x(Mx → −Cx) A

1 (3) Ha → Ma 1 UE

2 (4) Ma → −Ca 2 UE

1,2 (5) Ha → −Ca 3,4 HS

1,2 (6) ∀x(Hx → −Cx) 5 UI

Here is an instance of this sequent in words:

All horses are mammals;

no mammals are chitinous;

∴ no horses are chitinous.

50 ∀x((Dx ∨ Fx) → Cx), −Cb ⊢ −Db

1 (1) ∀x((Dx ∨ Fx) → Cx) A

2 (2) −Cb A

1 (3) (Db ∨ Fb) → Cb 1 UE

1,2 (4) −(Db ∨ Fb) 2,3 MT

1,2 (5) −Db & −Fb 4 deM

1,2 (6) −Db 5 &E

An instance in words:


All dogs and cats are carnivorous;

Bessy is not carnivorous;

∴ Bessy is not a dog.

51 ∀x(Rx → Vx), ∃xRx ⊢ ∃xVx
1 (1) ∀x(Rx → Vx) A
2 (2) ∃xRx A
3 (3) Ra A (EE)
1 (4) Ra → Va 1 UE
1,3 (5) Va 3,4 MP
1,3 (6) ∃xVx 5 EI
1,2 (7) ∃xVx 2,3,6 EE

Here is an instance in words:

Whoever wears red has a vivid colour sense;

someone is wearing red;

∴ someone has a vivid colour sense.

52 ∀x(Rx → (Vx&Ix)), ∃xRx ⊢ ∃xIx
1 (1) ∀x(Rx → (Vx&Ix)) A
2 (2) ∃xRx A
3 (3) Ra A (EE)
1 (4) Ra → (Va &Ia) 1 UE
1,3 (5) Va &Ia 3,4 MP
1,3 (6) Ia 5 &E
1,3 (7) ∃xIx 6 EI
1,2 (8) ∃xIx 2,3,7 EE

Here is an instance in words:

Whoever wears red has a vivid colour sense and imagination;

someone here is wearing red;

∴ someone here has imagination.

53 ∀x(Rx → (Vx&Ix)), ∀x(Ix → Gx), ∃xRx ⊢ ∃xGx
1 (1) ∀x(Rx → (Vx&Ix)) A
2 (2) ∀x(Ix → Gx) A
3 (3) ∃xRx A
4 (4) Ra A (EE)
1 (5) Ra → (Va &Ia) 1 UE
1,4 (6) Va &Ia 4,5 MP
1,4 (7) Ia 6 &E
2 (8) Ia → Ga 2 UE
1,2,4 (9) Ga 7,8 MP
1,2,4 (10) ∃xGx 9 EI
1,2,3 (11) ∃xGx 3,4,10 EE

Instance in words:

Whoever wears red has a vivid colour sense and imagination;

whoever has imagination will go far in life;

someone is wearing red;

∴ someone will go far in life.

8.5 Distribution Rules for Quantiﬁers

There are a number of sequents which govern the distribution of quantiﬁers over formulas.

Some require care in their application.

54 ∀x(Fx&Gx) ⊣⊢ ∀xFx&∀xGx

The proof of this interderivability result is fairly straightforward, and is left as an

exercise (see Review Exercises). Note, however, that the following, which is very similar, is

invalid:

∀x(Fx ∨ Gx) ⊢ ∀xFx ∨ ∀xGx

A simple example shows that this is invalid: it is true that all natural numbers are either

even or odd, but it is false that either all natural numbers are even or all natural numbers

are odd.

The converse, however, is valid:

55 ∀xFx ∨ ∀xGx ⊢ ∀x(Fx ∨ Gx)

If everything is red or everything is green, then everything is red or green. (See Review

Exercises.)

56 ∃x(Fx ∨ Gx) ⊣⊢ ∃xFx ∨ ∃xGx

(See Review Exercises.)

57 ∃x(Fx&Gx) ⊢ ∃xFx&∃xGx

(See Review Exercises.) The converse is invalid, however; from the fact that there are

some numbers which are even and some numbers which are odd, we cannot conclude that

there are numbers which are both even and odd.
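Both invalidity claims can be confirmed with the even/odd counterexample, restricted to a finite domain so that the quantifiers can be checked mechanically. A Python sketch:

```python
# The even/odd counterexamples, restricted to the naturals 0..9 so that
# the quantifiers can be checked mechanically.
U = range(10)
even = lambda n: n % 2 == 0
odd = lambda n: n % 2 == 1

# Ax(Ex v Ox) is true: every natural number is even or odd...
assert all(even(n) or odd(n) for n in U)
# ...but AxEx v AxOx is false: not all are even, and not all are odd.
assert not (all(even(n) for n in U) or all(odd(n) for n in U))

# ExEx & ExOx is true: some numbers are even and some are odd...
assert any(even(n) for n in U) and any(odd(n) for n in U)
# ...but Ex(Ex & Ox) is false: no number is both even and odd.
assert not any(even(n) and odd(n) for n in U)
```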

8.6 Proofs of Duality Rules

The Duality Rules make intuitive sense from the meaning of quantiﬁcational symbols. But

they can be proven using the introduction and elimination rules for predicate logic. I will

prove the ﬁrst one here; the remaining proofs are covered in the Review Exercises. Some


of these derivations, like many proofs of propositions that seem to be “obvious,” are rather

subtle.

58 ∀xFx ⊣⊢ −∃x −Fx

(a) ∀xFx ⊢ −∃x −Fx

1 (1) ∀xFx A (RAA!)

2 (2) ∃x −Fx A (RAA)

3 (3) −Fa A (EE)

1 (4) Fa 1 UE

1,3 (5) Fa & −Fa 3,4 &I

3 (6) −∀xFx 1,5 RAA

2 (7) −∀xFx 2,3,6 EE

1,2 (8) ∀xFx& −∀xFx 1,7 &I

1 (9) −∃x −Fx 2,8 RAA

(b) −∃x −Fx ⊢ ∀xFx

1 (1) −∃x −Fx A

2 (2) −Fa A (RAA)

2 (3) ∃x −Fx 2 EI

1,2 (4) ∃x −Fx& −∃x −Fx 1,3 &I

1 (5) Fa 2,4 RAA

1 (6) ∀xFx 5 UI

These elegant proofs are due to Lemmon [17, p. 123–4]. Note the trick in the ﬁrst proof

of using the starting assumption as the beginning of an RAA. In this proof there are, in

eﬀect, two overlapping RAAs. In the second proof, the step from (5) to (6) may seem ﬁshy,

since we are using a term a that was introduced in the assumption in line (2). However, we

are on safe ground, since (2) was discharged when we backed out of the RAA. Therefore,

there is nothing special about the constant a and we can safely universalize upon it. But

the way Lemmon emerges from the RAA with a still intact does look a lot like a magician

pulling a rabbit out of a hat!

59 ∃xFx ⊣⊢ −∀x −Fx

60 ∀x −Fx ⊣⊢ −∃xFx

61 ∃x −Fx ⊣⊢ −∀xFx
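Before attempting the derivations, the Duality Rules can be confirmed semantically by checking every extension of F over a small domain. A minimal Python sketch (my own illustration, not part of the text):

```python
# Exhaustive semantic check of sequents 58-61 on a three-element domain.
from itertools import chain, combinations

domain = [0, 1, 2]
subsets = list(chain.from_iterable(
    combinations(domain, r) for r in range(len(domain) + 1)))

checks = 0
for ext in subsets:                        # ext = the extension of F
    F = lambda x, ext=ext: x in ext
    assert all(F(x) for x in domain) == (not any(not F(x) for x in domain))  # 58
    assert any(F(x) for x in domain) == (not all(not F(x) for x in domain))  # 59
    assert all(not F(x) for x in domain) == (not any(F(x) for x in domain))  # 60
    assert any(not F(x) for x in domain) == (not all(F(x) for x in domain))  # 61
    checks += 4
```

Of course, a check on one finite domain is evidence, not proof; the derivations below establish the rules for every domain.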

8.7 Rules of Passage

There are a number of quantiﬁcational formulas that are known as the Rules of Passage.

In the following, P is any proposition whatsoever; it could, for instance, be a predicate

formula itself containing quantiﬁers and bound variables. The rules themselves are called

“Rules of Passage” because they allow us to pass P in or out of the scope of a quantiﬁer.

∃x(P &Fx) ⊣⊢ P &∃xFx (8.1)

∀x(P &Fx) ⊣⊢ P &∀xFx (8.2)

∃x(P ∨ Fx) ⊣⊢ P ∨ ∃xFx (8.3)

∀x(P ∨ Fx) ⊣⊢ P ∨ ∀xFx (8.4)

∃x(P → Fx) ⊣⊢ P → ∃xFx (8.5)

∀x(P → Fx) ⊣⊢ P → ∀xFx (8.6)

∃x(Fx → P) ⊣⊢ ∀xFx → P (8.7)

∀x(Fx → P) ⊣⊢ ∃xFx → P (8.8)

Note the switch between universals and existentials in the last two formulas. I will give

proofs of the last formula here; the rest of the Rules of Passage are left as Review Exercises.

(a) ∀x(Fx → P) ⊢ ∃xFx → P

1 (1) ∀x(Fx → P) A

2 (2) ∃xFx A (CP)

3 (3) Fa A (EE)

1 (4) Fa → P 1 UE

1,3 (5) P 3,4 MP

1,2 (6) P 2,3,5 EE

1 (7) ∃xFx → P 2,6 CP

(b) ∃xFx → P ⊢ ∀x(Fx → P)

1 (1) ∃xFx → P A

2 (2) −∀x(Fx → P) A (RAA)

2 (3) ∃x −(Fx → P) 2 Duality

2 (4) ∃x(Fx& −P) 3 Neg→

5 (5) Fa & −P A (EE)

5 (6) Fa 5 &E

5 (7) −P 5 &E

5 (8) ∃xFx 6 EI

1,5 (9) P 1,8 MP

1,5 (10) P & −P 7,9 &I

1,2 (11) P & −P 4,5,10 EE

1 (12) ∀x(Fx → P) 2,11 RAA

8.8 Interderivability ⇔ Equivalence Theorem

Don’t forget that any interderivability result in predicate logic, such as the Duality Rules

or the Rules of Passage, is translatable into a theorem stating an equivalence, and any such

equivalences can be used for substitutions just as with propositional logic.

For instance, from the ﬁrst Rule of Passage listed above, we have

62 ⊢ ∃x(P &Fx) ↔ P &∃xFx


8.9 What to Memorize

I do not expect you to memorize the Rules of Passage for this course. You should know the

Duality Rules, however, since they depend on a very basic understanding of the meaning of

the quantiﬁcational symbols. And of course you should be able to use UE, UI, EE, and EI.

8.10 A Shortcut

It is essential to master the use of the four introduction and elimination rules, because

they allow for general procedures that can be applied to any ﬁrst-order quantiﬁcational

argument. Under some circumstances to be speciﬁed below, however, we can skip the use of

these rules and do Boolean operations directly on propositional functions within the scope

of quantiﬁers. This can greatly simplify many derivations.

Here are a few simple examples:

∀x∃y −(Fx → Rxy) ↔ ∀x∃y(Fx& −Rxy) Neg→

∀x∀y(Fx → (Gx → Hy)) ↔ ∀x∀y((Fx&Gx) → Hy) Exp

∀x(Fx → (Gx&Hx)) ↔ ∀x((Fx → Gx) &(Fx → Hx)) Dist→

In other words, propositional equivalence relations can be applied to propositional functions

so long as they are entirely within the scope of the quantiﬁers that govern them.

The proof of the general validity of this rule requires metalogical techniques that are

beyond the scope of this course, although individual examples can be proven using the

techniques in this chapter.
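While the general metatheorem is beyond our scope, individual instances of this shortcut can be checked semantically by exhausting all models of a fixed size. Here is a Python sketch (my own illustration, not the text's method) that tests the first example, ∀x∃y −(Fx → Rxy) ↔ ∀x∃y(Fx & −Rxy), on every interpretation of F and R over a two-element domain:

```python
# Spot-check one instance of the "direct manipulation" shortcut:
#   AxEy -(Fx -> Rxy)  <->  AxEy (Fx & -Rxy)
from itertools import product

domain = [0, 1]
pairs = list(product(domain, domain))

count = 0
for F_ext in product([False, True], repeat=len(domain)):
    for R_ext in product([False, True], repeat=len(pairs)):
        F = dict(zip(domain, F_ext))          # interpretation of F
        R = dict(zip(pairs, R_ext))           # interpretation of R
        # Note: for booleans, (a <= b) is material implication a -> b.
        lhs = all(any(not (F[x] <= R[(x, y)]) for y in domain) for x in domain)
        rhs = all(any(F[x] and not R[(x, y)] for y in domain) for x in domain)
        assert lhs == rhs
        count += 1
```

Every one of the 64 interpretations agrees, as the equivalence predicts.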

8.11 Fallacy Alert!

A rather common error is the following:

(n) −∀xFx

(n+1) −Fa n UE

Line (n) is (by Duality) equivalent to ∃x(−Fx). So there is something that is −F, but

it might not be the individual a! If we want to use the statement −Fa (as we might, for

instance, if we were going to try an EE), we would have to assume it, and then (we hope)

discharge it if the EE is successful. In other words, the statement −∀xFx is not really a

universal statement at all, and hence we cannot apply UE to it.

8.12 Fallacy Alert!

It is not possible to derive a particular existential from a universal. For instance, from the

statement that all hobbits live in the Shire, we cannot infer that there are some hobbits

living in the Shire. The following inference is invalid:

1 (1) ∀x(Hx → Sx) A

1 (2) ∃x(Hx&Sx) 1 ???


This is essentially a consequence of the fact that a universal claim is hypothetical and does

not assert existence, while an existential claim does.

By comparison, the following pattern of inference is valid:

1 (1) ∀x(Hx → Sx) A

1 (2) Ha → Sa 1 UE

1 (3) ∃y(Hy → Sy) 2 EI

However, line (3) does not assert the existence of hobbits, whether living in the Shire or

not. It merely says that there is something which if it is a hobbit, lives in the Shire. This

is like saying that if I were a hobbit, I would live in the Shire; and this in no way asserts

the existence of hobbits!
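The point is easy to see on a toy model in which nothing is a hobbit: the universal premiss is then vacuously true while the existential conclusion is false. A minimal Python sketch (the model and names are my own invention):

```python
# A model whose one inhabitant is not a hobbit.
domain = ["gandalf"]
H = lambda x: False        # nothing in this model is a hobbit
S = lambda x: False        # nothing lives in the Shire

premise = all((not H(x)) or S(x) for x in domain)   # Ax(Hx -> Sx), vacuously true
conclusion = any(H(x) and S(x) for x in domain)     # Ex(Hx & Sx), false
assert premise and not conclusion                   # so the inference is invalid
```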

8.13 Summary of Rules for Predicate Natural Deduction

In this section I summarize the rules for predicate natural deduction. The derived rules,

such as the Rules of Passage, the Duality relations, and the Distribution rules, may be used

freely to help you solve more complicated problems.

8.13.1 Basic Rules

Here, ﬁrst, in succinct form, are the Basic Rules of predicate natural deduction.

Universal Elimination (UE)

∀xFx ⊢ Fa, where a is any constant or proper name (term).

No restrictions on a.

Universal Introduction (UI)

Fa ⊢ ∀xFx.

Restrictions: a must be an arbitrary constant, which means that it must not have been

used in an undischarged assumption.

Existential Introduction (EI)

Fa ⊢ ∃xFx.

No restrictions on a.


Existential Elimination (EE)

EE is a generalized vE, and if you have any questions about how vE works you should

review it at this point. The object of EE is to derive a conclusion from an existential claim,

so the overall result will look like ∃xFx ⊢ P.

Here is the general pattern of EE:

1 (1) ∃xFx A

2 (2) (other premisses, if any) A

3 (3) Fa A (EE)

.

.

.

2,3 (n) P

1,2 (n + 1) P 1,3,n EE

Lines 3 to n are the EE subproof. In the justiﬁcation of line n + 1, the ﬁrst number is the

line where the starting existential claim was introduced, and the second and third numbers

are the beginning and end lines of the subproof.

Restrictions on the choice of a:

1. The constant a must be arbitrary, which means that it cannot appear in any previous

assumption.

2. The constant a cannot appear in the conclusion of the EE subproof.

8.13.2 Duality Rules

I state the Duality Rules here as interderivability results:

∀xFx ⊣⊢ −∃x −Fx

∃xFx ⊣⊢ −∀x −Fx

∀x −Fx ⊣⊢ −∃xFx

∃x −Fx ⊣⊢ −∀xFx

Any of these can be cited in a justiﬁcation simply as “Duality”. (See examples in the Review

Exercises.)

8.13.3 Rules of Passage

I won’t repeat here all of the Rules of Passage, which are listed in 8.7. Any of them can

be cited simply by saying “Passage” in the justiﬁcation column. (See Review Exercises for

examples.)

8.13.4 Distribution Rules

As described in detail in 8.5, quantiﬁers can be distributed over their scopes sometimes,

but not in all cases; one has to be careful in using these rules. Here are the ones that are


valid, together with convenient short-form names that can be used when they are cited in

a justiﬁcation line:

∀x(Fx&Gx) ⊣⊢ ∀xFx&∀xGx Dist∀

∀xFx ∨ ∀xGx ⊢ ∀x(Fx ∨ Gx) Dist∀

∃x(Fx ∨ Gx) ⊣⊢ ∃xFx ∨ ∃xGx Dist∃

∃x(Fx&Gx) ⊢ ∃xFx&∃xGx Dist∃

Note carefully that some of these results are interderivabilities, while some go only one way.

Quantiﬁers can in some cases be “distributed” over → but the results are more com-

plicated and are best dealt with on an individual basis. (See the Review Exercises.)

8.13.5 Direct Manipulation of Propositional Functions

Don’t forget that propositional functions such as Fx&Gy or Hx → Gxy can be manipulated

using standard propositional equivalence rules, so long as you stay within the scope of the

quantiﬁer. For instance, the following transformation is valid:

∀x∃y −(Px → Qxy) ↔ ∀x∃y(Px& −Qxy) Neg → (8.9)


8.14 Review Exercises for Chapter 8

Use the following symbols for individuals, predicates, and relations:

p := Plato

s := Socrates

h := Heidegger

r := Rufus

k := The Yukon

c := the Republic

t := Thumper

b := Being and Time

Lx := x is a rabbit

Dx := x is a dialogue

Ox := x is bamboo

Bx := x is brown

Ux := x is a bear

Hx := x is a person

Nx := x loves honey

Ax := x loves asparagus

Rx := x loves broccoli

Mx := x is a mammal

Wx := x is warm-blooded

Lxy := x loves y

Cxy := x came from y

Txy := x takes y seriously

Axy := x is the author of y

A

1. Translate the following arguments into symbols, and prove them valid:

(a) Rufus loves honey.

∴ something loves honey.

(b) Plato was the author of The Republic.

∴ Plato was the author of something.

(c) All bears love honey.

Rufus is a bear.

∴ Rufus loves honey.

(d) No bears love broccoli.

Thumper loves broccoli.

∴ Thumper is not a bear.

(e) All bears are mammals.

All mammals are warm-blooded.

Rufus is a bear.

∴ Rufus is warm-blooded.

2. Prove the validity of the following sequents:

(a) Fa &Fb ⊢ ∃xFx

(b) ∀x(Mx → Wx), Ma &Ra ⊢ Wa

(c) ∀x(Fx → Gx), ∃xFx ⊢ ∃xGx

(d) ∀x(Fx → −Gx), ∃xGx ⊢ ∃x −Fx

(e) ∀x −(Fx ∨ Gx) ⊢ −∃x(Fx ∨ Gx)


B

1. Translate the following arguments into symbols and prove them valid:

(a) No one takes anything written by Heidegger seriously.

Being and Time was written by Heidegger.

∴ Socrates does not take Being and Time seriously.

(b) All bears love honey.

Rufus is a brown bear.

∴ Something brown loves honey.

(c) Plato was the author of Republic.

Republic is a dialogue.

∴ Plato wrote some dialogues.

(d) All bears love honey.

Some bears are brown.

All bears are mammals.

∴ Some brown mammals love honey.

(e) All bears and rabbits are mammals.

Bears do not like asparagus.

Some bears come from the Yukon.

∴ some mammals do not like asparagus.

2. Prove the Rules of Passage, 8.1 through 8.7

3. Prove the validity of the following sequents:

(a) Dist∀ (all of them)

(b) Dist∃ (all of them)

(c) ∀x(Fx → Gx) ⊢ ∀xFx → ∀xGx

(d) ∃x(Fx → Gx) ⊢ ∀xFx → ∃xGx

4. It can be very instructive to try to make up your own examples as well!

C

1. Prove the Duality relations that were not proven in the Chapter; that is, Sequents

59, 60, and 61.

Chapter 9

Predicate Logic With Identity

In this chapter we introduce the use of two simple rules concerning the relation of identity.

These rules permit a powerful extension of predicate logic which enables us to express the

logical structure of a much larger class of quantiﬁcational propositions with a very minimal

increase in formal machinery.

By “identity” we simply mean a two-term relation which says that two proper names

or constants point to the same object in the universe of discourse. We could write this as

Ixy := x is identical to y

but it is easier, and more in keeping with standard mathematical usage, to express identity

as follows:

x = y := x and y point to the same member of the universe of discourse.

We will also allow the use of the following obvious notational convenience:

a ≠ b := −(a = b)

Deﬁning identity is, in fact, a tricky philosophical problem. The deﬁnition I have given

here is close to being circular, since we probably would have to use the identity relation in

order to express the fact that two symbols single out the same individual. Deﬁning identity

in a way that is clearly non-circular would probably require the resources of higher-order

logic (since we would have to quantify over predicates and relations); one would also likely

have to examine how identity is used in physics (where the notion of “identical particle” is

very diﬃcult). For our purposes, we can deﬁne identity simply as a relation that obeys the

introduction and elimination rules, given below.

We now introduce introduction and elimination rules for identity:

Identity Introduction (=I):

For any term a the formula a = a is a theorem, and may be introduced at any

point in a proof. In symbols:

⊢ a = a.

Identity Elimination (=E):

If s and t are terms, s = t, and A(s) is a proposition containing s, then we may derive A(t) by replacing one or more occurrences of s in A(s) with t.



The second rule, =E, is the one we will use most often. It simply allows the substitution

of one term symbol for another, similarly to the way we can substitute equals for equals in

mathematics.

9.1 Examples

Here are a few elementary examples which illustrate the use of these rules, and demonstrate

some basic properties of the identity relation:

63 a = b ⊢ b = a

1 (1) a = b A

(2) a = a =I

1 (3) b = a 1,2 =E

In line (3) we cite two lines: one is the statement of the identity we are using, and the other is the line into which we make the substitution. Here we use the identity in line (1) to substitute b for the first occurrence of a in line (2). This proof demonstrates that identity is a symmetric relation.

64 a = b &b = c ⊢ a = c

1 (1) a = b &b = c A

1 (2) a = b 1 &E

1 (3) b = c 1 &E

1 (4) a = c 2,3 =E

This proof demonstrates that identity is a transitive relation.

The next proof demonstrates a triviality: to say that something is Paul, and that thing

loves curry, is equivalent to saying that Paul loves curry. The main interest in this proof is

that it demonstrates the use of =E.

65 Fp ⊣⊢ ∃x(x = p &Fx)

I’ll prove one part of it; the converse is left as an exercise.

1 (1) ∃x(x = p &Fx) A

2 (2) b = p &Fb A (EE)

2 (3) b = p 2 &E

2 (4) Fb 2 &E

2 (5) Fp 3,4 =E

1 (6) Fp 1,2,5 EE

The next proof shows that the way we use identity here corresponds to our basic

notion of identity, which is that things that are presumed identical should have all the

same properties.

66 a = b ⊢ Fa ↔ Fb


1 (1) a = b A

2 (2) Fa A (CP)

1,2 (3) Fb 1,2 =E

1 (4) Fa → Fb 2,3 CP

5 (5) Fb A (CP)

1,5 (6) Fa 1,5 =E

1 (7) Fb → Fa 5,6 CP

1 (8) (Fa → Fb) &(Fb → Fa) 4,7 &I

1 (9) Fa ↔ Fb 8 Def↔

9.2 Fallacy Alert!

As shown above, the following pattern of reasoning is correct:

a = b ⊢ Fa ↔ Fb. (9.1)

This simply expresses what we mean by identity. But do not attempt the following move:

Fa ↔ Fb ⊢ a = b. (????) (9.2)

This is invalid because the two individuals a and b do not necessarily have to be the same

in order for Fa to be true iﬀ Fb is true. Suppose Fx is deﬁned as “x is a philosopher.” The

statement “F(Socrates) ↔ F(Plato)” is true, and yet Socrates was a distinct individual

from Plato.

Remember, to say a = b is to say that the symbols a and b point to the same individual

in the universe of discourse. To say Fa ↔ Fb is to say that the propositions Fa and Fb

have the same truth value.
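The Socrates/Plato counterexample can be stated in one line of code; the toy dictionary and names below are my own illustration:

```python
# Counterexample to Fa <-> Fb |- a = b, with F = "is a philosopher".
is_philosopher = {"socrates": True, "plato": True}
a, b = "socrates", "plato"

biconditional = (is_philosopher[a] == is_philosopher[b])   # Fa <-> Fb holds
assert biconditional and a != b   # yet a and b name distinct individuals
```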

9.3 Deﬁnite Descriptions

Phrases such as “the present King of France,” “the man in the Moon,” “the last person to

cross the ﬁnish line,” or “the ﬁrst African woman to win the Nobel Peace Prize,” are called

deﬁnite descriptions. They are widely used in ordinary language, and yet it might not be

immediately obvious how they should be analyzed logically. To see why this is important,

consider the following sentences:

1. “France presently has a bald King.”

2. “The present King of France is bald.”

The ﬁrst sentence is false. The second sentence seems very suspicious, but it is not imme-

diately obvious how to assign it a truth value. Some philosophers have gone so far as to

declare sentences such as this, in which a deﬁnite description points to a nonexistent ob-

ject, to be meaningless. But that doesn’t seem quite right either, since we make deductions

using deﬁnite descriptions all the time. Consider, for instance, the following elementary

argument:

Harper is the Prime Minister.

Layton is not Harper.

∴ Layton is not the Prime Minister.


Intuitively this seems to be valid, but we don’t yet know how to represent its validity

formally. As before, we motivate the need for new types of symbols by showing that

there are arguments that are clearly valid, but whose validity we do not yet know how to

demonstrate.

What makes this argument work is clearly the fact that (as of 2006) Harper is the Prime

Minister, not just any Prime Minister. In 1905, Bertrand Russell suggested that we should

analyse propositions using deﬁnite descriptions as combinations of existence claims with

uniqueness claims. Thus, for instance, we could express “Harper is the Prime Minister” as

Ph&∀y(Py → (y = h))

In words, “Harper is the Prime Minister and if anyone is the Prime Minister, that person is

Harper”. This statement needs to be distinguished carefully from the claim that “Harper is

a Prime Minister,” which would be symbolized simply as Ph. This statement makes sense,

since there are, after all, several countries that have Prime Ministers.

To say “The Prime Minister is a former Finance Minister,” we can write

∃x(Px&Fx&∀y(Py → (y = x)))

And to say “The present King of France is bald,” we can write

∃x(Kx&Bx&∀y(Ky → (y = x)))

In other words, this simply says, “There exists a unique bald King of France,” and, at the

time of this writing, this statement is false.

We can also express the identity of two descriptions. “The author of Hamlet is the

author of Lear” can be expressed as

∃x(Hx&Lx&∀y((Hy &Ly) → (y = x)))

As these examples suggest, a deﬁnite description such as “the Prime Minister of Canada”

functions grammatically very much like a proper name such as “Mr. Stephen Harper.”

What about negations of statements using deﬁnite descriptions? For instance, how

would we translate “Layton is not the Prime Minister?” Intuitively, it seems that to deny

this statement should amount to saying that either Layton is not a Prime Minister, or that

someone other than Layton is a Prime Minister; either or both of those possibilities would

be suﬃcient to negate the claim that Layton is the Prime Minister.

A quick bit of predicate natural deduction shows that this analysis is correct:

−(Pl &∀y(Py → (y = l)))

↔ −Pl ∨ −∀y(Py → (y = l)) deM

↔ −Pl ∨ ∃y −(Py → (y = l)) Duality

↔ −Pl ∨ ∃y(Py &(y ≠ l)) Neg→
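The same equivalence can be confirmed semantically by exhausting all interpretations of P over a small domain; a quick Python sketch (my own illustration, with l naming element 0):

```python
# Check: -(Pl & Ay(Py -> y = l))  <->  -Pl v Ey(Py & y != l)
# on every interpretation of P over a two-element domain.
from itertools import product

domain = [0, 1]
l = 0
count = 0
for P_ext in product([False, True], repeat=len(domain)):
    P = dict(zip(domain, P_ext))
    lhs = not (P[l] and all((not P[y]) or (y == l) for y in domain))
    rhs = (not P[l]) or any(P[y] and y != l for y in domain)
    assert lhs == rhs
    count += 1
```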

Russell’s method of analysing the logic of deﬁnite descriptions has generated a lot of

controversy in the literature on philosophy of language. However, it provides a workable

logical analysis of deﬁnite descriptive claims and, as we shall show next, allows us to demon-

strate the validity of arguments involving deﬁnite descriptions which should, indeed, turn

out to be valid. Russell’s method also illustrates a point that we noted earlier, which is that

the grammatical structure of a sentence in a particular natural language such as English is

not always a good indication of its underlying logical form.

In his later years Russell defended his approach against criticism by P. F. Strawson:


My theory of descriptions was never intended as an analysis of the state of mind

of those who utter sentences containing descriptions. Mr Strawson gives the

name ‘S’ to the sentence ‘The King of France is wise’, and he says of me ‘The

way in which he [Russell] arrived at the analysis was clearly by asking himself

what would be the circumstances in which we would say that anyone who uttered

the sentence S had made a true assertion’. This does not seem to me a correct

account of what I was doing. . . . I was concerned to ﬁnd a more accurate and

analysed thought to replace the somewhat confused thoughts which most people

at most times have in their heads. [21, p. 243]

Russell, and indeed many of those who contributed to modern symbolic logic, believed that

the job of the logician is not only to describe the logic of ordinary reasoning in a precise

way, but to devise techniques that can help us to reason more eﬀectively.

9.3.1 Example

We return to the argument introduced above:

Harper is the Prime Minister.

Layton is not Harper.

∴ Layton is not the Prime Minister.

Using obvious deﬁnitions for the term and predicate symbols, we can prove this argu-

ment valid as follows:

1 (1) Ph&∀x(Px → (x = h)) A

2 (2) l ≠ h A

1 (3) Ph 1 &E

1 (4) ∀x(Px → (x = h)) 1 &E

1 (5) Pl → (l = h) 4 UE

1,2 (6) −Pl 2,5 MT

1,2 (7) −Pl ∨ ∃y(Py &(y ≠ l)) 6 vI

Note, again, that line (6) only says that Layton is not a Prime Minister. We need to “or”

on the extra clause that states that someone else might be Prime Minister as well as Layton.

(I thank Heather Rivera for insisting that I clarify this point.)

A Fine Point

Many logic texts write statements such as “Harper is the Prime Minister” in the following

more complicated form:

∃x(Px&∀y(Py → (y = x)) &(x = h)) (9.3)

This is equivalent to the simpler form we use here, which can be used whenever we know

the proper name of “the such-and-such”. This can be proven from the fact that

∃x(Px&(x = h)) ⊣⊢ Ph. (9.4)

This is proven in the Exercises, below.


9.3.2 Expressing Numerical Relations

It is also possible to express simple numerical relationships using identity.

At most one person knows the secret.

∀x∀y((Kx&Ky) → (x = y))

Here it is in words that are half-way between logic and ordinary English: For

all x and y, if x and y know the secret, then x and y are the same person.

Exactly (or only) one person knows the secret.

∃x(Kx&∀y(Ky → (y = x))).

In half-logic: Someone knows the secret, and anyone else who knows the secret

is that person.

Exactly two people know the secret.

∃x∃y(Kx&Ky &(x ≠ y) &∀z(Kz → ((z = x) ∨ (z = y)))).

In half-logic: there are two distinct people who know the secret, and anyone else

who knows the secret is identical to one or the other of those two people.
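These formulas translate directly into quantifier expressions over a finite model. The sketch below (my own illustration, with invented names) evaluates each of the three claims:

```python
# Evaluate the numerical claims of this section on a toy model.
domain = ["alice", "bob", "carol"]

def at_most_one(K):    # AxAy((Kx & Ky) -> x = y)
    return all((not (K(x) and K(y))) or x == y
               for x in domain for y in domain)

def exactly_one(K):    # Ex(Kx & Ay(Ky -> y = x))
    return any(K(x) and all((not K(y)) or y == x for y in domain)
               for x in domain)

def exactly_two(K):    # ExEy(Kx & Ky & x != y & Az(Kz -> z = x or z = y))
    return any(K(x) and K(y) and x != y and
               all((not K(z)) or z == x or z == y for z in domain)
               for x in domain for y in domain)

knows = lambda x: x in {"alice", "bob"}   # two people know the secret
assert exactly_two(knows)
assert not exactly_one(knows) and not at_most_one(knows)
```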

9.3.3 Superlatives

It is also possible to express simple superlatives using identity. Here’s an example, with

hopefully obvious notation:

Smith is the tallest person in Logic class.

Ls &∀y((Ly &(y ≠ s)) → Tsy)

In half-logic: Smith is in the Logic class and if there is anyone else in the Logic

class who is not Smith then Smith is taller than that person.


9.4 Review Exercises

Use the following symbol deﬁnitions:

a := Aristotle

s := The Stagirite


p := Plato

e := The Ethics

r := The Republic

t := The Timaeus

Kx := x knew (knows) The Good

Axy := x was (is) the author of y

A

1. Translate into symbols:

(a) Aristotle was not Plato.

(b) Aristotle was The Stagirite.

(c) Either Aristotle or Plato wrote the Republic.

(d) Plato wrote both the Republic and the Timaeus.

2. Translate into good, idiomatic English:

(a) p = s

(b) r = t

(c) Apr ∨ −Apr

3. Translate and prove:

(a) Aristotle wrote the Ethics.

Aristotle was The Stagirite.

∴ The Stagirite wrote the Ethics.

(b) If Aristotle wrote the Republic then Aristotle was Plato.

However, Aristotle was not Plato.

∴ Aristotle did not write the Republic.

4. Prove the following sequents:

(a) Fa &Ga, (a = b) ⊢ Fa &Gb

(b) Fa → Ga, (a = b) ⊢ Fb → Ga

(c) ∀x(Fax → Gbx), a = b ⊢ ∀x(Fax → Gax)

(d) Faa → (a ≠ a) ⊢ −Faa

Note: Aristotle came from a small town called Stagira, and so he was often called The Stagirite.


B

1. Translate into symbols:

(a) Plato was the author of the Republic.

(b) Aristotle was not the author of the Republic.

2. Translate into good, idiomatic English:

(a) ∃x((Axr &Axt) &∀y(Ayr → (x = y)))

(b) Apr &Apt &∀y(Ayr → (y = p))

3. Prove the following sequent:

∃x(Px&(x = l)) ⊣⊢ Pl

4. Translate the following argument into symbols, and prove the resulting sequent:

Plato knew the Good.

Aristotle was not Plato.

At most one person knew the Good.

∴ Aristotle did not know the Good.

Solutions to Selected Exercises

Chapter 2

Exercises 2.2.8

A

1. (a) T &C or C &T

(c) T → −C

(e) −B → A

(g) B → C

(i) −C → −T

(k) A → −B

(m) T ∨ −C or −C ∨ T

2. (a) If Bob goes to the store then Alice does not.

or

Bob goes to the store only if Alice does not.

(c) Either Bob doesn’t go to the store or Cheryl goes to work.

(e) If Alice does not go to the store then Bob does.

or

Alice does not go to the store only if Bob does.

(g) Either Alice does not go to the store or Tad does not stay home.

(i) Tad stays home only if Bob goes to the store.

Exercises 2.3.1

A

1. (a) F

(c) T

(e) T



2. (a) T

(c) F

(e) T

Exercises 2.6.1

A

1. (a) (B&T) → (A → C)

(c) C ↔ T

(e) −T → (A ∨ B)

2. (a) If Alice and Bob go to the store and Cheryl goes to work, then Tad doesn't stay home.

(c) If Alice goes to the store then if Bob goes to the store Cheryl goes to work.

or:

If Alice goes to the store then Bob goes to the store only if Cheryl goes to work.

(e) If Alice goes to the store only if Bob goes too, then Cheryl goes to work only if Tad

stays home.

Exercises 2.7.1

A

1. (b) −C ∨ −T

(d) −(C → T)

2. (b) Bob going to the store does not imply either Cheryl not going to work or Tad staying

home.

(d) Alice goes to the store if and only if Bob does not.

Solutions for Chapter 3

Exercises 3.3.1

A

1. (a) P, P → −Q ⊢ −Q

1 (1) P A

2 (2) P → −Q A

1,2 (3) −Q 1,2 MP

(c) P &Q, P → R ⊢ R


1 (1) P &Q A

2 (2) P → R A

1 (3) P 1 &E

1,2 (4) R 2,3 MP

(e) −−P ⊢ P ∨ R

1 (1) −−P A

1 (2) P 1 DN

1 (3) P ∨ R 2 vI

B

1. (a) P → (Q&R), P ⊢ R ∨ S

1 (1) P → (Q&R) A

2 (2) P A

1,2 (3) Q&R 1,2 MP

1,2 (4) R 3 &E

1,2 (5) R ∨ S 4 vI

(c) P ∨ Q, Q → R ⊢ P ∨ R

1 (1) P ∨ Q A

2 (2) Q → R A

3 (3) P A (vE)

3 (4) P ∨ R 3 vI

5 (5) Q A (vE)

2,5 (6) R 2,5 MP

2,5 (7) P ∨ R 6 vI

1,2 (8) P ∨ R 1,3,4,5,7 vE

(e) P → Q, Q → R, −R ⊢ −P

1 (1) P → Q A

2 (2) Q → R A

3 (3) −R A

4 (4) P A (RAA)

1,4 (5) Q 1,4 MP

1,2,4 (6) R 2,5 MP

1,2,3,4 (7) R& −R 3,6 &I

1,2,3 (8) −P 4,7 RAA


Exercises 3.7

A

2. (a) P, Q ⊢ P → Q

1 (1) P A

2 (2) Q A

2 (3) P → Q 1,2 CP

(c) P → (Q ∨ R), −(Q ∨ R) ⊢ −P

1 (1) P → (Q ∨ R) A

2 (2) −(Q ∨ R) A

1,2 (3) −P 1,2 MT

(e) (Q → R), −(Q → R) ⊢ R

1 (1) Q → R A

2 (2) −(Q → R) A

1,2 (3) R 1,2 ExF

(g) P → (Q → R), (Q → R) → T ⊢ P → T

1 (1) P → (Q → R) A

2 (2) (Q → R) → T A

1,2 (3) P → T 1,2 HS

(h) −(P & −Q) ⊢ −P ∨ Q

1 (1) −(P & −Q) A

1 (2) −P ∨ −−Q 1 deM

1 (3) −P ∨ Q 2 DN

(j) (X ∨ Y) → Q ⊢ (−X & −Y) ∨ Q

1 (1) (X ∨ Y ) → Q A

1 (2) −(X ∨ Y ) ∨ Q 1 Imp

1 (3) (−X & −Y ) ∨ Q 2 deM

(l) −(X → Y) → −P, P ⊢ (X → Y)

1 (1) −(X → Y ) → −P A

2 (2) P A

1 (3) P → (X → Y ) 1 Trans

1,2 (4) (X → Y ) 2,3 MP


(n) P &(Q → R) ⊢ (P & −Q) ∨ (P &R)

1 (1) P &(Q → R) A

1 (2) P &(−Q ∨ R) 1 Imp

1 (3) (P & −Q) ∨ (P &R) 2 Dist

B

1. (a) P → Q, R → S, −Q ∨ −S ⊢ −P ∨ −R (DD)

1 (1) P → Q A

2 (2) R → S A

3 (3) −Q ∨ −S A

4 (4) −Q A (vE)

1,4 (5) −P 1,4 MT

1,4 (6) −P ∨ −R 5 vI

7 (7) −S A (vE)

2,7 (8) −R 2,7 MT

2,7 (9) −P ∨ −R 8 vI

1,2,3 (10) −P ∨ −R 3,4,6,7,9 vE

(c) P → Q ⊢ −(P & −Q)

1 (1) P → Q A

1 (2) −P ∨ Q 1 Imp

1 (3) −P ∨ −−Q 2 DN

1 (4) −(P & −Q) 3 deM

(d) −(P & −Q) ⊢ P → Q

1 (1) −(P & −Q) A

1 (2) −P ∨ −−Q 1 deM

1 (3) −P ∨ Q 2 DN

1 (4) P → Q 3 Imp

(g) P ∨ (Q&R) ⊢ (P ∨ Q) &(P ∨ R)

1 (1) P ∨ (Q&R) A

2 (2) P A (vE)

2 (3) P ∨ Q 2 vI

2 (4) P ∨ R 2 vI

2 (5) (P ∨ Q) &(P ∨ R) 3,4 &I

6 (6) Q&R A (vE)

6 (7) Q 6 &E

6 (8) R 6 &E

6 (9) P ∨ Q 7 vI


6 (10) P ∨ R 8 vI

6 (11) (P ∨ Q) &(P ∨ R) 9,10 &I

1 (12) (P ∨ Q) &(P ∨ R) 1,2,5,6,11 vE

(i) P → (Q → R) ⊢ (P &Q) → R (Exp)

1 (1) P → (Q → R) A

2 (2) P &Q A (CP)

2 (3) P 2 &E

1,2 (4) Q → R 1,3 MP

2 (5) Q 2 &E

1,2 (6) R 4,5 MP

1 (7) (P &Q) → R 2,6 CP

(k) (P → Q) &(P → R) ⊢ P → (Q&R)

1 (1) (P → Q) &(P → R) A

2 (2) P A (CP)

1 (3) P → Q 1 &E

1 (4) P → R 1 &E

1,2 (5) Q 2,3 MP

1,2 (6) R 2,4 MP

1,2 (7) Q&R 5,6 &I

1 (8) P → (Q&R) 2,7 CP

(m) −(P & −Q) ⊢ P → Q

1 (1) −(P & −Q) A

1 (2) −P ∨ −−Q 1 deM

1 (3) −P ∨ Q 2 DN

1 (4) P → Q 3 Imp

2. (a) P, Q ⊢ P ↔ Q

1 (1) P A

2 (2) Q A

2 (3) P → Q 1,2 CP

1 (4) Q → P 1,2 CP

1,2 (5) (P → Q) &(Q → P) 3,4 &I

1,2 (6) P ↔ Q 5 Def↔

(c) (P → R), (Q → R) ⊢ (P &Q) → R

1 (1) P → R A

2 (2) Q → R A


3 (3) P &Q A (CP)

3 (4) P 3 &E

1,3 (5) R 1,4 MP

1 (6) (P &Q) → R 3,5 CP

(e) R, R ↔ P, −Q → −P ⊢ Q

1 (1) R A

2 (2) R ↔ P A

3 (3) −Q → −P A

3 (4) P → Q 3 Trans

2 (5) (R → P) &(P → R) 2 Def↔

2 (6) R → P 5 &E

2,3 (7) R → Q 4,6 HS

1,2,3 (8) Q 1,7 MP

(g) P ↔ S, P ∨ Q, P ∨ R, −Q ∨ −R ⊢ S

1 (1) P ↔ S A

2 (2) P ∨ Q A

3 (3) P ∨ R A

4 (4) −Q ∨ −R A

5 (5) −Q A (vE)

2,5 (6) P 2,5 DS

7 (7) −R A (vE)

3,7 (8) P 3,7 DS

2,3,4 (9) P 4,5,6,7,8 vE

1 (10) (P → S) &(S → P) 1 Def↔

1 (11) P → S 10 &E

1,2,3,4 (12) S 9,11 MP

(i) P → (Q ∨ R), (P → Q) → X, (P → R) → Y ⊢ X ∨ Y

1 (1) P → (Q ∨ R) A

2 (2) (P → Q) → X A

3 (3) (P → R) → Y A

1 (4) (P → Q) ∨ (P → R) 1 Dist→

5 (5) P → Q A (vE)

2,5 (6) X 2,5 MP

2,5 (7) X ∨ Y 6 vI

8 (8) P → R A (vE)

3,8 (9) Y 3,8 MP

3,8 (10) X ∨ Y 9 vI

1,2,3 (11) X ∨ Y 1,5,7,8,10 vE


Solutions for Chapter 4

Exercise 4.1.1 (A)

1. Identify the rule used in each of the following deductive steps.

(a) MP

(b) MT

(c) vI

(d) HS

(e) Imp

Exercise 4.2.1 (A)

1. State the justiﬁcations for each of the following steps.

(a) n Imp(2)

(b) n Imp, Dist

(c) n Imp, DN

(d) n Imp, Dist

(e) n, n+1 MT, DN

(f) n Assoc, Comm.

Exercise 4.5.1 (B)

1. (a) ⊢ (P & −P) → Q

1 (1) P & −P A (CP)

1 (2) Q 1 ExF

(3) (P & −P) → Q 1,2 CP

(b) ⊢ P → (−P → Q)

1 (1) P A (CP)

2 (2) −P A (CP)

1,2 (3) P & −P 1,2 &I

1,2 (4) Q 3 ExF

1 (5) −P → Q 2,4 CP

(6) P → (−P → Q) 1,5 CP

Alternate proof:

(1) P ∨ −P ExM

(2) (P ∨ −P) ∨ Q 1 vI

(3) −P ∨ (P ∨ Q) 2 Assoc, Comm

(4) P → (−−P ∨ Q) 3 Imp, DN

(5) P → (−P → Q) 4 Imp


(c) ⊢ P ∨ (P → Q)

(1) −P → (P → Q) TI (Ques. 1(a))

(2) −−P ∨ (P → Q) 1 Imp

(3) P ∨ (P → Q) 2 DN

Alternate proof:

(1) P ∨ −P ExM

(2) (P ∨ −P) ∨ Q 1 vI

(3) P ∨ (−P ∨ Q) 2 Assoc

(4) P ∨ (P → Q) 3 Imp

(d) ⊢ (P → Q) ∨ (Q → P)

(1) P ∨ −P ExM

(2) (P ∨ −P) ∨ (Q ∨ −Q) 1 vI

(3) (−P ∨ Q) ∨ (−Q ∨ P) 2 Assoc (2), Comm (3)

(4) (P → Q) ∨ (Q → P) 3 Imp (2)

(e) ⊢ (P → Q) ∨ (Q → R)

(1) Q ∨ −Q ExM

(2) −P ∨ (Q ∨ −Q) 1 vI

(3) (−P ∨ (Q ∨ −Q)) ∨ R 2 vI

(4) (−P ∨ Q) ∨ (−Q ∨ R) 3 Assoc (2)

(5) (P → Q) ∨ (Q → R) 4 Imp (2)

(f) ⊢ (−P → P) → P

1 (1) −P → P A (CP)

2 (2) −P A (RAA)

1,2 (3) P 1,2 MP

1,2 (4) P & −P 2,3 &I

1 (5) P 2,4 RAA, DN

(6) (−P → P) → P 1,5 CP

Alternate proof:

1 (1) −P → P A (CP)

1 (2) −−P ∨ P 1 Imp

1 (3) P ∨ P 2 DN

1 (4) P 3 Idem

(5) (−P → P) → P 1,4 CP

Exercise 4.5.1 (C)

1. (a) ⊢ P ∨ −P


1 (1) −(P ∨ −P) A (RAA)

2 (2) P A (RAA)

2 (3) P ∨ −P 2 vI

1,2 (4) (P ∨ −P) & −(P ∨ −P) 1,3 &I

1 (5) −P 2,4 RAA

1 (6) P ∨ −P 5 vI

1 (7) (P ∨ −P) & −(P ∨ −P) 1,6 &I

(8) P ∨ −P 1,7 RAA, DN

This elegant proof is due to Lemmon (his Sequent 44.) [17].

Exercise 4.6.1 (B)

1. (a) Given P, Q ⊢ R, show ⊢ (P &Q) → R

1 (1) (P &Q) A (CP)

1 (2) P 1 &E

1 (3) Q 1 &E

1 (4) R 2,3 SI

(5) (P &Q) → R 1,4 CP

(b) Given ⊢ (P &Q) → R, show P, Q ⊢ R

1 (1) P A

2 (2) Q A

1,2 (3) P &Q 1,2 &I

(4) (P &Q) → R TI

1,2 (5) R 3,4 MP

Exercise 4.7.1 (B)

1. (a) ⊢ (P ↔ Q) ↔ (−(P → −Q) ∨ −(−Q → P))

(P ↔ Q) ↔ ((P &Q) ∨ (−P & −Q)) Equiv

↔ ((P & −−Q) ∨ (−Q& −P)) DN, Comm

↔ (−(P → −Q) ∨ −(−Q → P)) Neg→ (2)

(b) ⊢ ((Q ∨ R) → P) ↔ ((Q → P) &(R → P))

((Q ∨ R) → P) ↔ (−(Q ∨ R) ∨ P) Imp

↔ ((−Q& −R) ∨ P) deM

↔ ((−Q ∨ P) &(−R ∨ P)) Dist

↔ ((Q → P) &(R → P)) Imp(2)

(c) ⊢ (P → (Q ∨ R)) ↔ ((P → Q) ∨ (P → R))


(P → (Q ∨ R)) ↔ (−P ∨ (Q ∨ R)) Imp

↔ ((−P ∨ −P) ∨ (Q ∨ R)) Idem

↔ ((−P ∨ Q) ∨ (−P ∨ R)) Assoc, Comm

↔ (P → Q) ∨ (P → R) Imp

(d) ⊢ (P → (Q&R)) ↔ ((P → Q) &(P → R))

(P → (Q&R)) ↔ (−P ∨ (Q&R)) Imp

↔ ((−P ∨ Q) &(−P ∨ R)) Dist

↔ ((P → Q) &(P → R)) Imp(2)

(e) ⊢ (P ∨ (Q → R)) ↔ ((Q → P) ∨ (−P → R))

(P ∨ (Q → R)) ↔ (P ∨ (−Q ∨ R)) Imp

↔ ((P ∨ P) ∨ (−Q ∨ R)) Idem

↔ ((−Q ∨ P) ∨ (−−P ∨ R)) Comm, Assoc, DN

↔ ((Q → P) ∨ (−P → R)) Imp(2)

Solutions for Chapter 5

Review Exercises on Chapter 5

A

1. Write the complete truth table for each of the following wﬀs:

(a)

− (P ∨ Q) & − (P & Q)

F T T T F F T T T

F T T F F T T F F

F F T T F T F F T

T F F F T T F F F

(b)

− (− (P → Q) & − (Q → P))

T F T T T F F T T T

T T T F F F F F T T

T F F T T F T T F F

T F F T F F F F T F


(c)

− (P & (Q & R))

F T T T T T

T T F T F F

T T F F F T

T T F F F F

T F F T T T

T F F T F F

T F F F F T

T F F F F F

(d)

P → (Q → R)

T T T T T

T F T F F

T T F T T

T T F T F

F T T T T

F T T F F

F T F T T

F T F T F

2. Evaluate the following wﬀs, when P = T and Q = F.

(a)

− (P → Q)

T T F F

(b)

− (P ↔ Q)

T T F F

(c)

(− P ↔ Q)

F T T F

(d)

(− P & − Q) ∨ (P & Q)

F T F F T T T T T

F T F T F F T F F

T F F F T F F F T

T F T T F T F F F

3. Use truth tables to characterize the following wﬀs as tautologous, inconsistent, or

contingent.


(a)

P → P

T T T

F T F

Tautology

(b)

− P → P

F T T T

T F F F

Contingent

(c)

P → (− P → P)

T T F T T T

F T T F F F

Tautology

(d)

(P ∨ −P) → (P & −P)

T F F

Contradiction
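The classifications above can be checked mechanically by enumerating the whole truth table. Here is a small Python sketch of that idea (the function name `classify` is my own, not from the text); each wff is written as a Boolean function, with A → B rendered as `(not A) or B`:

```python
from itertools import product

def classify(wff, num_vars):
    """Classify a wff (given as a Boolean function) by checking
    every line of its truth table."""
    lines = [wff(*row) for row in product([True, False], repeat=num_vars)]
    if all(lines):
        return "tautology"
    if not any(lines):
        return "contradiction"
    return "contingent"

# The four wffs of question 3 above:
print(classify(lambda p: (not p) or p, 1))                         # (a) P -> P
print(classify(lambda p: p or p, 1))                               # (b) -P -> P, i.e. --P v P
print(classify(lambda p: (not p) or (p or p), 1))                  # (c) P -> (-P -> P)
print(classify(lambda p: (not (p or not p)) or (p and not p), 1))  # (d) (P v -P) -> (P & -P)
```

Running it prints tautology, contingent, tautology, and contradiction, matching the answers above.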

4. Check the following sequents for validity. Use short-cuts where they are suitable.

(a)

P ∨ Q − Q P

F F F T F F

This is valid, since making the conclusion F forces a premiss

to be F.

(b)

P → Q − Q − P

T F F T F F T

Valid

(c)

P → Q P → (P & Q)

T F F T F T F F

Valid

(d)

P → Q − P − Q

F T T T F F T

Invalid, since there is a line in the truth table in which the

premisses are T and the conclusion F.


B

1. Use truth tables to classify the following wﬀs as either tautologous, inconsistent, or

contingent.

(a)

P ∨ (Q & R)

F F F F F

T T T T T

Contingent; to demonstrate this, all we have to do is show

that there is at least one line that comes out T, and one that

comes out F.

(b)

P → (R & S)

T F F F F

T T T T T

Contingent

(c)

P → (Q → P)

T T T T

Tautology

(d)

(P → R) → (P → (P & R))

T F F T T F T F F

Tautology, since the only assignment that makes the consequent F also makes the antecedent F.

(e)

(P → (Q → P)) → R

T T T

T F F

Contingent; we know by (c) that the antecedent is always

T, so the value depends on R, which can be T or F.


(f)

(P ↔ Q) ↔ (Q ↔ − P)

T T T F T F F T

T F F F F T F T

F F T F T T T F

F T F F F F T F

Contradiction

(g)

P → (R& −R)

T F F

F T F

Contingent

(h)

(P → R) → − P

T T T F F T

F T T T T F

Contingent

(i)

(P → (Q & R)) & (P → (Q & − R))

F T T F T

T T F F F

Contingent

(j)

− (P → Q) & − (Q → P)

F T T T F F T T T

T T F F F F F T T

F F T T F T T F F

F F T F F F F T F

Contradiction

2. Use truth tables to check the following sequents for validity.

(a)

P → Q Q → R P → R

T F F F T F T F F

Valid


(b)

P ↔ Q Q ↔ R P ↔ R

T F F F T F T F F

F F T T T T F F T

Valid

(c)

P → Q R → Q P → R

T T T F T T T F F

Invalid

(d)

P → Q − Q → P

F T F T F F F

Invalid

(e)

P → Q R → S − Q ∨ − S − P ∨ − R

T T T T T T F T F F T F T F F T

Valid

(f)

P ∨ Q P → R Q → R R

F F F F T F F T F F

Valid

(g)

P → Q P → (P ∨ Q)

F T T T T T

Valid, since the conclusion is a theorem.

(h)

P → Q P → (P & Q)

T F F T F T F F

Valid


(i)

P ∨ R P → (R ∨ S) − S − P

T T T T T T T F T F F T

Invalid

(j)

P → (M & N) M → R N → S − (R ∨ S) − P ∨ M

T F F F F F T F F T F T F F F F T F F

Valid
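Shortcut tests like those above can always be confirmed by brute force, since a sequent is valid just in case no line of the truth table makes every premiss true and the conclusion false. Here is a Python sketch of such a check (the names `valid` and `imp` are my own), applied to sequents (f) and (c) above:

```python
from itertools import product

def valid(premisses, conclusion, num_vars):
    """A sequent is valid iff no truth assignment makes every
    premiss true while making the conclusion false."""
    for row in product([True, False], repeat=num_vars):
        if all(p(*row) for p in premisses) and not conclusion(*row):
            return False  # this row is a counterexample line
    return True

imp = lambda a, b: (not a) or b  # the material conditional

# (f)  P v Q, P -> R, Q -> R  therefore  R  -- valid
print(valid([lambda p, q, r: p or q,
             lambda p, q, r: imp(p, r),
             lambda p, q, r: imp(q, r)],
            lambda p, q, r: r, 3))          # True

# (c)  P -> Q, R -> Q  therefore  P -> R  -- invalid
print(valid([lambda p, q, r: imp(p, q),
             lambda p, q, r: imp(r, q)],
            lambda p, q, r: imp(p, r), 3))  # False
```

For the invalid sequent the counterexample found is the same line exhibited above: P = T, Q = T, R = F.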

Solutions for Chapter 6

Exercise 6.7

1(b)

P ∨ Q

−P

−Q


P

Q

The tree closes; therefore the sequent is valid.

1(c) P ∨ Q

P → R

Q → S

−(R ∨ S)

−R

−S


−Q

S


−P R


P

Q

The tree closes; therefore the sequent is valid.

1(e)

P → Q

−(P → (P &Q))

P

−(P &Q)


−P

−Q


−P

Q

The tree closes; therefore the sequent is valid.


1(g)

P

−(P → Q)

P

−Q

The tree is open; therefore the sequent is invalid.

1(j)

P → (Q&R)

−Q

P


−P

Q&R

Q

R

The tree is closed; therefore the sequent is valid.
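A truth tree tests whether the premisses together with the negated conclusion can all be true at once; the tree closes exactly when they cannot. That satisfiability test is easy to mimic by brute force, as in this Python sketch (the names are my own illustration), applied to sequent 1(j):

```python
from itertools import product

imp = lambda a, b: (not a) or b  # the material conditional

def satisfiable(wffs, num_vars):
    """Can all the wffs be true at once?  (The tree closes iff not.)"""
    return any(all(w(*row) for w in wffs)
               for row in product([True, False], repeat=num_vars))

# Sequent 1(j): P -> (Q & R), -Q  therefore  -P.
# List the premisses together with the NEGATED conclusion, --P, i.e. P:
branch = [lambda p, q, r: imp(p, q and r),
          lambda p, q, r: not q,
          lambda p, q, r: p]
print(satisfiable(branch, 3))  # False: every branch closes, so the sequent is valid
```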

Solutions for Chapter 7

Section 7.3

1.(a) ∀x∀y((Ax&Fy) → Exy)

(b) ∀x∀y(Cy → (Exy → Mx))

OR

∀x∀y((Cy &Exy) → Mx)

(c) ∀x(Ax → (Sx ↔ −V x))

(d) ∀x∀y((Lx&Ty) → (Exy ↔ Ax))

(e) ∀x(Mx → −V x)

(f) Tmn

2.(a) Any mongoose from the Transvaal has a strange diet.

(b) Some cobras and mongooses like to sleep in the sun.

(c) All mongooses and aardvarks like to eat ants.

(d) No cobra or mongoose is from the Transvaal.


Solutions for Chapter 8

Section 8.14

A

1. (a) 1 (1) Nr A

1 (2) ∃xNx 1 EI

2

(b) 1 (1) Apc A

1 (2) ∃xApx 1 EI

2

(c) 1 (1) ∀x(Ux → Nx) A

2 (2) Ur A

1 (3) Ur → Nr 1 UE

1,2 (4) Nr 2,3 MP

2

(d) 1 (1) ∀x(Ux → −Rx) A

2 (2) Rt A

1 (3) Ut → −Rt 1 UE

1,2 (4) −Ut 2,3 MT

2

(e) 1 (1) ∀x(Ux → Mx) A

2 (2) ∀x(Mx → Wx) A

3 (3) Ur A

1 (4) Ur → Mr 1 UE

2 (5) Mr → Wr 2 UE

1,2 (6) Ur → Wr 4,5 HS

1,2,3 (7) Wr 3,6 MP

2

2. (a) 1 (1) Fa &Fb A

1 (2) Fa 1 &E

1 (3) ∃xFx 2 EI

2

(b) 1 (1) ∀x(Mx → Wx) A

2 (2) Ma &Ra A

1 (3) Ma → Wa 1 UE

2 (4) Ma 2 &E

1,2 (5) Wa 3,4 MP

2

(c) 1 (1) ∀x(Fx → Gx) A

2 (2) ∃xFx A

3 (3) Fa A (EE)

1 (4) Fa → Ga 1 UE

1,3 (5) Ga 3,4 MP


1,3 (6) ∃xGx 5 EI

1,2 (7) ∃xGx 2,3–6 EE

2

(d) 1 (1) ∀x(Fx → −Gx) A

2 (2) ∃xGx A

3 (3) Ga A

1 (4) Fa → −Ga 1 UE

1,3 (5) −Fa 3,4 MT

1,3 (6) ∃x −Fx 5 EI

1,2 (7) ∃x −Fx 2,3–6 EE

2

(e) 1 (1) ∀x −(Fx ∨ Gx) A

1 (2) ∃x(Fx ∨ Gx) 1 Duality

2

B

1. (a) 1 (1) ∀x(Ahx → −∃yTyx) A

2 (2) Ahb A

1 (3) Ahb → −∃yTyb 1 UE

1,2 (4) −∃yTyb 2,3 MP

1,2 (5) ∀y −Tyb 4 Duality

1,2 (6) −Tsb 5 UE

2

(b) 1 (1) ∀x(Ux → Nx) A

2 (2) Ur &Br A

2 (3) Ur 2 &E

2 (4) Br 2 &E

1 (5) Ur → Nr 1 UE

1,2 (6) Nr 3,5 MP

1,2 (7) Br &Nr 4,6 &I

1,2 (8) ∃x(Bx&Nx) 7 EI

2

(c) 1 (1) Apc A

2 (2) Dc A

1,2 (3) Apc &Dc 1,2 &I

1,2 (4) ∃x(Apx&Dx) 3 EI

2

(d) 1 (1) ∀x(Ux → Nx) A

2 (2) ∃x(Bx&Ux) A

3 (3) ∀x(Ux → Mx) A

4 (4) Ba &Ua A (EE)

4 (5) Ba 4 &E

4 (6) Ua 4 &E

3 (7) Ua → Ma 3 UE


3,4 (8) Ma 6,7 MP

1 (9) Ua → Na 1 UE

1,4 (10) Na 6,9 MP

1,3,4 (11) Ma &Ba &Na 5,8,10 &I

1,3,4 (12) ∃x(Mx&Bx&Nx) 11 EI

1,2,3 (13) ∃x(Mx&Bx&Nx) 2,4–12 EE

2

(e) 1 (1) ∀x(Ux → Mx) &∀x(Lx → Mx) A

2 (2) ∀x(Ux → −Ax) A

3 (3) ∃x(Ux&Cxk) A

4 (4) Ua &Cak A (EE)

4 (5) Ua 4 &E

2 (6) Ua → −Aa 2 UE

2,4 (7) −Aa 5,6 MP

1 (8) ∀x(Ux → Mx) 1 &E

1 (9) Ua → Ma 8 UE

1,4 (10) Ma 5,9 MP

1,2,4 (11) Ma & −Aa 7,10 &I

1,2,4 (12) ∃x(Mx& −Ax) 11 EI

1,2,3 (13) ∃x(Mx& −Ax) 3,4–12 EE

2

2. (a) i. ∃x(P &Fx) ⊢ P &∃xFx

1 (1) ∃x(P &Fx) A

2 (2) P &Fa A

2 (3) P 2 &E

2 (4) Fa 2 &E

2 (5) ∃xFx 4 EI

2 (6) P &∃xFx 3,5 &I

1 (7) P &∃xFx 1,2–6 EE

2

ii. This could be a good exam question!

(b) i. 1 (1) ∀x(P &Fx) A

1 (2) P &Fa 1 UE

1 (3) P 2 &E

1 (4) Fa 2 &E

1 (5) ∀xFx 4 UI

1 (6) P &∀xFx 3,5 &I

2

ii. This could be a good exam question!

(c) i. 1 (1) ∃x(P ∨ Fx) A

2 (2) P ∨ Fa A

3 (3) P A

3 (4) P ∨ ∃xFx 3 vI

5 (5) Fa A

5 (6) ∃xFx 5 EI


5 (7) P ∨ ∃xFx 6 vI

2 (8) P ∨ ∃xFx 2,3–4,5–7 vE

1 (9) P ∨ ∃xFx 1,2–8 EE

2

ii. This could be a really good question for the exam! Hint: it is basically the

reverse of the above, with EE’s inside vE subproofs.

(d) i. 1 (1) ∀x(P ∨ Fx) A

1 (2) P ∨ Fa 1 UE

3 (3) P A (vE)

3 (4) P ∨ ∀xFx 3 vI

5 (5) Fa A (vE)

5 (6) ∀xFx 5 UI

5 (7) P ∨ ∀xFx 6 vI

1 (8) P ∨ ∀xFx 2,3–4,5–7 vE

2

ii. Another possible exam question!

(e) i. 1 (1) ∃x(P → Fx) A

2 (2) P A

3 (3) P → Fa A

2,3 (4) Fa 2,3 MP

2,3 (5) ∃xFx 4 EI

1,2 (6) ∃xFx 1,3–5 EE

1 (7) P → ∃xFx 2–6 CP

2

ii. 1 (1) P → ∃xFx A

2 (2) −∃x(P → Fx) A (RAA)

2 (3) ∀x −(P → Fx) 2 Duality

2 (4) −(P → Fa) 3 UE

2 (5) P & −Fa 4 Neg→

2 (6) P 5 &E

2 (7) −Fa 5 &E

2 (8) ∀x −Fx 7 UI

2 (9) −∃xFx 8 Duality

1,2 (10) ∃xFx 1,6 MP

1,2 (11) ∃xFx& −∃xFx 9,10 &I

1 (12) ∃x(P → Fx) 2–11 RAA

2

(f) i. 1 (1) ∀x(P → Fx) A

2 (2) P A

1 (3) P → Fa 1 UE

1,2 (4) Fa 2,3 MP

1,2 (5) ∀xFx 4 UI

1 (6) P → ∀xFx 2–5 CP

2

ii. This is left as an exercise. Hint: it’s an EE inside an RAA.


(g) i. 1 (1) ∃x(Fx → P) A

2 (2) ∀xFx A

3 (3) Fa → P A

2 (4) Fa 2 UE

2,3 (5) P 3,4 MP

1,2 (6) P 1,3–5 EE

1 (7) ∀xFx → P 2–6 CP

2

ii. This is left as an exercise. Hint: it’s a pretty straightforward RAA.

3. (a) Sequent 54 is a straightforward application of UI; it is left as an exercise.

(b) Sequent 55 is a fairly easy vE; use UE and UI within each subproof.

(c) Prove ∃x(Fx ∨ Gx) ⊣⊢ ∃xFx ∨ ∃xGx

i. 1 (1) ∃x(Fx ∨ Gx) A

2 (2) Fa ∨ Ga A (EE)

3 (3) Fa A (vE)

3 (4) ∃xFx 3 EI

3 (5) ∃xFx ∨ ∃xGx 4 vI

6 (6) Ga A (vE)

6 (7) ∃xGx 6 EI

6 (8) ∃xFx ∨ ∃xGx 7 vI

2 (9) ∃xFx ∨ ∃xGx 2,3–5,6–8 vE

1 (10) ∃xFx ∨ ∃xGx 1,2–9 EE

2

ii. The converse proof is very similar, except that you do EE subproofs inside

each vE subproof.

(d) Prove ∃x(Fx&Gx) ⊢ ∃xFx&∃xGx

Note: this is one that you really should be able to do for the exam.

1 (1) ∃x(Fx&Gx) A

2 (2) Fa &Ga A

2 (3) Fa 2 &E

2 (4) Ga 2 &E

2 (5) ∃xFx 3 EI

2 (6) ∃xGx 4 EI

2 (7) ∃xFx&∃xGx 5,6 &I

1 (8) ∃xFx&∃xGx 1,2–7 EE

2

(e) 1 (1) ∀x(Fx → Gx) A

2 (2) ∀xFx A (CP)

1 (3) Fa → Ga 1 UE

2 (4) Fa 2 UE

1,2 (5) Ga 3,4 MP

1,2 (6) ∀xGx 5 UI

1 (7) ∀xFx → ∀xGx 2–6 CP

2


Note: a question similar to this could easily be on the exam!

(f) 1 (1) ∃x(Fx → Gx) A

1 (2) ∃x(−Fx ∨ Gx) 1 Imp

1 (3) ∃x −Fx ∨ ∃xGx 2 Dist∃ (Sequent 56)

1 (4) −∀xFx ∨ ∃xGx 3 Duality

1 (5) ∀xFx → ∃xGx 4 Imp

2

C

1. Prove ∃xFx ⊣⊢ −∀x −Fx

(a) 1 (1) ∃xFx A

2 (2) ∀x −Fx A

3 (3) Fa A

2 (4) −Fa 2 UE

2,3 (5) Fa & −Fa 3,4 &I

1,2 (6) Fa & −Fa 1,3–5 EE

1 (7) −∀x −Fx 2–6 RAA

2

(b) 1 (1) −∀x −Fx A

2 (2) −∃xFx A

3 (3) Fa A

3 (4) ∃xFx 3 EI

2,3 (5) ∃xFx& −∃xFx 2,4 &I

2 (6) −Fa 3–5 RAA

2 (7) ∀x −Fx 6 UI

1,2 (8) ∀x −Fx& −∀x −Fx 1,7 &I

1 (9) ∃xFx 2–8 RAA

2

2. Prove ∀x −Fx ⊣⊢ −∃xFx

(a) 1 (1) ∀x −Fx A

2 (2) ∃xFx A

3 (3) Fa A

1 (4) −Fa 1 UE

1,3 (5) −Fa &Fa 3,4 &I

3 (6) −∀x −Fx 1–5 RAA

2 (7) −∀x −Fx 2,3–6 EE

1,2 (8) ∀x −Fx& −∀x −Fx 1,7 &I

1 (9) −∃xFx 2–8 RAA

2

(b) 1 (1) −∃xFx A

2 (2) Fa A

2 (3) ∃xFx 2 EI

1,2 (4) ∃xFx& −∃xFx 1,3 &I


1 (5) −Fa 2–4 RAA

1 (6) ∀x −Fx 5 UI

2

3. Prove ∃x −Fx ⊣⊢ −∀xFx

(a) 1 (1) ∃x −Fx A

2 (2) ∀xFx A (RAA)

3 (3) −Fa A (EE)

2 (4) Fa 2 UE

2,3 (5) Fa & −Fa 3,4 &I

3 (6) −∀xFx 2–5 RAA

1 (7) −∀xFx 1,3–6 EE

2

(b) 1 (1) −∀xFx A

2 (2) −∃x −Fx A

3 (3) −Fa A

3 (4) ∃x −Fx 3 EI

2,3 (5) ∃x −Fx& −∃x −Fx 2,4 &I

2 (6) Fa 3–5 RAA

2 (7) ∀xFx 6 UI

1,2 (8) ∀xFx& −∀xFx 1,7 &I

1 (9) ∃x −Fx 2–8 RAA

2
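The quantifier dualities proved above can also be spot-checked semantically. Over a finite domain, an interpretation of a one-place predicate F is just a choice of extension (the set of objects satisfying F), so one can verify by exhaustion that ∃x −Fx and −∀xFx take the same truth value in every interpretation. A Python sketch of this check (a finite-model illustration of my own, not a proof):

```python
from itertools import chain, combinations

domain = [0, 1, 2]

def interpretations(xs):
    """Every extension a one-place predicate F can have over the domain."""
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

for extension in interpretations(domain):
    F = set(extension)
    exists_not_F = any(x not in F for x in domain)   # ∃x −Fx
    not_all_F = not all(x in F for x in domain)      # −∀xFx
    assert exists_not_F == not_all_F

print("∃x −Fx and −∀xFx agree on all", 2 ** len(domain), "interpretations")
```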


Solutions for Chapter 9

A

1. (a) p = a

(b) a = s

(c) Aar ∨ Apr

(d) Apr &Apt

2. (a) Plato was not The Stagirite.

(b) The Republic is not the Timaeus.

(c) Plato either did or did not write Republic.

3. (a) 1 (1) Aae A

2 (2) a = s A

1,2 (3) Ase 1,2 =E

(b) 1 (1) Aar → (a = p) A

2 (2) −(a = p) A

1,2 (3) −Aar 1,2 MT

4. (a) 1 (1) Fa &Ga A

2 (2) a = b A

1,2 (3) Fa &Gb 1,2 =E

(b) 1 (1) Fa → Ga A

2 (2) a = b A

1,2 (3) Fa → Gb 1,2 =E

(c) 1 (1) ∀x(Fax → Gbx) A

2 (2) a = b A

1,2 (3) ∀x(Fbx → Gbx) 1,2 =E

(d) 1 (1) Faa → (a = a) A

(2) a = a =I

1 (3) −Faa 1,2 MT

B

1. (a) Apr &∀y(Ayr → (y = p))

(b) −(Aar &∀y(Ayr → (y = a)))

This is equivalent to

−Aar ∨ −∀y(Ayr → (y = a))

and this is equivalent to

−Aar ∨ ∃y(Ayr & −(y = a)).

2. (a) The author of the Republic was the author of the Timaeus.

(b) Plato was the author of the Republic and the Timaeus.


3. 1 (1) ∃x(Px&(x = l)) A

2 (2) Pa &(a = l) A (EE)

2 (3) Pa 2 &E

2 (4) a = l 2 &E

2 (5) Pl 3,4 =E

1 (6) Pl 1,2–5 EE

1 (1) Pl A

(2) l = l =I

1 (3) Pl &(l = l) 1,2 &I

1 (4) ∃x(Px&(x = l)) 3 EI

4. 1 (1) Kp A

2 (2) −(p = a) A

3 (3) ∀x∀y((Kx&Ky) → (x = y)) A

3 (4) (Kp &Ka) → (p = a) 3 UE (2)

2,3 (5) −(Kp &Ka) 2,4 MT

2,3 (6) −Kp ∨ −Ka 5 deM

1,2,3 (7) −Ka 1,6 DS


Bibliography

[1] Benacerraf, Paul, and Putnam, Hilary (eds.) Philosophy of Mathematics: Selected

Readings. Englewood Cliﬀs, NJ: Prentice-Hall, 1964.

[2] Bergmann, M., Moor, J., and Nelson, J. The Logic Book. Second Edition. New

York: McGraw Hill, 1990.

[3] Birch, David. Illustrations by Devis Grebu. The King’s Chessboard. New York:

Penguin, 1988.

[4] Boolos, George, and Jeffrey, Richard. Computability and Logic. Second Edition.

Cambridge: Cambridge University Press, 1980.

[5] Brown, George S. The Laws of Form. London: George Allen and Unwin, 1969.

[6] Copi, I. M., and Cohen, C. Introduction to Logic. Tenth Edition. Upper Saddle

River, NJ: Prentice-Hall, 1998.

[7] DeVidi, David, and Solomon, Graham, “On Confusions About Bivalence and Excluded Middle”, Dialogue 38(4), 1999, 785–799.

[8] Govier, T. A Practical Study of Argument. Third Edition. Belmont, CA:

Wadsworth, 1992.

[9] Halmos, P. R. Naive Set Theory. New York: Van Nostrand Reinhold, 1960.

[10] Hofstadter, Douglas. Gödel, Escher, Bach: An Eternal Golden Braid. New York:

Basic Books, 1979.

[11] Hunter, Geoﬀrey. Metalogic. London: Macmillan, 1971.

[12] Hughes, R. I. G. “Quantum Logic,” Scientiﬁc American 245(4) (October 1981),

202–213.

[13] Jacquette, Dale. Symbolic Logic. Belmont, CA: Wadsworth, 2001.

[14] Jeffrey, R. Formal Logic, Its Scope and Limits. Third Edition. New York: McGraw-Hill, 1991.

[15] Kamke, E. Theory of Sets. New York: Dover, 1950.

[16] LeBlanc, J. Thinking Clearly: A Guide to Critical Reasoning. New York: W. W.

Norton, 1998.


[17] Lemmon, E. J. Beginning Logic. Indianapolis, IN: Hackett, 1978. First published

1965, Van Nostrand Reinhold.

[18] Mates, Benson. Elementary Logic. Second Edition. New York: Oxford University

Press, 1972.

[19] Rucker, Rudy. Infinity and the Mind: The Science and Philosophy of the Infinite.

New York: Bantam Books, 1983.

[20] Russell, Bertrand. Introduction to Mathematical Philosophy. London: Allen & Unwin, 1918.

[21] Russell, Bertrand. My Philosophical Development. London: Allen & Unwin, 1959.

[22] Russell, Bertrand, and Whitehead, A. N. Principia Mathematica. Second Edition.

Cambridge: Cambridge University Press, 1927.

[23] Schumm, G. F. A Teaching Companion to Lemmon’s Beginning Logic. Indianapolis, IN: Hackett, 1979.

[24] Smullyan, Raymond. What is the Name of This Book? Englewood Cliﬀs, NJ:

Prentice-Hall, 1978.

. . . . . .7 Semantic Classiﬁcation of Sets of Wﬀs . . and Consistency . . 5. . . 7 Predicate Logic: Symbolization and Translation 7. . . . . . . . . 6. . . . . .4. . . . . . . . . . . .1 Equivalence . . . . . . . . . . .1 Why We Need Predicate Logic . . . . . . . . . . . 6.4 Relations . . . . . . . . . . . . 5. . 5. . . . . . . . . 6. 7. . . . . . . . . . . . . . . . . .5. . . . . ix 66 66 67 67 67 67 67 68 68 68 68 68 68 68 69 69 69 69 69 69 71 71 72 73 73 75 77 77 78 79 80 80 80 81 82 83 84 84 85 85 86 86 86 87 87 . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . . . . . . . . . . . . . 7. . . . . . . .1 Branches and Stacks . . . . 6 Propositional Semantics: Truth Trees 6. . . . . . . . . . . . . . .2.7. . . . 6. . . . . . . . . . 5. . 5. 5. . . . . . . . 6. . . . . . . . . .2 Short-cuts . . . . . . . . . . 5. . . . 5. . . . . . . . . . . 5. . . . . . . . . . . . . . . . . . . . . . . . . .4.6 Consistency . . . . . . . . . . . . . . . . . . . . . . . 5. . . . . . . .3 Contrariety . . . . . . . . . . . .7. . . . . . . . . 6. .3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3. . . . . 5. 5. . . . . . . . . .11 Sheﬀer and Nicod Strokes . . . . . . . . . . . . . . . . . . . . . . . . . . . 5. . . .4 Checking Semantic Properties of Formulas with Trees 6. . . .2 Equivalence . .3 Consistency . . . . . . . . . . .4 Independence . . . . . . . . . . . . . . . .3 Monadic Predicates . . . . . . . . . . . . . . .3 Stacking and Branching Rules . . . . . . . . . . . . . . . . . . .1 Universe of Discourse . . . . . . . . . .5 Inconsistency . . . . . . . . . . . 6. . . . . . . . . . . . . . . . . . . . . .3. . . . . . . . . . . .2 Inconsistency . . . . . . . . . .7 Independence . . 5. . .5. . . . . . . . . .6 Trees or Tables? . . .

. . .6 Quantiﬁers . . . . . . . . . .2. . . . . . 8. . . . . . . . . . . 8. . .2. . . . . . . . . . . . . .4 Review Exercises . .3 Fallacy Alert! . . . . . . . . . . . . . 9. . . . . 9. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8. . 8. . . . . . . . . . 8. . . . . . . . . . . . . . . .13. . . . . . . . . . . . . . . .2. . . . . . . . . . . . . 8. . . . .7 Order . . . . . . . . . . . . . . . 7. . . . . . . . . . . . . . . . . . .3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9. .2. . . . . . . .13 Variants of Categorical Forms Using Relations Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 Distribution Rules . . . . . . . . . . . . . . . . .2 Expressing Numerical Relations 9. . . . . . . . . .3 Superlatives . . . . . . .6 Proofs of Duality Rules . . . .2 Universal Introduction (UI) . . . . . . . . .3. . . . . . . . . . . .2.11 Use of “Except” . . . . . . . . . . . . . . . . . . .2 Fallacy Alert! . . . . . . . . . . .5 Existential Elimination (EE) . . . . . . . . . . .13. . . . . . . . . 7. . . . . . .7 Rules of Passage .12 Fallacy Alert! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Universal Elimination (UE) . . . . . . . . . . . . . . . . 8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7. . . . . . . . 8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12 Use of “Only” . . .13 Summary of Rules for Predicate Natural Deduction . . . . . . .1 Duality Relations . . . . . . . .2 Duality Rules . . . . . . . . . . .2.10 Combinations of Properties . . . . . . . . . . . CONTENTS . . . . . . ! . . . . . . . . . . . . . . . . . .4 Examples . . . . . . . . . .2. . . . . . 9. . . . . . . . . . . .13. . .2. . . . . . . . . . . .9 What to Memorize . . .13. . . . . . . . . . . . . . . . . . . . 9 Predicate Logic With Identity 9. . . . . . . . . . . . . . .x 7. . 7. . . . . . . . . . . . . . . . . . 8. . . .6 Fallacy Alert! 
. . . . . . . . . . . . . . . . . . 8.11 Fallacy Alert! . . . . .5 Direct Manipulation of Propositional Functions . . 7. . . 7. . .3 Deﬁnite Descriptions . . . . . . . . . . . . . . . 8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8. . . . . . .9 Don’t Be Fooled. . . . . . . . . . . . . . .2. . . . . . . . . . . . . . 7. . . . . . . . . . . . . . . . . . 8. . . 8. . . . . . . . . . . . . . . . . . . . 7. . . . . . . . . . . . 8. .3 Rules of Passage . . 8. . . . . . . . . . . . . . . . . . . . 8. . . . . . .2. . . . . 8. . . . . . . . . . . . . . . . . . . . .1 Examples . . . . . . . . . . . . . . . .2. . . . . . . . . . . . 88 88 90 90 91 92 92 93 93 93 95 95 95 95 96 96 97 97 98 99 100 102 102 103 104 105 105 105 105 106 106 107 107 107 108 109 111 112 113 113 115 116 116 117 7. . . . . . . . . . . . . . . . . . .1 Basic Rules . . . . . . . . 8. . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . .8 Interderivability ⇔ Equivalence Theorem . . . . . . . . . . . . . . . . . . . . . .3 Summary of Introduction and Elimination Rules for Quantiﬁers 8. . .5 Distribution Rules for Quantiﬁers . . . . . . . .3. . . .14 Review Exercises for Chapter 8 . . . . . . . . . . . . . . 8. . . . . . . . . . . . . . . . .2. . . . . . . . . . . . . . . . 9. . . . . . . 8. . .3 8 Predicate Logic: Natural Deduction 8. . . . .4 Existential Introduction (EI) . . . . . . . . . . . . .5 Propositions and Propositional Functions . . . . . . . . . . . . . .2 Introduction and Elimination Rules . . . . . . .10 A Shortcut . . . . . . . . . . . 8. . . . . . . . .13. . . . . . . . .8 Categorical Forms . . . . .1 Example . . . . . . . . . . . . . . . .2. 8. . . . . . . . . . . . . . .2. . . . . .

. . . . . . . . Solutions for Chapter 3 . xi 119 119 120 126 129 135 136 137 143 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Solutions for Chapter 9 . . . . . . . . . . . . . . .CONTENTS Solutions to Selected Exercises Solutions for Chapter 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Solutions for Chapter 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . Solutions for Chapter 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Solutions for Chapter 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Solutions for Chapter 6 . . Solutions for Chapter 7 . . . . . . . . . . . . . . . .

xii CONTENTS .

Chapter 1

Introduction to Logic: Getting Your Feet Wet

1.1 Logic, Logical Form, and Validity

What is logic? There is much disagreement among professionals about this question. I will offer two definitions that, while no doubt debatable, give us something to work with:

• Logic is the art and science of reasoning.
• Logic (especially symbolic logic) is the mathematics of language.

As the very word "logic" suggests (its root is the Greek term "logos," meaning word), logic has a lot to do with language. But it also points to something beyond language, for logic instantiates deep laws of form that appear in many guises in many subjects and disciplines [5]. One can get into interesting philosophical debates about the nature of these mathematical forms. (The Platonists, for instance, who believe in the independent reality of mathematical objects, have been debating for millennia with the nominalists, who insist that the "truths" of logic and mathematics are nothing more than linguistic conventions.) We could also ask which comes first: logic or mathematics? The school of logicism, headed by Bertrand Russell and Gottlob Frege (who themselves contributed in important ways to the discipline that we now call modern symbolic logic), argued that all of mathematics is based on laws of pure logic. But it is now generally felt that this ambitious project did not work, and that it is more likely the other way around: that logic is a sort of applied mathematics. Unfortunately, we cannot get into such fascinating disputes in an introductory work like this. For more on the philosophy and foundations of mathematics and logic, see [1, 5].

We should begin by noting the important distinction between inductive and deductive logic. In deductive logic we extract information already contained, explicitly or implicitly, in a set of premisses (statements that are accepted as given in the context of an argument), and we rearrange that implicit information into forms that may be more interesting or useful to us. We might accept the premisses for several reasons: they may be known facts whose implications we are trying to uncover, or they might be hypotheses or possibilities whose consequences we wish to explore. But in deductive reasoning we never go beyond the information contained in the premisses.

Inductive reasoning is said to be ampliative in that it expands upon the information given in the premisses. Inductive reasoning is usually implicitly or explicitly probabilistic, and it becomes quantitative in many scientific contexts. Induction, especially as it is used in science and engineering, also involves factors that are very difficult to define formally, such as creativity, intuition, and the grasp of aesthetic relationships. Most applications of reasoning in real-life situations involve a mixture of inductive and deductive methods. However, it is very useful to study deductive methods in isolation, and this course is almost entirely concerned with deductive reasoning, because of its importance as the foundation for an understanding of all other methods of reasoning.

Now, why do we do symbolic logic? The advantages of symbols in logic are much the same as their advantages in mathematics: precision, concision, and their ability to express complex relationships that could not be expressed efficiently or at all in words. Also, symbols have the advantage of greater universality: an argument in different languages that has the same logical form would be expressible in the same symbols.

A lot of what logicians do (though not all) is analyze arguments. For the logician, an argument is not a dispute or a shouting match; rather, it is an attempt to establish a conclusion on the basis of given premisses (singular: premiss). The premisses are statements that are assumed to be given; as logicians we don't know where they come from.

Warning! Please spell the word "argument" correctly! One mark off any assignment or test any time you write "arguement"!

Here's a simple deductive argument:

If Sundin stays healthy, the Leafs have a chance.
But if Sundin takes his vitamins, he will stay healthy.
∴ If Sundin takes his vitamins, the Leafs have a chance.

(There's our first symbol: "∴", which means "therefore".) Notice that we don't worry too much about tense in the examples used here. There are, however, "tensed" logics for which the point in time at which a proposition was uttered is important.

Consider the following argument:

If you are in Lethbridge, then you are in Alberta.
If you are in Alberta, then you are in Canada.
∴ If you are in Lethbridge, you are in Canada.

Both premisses and the conclusion happen to be true statements. Substitute the word "France" for "Alberta" and we would have an argument with false premisses, but the conclusion would still seem to follow from the premisses. So there are arguments that intuitively seem to be valid in the sense that the conclusions somehow follow from the premisses, but which still have something missing. To clarify what is missing, we make a crucial distinction between validity and soundness:

Validity: a deductive argument (or argument form) is valid if and only if (or iff) it is impossible for its conclusion to be false when its premisses are true.

Soundness: A deductive argument is sound iff it is valid and has true premisses.

This is the modern usage of the terms "valid" and "sound" as they are applied to arguments or argument forms. Some older books, such as Lemmon's Beginning Logic [17], use the word "sound" where we will use "valid," but that terminology is now out of date. Common parlance often confuses the terms "valid" and "true." Arguments can be valid or invalid, and propositions can be true or untrue, but there are no such things as "valid" statements or "true" arguments. Be careful to avoid this ambiguity when you are doing formal logic.

Now consider the following argument:

If you are in Hobbiton, then you are in the Shire.
If you are in the Shire, then you are in Middle Earth.
∴ If you are in Hobbiton, you are in Middle Earth.

Compare this to the argument given above about being in Alberta. What is common to these two arguments is their form, which we could represent in a more or less obvious way, without even defining the symbols, as follows:

P → Q
Q → R
∴ P → R.

As logicians, we do not worry about where the premisses came from or whether or not they are true; we are only concerned about the validity of arguments, not their soundness. As Bertrand Russell once quipped, "Pure mathematics [which Russell saw as an extension of logic] can be defined as that subject in which we do not know what we are talking about, nor whether what we are saying is true" [20]. In this respect, logicians might be compared to automotive mechanics, who ensure that your car runs properly but do not ask where you drive it.

The practical value of valid deductive reasoning stems from the fact that if a valid argument has true premisses, the conclusion is guaranteed to be true as well. Validity transmits truth from premisses to conclusions. In a sense, this course is simply about all the ways of defining and testing for deductive validity.

The following rough-and-ready concept of validity is also useful: an argument is valid if it follows from its premisses by correct application of certain rules of deduction. Eventually, of course, we have to ask whether or not the rules of deduction are themselves valid in the deeper sense given above. As we shall see, valid arguments can be transformed by the techniques of natural deduction into certain kinds of true statements called logical truths, so that these two definitions of validity turn out to be the same thing in the long run. Later in the course we will learn other ways of expressing the notion of validity, all of which are equivalent to the first definition given above.

Surprisingly, there are some arguments that can be valid, for formal reasons, but which can never be sound. Later on we will see some examples of rather suspicious-looking arguments that are, in fact, valid.
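For readers who like to see things checked mechanically, the validity of the form P → Q, Q → R ∴ P → R can be previewed by brute force: try every assignment of truth values to P, Q, and R and look for a case where both premisses are true and the conclusion false. The following Python sketch is illustrative only; the function name `implies` and its encoding of → as "not P, or Q" anticipate the truth-table definitions given later in the text.

```python
from itertools import product

def implies(p, q):
    # Material conditional: P -> Q is false only when P is true and Q is false.
    return (not p) or q

# The form is valid iff NO assignment of truth values makes both
# premisses true while the conclusion is false.
counterexamples = [
    (p, q, r)
    for p, q, r in product([True, False], repeat=3)
    if implies(p, q) and implies(q, r) and not implies(p, r)
]
print(counterexamples)  # []: no counterexample exists, so the form is valid
```

This exhaustive search over truth assignments is, in effect, the truth-table method we will study systematically in Chapter 5.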

So we can distinguish between particular arguments in a given natural language (such as English, French, Russian, etc.) and their forms, which are more universal. Our main business as logicians is to analyze the validity of arguments by determining the validity of argument forms. One of the main skills that you will develop in learning logic is the ability to perceive logical form. There are different kinds of logical form that capture different ways in which an argument can be valid. We can intuitively recognize some kinds of validity by means of what I shall call "linguistic common sense," but the powerful tools of logic allow us to test the validity of arbitrarily complex argument forms.

The validity of a deductive form can be compared to the reliability of a piece of software. Just as the job of software engineers and programmers is to write reliable code, the job of logicians (so far as deductive logic is concerned) is to devise valid arguments. A deductive form is very much like a computer program which accepts input (the premisses) and outputs a conclusion. A reliable accounting program, for instance, will correctly tell you whether you have a profit or a loss in your business so long as you correctly enter all income and expenses. This is like a valid argument form that can be guaranteed to return a true conclusion if the premisses are true. Of course, if some of the premisses are false then there is no telling whether the conclusion will be true or false, just as if you put incorrect data into your accounting program you might, or might not, get the right answer back from it. In fact, computers can easily be programmed to follow the steps of a deductive argument.

Another helpful analogy between logic and computing is that we can think of logic as being like a computer game with levels, where we have to master each level before we can go on to the next. In this course we cover three main levels, each one of which gives us more machinery to capture validity. They are:

1. Propositional logic (also known as sentential logic, propositional calculus, or sentential calculus).
2. Elementary first-order predicate logic.
3. Elementary first-order predicate logic with identity.

Each level has extra machinery that allows us to represent a kind of validity that cannot be represented by the lower level. We begin with propositional logic, which is an absolutely basic discipline that appears in a hundred useful contexts, and serves as the foundation for all higher-level work in logic.

1.2 How to Teach Yourself Logic

I entitled this section "how to teach yourself logic," not "how to learn logic" or "how to be taught logic," because I want to emphasize that logic, like any other complex set of inter-related skills, is ultimately self-taught no matter what courses you take or what books you read. The secret of learning any complex set of skills is attentive repetition, together with the cultivation of a calm intensity that does not lose its focus on the desired goal, but which at the same time awaits improvement with almost infinite patience.

I once had the temerity to take a course called Introduction to Theoretical Physics, given at the University of Toronto by a great teacher and scholar named Professor Lynn Trainor. The very first equation he presented to us will be the first formula of this text as well:

Understanding = Familiarity.

What Professor Trainor meant was that in learning a complex technical subject such as logic or physics, one must accept the fact that fully confident understanding may not come immediately. In the beginning, you have to work very hard simply to familiarize yourself with the basic definitions and rules; rather, understanding follows from becoming so familiar with the operations of the subject that they become second nature. Learning a subject involves putting the details of the subject into your head, some of which at first may seem arbitrary and strange, and then diligently practicing their use, to the point at which they become as obvious to you as the streets and alleys of your home town.

In fact, learning logic is very much like learning a language. After a while the structure of the language becomes so obvious that you cannot quite figure out why you once could not understand it. There are, however, also some important differences between learning logic and learning a language, where one must memorize thousands of words as a basis for full fluency. Yes, in order to learn logic you have to memorize things, but although there are things that must be memorized in learning logic, there are far fewer of them than in learning a language. Furthermore, as one becomes more experienced and familiar with the use of logic, the inter-relations between its concepts become more and more obvious, and mere memorization becomes less important, until it finally dawns on you that all of logic is in a deep sense just many different ways of saying the same thing. Repetition, I repeat, is the secret of learning.

1.2.1 How To Use This Text

• Work through on paper the demonstrations given in the text.
• There are a generous number of practice exercises, and solutions to about half of them are given in the back. Try to work as many of these as you can. Try the A's first; if you cannot do all of them you have missed something important in the previous section of the text. When you can do the A's, move on to the B's, and so forth.
• If you do all of the exercises in this text — do them again! — and perhaps again.
• You may also find it very helpful to try and make up your own problems.
• It is very important to be neat and tidy when writing out your proofs, truth tables, and so forth. A certain amount of doing logic correctly is nothing more than clerical accuracy. Writing things out completely, correctly, and legibly will facilitate careful and accurate thought — and will also encourage our over-worked teaching assistant to give you better marks.

1.3 Questions for Discussion

1. If all deductive logic is simply a process of reworking information given in the premisses, why aren't all deductive arguments circular? Or are they?

2. Is logic just a branch of mathematics? Or the other way around? Or neither?

3. Are logic and mathematics discovered or invented?

Chapter 2

Propositional Logic: Symbolization and Translation

2.1 Notation in Propositional Logic

As explained in Chapter 1, propositional logic is the first "level" of logic. In propositional logic we study the logical behavior of propositions, which are linguistic expressions or utterances that are intended by their user to indicate a fact or state of affairs. Examples of propositions are "This is an example of a proposition," "2 is a number," "D-Day was June 6, 1944," and "E = mc²." Propositions are so-called because in effect the speaker "proposes" that something is the case. Propositions (also sometimes called statements) are distinct from questions, commands, requests, or exclamations in that they can have a truth value (though as we shall see, it may be sometimes difficult to say just what that value is). Some logicians will say that "It is raining" and "Il pleut" represent the same proposition expressed in different languages; others (perhaps more commonsensically) will say that these represent different propositions that happen to point to the same state of affairs. We need not concern ourselves with this subtle distinction in an introductory course.

We will represent propositions by letters such as P, Q, R, P1, P2, .... Propositions can be joined together by certain symbols called propositional connectives. We shall be mainly concerned with the following truth-functional connectives: − (not, or negation), & (and), ∨ (inclusive or), → (the material conditional), and ↔ (the biconditional). These are also called truth-functional connectives because the truth value of compound propositions formed by linking simpler propositions together with these particular connectives is strictly a function of the truth values of the simpler propositions. Not all ways of forming compound propositions from simpler propositions are truth-functional. However, truth-functional combinations of propositions are the main ones we are concerned with here, because all that propositional logic is really concerned with are the truth values of propositions; as we shall eventually see, it is really a calculus of truth values.

It is possible to define other truth-functional connectives, and we will briefly discuss this in Chapter 5. This is by no means the smallest set of connectives with which one can do propositional logic. These connectives have been chosen because they are fairly close to the less formal logical connective concepts that are used in many natural languages. Common translations of these symbols are given in the next section. Combinations of propositional letters and connectives will be called formulas.
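For the many students who come to logic from computer science, it may help to see the truth-functional idea in code. The following Python sketch (the function names are ours, not the text's notation) treats each connective as a function from truth values to a truth value; the definition of the conditional anticipates the truth-table definitions given in Chapter 5.

```python
# Each connective, viewed as a function from truth values to a truth value.
# (The Python names are ours; the book's symbols are -, &, v, ->, <->.)
def neg(p):       return not p         # -P
def conj(p, q):   return p and q       # P & Q
def disj(p, q):   return p or q       # P v Q (inclusive)
def cond(p, q):   return (not p) or q  # P -> Q (material conditional)
def bicond(p, q): return p == q        # P <-> Q

# The value of a compound depends only on the values of its parts:
print(conj(True, disj(False, True)))  # True
print(cond(False, True))              # True: a false antecedent makes P -> Q true
```

That the compound's value is computed purely from the parts' values, with no reference to what the propositions mean, is exactly what "truth-functional" amounts to.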

Those strings of symbols that can represent propositions are called well-formed formulas, or wffs. It is possible to give a precise recursive definition of the set of all possible wffs (see, for instance, Lemmon's Beginning Logic [17]), but we will not do that in this course. Instead, we will simply rely on our knowledge of English to tell us which formulas can stand for complete sentences and which cannot.

Propositions can be atomic or molecular. Atomic propositions are those that cannot be broken down into truth-functional combinations of other propositions, while molecular propositions are those that can. An example of an atomic proposition is "Today is Tuesday," while an example of a molecular proposition is "Today is Tuesday and I have to go to a meeting."

In the examples in this chapter we'll use the following propositional symbols:

B := "Bob goes to the store"
A := "Alice goes to the store"
C := "Cheryl goes to work"
T := "Tad stays home"

The symbol := means "is defined as ...", and we will use the double-arrow ⇔ to mean "can be translated as ...".

2.2 Common English Equivalents for Propositional Connectives

Below I list some of the most commonly encountered English equivalents for the standard propositional connectives. It is impossible to list all of the possible English expressions that could be correctly translated into the standard symbols such as &. Expressions in natural languages such as English can be ambiguous, imprecise, or capable of multiple meanings and subtle rhetorical nuances, and may not always have a unique logical translation. We have to rely on our knowledge of the language, what might be called "linguistic common sense," in order to work out a speaker or author's implicit logical sense. The rules for translation that we present here certainly do not capture all of the conceivable ways that logical structure can be expressed in a natural language. However, many years of practical experience shows that the rules we present here (or ones very similar to them) allow us to define a system of logic that is very useful and which can form a basis for more advanced logics.

2.2.1 Negation

To negate a proposition is to deny that it is true or that it is the case. −P can be read as "not P," "P is not true," "it is not the case that P," or "P is false." Negation is a unary connective, in that it only applies to the proposition written immediately to its right. (This proposition could itself be a combination of other propositions.)

Example: Bob does not go to the store ⇔ −B.
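Since negation is truth-functional, a machine can evaluate it. A minimal Python sketch (illustrative only, using our symbol B as a placeholder truth value):

```python
def neg(p):
    # -P: true exactly when P is false
    return not p

B = False  # suppose "Bob goes to the store" happens to be false
print(neg(B))  # True: "Bob does not go to the store"

# Negating twice returns the original truth value, whichever it was:
for p in (True, False):
    assert neg(neg(p)) == p
```

The little loop at the end previews a fact we will later prove formally: double negation leaves a proposition's truth value unchanged.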

2.2.2 Conjunction

To conjoin two propositions is to assert or to hold them jointly. We will write conjunction in the form P & Q. The two conjoined propositions P and Q are called the conjuncts. Conjunction is usually taken to be binary, meaning that it applies to only two propositions, and while we will usually consider cases where only two propositions are conjoined, there is no reason in principle why any number of conjuncts cannot be linked in the same conjunction (although we will not prove that fact formally in this course). The most general conjunction can be written in the form P1 & . . . & Pn.

Here are some common expressions that would all be translated into symbols as P & Q: P and Q; Both P and Q; P but Q; P although Q; P whereas Q; P despite Q; P in spite of Q; P even though Q; While P, Q.

Note that the subtle rhetorical distinctions between "but," "whereas," "and," etc., are all lost in moving to propositional notation. If these distinctions make a difference to the logic of an argument, then they must somehow be expressed clearly enough that they can be translated into propositional notation.

Example 1: Cheryl goes to work and Tad stays home ⇔ C & T.

Example 2: Alice goes to the store while Tad stays home ⇔ A & T.

Example 3: Bob and Alice both go to the store ⇔ B & A.

Example 4: Bob and Alice go to the store while Cheryl goes to work ⇔ B & A & C.

Note that order does not matter in any correct usage of &; Example 2, for instance, could be translated just as well as T & A.

2.2.3 Disjunction

To disjoin two propositions is to say that one or the other, or possibly both, is or are the case. We write disjunction in the form P ∨ Q. The two disjoined propositions P and Q are called the disjuncts. Like conjunction, disjunction is usually taken to be binary, but also like conjunction it is perfectly admissible to write disjunctions of any number of disjuncts. The most general disjunction can be written in the form P1 ∨ . . . ∨ Pn. Here are common expressions that would be translated as P ∨ Q: P or Q; Either P or Q; P or Q or both.
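Programmers may notice that Python's built-in all() and any() behave exactly like these generalized conjunctions and disjunctions of any number of terms. A sketch, using our propositional symbols as placeholder truth values (the particular values are arbitrary assumptions for illustration):

```python
# Sample truth values for B ("Bob goes to the store"), A, and C;
# the particular assignments are arbitrary placeholders.
B, A, C = True, True, False

# all() and any() generalize & and v to any number of terms:
print(all([B, A, C]))  # False: P1 & ... & Pn needs every conjunct true
print(any([B, A, C]))  # True: P1 v ... v Pn needs at least one disjunct true

# As the text notes, the order of conjuncts or disjuncts does not matter:
assert all([A, B]) == all([B, A]) and any([C, B]) == any([B, C])
```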

. order does not matter. Example 3 shows that. or in circuit theory as XOR. Q is entailed by P . Here’s a typical example: “Either negligence or incompetence accounted for the sinking of the Titanic. Q if P . it might have been both. P entails Q. While P . In propositional logic the consequence relation is called material implication. P implies Q. Occasionally. P only if Q.2. Example 3 : Either Bob goes to the store. In a statement of the form P → Q the proposition P is the antecedent.” Clearly. and it is crucial to understand its properties. Given P . which are sometimes counterintuitive. The form P → Q is often called the conditional . was probably adapted from the Latin vel . Example 1 : Bob or Alice go to the store ⇔ B ∨ A. it will be useful to express the concept of exclusive disjunction. Q while P . Here’s an example in which the fact that a disjunction is exclusive might make a logical diﬀerence: “Every natural number is either even or odd. Later on we will see how to construct exclusive “or” in terms of the usual connectives. This is how “or” is usually used in English and many other natural languages. and is symbolized in this course by →. Q. Alice goes to the store. Q.10CHAPTER 2. also known as exclusive or . Example 2 : Either Cheryl goes to work or Bob goes to the store ⇔ C ∨ B. Material implication is a strictly binary connective. or Cheryl goes to work ⇔ B ∨ A ∨ C. PROPOSITIONAL LOGIC: SYMBOLIZATION AND TRANSLATION In propositional logic we will almost always take “or” in the inclusive sense. The usual symbol for inclusive “or. and (“aut”) for the exclusive sense — and thus less chance of confusing the two concepts.” Again. Q is a necessary condition for P . as with &. P is a suﬃcient condition for Q. If P . Thus. we can usually take “or” to be inclusive unless clearly stated otherwise.” ∨. any number of propositions can be disjoined. Example 2 could just as correctly have been written as B ∨ C. 
2.4 Material Implication It is probably fair to say that all or virtually all types of logic. meaning that if we accept “P or Q” as true. and Q is the consequent. or that P and Q may be true together. involve some notion of logical consequence — the idea that one thing may be taken to follow from certain others. In Latin there are two distinct words for the two senses of “or”: (“vel”) for the inclusive sense. Q. formal or informal. as with &. Q is implied by P . Again. Here are common translations of P → Q: If P then Q. we are willing to grant that either P or Q is true on their own.

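The construction of exclusive "or" promised above can be sketched already. This is my own illustration, not the book's official definition: one standard way to build XOR from the usual connectives is (P ∨ Q) & −(P & Q).

```python
# Illustrative sketch: exclusive "or" built from the usual connectives
# as (P or Q) and not (P and Q).
def xor(p, q):
    return (p or q) and not (p and q)

for p in (True, False):
    for q in (True, False):
        print(p, q, xor(p, q))  # differs from inclusive "or" only at T, T
```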
Innumerable minor linguistic variations of these forms are possible. It is very important not to confuse "P only if Q" with "P if Q": in these two forms the implication goes in opposite directions. Note carefully that if P → Q, it does not always follow that Q → P.

Example 1: If Bob goes to the store then Tad stays home ⇔ B → T.

Example 2: Bob goes to the store only if Tad stays home ⇔ B → T.

Example 3: Bob goes to the store if Tad stays home ⇔ T → B.

Example 4: In order for Alice to go to the store, it is necessary for Cheryl to go to work ⇔ A → C.

2.2.5 The Biconditional

The symbol ↔ is called the biconditional, and is defined such that P ↔ Q if and only if both P implies Q and Q implies P. It is, of course, not generally true of any two propositions chosen at random that they entail each other.

Example: Bob goes to the store if and only if Tad stays home ⇔ B ↔ T.

Here are the most common translations of P ↔ Q:

P if and only if Q. (This is often abbreviated as "P iff Q.")
P is a necessary and sufficient condition for Q.
P implies Q and Q implies P.
P is equivalent to Q.

2.2.6 Commutativity and Associativity

Commutativity: An operation is commutative if it returns the same result regardless of the order in which it is applied. For example, P & Q has the same truth value as Q & P.

Associativity: An operation is associative if it returns the same result regardless of how we group the terms to which it is applied. For instance, if we take bracketing to indicate that the bracketed term is evaluated first, it can be shown that (P ∨ Q) ∨ R has the same truth value as P ∨ (Q ∨ R).

Conjunction, disjunction, and the biconditional are commutative and associative, while material implication is not.

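Claims of commutativity and associativity can be verified by brute force over all assignments of truth values. The following is a sketch of mine, not from the text:

```python
from itertools import product

# Illustrative sketch: check commutativity and associativity claims by
# exhausting all combinations of truth values.
def implies(p, q):          # material implication, P -> Q
    return (not p) or q

bools = [True, False]
assoc_or = all(((p or q) or r) == (p or (q or r))
               for p, q, r in product(bools, repeat=3))
comm_and = all((p and q) == (q and p) for p, q in product(bools, repeat=2))
comm_imp = all(implies(p, q) == implies(q, p)
               for p, q in product(bools, repeat=2))
print(assoc_or, comm_and, comm_imp)  # True True False
```

As the last result shows, P → Q and Q → P can differ, which is exactly why implication is not commutative.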
2.2.7 "Unless"

"Unless," like "or," is a word that is used in more than one way in the English language, and some confusion has grown up over the right way to translate it into propositional symbols. Like "or," "unless" can be used in an inclusive or exclusive sense. Here is an example of the inclusive sense of "unless":

Johnny will fail the exam unless he studies.

Sadly, we know that Johnny studying for the exam does not exclude the possibility of Johnny failing the exam. Here is an example of the exclusive use of "unless":

We will have a picnic unless it rains.

The very thing we wish to deny is that we are going to have a picnic in the rain, while at the same time we affirm that we will have a picnic if it does not rain. This is most easily translated as exclusive "or". But check the context carefully. It would be very rare that the logic of an argument would depend on whether or not a usage of "unless" was exclusive. Therefore, in the large majority of cases, you can follow the advice of most logic books, which is to translate "P unless Q" as P ∨ Q. In the chapter on propositional semantics we will examine the truth table behavior of the various senses of "or" and "unless".

2.2.8 A Exercises

1. Translate the following into symbols:

(a) Tad stays home while Cheryl goes to work.
(b) Tad stays home but Cheryl does not go to work.
(c) If Tad stays home then Cheryl does not go to work.
(d) Either Bob goes to the store or Tad stays home.
(e) Alice going to the store is implied by Bob not going to the store.
(f) Bob does not go to the store only if Alice does.
(g) Cheryl goes to work if Bob goes to the store.
(h) For Cheryl to go to work it is sufficient that Tad stays home.
(i) For Cheryl not to go to work it is necessary that Tad not stay home.
(j) Cheryl going to work implies Alice going to the store, and Alice going to the store implies Cheryl going to work.
(k) Bob does not go to the store if Alice does.
(l) Bob going to the store is equivalent to Tad not staying home.
(m) Tad stays home unless Cheryl does not go to work.

2. Translate the following into clear, grammatically correct, idiomatic English:

(a) B → −A.
(b) B & C.
(c) −B ∨ C.
(d) −C ↔ T.
(e) −A → B.
(f) −A ∨ −T.
(g) −A & −T.
(h) −T → −B.
(i) T → B.
(j) B → T.
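The advice above, to translate "P unless Q" as P ∨ Q, can itself be checked truth-functionally. A sketch of mine, not from the text, confirming that P ∨ Q agrees with the natural paraphrase "if not Q then P":

```python
from itertools import product

# Illustrative sketch: the inclusive reading of "P unless Q" as P v Q
# agrees with the paraphrase "if not Q then P", i.e. -Q -> P.
def implies(p, q):
    return (not p) or q

agree = all((p or q) == implies(not q, p)
            for p, q in product([True, False], repeat=2))
print(agree)  # True on every assignment
```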
2.3 Truth Table Definitions of the Propositional Connectives

All of the propositional connectives we have used can be defined in terms of their characteristic truth tables. We introduce truth tables now because they will help us to understand why some of the connectives are translated in certain ways. We will study truth tables in much more detail in Chapter Four, where we will see that they offer a powerful method for the analysis of the validity of arguments.

Here is the simplest truth table, that of a single proposition P:

P
T
F

This merely expresses the assumption that any proposition is taken to be either definitely T or definitely F. The table is constructed simply by listing all the possible truth values of the proposition in a column below it. We can call these the "input" values. Tables for propositions containing two variables have four lines, in order to represent all possible combinations of truth values of the atomic propositions. The truth value of the whole expression is written in a column underneath the connective; call these values the "outputs".

Here are the truth-table definitions of the five most commonly used propositional connectives:

Negation:

− P
F T
T F

Conjunction:

(P & Q)
 T T T
 T F F
 F F T
 F F F

Disjunction:

(P ∨ Q)
 T T T
 T T F
 F T T
 F F F

Material Implication:

(P → Q)
 T T T
 T F F
 F T T
 F T F

Biconditional:

(P ↔ Q)
 T T T
 T F F
 F F T
 F T F

It is absolutely necessary for beginners in logic to memorize these basic tables, because they are the basis for a large part of what we do in this text.
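The tables above can also be generated mechanically. This sketch (mine, not the text's) prints the characteristic table for material implication; the other connectives work the same way:

```python
from itertools import product

# Illustrative sketch: generate the characteristic truth table of
# material implication by listing all input combinations.
def implies(p, q):
    return (not p) or q

def tv(b):
    return "T" if b else "F"

print("P Q | P -> Q")
for p, q in product([True, False], repeat=2):
    print(tv(p), tv(q), "|", tv(implies(p, q)))
```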
2.3.1 A Exercises

1. If P is T and (P & Q) is F, then state the truth values of

(a) P ∨ Q
(b) P → Q
(c) −P ∨ Q
(d) P ↔ Q
(e) Q → P

2. If it is true that Bob goes to the store and false that Cheryl goes to work, then what are the truth-values of the following propositions?

(a) Bob goes to the store and Cheryl goes to work.
(b) If Bob goes to the store then Cheryl goes to work.
(c) Either Bob goes to the store or Cheryl goes to work.
(d) Bob goes to the store if and only if Cheryl does not go to work.
(e) For Bob to go to the store it is sufficient that Cheryl not go to work.

2.4 Interpreting the Material Conditional

2.4.1 "Ifs" versus "Only ifs"

As we learn more about how deduction works, it will become apparent that the truth table for → is the backbone of deductive logic. However, this table has some counterintuitive features which you may only become fully comfortable with after a lot of experience with logic. We can note right away, however, that the table will help us to understand the various ways in which → is translated into English.

The reading "if P then Q" should be clear: from the table, we can see that on all the lines where P → Q is T, Q is true whenever P is T. In other words, for Q to be T, it is sufficient for P to be true.

The reading "P only if Q" sometimes confuses students (and a few professors from time to time as well). However, it makes perfect sense if you notice that, on all the lines of the table where P → Q is T, P is T only in the lines in which Q is true also. Hence, P cannot be T unless Q is T. The same point underlies the reading "Q is a necessary condition for P".

The concepts of necessity and sufficiency are easy to mix up. Concrete examples may help to make them clear. Here is a nice example (thanks to a student!) which illustrates the proper reading of P → Q. For a person to be adequately nourished, it is sufficient that they eat East Indian food. However, it is not necessary to eat East Indian food in order to live; there are many other kinds of nourishing food. (Lovers of curry might disagree with me about this.) On the other hand, it is necessary to ingest a certain amount of protein in order to be adequately nourished.

Consider the following statements:

(1) You will win the lottery only if you buy a ticket.

(2) You will win the lottery if you buy a ticket.

These are two very different statements. The first is the generally true claim that you have to buy a ticket in order to have a chance to win. The second says that you will win if you buy a ticket, which is certainly false for most lotteries! The truth conditions for these two statements are clearly quite different. Statement (1) above is equivalent to the following:

To win the lottery it is necessary that you buy a ticket.

Buying a ticket, of course, does not guarantee that you will win the lottery. Statement (2), on the other hand, is equivalent to this:

To win the lottery it is sufficient that you buy a ticket.

2.4.2 The "Paradoxes" of Material Implication

Consider the following garden-variety implication:

If you change your oil regularly then your engine will last longer.

We recognize this as true because we know about the causal relevance of regular oil changes to engine longevity. Here's another example:

If a party in a Parliamentary system wins the most seats in an election, it gets to form the government.

Here the relevance of antecedent to consequent is not causal, but legal; still, anyone familiar with the way Parliamentary governments work will recognize the statement as true. But now consider the following:

If Ottawa is the capital of Canada then Caesar crossed the Rubicon.

There seems to be little if any relevance of antecedent to consequent. Yet both antecedent and consequent are true, and so by the truth table for "if-then" the whole statement is true. This illustrates a fact that is often called the Paradox of Material Implication: the truth value of a material conditional has absolutely nothing to do with the meaning of the antecedent and consequent. This is a "paradox" only because it is an affront to our linguistic common sense. In propositional logic the only "meaning" the propositional letters P, Q, . . . have is their truth values: propositional logic is purely an algebra of truth values, and any formula is true or false depending only on the truth values of the input variables and the logical relations between them.

The challenge to common sense becomes even more acute when we consider cases in which the antecedent is false, for the table says that such conditionals are true regardless of the truth value of the consequent. Thus, both of the following are true statements:

If 2 + 2 = 17π, then Canada is in North America.

If 1 = 0 then 1 = 2.

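Both conditionals above come out true under the truth-table definition because their antecedents are false. A quick check, as a sketch of mine rather than anything from the text:

```python
# Illustrative sketch: a material conditional with a false antecedent is
# true no matter what its consequent is.
def implies(p, q):
    return (not p) or q

antecedent = (1 == 0)               # false, as in "If 1 = 0 then 1 = 2"
print(implies(antecedent, 1 == 2))  # True
print(implies(antecedent, True))    # True: the consequent is irrelevant
```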
These examples are an illustration of the following principle of classical two-valued logic:

Ex Falso Quodlibet: from a falsehood anything follows.

A free translation of the Latin phrase is, "from a falsehood, conclude whatever you like." This principle is sometimes abbreviated Ex Falso, and we shall encounter it again in the next chapter.

There is an advanced branch of logic called relevance (or relevant) logic in which logicians attempt to construct notions of implication which can express relevance of antecedent to consequent in a way that is closer to ordinary language. However, long experience has shown that the simple, stripped-down notion of material consequence presented here is on the whole the most useful notion of implication we have, since it can be adapted in so many ways.

2.5 Syntactic Conventions

2.5.1 Scope of a Connective

The scope of a propositional connective is the set of formulas that it operates on. For instance, in −P, the scope of the negation sign is just P; in −P & Q, the scope of the sign of conjunction is −P and Q; and in −(P & Q) the scope of the negation sign is all of (P & Q). As these examples indicate, scope can be fixed by brackets if there is any danger of ambiguity.

2.5.2 Main Connective

The main connective of a propositional formula is the connective whose scope is the whole formula. For instance, in (P ∨ Q) → (P & Q), the → is the main connective, while in −(P & Q) the main connective is the negation sign.

Is there a main connective in a conjunction or disjunction with more than two terms? Since conjunction and disjunction are fully associative, it would seem that no one connective in (P1 & . . . & Pn) or (P1 ∨ . . . ∨ Pn) should be privileged. This question illustrates the drawback of the usual notation we use for disjunction and conjunction, which (as George Spencer Brown pointed out [5]) are usually represented as if they were binary when in fact they are not restricted to two variables. It is also possible, although we will not rely on it here, to represent conjunctions and disjunctions respectively by writing something like &(P1, P2, . . . , Pn) = ⋀i Pi and ∨(P1, P2, . . . , Pn) = ⋁i Pi, where the connectives in front operate on the whole list of propositions given in brackets and thus, in effect, are the main connectives for the whole conjunction or disjunction. The highly flexible notation used by Spencer Brown in his calculus of indications [5] gets around this problem elegantly. In Chapter 5 we will introduce the highly flexible Reverse Polish Notation (RPN), and show how it can be used to streamline the evaluation of complex formulas.

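Scope makes a truth-functional difference. The following sketch (mine, not the text's) shows that −(P & Q), where negation governs the whole conjunction, disagrees with −P & Q, where it governs P alone:

```python
from itertools import product

# Illustrative sketch: -(P & Q) versus -P & Q. The two formulas differ
# because the negation sign has different scope in each.
disagreements = [(p, q) for p, q in product([True, False], repeat=2)
                 if (not (p and q)) != ((not p) and q)]
print(disagreements)  # [(True, False), (False, False)]
```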
2.5.3 Bracketing

There are almost as many bracketing conventions as there are logic books. In this book we will use a very simple convention whose sole purpose is to avoid ambiguity: bracket off a sub-formula if there is any possibility of confusion about which connective applies to which part of the formula. Brackets indicate the scope of a connective.

Example: In P & (Q ∨ R), the subformula Q ∨ R is considered as a unit, and this formula would be translated as "P and either Q or R." By comparison, (P & Q) ∨ R would be translated as "Either P and Q or R," or, perhaps more clearly, as "R or both P and Q." In (P & Q) → R, the & connects P and Q, and the → connects the whole subformula (P & Q) with R.

Brackets can be nested, and the general rule is that one evaluates inner brackets first, and then works outwards. Putting brackets around a whole formula is optional. (There is an exception to this when we do truth tables, which will be explained in Chapter 5.)

Remember that we allow unlimited associativity for conjunction, disjunction, and the biconditional. This means that we can write conjunctions of n formulas as (P1 & . . . & Pn), disjunctions of n formulas as (P1 ∨ . . . ∨ Pn), and equivalences of n formulas as (P1 ↔ . . . ↔ Pn), without having to bracket off pairs of conjuncts or disjuncts.

Be careful! In English and many other natural languages, the scope of logical connectives is ambiguous and must be understood from context and a knowledge of common usage.

Example: "If Kerry wins then Bush loses and Cheney has to retire" would usually be understood to be bracketed as follows: "Kerry wins → (Bush loses & Cheney has to retire)." Strictly speaking, it could also be read as "(Kerry wins → Bush loses) & Cheney has to retire," but few experienced speakers of English would read it this way. In Chapter Four we will learn how to show that these two versions of the sentence are not logically equivalent. I discuss further examples like this in the section on Combinations below.

2.6 Combinations

Truth-functional combinations of propositions can be created in literally an infinite variety of ways. Propositional logic can express relationships that would be far too complicated to state conveniently in ordinary language. As indicated in the discussion of bracketing above, a sensitivity to context and conventional usage is sometimes required to translate combined expressions either from a natural language to a formula or the reverse.

Example 1: If Bob and Alice go to the store then Tad doesn't stay home ⇔ (B & A) → −T.

Example 2: Either Bob goes to the store or Tad stays home and either Alice goes to the store or Cheryl goes to work ⇔ (B ∨ T) & (A ∨ C).

Example 3: Either Bob goes to the store only if Alice goes to the store or else Cheryl goes to work only if Tad stays home ⇔ (B → A) ∨ (C → T).

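The two bracketings of the Kerry sentence discussed above really are inequivalent, as a brute-force check shows. This is a sketch of mine, not the text's; K, B, and C stand for the three atomic sentences:

```python
from itertools import product

# Illustrative sketch: K -> (B & C) versus (K -> B) & C are not
# logically equivalent; they disagree on some assignments.
def implies(p, q):
    return (not p) or q

diffs = [(k, b, c) for k, b, c in product([True, False], repeat=3)
         if implies(k, b and c) != (implies(k, b) and c)]
print(len(diffs))  # 2 assignments on which the two readings disagree
```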
Example: The expression (P1 & . . . & Pn) → (Q1 ∨ . . . ∨ Qm) is a perfectly legitimate formula, and yet few would try and say anything like this in words if n and m were much larger than, say, 2 or 3.

2.6.1 A Exercises

1. Using the propositional letters defined at the beginning of this chapter, translate the following English sentences into formulas:

(a) If Bob goes to the store and Tad stays home then Alice goes to the store only if Cheryl goes to work.
(b) Bob and Alice don't go to the store if and only if Tad doesn't stay home.
(c) If Tad stays home then Cheryl goes to work but Cheryl goes to work only if Tad stays home.
(d) When Bob goes to the store, either Alice also goes to the store or Cheryl doesn't go to work.
(e) Either Bob or Alice goes to the store if Tad doesn't stay home.

2. Translate the following formulas into clear, grammatically correct, idiomatic English:

(a) (A & B & C) → −T
(b) (A & B) ↔ (C ∨ −T)
(c) A → (B → C)
(d) (A → B) → C
(e) (A → B) → (C → T)

2.7 Negated Forms

It is not always obvious how to translate English expressions involving negations that operate over or within conjunction, disjunction, or implication. Here are a few of the more frequently encountered negated forms:

Neither P nor Q: −(P ∨ Q).
Either −P or −Q: −P ∨ −Q.
Not both P and Q: −(P & Q).
Both not P and not Q: −P & −Q.
P does not imply Q: −(P → Q).
P implies not-Q: P → −Q.
P is not equivalent to Q: −(P ↔ Q).
P is equivalent to not-Q: P ↔ −Q.

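The first four negated forms pair up in a way worth checking: "neither P nor Q" matches "both not P and not Q", while "not both P and Q" matches "either not P or not Q". A brute-force sketch of mine, not from the text:

```python
from itertools import product

# Illustrative sketch: -(P v Q) agrees with -P & -Q, and -(P & Q)
# agrees with -P v -Q, on every assignment of truth values.
pairs = list(product([True, False], repeat=2))
neither_ok = all((not (p or q)) == ((not p) and (not q)) for p, q in pairs)
notboth_ok = all((not (p and q)) == ((not p) or (not q)) for p, q in pairs)
print(neither_ok, notboth_ok)  # True True
```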
2.7.1 A Exercises

1. Using the propositional letters defined at the beginning of this chapter, translate the following English sentences into formulas:

(a) Neither Bob nor Alice goes to the store.
(b) Either Cheryl doesn't go to work or Tad doesn't stay home.
(c) Cheryl goes to work only if Tad doesn't stay home.
(d) If Tad stays home it does not necessarily mean that Alice goes to the store.
(e) Cheryl going to work is insufficient to guarantee that Tad stays home.

2. Translate the following formulas into clear, grammatically correct, idiomatic English:

(a) −(−A ∨ B)
(b) −(B → (−C ∨ T))
(c) −(A ↔ B)
(d) A ↔ −B
(e) −(A ∨ B)

2.8 Intensional Contexts

Intensional contexts are statements that report on how a proposition is represented to or in a person or thing capable of representation. For instance, they might report on someone's state of mind or utterances. Here are typical examples:

Joe believes that Santa is real.
Joe wishes that Santa could be real.
"Santa came last night," said Joe.
Science proves that Santa does not exist.

How should we translate these? The key is to see that the truth value of the whole statement — for instance, whether or not it is true that Joe said that Santa came last night — is entirely independent of the truth value of the statement itself that Joe made. Therefore, the connection between Joe's statement itself and the claim that he made a certain statement is not truth-functional. If we are given any statements like the four examples above to translate into propositional symbols, we have no choice but to treat them as atomic propositions, and give them simply a single propositional letter such as P.

2.9 Alternative Notation for Propositional Connectives

Virtually every elementary calculus text published anywhere in the world uses nearly the same notation for the basic operations of the theory. Logic, however, has not achieved the same standardization, which is a reflection of the fact that logic is still a subject that is very much under development. Here are some other notations that one might encounter for the symbols we use in this text.

−P is often written as ∼P, ¬P, or P̄. (The symbol ∼ is called the tilde.)

P & Q is often written as P ∧ Q, sometimes as P · Q, or occasionally as PQ.

P ∨ Q is often written as P + Q.

P → Q is frequently written as P ⊃ Q. (The symbol ⊃ is pronounced "horseshoe.")

P ↔ Q is sometimes written as P ≡ Q.

2.10 B Exercises on Chapter 2

[Under construction]


Chapter 3

Propositional Logic: Introduction to Natural Deduction

We begin by setting forth rules and procedures for what logicians call natural deduction. Logicians call this way of reasoning natural deduction because it is meant to come as close as one reasonably can to the way people tend to reason deductively in ordinary natural languages (such as English). Some students may well find it to be decidedly unnatural at first (though familiarity will bring ease of use). However, this system of deductive reasoning is, in fact, a very useful distillation of the most important deductive principles that we use in ordinary reasoning.

The justification for the approach we take, out of the many approaches that are mathematically possible, is pragmatic, in that extensive experience shows that what we set forth here can be applied to a great deal of the day-to-day reasoning that we actually carry out, and can also serve as a very useful foundation for further developments. In fact, the rules given here represent more than merely a sort of anthropology of reasoning: they are normative, in the sense that they are an attempt to codify, clarify, and make more precise our day-to-day reasoning. (A normative rule is one that says what we should do, for instance, "Honour thy father and mother," or "Change your oil every 3000 km.")

The sort of logic we set forth here by no means exhausts all possible useful logics. There are also numerous axiomatic systems of logic that are focused on logical economy and generality. Such axiomatic systems play an important role in research into the basis of mathematics and logic, and in discovering new systems of logic that might be useful. However, a natural deductive approach designed more for ease of use and closeness to intuitive methods of reasoning is much more helpful for beginners in logic, especially those who might want to learn logic mainly because of its numerous useful applications in other fields such as philosophy, computing, mathematics, and science.

We shall symbolize the logical forms of arguments by sequents, which are expressions of the general form

P1, P2, . . . , Pn ⊢ Q,     (3.1)

where the propositions Pi are the premisses (the statements that are taken for granted in the context of the problem), and Q is the conclusion (the proposition to be arrived at by valid deductive steps from the premisses). The symbol ⊢ (called the turnstile) can be read as "therefore," "proves," or (more fussily) as "syntactically entails." If P ⊢ Q we can also say that Q is derivable from P.

A sequent of the form (3.1) amounts to the claim that the stated conclusion can be derived validly from the given premisses by accepted rules of deduction. Since the premisses of a sequent are taken to be held conjointly, a sequent of the form (3.1) above is just the same thing as saying

(P1 & . . . & Pn) ⊢ Q.     (3.2)

Note carefully that in general deductive entailment goes only one way: given that we can show P ⊢ R, we cannot automatically conclude that R ⊢ P.

There is an important class of formulas, however, for which the entailment goes both ways — each formula entails the other. For such formulas we will write a sequent of the form

P ⊣⊢ Q.     (3.3)

Such a formula expresses an interderivability between P and Q. To establish interderivability we generally have to carry out two proofs, one showing P ⊢ Q, and the other showing Q ⊢ P, and two such proofs are often quite different in form.

It will be shown that some formulas can be derived without any premisses left over. The sequents for such formulas can be written in the form

⊢ P,     (3.4)

which can be read "Provable P," or "P is a theorem." Theorems are propositions that can be asserted without the qualification of any premisses at all. Such theorems are sometimes referred to by the grand name of logical truths.

There is no reason we can't concatenate sequents together to get something like this:

P, Q ⊢ R ⊢ S ⊢ T,     (3.5)

in which a sequence of conclusions are derived one after the other. This reflects common linguistic practice, where it would not be uncommon to make an argument of the form "P and Q, therefore R, therefore S, therefore T." However, in this book we will be almost entirely concerned with sequents with only one turnstile.

The proof or derivation of a sequent will consist of a sequence of lines, beginning with the statement of the premisses as assumptions, and ending (one hopes) with the desired conclusion. Each line has to show four things: the dependencies (if any), the line number, the proposition asserted on the line, and the justification for asserting that proposition. The dependencies are the line numbers of the assumptions on which the line is based; if the line itself is merely an assumption, the dependency is the number of the line itself. If the line is a theorem there will be no dependencies. The justification is a statement of the rule used to get the line, together with the numbers of the previous lines (if any were needed) from which the line was derived by means of the indicated rule. We shall say that a proposition depends upon the assumptions listed to the left of it — that is, that the assumptions to the left of a proposition in a line govern that proposition. The meaning of these terms will be clearer when we consider examples of proofs.

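The turnstile is defined here syntactically, but for propositional sequents one can brute-force the matching semantic claim: no assignment of truth values makes every premiss true while the conclusion is false. A sketch of mine, not from the text, using modus ponens as the example:

```python
from itertools import product

# Illustrative sketch: check semantically that no assignment makes the
# premisses of "P, P -> Q |- Q" true while its conclusion is false.
def implies(p, q):
    return (not p) or q

counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if p and implies(p, q) and not q]
print(counterexamples)  # []: the sequent has no counterexample
```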
3.1 Basic Rules of Derivation

The following list of rules is very similar to that in Lemmon's Beginning Logic [17], except that we will derive MT instead of taking it as basic. Also, I have made certain assumptions that Lemmon uses more explicit than he does. In the following, the proposition letters P, Q, etc., may denote either atomic or molecular formulas.

1. Rule of Assumption (A): We may introduce any proposition anywhere in a proof that we wish — so long as we never forget that it is an unproven assumption! Dependency: The assumption so introduced depends only on itself, and the line number at which it is introduced appears in the assumption column to its left. Form of justification: A.

2. Double Negation (DN): From P we can derive − − P, or from − − P we can derive P. Dependency: The double negation of a line in a proof always depends upon the same assumption as the line double-negated. Form of justification: m DN, where m is the line double-negated.

3. &-Introduction (&I): Given P and Q separately, we can derive P & Q or Q & P. Dependency: the conclusion depends on whatever assumptions P and Q depended upon. Form of justification: m, n &I, where m and n are the lines for the premisses.

4. &-Elimination (&E): Given the conjunction P & Q, we can derive either of the conjuncts P or Q separately. Dependency: the conclusion depends upon whatever assumptions the original conjunction depended upon. Form of justification: m &E, where m is the line containing the conjunction used.

5. Modus Ponens (MP): From P and P → Q we can derive Q. Dependency: The conclusion Q depends upon whatever assumptions P and P → Q depend upon. Form of justification: m, n MP, where m and n are the lines for the premisses.

6. v-Introduction (vI): Given P, we may derive P ∨ Q or Q ∨ P, where Q is any proposition whatever. Dependency: The conclusion depends upon whatever assumptions the premiss depended upon. Form of justification: m vI, where m is the line of the proposition "or-ed" onto.

7. v-Elimination (vE): From a disjunction P1 ∨ . . . ∨ Pn, we may conclude Q, so long as we can find a derivation of Q from all n of the disjuncts separately.

Dependency: the conclusion Q depends upon any assumptions that the starting disjunct depended upon, together with any additional assumptions that had to be introduced (but which could not again be discharged) in the derivations of Q from the individual disjuncts. General form of justification: k, l1, m1, . . . , ln, mn vE, where k is the line of the starting disjunction, li is the first line of the i-th subproof, and mi is the last line of the i-th subproof.

8. Conditional Proof (CP): If Q follows from the assumption P, then we can conclude P → Q. Dependency: The conclusion P → Q depends upon whatever assumptions P depended upon, together with any additional assumptions that had to be introduced (but which could not again be discharged) in the derivation of Q from P. Form of justification: m, n CP, where m is the line number of the antecedent of the conditional to be proven, and n is the line number of the consequent.

9. Reductio ad Absurdum (RAA): If from the assumption P (or −P) we can derive a contradiction of the form Q & − Q, we can conclude −P (or P respectively). Dependency: The conclusion depends upon any assumptions that governed any lines used in the derivation of the contradiction, but which were not themselves discharged in that derivation. Form of justification: m, n RAA, where m is the line at which we assume the negation of the conclusion to be established, and n is the line at which we arrive at the contradiction that negates the initial assumption.

10. Reflection (Rfl): A very basic assumption of the sort of natural deduction we use here, which Lemmon does not explicitly state himself but which he certainly uses, is that any proposition may be taken to follow from itself. (If it is true that my name is Kent, then it would be an odd sort of logic in which I could not immediately though perhaps redundantly infer that my name is Kent.) I'll call this the principle of Reflection, just to give it a name. This allows us to cite a single proposition as both the beginning and end of certain kinds of sub-proofs in vE, and enormously streamlines certain derivations, though we need not cite it explicitly when it is used. See examples below.

11. Reiteration (R): Any line may be reiterated or repeated anywhere it is needed in a proof. Dependency: The reiterated line has the same dependencies as the line it was copied from. Form of justification: m R, where m is the number of the line being reiterated.

12. Definition Introduction (DfIntro): If P := Q, then P ⊣⊢ Q.

3.1.1 Alternative Names for the Basic Rules

MP is also known as Detachment, Modus Ponendo Ponens, or Conditional Elimination.

&I is also sometimes called Conjunction.

&E is also sometimes called Simplification.

vI is also sometimes called Addition.

CP is sometimes called Conditional Introduction, → Intro, or ⊃ Intro.

RAA is sometimes called Indirect Proof or Negation Introduction.

3.2 Comments on Basic Rules

Some students may feel that it is a little fishy to be allowed to assume anything we want at any time, since the Rule of Assumption allows us to introduce any proposition whatsoever into a derivation whenever we need it. There is nothing logically illegitimate about doing this, however, so long as you remember that it was just an assumption. The fact is that an assumption is nothing more than that — an unjustified adoption of a proposition. Adopting an assumption is rather like borrowing money on a credit card that has no credit limit, but on which all debts must ultimately be paid. Once we have adopted an assumption and used it in the subsequent derivation, everything from that point on is dependent on the assumption and cannot be asserted unconditionally. Assumptions can be discharged by CP, RAA, and vE; in effect, this is analogous to the process of paying back borrowed money.

MP is the pattern for all deductive inferences. It expresses the fundamental notion of logical consequence: that one proposition may follow from another.

To some people vI seems a bit like cheating. An example might help: if it is true that my name is Kent, it is also safe (although perhaps odd) to say that either my name is Kent or the moon is made of green cheese. You should be able to see that nothing illegitimate is going on, if you can see that the proposition added to or "or-ed" onto P is not being asserted categorically. If you cannot think of any other way to get a letter into the proof, vI may be the best way to do it.

vE allows us to derive conclusions from disjunctions. Although the general statement of the rule given above allows us to derive conclusions from disjunctions containing any number of terms, usually in the propositional logic section of this course it will be applied to disjunctions of the form P ∨ Q containing only two terms. When we do predicate logic we will use a rule called Existential Elimination (EE) which is, in effect, a form of vE over arbitrary numbers of disjuncts. Students often find vE to be difficult at first, but the difficulties are mostly notational. Practice will make perfect.

RAA is one of the most powerful and useful rules: if you cannot think of any other way of getting to the desired conclusion, introduce as an assumption the negation of the desired conclusion and then try to get a contradiction. Sometimes even very well educated people are surprisingly reluctant to use RAA, since they are uncomfortable assuming for the sake of argument a proposition that they know or wish to believe is false. This is unfortunate, since the ability to freely entertain hypotheses, no matter how outrageous they may seem, perhaps merely for the sake of argument, is one of the most useful tools of thought. Note, by the way, that RAA would not be valid in a logic that was not bivalent (i.e., a logic in which not all propositions were either definitely true or definitely false).

and &I.. and then derived MP using DS. and so I have allowed it as a rule even though it is redundant.e. These rules are chosen because they are fairly close to the way people actually reason in ordinary natural languages. the reader might ﬁnd it a good exercise to show that we could have taken Disjunctive Syllogism (DS. The centre column lists the propositions that lead one after the other to the conclusion.) A A A 3 DN 2. as a deﬁnition instead of a derived result). a consequence of Reﬂection. DN. Reiteration is. Think of it as analogous to training wheels on a bicycle.2.5 MP (P & Q) → R P −−Q Q P &Q R This derivation illustrates the basic pattern for all natural deductions. that . a logic in which not all propositions were either deﬁnitely true or deﬁnitely false). the numbers in brackets are the line numbers. introduced → as a deﬁned symbol (taking the rule of Implication. they are therefore designed mainly for ease of use. − − Q 1 2 3 3 2. Those who have some familiarity with accountancy can think of the left column as analogous to a tabulation of debit. Debits and credits must balance in the end. Reiteration is not needed in a system like ours because one can always cite a line that has already been written (so long as you don’t forget to also cite the assumptions upon which it depends) anywhere else in the proof. beginners will sometimes ﬁnd Reiteration to be helpful. Despite this. If we discharge all the assumptions and still have a proposition left over. not logical economy.28 CHAPTER 3. deﬁned below) as a basic rule instead of MP. which you will come to do without as your skill increases. the assumptions we make allow us (by means of the rules) to assert other propositions. INTRODUCTION TO NATURAL DEDUCTION not be valid in a logic that was not bivalent (i. also given below. together with the lines to which the rule was applied. 1 (P & Q) → R. of course. and the centre column as credit.3 1. For instance. 
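The derivation above establishes the sequent (P & Q) → R, P, −−Q ⊢ R syntactically. As the text notes, Chapter 5 will show that a valid sequent is also semantically valid: every assignment of truth values that makes all the premisses true makes the conclusion true. The following Python sketch checks this by brute force over all eight truth assignments (the helper names `entails` and `implies` are my own illustration, not part of the text's system):

```python
from itertools import product

def entails(premises, conclusion, n):
    """True iff every truth assignment satisfying all premises satisfies the conclusion."""
    return all(conclusion(*row)
               for row in product([False, True], repeat=n)
               if all(p(*row) for p in premises))

def implies(a, b):
    # the material conditional ->
    return (not a) or b

# Sequent:  (P & Q) -> R,  P,  --Q  |-  R
premises = [
    lambda p, q, r: implies(p and q, r),
    lambda p, q, r: p,
    lambda p, q, r: not (not q),   # double negation of Q
]
ok = entails(premises, lambda p, q, r: r, 3)
print(ok)  # True: the sequent has no countermodel
```

The same checker reports False for an invalid sequent such as P ⊢ Q, which is the semantic counterpart of the fallacy checks later in the chapter.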
It should also become evident that other rules could just as easily be taken as basic.3 (1) (2) (3) (4) (5) (6) R (This uses MP. that is. the assumptions that the proposition in the centre column depends upon. It will become increasingly evident as the course progresses that these rules of deduction are by no means the simplest rules upon which a valid system of propositional logic could be based.4 &I 1. Making an assumption is like borrowing money. P. The right hand column lists the rule that justiﬁes the proposition written in that line. Strictly speaking.3 Simple Examples of the Basic Rules In this section I state and prove several sequents which demonstrate the use of the Basic Rules in especially simple and clear ways. 3. but we must keep track of the fact that we are relying on certain assumptions just as we must keep track of money we owe. The left-hand column lists the dependencies.

this sequent is actually called “vElimination. 5 P Q→P 1 2 1 1 (1) (2) (3) (4) (This demonstrates CP and R. This is an entirely optional step.) P &Q P →R P R (This uses vI. Notice also that in lines 4 and 6 I annotate the assumptions that start the subproofs with an indication of the rule that I am trying to use in the subproof. In some treatments of propositional logic. Each subproof starts by temporarily assuming the disjunct to be tested.3 MP P ∨ (Q & R) 1 1 (1) (2) 4 P ∨ Q.2.6 MP 1. To demonstrate the claim. you have to test both routes — because the claim is that either route works. this is especially helpful when you use nested subproofs. However. that temporary assumption is discharged.4 MP A (vE) 3. This is why we use a subproof for each disjunct in the starting disjunction. when the subproof is done. P → R 1 2 1 1. and whether or not you do it does not aﬀect the validity of the proof. since variations on it will occur often enough that it will be useful to give it a name.4. The whole idea of vE is that we use it to establish the claim that from either P or Q we can get to the desired conclusion.7 vE A A 1 &E 2.3.6.” Here we will call it inference pattern Convergence. that is.) P P ∨ (Q & R) R A 1 vI (This uses vE and MP.6 1. it can help you keep track of which assumption is part of which subproof. (See the proof of DS below. a proposition that can be held unconditionally — like a property with no mortgage on it! 2 P & Q.) A A A A (vE) 2.3.2 3 P (1) (2) (3) (4) R (This uses &E and MP. SIMPLE EXAMPLES OF THE BASIC RULES 29 proposition counts as a theorem. P → R. It is like a claim that we can get to Calgary from Lethbridge via either Highway 2 or the Vulcan Highway.) P Q P Q→P A A (CP) 1R 2. Q → R 1 2 3 4 2.5.4 6 3.) I have borrowed this device from Jacquette [13].3 CP .3 (1) (2) (3) (4) (5) (6) (7) (8) P ∨Q P →R Q→R P R Q R R This is one of the simplest examples I have been able to think of that demonstrates the use of vE.
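The "two routes to Calgary" picture behind vE can also be confirmed semantically. The short Python check below (an illustration only; the variable names are mine) searches all truth assignments for a counterexample to the Convergence sequent P ∨ Q, P → R, Q → R ⊢ R and finds none:

```python
from itertools import product

implies = lambda a, b: (not a) or b

# Convergence:  P v Q,  P -> R,  Q -> R  |-  R
rows = list(product([False, True], repeat=3))
counterexamples = [
    (p, q, r) for p, q, r in rows
    if (p or q) and implies(p, r) and implies(q, r) and not r
]
print(counterexamples)  # []: both "routes" really do lead to R
```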

2. Many of these sequents have common names. Q → R (d) P. P 1 2 3 1. (a) P → (Q & R). But it demonstrates in a very clear way how to use RAA.2 1.3. Find natural deduction proofs for the following sequents. P ∨ R (e) P → Q.2 Q (This demonstrates RAA. −R 3. P → R. (a) P. P → S 1. (P & Q) → R R∨S Q∨S P ∨R Q→R −P (Hint: try RAA) (b) P → Q. and you should learn these names and acquire the ability to recognize these forms in their innumerable variations. R → Q. Q → R.2 MP 3.3 1. Construct proofs of the following sequents. P → −Q −Q Q R R&S (b) − − P.4 &I 3.) (1) (2) (3) (4) (5) (6) P →Q P −Q Q Q& − Q Q A A A (RAA) 1. P → R (e) − − P B P ∨R (d) P. 6 P → Q. and which will be found to be very useful in our future propositional and predicate natural deductions. this sequent is merely the statement of MP.30 CHAPTER 3.1 A Exercises 1. I have marked the assumption in line 2 with the rule that I intend to use in the subproof. P (c) P ∨ Q. INTRODUCTION TO NATURAL DEDUCTION As in sequent 5. P → Q (c) P & Q.5 RAA Of course. .4 Some Useful Sequents In this section we derive from the Basic Rules a series of sequents that are very widely used throughout logic. 3.

it was Alice who paid the phone bill. since it illustrates the use of vE and RAA. Reﬂection also allows us. Verbal example: If Bob goes to work then Alice stays home.4. −Q 1 2 3 1.3. it was not Bob. But Alice did not stay home. in which P is both assumption and conclusion of a sub-proof. −Q 1 2 3 4 5 2. however.4.6 RAA 1. Hence.4 1.2 (1) (2) (3) (4) (5) (6) (7) (8) P Disjunctive Syllogism (DS) P ∨Q −Q P Q −P Q& − Q P P A A A (vE) A (vE) A (RAA) 2. The conclusion follows by vE. 3 is the line at which the working assumption P was introduced and also the line at which it is discharged (since P is the very conclusion we are aiming at). but we shall rarely use vE on disjunctions with more than two terms. In the justiﬁcation for the conclusion at line (8). Bob did not go to work. from the assumption of the second disjunct we can derive a contradiction and thereby prove P — or.4. anything we want.3 1.1 7 One-Way Sequents −P (1) (2) (3) (4) (5) (6) Modus Tollens (MT) P →Q −Q P Q Q& − Q −P A A A (RAA) 1.3.3 1.4 &I 5. 4 is the line at which the second working assumption was introduced and 7 the line at which it was discharged. Therefore. The object is to derive P from P ∨ Q given the condition −Q. SOME USEFUL SEQUENTS 31 3.4 2. 8 P ∨ Q.2.3. to establish the theorem P → P as follows: 9 P →P Law of Identity (Id) .3 MP 2. The proof of DS will repay careful study. If we were working with a disjunction with more than two terms.4 &I 3.5 RAA P → Q. Note the use of Reﬂection in line 3.7 vE Example in Words: Either Bob or Alice paid the phone bill. for example. 1 is the line at which the premiss P ∨ Q was introduced.2 MT is one of the most widely used rules in deductive logic. there would have to be two line numbers for each working assumption introduced and then discharged. indeed. The ﬁrst disjunct is simply the desired conclusion.
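Both one-way sequents just proved, MT and DS, can be double-checked against their truth tables. This sketch (my own illustrative check, not part of the proof system) verifies that in every assignment where the premisses hold, the conclusion holds too:

```python
from itertools import product

implies = lambda a, b: (not a) or b
rows = list(product([False, True], repeat=2))

# Modus Tollens:  P -> Q, -Q  |-  -P
mt_ok = all((not p) for p, q in rows if implies(p, q) and not q)

# Disjunctive Syllogism:  P v Q, -Q  |-  P
ds_ok = all(p for p, q in rows if (p or q) and not q)

print(mt_ok, ds_ok)  # True True
```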

or more brieﬂy Ex Falso.32 1 (1) (2) CHAPTER 3.1 CP Example in Words: Paul Martin is Prime Minister of Canada only if Paul Martin is Prime Minister of Canada. −P 1 2 3 1. This is. Here is a slightly diﬀerent version of ExF that helps to illustrate how the methods work: 12 P & − P Q There are at least two ways to prove this. and this phenomenon. however. is important enough to deserve a name of its own: Ex falso quodlibet.2 1. this theorem could be just as well called “self-implication”. ”. This example illustrates the fact that the utter triviality of many of the basic rules of logic becomes apparent when they are put into words.” 11 P.2 &I 3. This Latin phrase literally means “from a contradiction (or falsehood). It is fascinating how powerful these apparently “trivial” inference rules can be.4 RAA Example in Words: If 2 + 2 = 4 and 2 + 2 = 4 then Captain Kangaroo is the Prime Minister of Australia. the case. A “learner’s permit” method uses Reiteration: 1 2 1 1 (1) (2) (3) (4) P & −P −Q P & −P Q A A (RAA) 1R 2.3 &I 2. .2 Q (1) (2) (3) (4) (5) Ex Falso Quodlibet (ExF) P −P −Q P & −P Q A A A (RAA) 1. which we have already encountered in the last chapter. indeed. in principle. INTRODUCTION TO NATURAL DEDUCTION P P →P A 1.3 RAA It can be done more compactly as follows: . or “from a contradiction anything follows.4 RAA −P P &Q P P & −P −(P & Q) The reader will notice that the peculiar way in which we used RAA in the proof of DS would apparently. .2 1 −(P & Q) (1) (2) (3) (4) (5) Denial (Den) A A (RAA) 2 &E 1. whatever. More generally. allow us to deduce any proposition whatsoever from a contradiction. Here is another simple sequent that can sometimes come in handy: 10 −P 1 2 2 1. every proposition implies itself.
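Ex Falso looks strange in words, but its semantic explanation (previewed here, ahead of Chapter 5) is simple: no truth assignment makes both P and −P true, so the claim "every assignment satisfying the premisses satisfies Q" holds vacuously. A small Python illustration of this, with names of my own choosing:

```python
from itertools import product

rows = list(product([False, True], repeat=2))  # assignments to P, Q

# Rows in which both premisses P and -P are true:
bad_rows = [(p, q) for p, q in rows if p and (not p)]
print(bad_rows)   # []: no such row exists

# Hence "P, -P |- Q" is vacuously valid: the 'all' below has nothing to check.
exf_ok = all(q for p, q in rows if p and (not p))
print(exf_ok)     # True
```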

R → S.3 &I 2.2 1. Ex Falso will seem less odd when we come to truth table semantics in Chapter 5. Q → R 1 2 3 1.7. 13 P → Q 1 2 1.4 7 2. Note how the assumption on line 2 is discharged when we arrive at the conclusion.4 1. The next derivation is a good illustration of the use of vE: 15 P → Q.2 MP 2.2 1 (1) (2) (3) (4) (5) P → (P & Q) P →Q P Q P &Q P → (P & Q) Absorption (Abs) A A 1.1 RAA 33 Note that line 1 is only “derived” from line 2 in the sense that it can be asserted when 2 is asserted simply because it has already been asserted. 14 P → Q.3 (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) P →Q R→S P ∨R P Q Q∨S R S Q∨S Q∨S Q∨S Constructive Dilemma (CD) A A A A (vE) 1.7 MP 8 vI 3.2.4 MP 3.9 vE .2 (1) (2) (3) (4) (5) (6) P →R Hypothetical Syllogism (HS) A A A (CP) 1. This proof is a typical usage of CP.2.4 CP Example in Words: Given that Bob works only if Alice stays home.3 1.4 MP 5 vI A (vE) 2.3 1.4. P ∨ R 1 2 3 4 1.4. then if Bob works then Bob works and Alice stays home. But that is enough to make RAA work.3 MP 2. and if Alice stays home only if she is oﬀ shift.7 1.6. then Bob works only if Alice is oﬀ shift.7 2.5 CP P →Q Q→R P Q R P →R Example in Words: If Bob works only if Alice stays home. SOME USEFUL SEQUENTS 1 2 1 (1) (2) (3) P & −P −Q Q A A (RAA) 2.3.

Destructive Dilemma (DD) 3. For now. and if Tad works only if Cheryl stays home.8 &I 7. there is a theorem of the form P ↔ Q. or vice versa.34 CHAPTER 3.3 (5) (−P ∨ −Q) & − (−P ∨ −Q) 2 (6) P 7 (7) −Q 7 (8) −P ∨ −Q 2. −Q ∨ −S −P ∨ −R The proof is left as a B exercise. (3. The following variant on Constructive Dilemma is easily established: 16 P → Q. In Chapter 5 we will see that expressions are provably equivalent iﬀ they have the same truth tables.6). INTRODUCTION TO NATURAL DEDUCTION Example in Words: If Bob works only if Alice stays home. and this rule. We will call this metarule Equivalence Introduction. In syntactic terms (that is.4. we merely note the fact that because of Equivalence Introduction these interderivabilities are very powerful tools of deduction. De Morgan’s Laws The following rules. called de Morgan’s Laws.2 Interderivability Results In this section we state and prove a number of interderivability results of the general form P Q. in terms of the use that this fact can be put in derivations).4 &I 3. are very important and widely used in logic: 17 −(P & Q) −P ∨ −Q de Morgan’s Laws (deM) (a) −(P & Q) −P ∨ −Q 1 (1) −(P & Q) 2 (2) −(−P ∨ −Q) 3 (3) −P 3 (4) −P ∨ −Q 2. (3.6) It can be shown that for every interderivability of the form (3.7) We shall say that any wﬀs P and Q for which P ↔ Q holds are provably equivalent. and if either Bob or Tad works. R → S. then either Alice or Cheryl stays home.7 (9) (−P ∨ −Q) & − (−P ∨ −Q) 2 (10) Q 2 (11) P & Q A A (RAA) A (RAA) 3 vI 2. will be discussed in greater detail in the next chapter.9 RAA 6. along with some other introduction rules.10 &I .5 RAA A (RAA) 7 vI 2. this implies that any occurrence of a formula of the form P can be substituted for by a formula of the form Q wherever they occur. We will use Equivalence Introduction in some of the proofs further on in this chapter.

2 1 (12) (13) (P & Q) & − (P & Q) −P ∨ −Q 1.9 &I A A (RAA) 1 &E 2.6 (8) (P ∨ Q) & − (P ∨ Q) 1 (9) −Q 1 (10) −P & − Q (b) −P & − Q −(P ∨ Q) 1 (1) −P & − Q 2 (2) P ∨Q 1 (3) −P 1.2 (4) (P ∨ Q) & − (P ∨ Q) 1 (5) −P 6 (6) Q 6 (7) P ∨Q 1.11 &I 2.10 RAA 1.7.3. It shows that RAAs can be nested.2 (4) Q 1 (5) −Q 1.4. in eﬀect.12 RAA 35 This is an adaptation of the ingenious proof given by Lemmon for his Sequent 36(a).3 &I 2.6 RAA This can.6. (b) −P ∨ −Q −(P & Q) 1 (1) −P ∨ −Q 2 (2) −P 3 (3) P &Q 3 (4) P 2. ﬁlling in the proof of DS between lines (3) and (4). of course.11 vE de Morgan (deM) (a) −(P ∨ Q) −P & − Q 1 (1) −(P ∨ Q) 2 (2) P 2 (3) P ∨Q 1.3 (5) −P & P 2 (6) −(P & Q) 7 (7) −Q 8 (8) P &Q 8 (9) Q 7.7 &I 6.9 &I 8.4 RAA A (RAA) 6 vI 1. be proven without the use of the derived rule DS by simply.2.5 RAA A (vE) A (RAA) 8 &E 7.4 &I 3.3 DS 1 &E 4. SOME USEFUL SEQUENTS 1.8 RAA 5.5 &I 2.8 (10) −Q & Q 7 (11) −(P & Q) 1 (12) −(P & Q) 18 −(P ∨ Q) −P & − Q A A (vE) A (RAA) 3 &E 2. .2 (6) Q& − Q 1 (7) −(P ∨ Q) A A (RAA) 2 vI 1.
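Since provably equivalent wffs have the same truth tables (as Chapter 5 will show), the two de Morgan interderivabilities just derived can be spot-checked by comparing truth tables directly. A quick illustrative check in Python:

```python
from itertools import product

# Both de Morgan laws, checked on all four assignments to P, Q.
deM1 = all((not (p and q)) == ((not p) or (not q))
           for p, q in product([False, True], repeat=2))
deM2 = all((not (p or q)) == ((not p) and (not q))
           for p, q in product([False, True], repeat=2))
print(deM1, deM2)  # True True
```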

19 P →Q (a) P → Q 1 2 2 2 2 1. 20 P →Q −(P & − Q) Implication (Imp) The proofs of this version of Imp use Sequent 18.2 1 −P ∨ Q Implication (Imp) −P ∨ Q (1) P →Q (2) −(−P ∨ Q) (3) −−P & −Q (4) P & −Q (5) P (6) Q (7) −Q (8) Q& − Q (9) −P ∨ Q A A (RAA) 2 deM 3 DN 4 &E 1. INTRODUCTION TO NATURAL DEDUCTION The de Morgan Laws amount to rules for the distribution of the negation sign.5 MP 4 &E 6. Example: Here is an example of a valid use of Imp and deM. and when we distribute negation over disjunctions they become conjuncts of the negated terms. and they are left as a B exercise. It is justiﬁable by the metarule of Equivalence Introduction. Implication Rules The following two rules about the implication relation → are frequently used. They can both be cited the same way. Note the RAA subproof nested inside the CP subproof. although you do not have to cite that explicitly.36 CHAPTER 3.8 RAA The converse is left as a B exercise. .3 CP The converse is left as a B exercise.2 MT 2. Transposition is also very useful: 21 P →Q P →Q 1 2 1.7 & I 2. 1 1 1 (1) (2) (3) (P → Q) & (R → S) (−P ∨ Q) & (R → S) −(P & − Q) & (R → S) A 1 Imp 2 deM This example illustrates the fact that an interderivability rule such as deM allows us to substitute an interderivable formula for a subformula in a larger formula. they show that when we distribute negation over conjunctions they turn into disjuncts of the negated terms.2 2 1.2 1 −Q → −P Transposition (Trans) −Q → −P (1) P →Q (2) −Q (3) −P (4) −Q → −P A A (CP) 1.
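The two Implication rules and Transposition are likewise truth-table equivalences, and can be confirmed mechanically. The sketch below is an illustration only; it treats → as the material conditional, exactly as the rules Imp and Trans presuppose:

```python
from itertools import product

implies = lambda a, b: (not a) or b
rows = list(product([False, True], repeat=2))

imp1  = all(implies(p, q) == ((not p) or q)        for p, q in rows)  # Imp
imp2  = all(implies(p, q) == (not (p and not q))   for p, q in rows)  # Imp
trans = all(implies(p, q) == implies(not q, not p) for p, q in rows)  # Trans
print(imp1, imp2, trans)  # True True True
```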

3. 23 P ∨ P P ∨P 1 2 3 1 P (1) (2) (3) (4) P ∨P P P P A A (vE) A (vE) 1.2.3.2. . which allow us to “and” and “or” two propositions together in any order.4. Their proofs illustrate that something which is “obvious” still may require a little work in order to be correctly demonstrated from the Basic Rules.5 vE Associative Laws 26 P & (Q & R) (P & Q) & R Associativity (Assoc) This proof is left as a B exercise.3.3 &I 25 Commutativity (Comm) P P Q Q Q Q ∨Q ∨P ∨P ∨P A A (vE) 2 vI A (vE) 4 vI 1.3 vE P Idempotency (Idem) This is the shortest possible application of vE.2. Commutative Laws The commutative laws are really just a restatement of the Basic Rules of &I and vI.4. The converse is left as an A exercise. SOME USEFUL SEQUENTS Idempotency 22 P & P P Idempotency (Idem) 37 These easy proofs are left as A exercises. 24 P & Q P &Q 1 1 1 1 P ∨Q P ∨Q 1 2 2 4 4 1 Q&P Q&P (1) (2) (3) (4) Q∨P Q∨P (1) (2) (3) (4) (5) (6) Commutativity (Comm) P &Q P Q Q&P A 1 &E 1 &E 2.

INTRODUCTION TO NATURAL DEDUCTION P ∨ (Q ∨ R) (P ∨ Q) ∨ R Associativity (Assoc) This proof is a little tricky. The proofs of the Rules of Dominance are simple but quite instructive. As noted previously. Rules of Dominance The Rules of Dominance may seem a little strange. These rules simplify the derivation of Equiv (below). and leave the rest as exercises.2 1 P Dominance (Dom) ∨ −Q) (1) P (2) −(P & (Q ∨ −Q)) (3) −P ∨ −(Q ∨ −Q) (4) −P ∨ (−Q & − −Q) (5) P → (−Q & − −Q) (6) (−Q & − −Q) (7) P & (Q ∨ −Q) A A (RAA) 2 deM 3 deM 4 Imp 1. which is beyond (though not far beyond) the scope of this course. but the proof of this fact requires mathematical induction.2. Associativity can be extended to conjunctions and disjunctions of arbitrary numbers of terms. since it requires nested vEs.5 MP 2. but they have a very natural interpretation from the semantic point of view. 28 P & (Q ∨ −Q) P P & (Q 1 2 2 2 2 1. 29 30 31 P & (Q & − Q) P ∨ (Q ∨ −Q) P ∨ (Q & − Q) P ∨ (Q & − Q) 1 (1) 2 (2) 3 (3) 3 (4) 1 (5) Q& − Q Q ∨ −Q P Dominance (Dom) Dominance (Dom) Dominance (Dom) P P ∨ (Q & − Q) P Q& − Q P P A A (vE) A (vE) 3 ExF 1. It is left as a C exercise. We will review them in Chapter 5. Distributive Laws 32 P & (Q ∨ R) (a) P & (Q ∨ R) (P & Q) ∨ (P & R) (P & Q) ∨ (P & R) Distributive Laws (Dist) .38 27 CHAPTER 3.4 vE The converse is left as an A exercise.2.3.6 RAA The proof of the converse is left as an A exercise. I give some here.

5 MP 6.5 (6) Q 1.5 (7) R 1.14 RAA 39 Note the use of RAA.2 1.2 1.2 1.10 &I 1.7. SOME USEFUL SEQUENTS 1 2 2 2 2 2 2 1 1 1.8 MP −R 7.8 CP 9 Imp .13 &I (P & Q) ∨ (P & R) 2.8 MP −Q & − R 10.2 1.5 MP 4.7 &I 5.11 vE This is a straightforward application of vE.11 &I −(Q ∨ R) 12 deM −(Q ∨ R) & (Q ∨ R) 9.6. (b) (P & Q) 1 2 2 2 2 2 7 7 7 7 7 1 ∨ (P (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12) & R) P & (Q ∨ R) (P & Q) ∨ (P & R) P &Q Q P Q∨R P & (Q ∨ R) P &R P R Q∨R P & (Q ∨ R) P & (Q ∨ R) A A (vE) 2 &E 2 &E 3 vI 4.5 (8) Q&R 1 (9) −P → (Q & R) 1 (10) P ∨ (Q & R) A 1 Imp 2 &E 2 &E A (CP) 3.2 1 (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12) (13) (14) (15) P & (Q ∨ R) A −((P & Q) ∨ (P & R)) A (RAA) −(P & Q) & − (P & R) 2 deM (−P ∨ −Q) & (−P ∨ −R) 3 deM (P → −Q) & (P → −R) 4 Imp P → −Q 5 &E P → −R 5 &E P 1 &E Q∨R 1 &E −Q 6.5 &I A (vE) &E &E 9 vI 8.3.4. 33 P ∨ (Q & R) (P ∨ Q) & (P ∨ R) Distributive Laws (Dist) (P ∨ Q) & (P ∨ R) P ∨ (Q & R) 1 (1) (P ∨ Q) & (P ∨ R) 1 (2) (−P → Q) & (−P → R) 1 (3) −P → Q 1 (4) −P → R 5 (5) −P 1.2.
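The Dominance and Distributive laws have the very natural semantic readings the text alludes to: a tautologous conjunct can be dropped, a contradictory disjunct can be dropped, and ANDs and ORs distribute over one another. An illustrative brute-force confirmation:

```python
from itertools import product

rows2 = list(product([False, True], repeat=2))
rows3 = list(product([False, True], repeat=3))

# Dominance: a tautologous conjunct and a contradictory disjunct both vanish.
dom1 = all((p and (q or not q)) == p for p, q in rows2)
dom2 = all((p or (q and not q)) == p for p, q in rows2)

# Distribution of & over v, and of v over &.
dist1 = all((p and (q or r)) == ((p and q) or (p and r)) for p, q, r in rows3)
dist2 = all((p or (q and r)) == ((p or q) and (p or r)) for p, q, r in rows3)
print(dom1, dom2, dist1, dist2)  # True True True True
```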

(b) (P & Q) ∨ (−P & − Q) 1 2 2 2 2 2 2 2 2 10 10 10 10 10 10 10 (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12) (13) (14) (15) (16) P ↔Q A A (vE) 2 &E 2 &E 3 vI 4 vI 5 Imp 6 Imp 7.) Equivalence Rules There are two Equivalence rules that are widely used. since they are simple but instructive. (a) P ↔ Q 1 1 1 1 1 1 1 (P & Q) ∨ (−P & − Q) (1) (2) (3) (4) (5) (6) (7) P ↔Q (P → Q) & (Q → P ) (−P ∨ Q) & (−Q ∨ P ) (−P & (−Q ∨ P )) ∨ (Q & (−Q ∨ P )) ((−P & − Q) ∨ (−P & P )) ∨ ((Q & − Q) ∨ (Q & P )) (−P & − Q) ∨ (Q & P ) (P & Q) ∨ (−P & − Q) A 1 Def↔ 2 Imp 3 Dist (× 2) 4 Dist (× 2) 5 Dom 6 Comm (× 2) Note how Dom allows us to “cancel” out contradictory expressions such as −P & P . (For a nicely accessible introduction to this intriguing subject. There is an important branch of advanced logic called quantum logic in which the seemingly “obvious” distributive laws break down. 34 P ↔Q (P → Q) & (Q → P ) Equivalence (Def↔) This follows by Deﬁnition Introduction.40 CHAPTER 3. INTRODUCTION TO NATURAL DEDUCTION The converse is left as a B exercise.8 &I A (vE) 10 &E 10 &E 11 vI 12 vI 13 Imp 14 Imp (P & Q) ∨ (−P & − Q) (P & Q) P Q −Q ∨ P −P ∨ Q Q→P P →Q (P → Q) & (Q → P ) (−P & − Q) −P −Q −P ∨ Q −Q ∨ P P →Q Q→P . see [12]. since P ↔ Q := (P → Q) & (Q → P ) 35 P ↔Q (P & Q) ∨ (−P & − Q) Equivalence (Equiv) I give the proof of both directions here.” and vice versa. The Distributive Laws show how “ORs” can be distributed over “ANDs.
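The long syntactic proof of Equiv contrasts nicely with how quickly the same fact falls out semantically. With booleans, P ↔ Q is just equality of truth values, so the Equiv interderivability reduces to a four-row check (an illustration of my own, not a replacement for the derivation):

```python
from itertools import product

rows = list(product([False, True], repeat=2))

# Equiv:  P <-> Q  -||-  (P & Q) v (-P & -Q)
# For booleans, P <-> Q is simply ==.
equiv_ok = all((p == q) == ((p and q) or ((not p) and (not q))) for p, q in rows)
print(equiv_ok)  # True
```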

The derivation of this sequent by using only the Basic Rules would run to many more than ﬁve lines.) 38 P → (Q & R) (P → Q) & (P → R) Distributive Law for → (Dist→) The proofs are left as B exercises. in eﬀect.9. SOME USEFUL SEQUENTS 10 1 1 (17) (18) (19) (P → Q) & (Q → P ) (P → Q) & (Q → P ) P ↔Q 15. 39 P → (Q ∨ R) (P → Q) ∨ (P → R) Distributive Law for → (Dist→) P → (Q ∨ R) (P → Q) ∨ (P → R) 1 (1) P → (Q ∨ R) 1 (2) −P ∨ (Q ∨ R) 1 (3) (−P ∨ Q) ∨ R 1 (4) ((−P ∨ Q) ∨ R) ∨ −P 1 (5) (−P ∨ Q) ∨ (R ∨ −P ) 1 (6) (−P ∨ Q) ∨ (−P ∨ R) 1 (7) (P → Q) ∨ (P → R) A 1 Imp 2 Assoc 3 vI 4 Assoc 5 Comm 6 Imp . given the rules we have now developed.17 vE 18 Def↔ 41 Negated Arrows The following are easy.4.3. (Thanks to Mark Smith for drawing this to my attention. 37 −(P ↔ Q) −(P ↔ Q) 1 1 1 1 1 (P & − Q) ∨ (−P & Q) Negation of Equivalence (Neg↔) (P & − Q) ∨ (−P & Q) (1) −(P ↔ Q) (2) −((P → Q) & (Q → P )) (3) −(P → Q) ∨ −(Q → P ) (4) (P & − Q) ∨ (Q & − P ) (5) (P & − Q) ∨ (−P & Q) A Def↔ 2 deM 3 Neg→ 4 Comm Again. we obtain the converse by reversing the steps.2. like distributive laws for implication. 36 −(P → Q) −(P → Q) 1 1 1 1 P & −Q P& (1) (2) (3) (4) Negation of Implication (Neg→) −Q −(P → Q) −(−P ∨ Q) −−P & −Q P & −Q A 1 Imp 2 deM 3 DN We get the converse essentially by reversing these steps.16 &I 1.10. Distributive Laws for → It is possible to prove interderivability relations that act.
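The Negated Arrow rules and both Distributive Laws for → can be given the same truth-table confirmation as the earlier interderivabilities. A hedged, illustrative Python check:

```python
from itertools import product

implies = lambda a, b: (not a) or b
rows2 = list(product([False, True], repeat=2))
rows3 = list(product([False, True], repeat=3))

neg_arrow = all((not implies(p, q)) == (p and not q) for p, q in rows2)       # Neg->
neg_iff   = all((not (p == q)) == ((p and not q) or ((not p) and q))
                for p, q in rows2)                                            # Neg<->
dist_and  = all(implies(p, q and r) == (implies(p, q) and implies(p, r))
                for p, q, r in rows3)                                         # Dist->
dist_or   = all(implies(p, q or r) == (implies(p, q) or implies(p, r))
                for p, q, r in rows3)                                         # Dist->
print(neg_arrow, neg_iff, dist_and, dist_or)  # True True True True
```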

Here is an example to make this clear: 41 P ∨ Q.3 Form of Justiﬁcation When Using Derived Rules The general structure of a justiﬁcation notation in a proof line is that we state the rule we used to get the line. but they are the only rules for which the usual pattern does not hold. except that you need to mention the line numbers of all of the propositions that were used as inputs to the use of the rule. −P ∨ R. RAA. 3. One of the most important skills in logic is the ability to recognize . I present for convenience a concise summary of the Basic and Derived rules and sequents we have covered in this chapter. CP. in that we only have to give the beginning and end lines of the subproofs.3 (5) 1. Finally. or CD. 3.5 CP 2.2 (6) 1 (7) P → (Q → R) → (Q → R) (P & Q) → R P Q P &Q R Q→R P → (Q → R) Exportation (Exp) A A (CP) A (CP) 2.5 CD 6 Idem Since CD requires three premisses.6 CP This proof of Exp demonstrates nested CPs.4. If you are using a Derived Rule such as MT.2. and give the numbers of the lines we used.3 (1) (2) (3) (4) (5) (6) (7) P ∨Q −P ∨ R −Q ∨ R P →R Q→R R∨R R R A A A 2 Imp 3 Imp 1. These elementary sequents can occur as fragments of longer derivations. three lines must be cited in the justiﬁcation. here is a rule that looks almost like a distributive law for →. 40 (P & Q) → R (P & Q) → R P 1 (1) 2 (2) 3 (3) 2.) The converse is left as a B exercise.3 (4) 1. −Q ∨ R 1 2 3 2 3 1. For instance.5 Summary of Basic Sequents and Rules In this section.4.3 1. INTRODUCTION TO NATURAL DEDUCTION (In the next chapter we will show that it is possible to combine steps in proofs like this. the same basic pattern holds. a line that uses a rule that uses three lines will need to have those three lines mentioned in its justiﬁcation.42 CHAPTER 3.2. The converse is left as a B exercise.2. HS. and it is very important that you learn to recognize them when they appear. but isn’t quite. and vE are slightly diﬀerent.3 &I 1.4 MP 3.
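Both results above, Exportation and the three-premiss Convergence example, survive the same semantic test as the earlier sequents. An illustrative check (names mine):

```python
from itertools import product

implies = lambda a, b: (not a) or b
rows3 = list(product([False, True], repeat=3))

# Exportation:  (P & Q) -> R  -||-  P -> (Q -> R)
exp_ok = all(implies(p and q, r) == implies(p, implies(q, r)) for p, q, r in rows3)

# Sequent 41:  P v Q, -P v R, -Q v R  |-  R  (no countermodel)
conv41 = all(r for p, q, r in rows3
             if (p or q) and ((not p) or r) and ((not q) or r))
print(exp_ok, conv41)  # True True
```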

Basic Metarules Assumption (A): Any proposition whatever may be assumed.5. Reductio ad Absurdum (RAA): From P R & − R we can conclude −P . Deﬁnition Introduction (DefIntro): If a set of symbols is deﬁned as a certain truth-functional combination of propositional variables. Basic Sequents Modus Ponens (MP): P. to spot that a very complicated-looking argument is really just a case of a simple form such as MP. 3. etc. ∨ -Elimination (vE): If P R and Q R are provable separately. meaning that they are statements about what sort of inferences we can make. Don’t forget that in the following statement of the rules.. Other Basic Rules can be expressed as sequents. P → Q Q &-Introduction (&I): P. then we can conclude P ∨ Q Conditional Proof (CP): From P. SUMMARY OF BASIC SEQUENTS AND RULES 43 logical form. could be either atomic or molecular propositions. Q. the propositional variables P . MT. so long as we carry its dependencies with it. Reiteration (R): Any line may be repeated anywhere in a proof. . given that certain other inferences are possible.1 Basic Rules and Sequents Some of the Basic Rules are really metarules. or HS.3. Q R we can conclude P Q → R. then one may be substituted for the other wherever they occur. &-Elimination (&E): P &Q P P &Q Q v-Introduction (vI): P P ∨Q P Q ∨ P. R.5. so long as we do not forget that it was an assumption. Q Q & P . for instance. R. Q P & Q P.

INTRODUCTION TO NATURAL DEDUCTION Double Negation (DN): −−P P Note that DN is the only Basic Rule that is an interderivability result. R → S. P → R.5. Q → R P → R Constructive Dilemma (CD): P → Q. 3. One-Way Sequents Modus Tollens (MT): P → Q. Q → R Absorption (Abs): P → Q P → (P & Q) Interderivability Sequents de Morgan’s Laws (deM): −(P & Q) −P ∨ −Q −(P ∨ Q) −P & − Q. The rules in this list are simply those that are used often enough to be given names. R → S. −P Q Hypothetical Syllogism (HS): P → Q. −Q ∨ −S Convergence (Conv): P ∨ Q. There is a denumerable inﬁnity of propositional sequents that can be derived from the Basic Rules. −Q P Denial (Den): −P −(P & Q) Ex Falso Quodlibet (ExF): P.44 CHAPTER 3. Implication Rules (Imp): P →Q −P ∨ Q P →Q −(P & − Q) Transposition (Trans): P →Q −Q → −P R −P ∨ −R . −Q −P Disjunctive Syllogism (DS): P ∨ Q. P ∨ R Q ∨ S Destructive Dilemma (DD): P → Q.2 Summary of Derived Sequents I summarize here the sequents stated and (mostly) derived earlier in this chapter.

6. Equivalence (Equiv): P ↔Q (P & Q) ∨ (−P & − Q). We fallible humans commit certain characteristic errors so often and so spontaneously that it has been found useful to give them names. They are common precisely because. Negation of Implication (Neg→): −(P → Q) P & −Q Negation of Equivalence (Neg↔): −(P ↔ Q) (P & − Q) ∨ (−P & Q) Exportation (Exp): (P & Q) → R P → (Q → R) Distribution for → (Dist→): P → (Q & R) (P → Q) & (P → R) Distribution for → (Dist→): P → (Q ∨ R) (P → Q) ∨ (P → R) 3. Commutativity (Comm): P &Q Q&P P ∨Q Q∨P Associativity (Assoc): P & (Q & R) (P & Q) & R P ∨ (Q ∨ R) (P ∨ Q) ∨ R Dominance (Dom): P & (Q ∨ −Q) P & (Q & − Q) P ∨ (Q ∨ −Q) P ∨ (Q & − Q) P Q& − Q Q ∨ −Q P 45 Distributivity (Dist): P & (Q ∨ R) (P & Q) ∨ (P & R) P ∨ (Q & R) (P ∨ Q) & (P ∨ R) Deﬁnition of Equivalence (Def↔): P ↔Q (P → Q) & (Q → P ). FALLACY ALERT! Idempotency (Idem): P &P P P ∨P P.6 Fallacy Alert! A fallacy.3. is simply some sort of error in reasoning. like optical illusions. loosely speaking. they are almost like optical illusions of the mind. Many of these fallacies belong to informal logic or inductive logic. and they are . they look believable if they are not examined closely.

(3.8) To see that this is invalid. This error is seductive. that doesn’t mean that the sequent is valid! It just means that you can’t think of an example to show that it is not. 8] for good introductions to critical thinking and fallacy theory. We are having a logic class.46 CHAPTER 3. the conclusion could still be false. 3. except that it is more like a misuse of MT: P → Q.6. INTRODUCTION TO NATURAL DEDUCTION discussed in books on rhetoric and critical thinking. the sequent must be invalid. of course.3 Denying the Antecedent This is an error which is very similar to Aﬃrming the Consequent. 16. since it looks so much like the valid rule MP. simply because logic might be held on other days of the week besides Tuesday. 3. −P −Q. and I will mention a few of them here.6.6. consider the following example: If today is Tuesday. If you cannot ﬁnd an example that invalidates a sequent. Q P (????) (3. today must be Tuesday. (False!) The rule of vE does not work like &E! This example also illustrates the fact that we can check the validity of a sequent by trying to ﬁnd speciﬁc examples of it in which obviously true premisses lead to obviously false conclusions.2 Aﬃrming the Consequent Another very common fallacy is Aﬃrming the Consequent: P → Q. You can convince yourself that this pattern of reasoning is invalid by considering the following example: The number 3 is either even or odd. (See [6.9) A simple example demonstrates invalidity: .) There are a few fallacies of deductive reasoning that occur rather often as well. In Chapter 5 we will learn systematic techniques for diagnosing fallacious deductive reasoning. In Chapter 5 we learn general and reliable methods for checking the validity of propositional logic sequents and arguments. we have a logic class. Even if the premisses are correct. (This is true.) Therefore the number three is even. Therefore. 3. 
however.1 A Common Misuse of vE It is very common for students to attempt the following move: 1 1 (1) (2) P ∨Q P A 1 vE (???) This is a glaring error. If you can do this.

Therefore we don’t have logic. or “begging the question.) It would also be possible to use Implication in the derivation of the de Morgan Laws — but you must not do that if you used de Morgan to get Implication! That would be a circular procedure. Again.3. we have a logic class. (See Sequent 35 in Lemmon. the problem is just that we might have logic on other days of the week. But you should not forget that these were proven in a certain order. and then used in the proof of the Implication rules.6. FALLACY ALERT! If today is Tuesday. It is possible to prove Implication using only Lemmon’s Basic Rules as well.6. For instance.4 Avoid Circularity! From this point onward the student can take any of the derived sequents above and use them in the proof of further sequents. It would also be quite legitimate to prove Implication ﬁrst from the Basic Rules and then use it in the derivation of de Morgan. in order to avoid circularity.” as it is sometimes called in informal logic. (You might want to try this as an exercise. 47 3. A circular proof is no proof at all. de Morgan’s Laws are proven ﬁrst here.) But once you have chosen an order of proof you should stick to it. since it merely amounts (though perhaps in a roundabout way) to a restatement of its assumptions. . Today is not Tuesday.
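The chapter's advice for exposing these fallacies, finding an instance with true premisses and a false conclusion, can itself be mechanized: a countermodel search over the truth table. This illustrative sketch exhibits the single assignment that refutes both Affirming the Consequent and Denying the Antecedent:

```python
from itertools import product

implies = lambda a, b: (not a) or b
rows = list(product([False, True], repeat=2))

# Affirming the Consequent:  P -> Q, Q  does NOT entail  P
ac = [(p, q) for p, q in rows if implies(p, q) and q and not p]

# Denying the Antecedent:  P -> Q, -P  does NOT entail  -Q
da = [(p, q) for p, q in rows if implies(p, q) and (not p) and q]

print(ac, da)  # both: [(False, True)], i.e. P false, Q true refutes each
```

In words: "today is not Tuesday, yet we are having logic class" is exactly the assignment P = False, Q = True.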

P (o) X → (P ↔ Q). (P & − Q) ∨ (−P & Q) B 1. Find natural deduction proofs for the following named sequents: (a) P → Q. INTRODUCTION TO NATURAL DEDUCTION 3. −Q ∨ −S (b) −P ∨ Q (c) P → Q P →Q −(P & − Q) −P ∨ −R (DD) (Imp) (Hint: use DS) (Imp) . P Q→P (c) P → (Q ∨ R). R → S. Your proof should be only three lines long. M (m) P → (Q ∨ Q) (n) P & (Q → R) −P (g) P → (Q → R). Find natural deduction proofs for the following named sequents: (a) P & P (b) P (c) P (e) P P (Idem) (Idem) (Idem) P (Dom) (Dom) (Dom) (Dom) (Dom) P &P P ∨P (d) P & (Q ∨ −Q) (f) Q & − Q (h) Q ∨ −Q P ∨ (Q & − Q) P & (Q & − Q) Q& − Q P ∨ (Q ∨ −Q) (g) P & (Q & − Q) 2. −(Q → R) (f) (M ∨ N ) → Q (h) −(P & − Q) (j) (X ∨ Y ) → Q (k) −M ∨ N.7 A Exercises on Chapter 3 1. Q P R P →T (M ∨ N ) → (Q & (M ∨ N )) −P ∨ Q −P & (R → S) (−X & − Y ) ∨ Q N (X → Y ) −(P & − Q) (P & − Q) ∨ (P & R) −X (e) (Q → R). −(Q ∨ R) (d) P ∨ −Q.48 CHAPTER 3. (Q → R) → T (i) −(P ∨ −(R → S)) (l) −(X → Y ) → −P. Find natural deduction proofs for the following miscellaneous sequents: (a) P. Q P →Q (b) Prove the following without the use of R.

(d) −(P & −Q) ⊢ P → Q (Imp)
(e) −Q → −P ⊢ P → Q (Trans)
(f) P & (Q & R) ⊢ (P & Q) & R (Assoc)
(g) P ∨ (Q & R) ⊢ (P ∨ Q) & (P ∨ R) (Dist)
(h) (P ∨ Q) & (P ∨ R) ⊢ P ∨ (Q & R) (Dist)
    This sequent is proven in the text using CP. Try it using RAA.
(i) P → (Q → R) ⊢ (P & Q) → R (Exp)
(j) P → (Q & R) ⊢ (P → Q) & (P → R) (Dist→)
(k) (P → Q) & (P → R) ⊢ P → (Q & R) (Dist→)
(l) P ∨ (Q ∨ −Q) ⊢ Q ∨ −Q (Dom)
(m) −(P & −Q) ⊢ P → Q (Neg→)
(n) (P → Q) ∨ (P → R) ⊢ P → (Q ∨ R) (Dist→)

2. The following questions require you to find natural deduction proofs for a variety of sequents:

(a) Q, Q → S, −(Q & −S) ⊢ S
(b) P ↔ Q, Q ↔ R ⊢ P ↔ R
(c) P → R, −(Q ∨ R) ⊢ −P
(d) −P ∨ R, P ∨ Q, Q → R ⊢ R
(e) R → T, −(S ∨ T ) ⊢ −R
(f) P → Q, −Q ∨ −R, R ⊢ −P
(g) P ↔ S, R ↔ P, −S ∨ −R ⊢ −P
(h) P → (Q ∨ R), Q → S, R → S ⊢ P → S
(i) P → (Q ∨ R), Q → R, −R ⊢ −P
(j) P → R, R → S, −(S ∨ T ) ⊢ −P

3. MT is a basic pattern of inference which is very important in scientific reasoning. Suppose we have a new scientific theory, and suppose this entire theory is expressed as a single proposition T. Suppose also that the theory deductively entails a certain prediction P. Finally, suppose that (to our great disappointment) experimentation shows that P is false. What does MT imply about T? (Philosophers of science refer to this process as falsification.) Does this mean that we have to junk all of T and start all over again? If not, why not?

C

1. Find a natural deduction proof for the following sequent:

(a) P ∨ (Q ∨ R) ⊢ (P ∨ Q) ∨ R

2. Suppose we had a system of Natural Deduction that was exactly the same as the one we use in this book, except that, in the place of Modus Ponens (MP), it used Disjunctive Syllogism (DS) (P ∨ Q, −Q ⊢ P ) as a Basic Deductive Rule. (If we don’t have MP as a Basic Rule, we have to treat → as a defined symbol, since in our usual system → is defined implicitly by the way it behaves in MP.)

(a) Show that in this new system we can derive MP using only DS and other Basic Rules, so long as we define P → Q as −P ∨ Q. (See, e.g., Copi and Cohen [6].)

(b) Now suppose, instead, that we define P → Q as −(P & −Q), as is done in numerous logic texts. (This definition captures the intuition that to say P implies Q is to say that we never simultaneously have P and −Q.) Show that in this system we can, as well, derive MP using Basic Rules. Hint: try RAA.

Chapter 4

Techniques and Applications of Propositional Natural Deduction

In this chapter we introduce several techniques that greatly simplify propositional natural deductions, and indicate some of its possible applications.

4.1 Recognition of Logical Form

One of the most important skills to develop in symbolic logic is the ability to recognize logical form. Suppose, for instance, you were asked to deduce a conclusion from the following seemingly-complicated premisses: ((R ↔ S) & (S ↔ T )) ∨ (R → −S) and R & S. Formally, we could write this out as follows:

1      (1) ((R ↔ S) & (S ↔ T )) ∨ (R → −S)   A
2      (2) R & S                             A
2      (3) −(R → −S)                         2 Neg→
1,2    (4) (R ↔ S) & (S ↔ T )                1,3 DS

A little thought will convince you that this is merely an instance of DS, so that an immediate conclusion is (R ↔ S) & (S ↔ T ). Something that threatened to be forbiddingly complicated in fact turns out to be very simple. Try to “chunk” complicated formulas together in your mind and see if they start to look like instances of simpler rules before you start blindly calculating. Don’t forget that in this chapter the propositional variables P , Q, R, etc., can be either atomic or molecular unless stated otherwise.

4.1.1 A Exercises

1. Identify the simple rule used in each of the following deductions:

(a) (X ∨ Y ) → (Y ∨ Z), X ∨ Y ⊢ Y ∨ Z
(b) −(((X ∨ Y ) → Q) → (P & Q)) ⊢ ((X ∨ Y ) → Q) & −(P & Q)
(c) P ⊢ ((M ↔ N ) ∨ (M ↔ (R & S))) ∨ P
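For students who come to logic from computer science, the validity of the chunked inference above can also be confirmed by brute force over all truth-value assignments. The following is an illustrative sketch only (Python is my own choice here; the text itself introduces truth tables in the next chapter):

```python
from itertools import product

# Semantic check of the "chunked" DS inference:
# ((R ↔ S) & (S ↔ T)) ∨ (R → −S),  R & S  ⊢  (R ↔ S) & (S ↔ T)
# In every valuation where both premisses are true, the conclusion is true.
def check():
    for r, s, t in product([True, False], repeat=3):
        p1 = ((r == s) and (s == t)) or ((not r) or (not s))  # R → −S is −R ∨ −S
        p2 = r and s
        if p1 and p2:
            assert (r == s) and (s == t)
    return True

print(check())  # True
```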

(d) P → (Q & X), (Q & X) → T ⊢ P → T
(e) ((P → Q) → R) → (P ↔ (Q & R)) ⊢ −((P → Q) → R) ∨ (P ↔ (Q & R))

4.2 Combining Steps

As you become more experienced in doing proofs you will naturally want to combine steps when they seem to be obvious. For instance, you might do a fragment of a larger proof like this:

1,2    (16) −(−P ∨ Q)      (from previous lines)
1,2    (17) P & −Q         16 deM, DN

This is perfectly okay, so long as you don’t get ahead of yourself and make a mistake. Just as you can use more than one rule to get a given line, you can use the same rule more than once. For instance, you could use Imp twice on the same line as follows:

1,2    (1) P → (P → Q)      (from previous lines)
1,2    (2) −P ∨ (−P ∨ Q)    1 Imp (×2)

If you have the slightest doubt about the steps you are taking, it is better painstakingly to write them all out one by one and thereby reduce the chance that you will make a mistake. Remember, too, that on tests and assignments the onus is on you to convince your instructor that you understand how the proof works!

4.2.1 A Exercises

1. State the justifications for each of the following steps:

(a) (n)   P → (Q → P )
    (n+1) −P ∨ (−Q ∨ P )         ?
(b) (n)   P → (Q & R)
    (n+1) (−P ∨ Q) & (−P ∨ R)    ?
(c) (n)   −P → (Q & R)
    (n+1) P ∨ (Q & R)            ?
(d) (n)   (P → Q) & (P → R)
    (n+1) −P ∨ (Q & R)           ?
(e) (n)   −M → N
    (n+1) −N
    (n+2) M                      ?
(f) (n)   (P ∨ −P ) ∨ (Q ∨ −Q)   TI
    (n+1) (−P ∨ Q) ∨ (−Q ∨ P )   ?

4.3 Theorems

In the last chapter we introduced the concept of the theorem, meaning a wff that can be proven with no assumptions left over. In the next chapter we will see that the theorems of propositional logic are just the same as its tautologies, which are the formulas that are T in all lines of their truth tables. If P is a theorem, we assert ⊢ P (4.1) (read as “Provable P ” or “P is a theorem”), which means that P can stand on its own, without the support of any premisses. For now, I will give a few simple examples of theorems, to demonstrate typical techniques that can be used to prove them.

If we want to derive a result that depends on no premisses, we have to have a way of discharging assumptions, and that means that we are going to use either CP or RAA. (The rule vE also discharges assumptions, but only in a sub-proof.)

42  ⊢ P → P

Here are two proofs of this elementary theorem. The first uses CP and R:

1    (1) P          A (CP)
1    (2) P          1 R
     (3) P → P      1,2 CP

We can also do this without using R:

1    (1) P          A
     (2) P → P      1,1 CP

This is no doubt the shortest possible application of CP, with nothing in between!

Here is an important theorem that we can prove using RAA:

43  ⊢ P ∨ −P    Law of Excluded Middle (ExM)

1    (1) −(P ∨ −P )     A (RAA)
1    (2) −P & − −P      1 deM
     (3) P ∨ −P         1,2 RAA

This illustrates the power of RAA: if you can’t think of any other way to get a result, assume its negation and try to derive a contradiction. Here’s another way to get ExM:

1    (1) P          A (CP)
     (2) P → P      1,1 CP
     (3) P ∨ −P     2 Imp, Comm

This theorem is called the Law of Excluded Middle because it says that every proposition is either true or false. This is sometimes expressed in Latin as tertium non datur — “a third is not given”.

Here is another example: Show that any given proposition P is implied by any other proposition Q:

44  ⊢ P → (Q → P )

I am going to give five ways of proving this theorem, in order to demonstrate the variety of proof strategies that can be used for theorems.

1    (1) P              A (CP)
2    (2) Q              A (CP)
1    (3) P              1 R
1    (4) Q → P          2,3 CP
     (5) P → (Q → P )   1,4 CP

This is a straightforward use of CP. The use of R in line 3 is redundant; see if you can do it without using R.

1    (1) P              A (CP)
1    (2) −Q ∨ P         1 vI
1    (3) Q → P          2 Imp
     (4) P → (Q → P )   1,3 CP

The above proof reminds us of how we can use vI to bring new propositional letters into a proof.

1    (1) −(Q → P )          A (CP)
1    (2) Q & −P             1 Neg→
1    (3) −P                 2 &E
     (4) −(Q → P ) → −P     1,3 CP
     (5) P → (Q → P )       4 Trans

The above proof illustrates the idea that if we want to prove a conditional, we might try to prove the negation of its consequent first and then use Transposition to get the desired conditional.

1    (1) −(P → (Q → P ))    A (RAA)
1    (2) P & −(Q → P )      1 Neg→
1    (3) P                  2 &E
1    (4) −(Q → P )          2 &E
1    (5) Q & −P             4 Neg→
1    (6) −P                 5 &E
1    (7) P & −P             3,6 &I
     (8) P → (Q → P )       1,7 RAA

The above proof is a bit longer than the others, but it illustrates the fact that if all else fails we can use RAA; that is, assume the negation of what you want to prove, and then derive a contradiction by brute force.

     (1) P ∨ −P             ExM
     (2) (P ∨ −P ) ∨ −Q     1 vI
     (3) −P ∨ (P ∨ −Q)      2 Comm, Assoc
     (4) P → (P ∨ −Q)       3 Imp
     (5) P → (Q → P )       4 Comm, Imp

In line 1 of the above proof we introduce a theorem, the Law of Excluded Middle. Theorem Introduction is discussed further below; the basic idea is that a theorem can be stated at any point in a proof without justification. We then expand line 1 using vI, rearrange the results, and apply Imp to get what we wanted. This is the deepest proof, since it builds directly from ExM.

4.3.1 The “Laws” of Thought

There is no limit to the number of propositional theorems that could be proved. However, three simple theorems are important enough that they have come to be known as the “laws of thought”:

The Law of Excluded Middle (ExM): ⊢ (P ∨ −P )
The Law of Identity (Id): ⊢ (P → P )
The Law of Non-Contradiction (NC): ⊢ −(P & − P )

We have seen the proofs of the first two; the proof of NC is left as an A exercise. In fact, it would be possible to derive all theorems, and thereby all of deductive logic, directly from ExM if we wanted to.

4.4 Substitution

Under certain circumstances, it is valid to substitute one wff for the whole or part of another wff. There are two kinds of substitution, uniform and nonuniform.

4.4.1 Uniform Substitution

We will say that a wff P is substituted uniformly for a wff Q if every instance of Q in a larger formula is replaced by P .

Example: Consider the formula (P & R) → (Q & R). A uniform substitution of (M ∨ N ) for R would yield (P & (M ∨ N )) → (Q & (M ∨ N )).

4.4.2 Nonuniform Substitution

We will say that a wff P is substituted nonuniformly for a wff Q if some but not all instances of Q in a larger formula are replaced by P .

Example: Consider again (P & R) → (Q & R). A nonuniform substitution of (M ∨ N ) for R would yield either (P & (M ∨ N )) → (Q & R) or else (P & R) → (Q & (M ∨ N )).

4.5 Introduction Rules

Introduction Rules are metarules that allow us to introduce previously accepted results into a derivation in order to simplify it. We have already used all of these rules except Theorem Introduction in Chapter 3. In this section we state without formal justification (since that would be beyond the scope of this course) a number of Introduction Rules that will greatly facilitate proofs of sequents and theorems.

1. Sequent Introduction (SI): Any uniform substitution instance of a proven sequent can be introduced in a proof. We have already used SI many times in Chapter 3, when we applied sequents such as MT, HS, DS, and so forth in the proofs of further sequents. If the sequent you are using has a special name (such as MP or HS) then just cite the name in the justification column; otherwise, write SI and give a reference to where the sequent was previously proven (such as in an earlier exercise). Do not forget that the substitution has to be uniform; only uniform substitution instances of sequents can be guaranteed to be valid.

2. Theorem Introduction (TI): Any uniform substitution instance of a theorem may be written as a line anywhere in a proof. A theorem is a proposition that we have already shown can be proven with no assumptions left over. Hence, we may introduce it, or a substitution instance of it, at any point in a proof; doing so is, in effect, just shorthand for a statement of the entire proof of the theorem. No assumptions will be listed in the assumption column (because theorems do not depend on assumptions) and the notation TI should be put in the justification column (together with the name of the theorem, if it has one, in brackets). If the theorem you are using has a special name (such as ExM or Id) then just cite the name in the justification column; otherwise, write TI and give a reference to where the theorem was previously proven (such as in an earlier exercise).

3. Equivalence Introduction (EqI): Given an equivalence theorem P ↔ Q, we may substitute P for Q (or the other way around) in any occurrences of Q in a valid sequent. This kind of substitution need not be done uniformly. If the equivalence rule you are using has a special name (such as deM or Dist) then just cite the name in the justification column; otherwise, write EqI and give a reference to where the rule was previously proven (such as in an earlier exercise).

4. Definition Introduction (DefIntro): Any defined symbol can be replaced by its definition, or vice versa. So far, the only symbol we are taking as defined in terms of the basic set of symbols is ↔. But there is no reason that we could not define other symbols as combinations of basic symbols or other previously defined symbols. For instance, suppose we stipulate that P ∗ Q := (P ∨ Q) → (P & Q). Then any occurrence of P ∗ Q could be substituted for by its definition, and any wff that was interderivable with its definition could be substituted for by P ∗ Q. Any such usage can be justified by citing the line number that was operated on and then putting down either DefIntro or Def∗.
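The stipulated connective ∗ can be explored mechanically. In this sketch (Python, my own illustration; the comparison with ↔ at the end is an observation of mine, not a claim made in the text), we tabulate P ∗ Q:

```python
from itertools import product

# Truth table of the stipulated connective P ∗ Q := (P ∨ Q) → (P & Q),
# with the conditional rendered as −A ∨ B.
def star(p, q):
    return (not (p or q)) or (p and q)

for p, q in product([True, False], repeat=2):
    print(p, q, star(p, q))

# Observation: this column, T F F T, is exactly the table of P ↔ Q.
assert all(star(p, q) == (p == q) for p, q in product([True, False], repeat=2))
```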

Example of TI:

45  P → Q, −P → R ⊢ Q ∨ R

1      (1) P → Q      A
2      (2) −P → R     A
       (3) P ∨ −P     ExM
1,2    (4) Q ∨ R      1,2,3 CD

4.5.1 A Exercises

1. Prove the following theorems:

(a) ⊢ (P & −P ) → Q
(b) ⊢ P → (−P → Q)
(c) ⊢ P ∨ (P → Q)
(d) ⊢ (P → Q) ∨ (Q → P )
(e) ⊢ (P → Q) ∨ (Q → R)
(f) ⊢ (−P → P ) → P

B

1. Prove the Law of Non-Contradiction, ⊢ −(P & −P ).

C

1. Prove ExM using only Basic Rules.

4.6 The Deduction Metatheorem

The following metatheorem is one of the most important results in deductive logic.

Deduction Theorem: The sequent P1 , P2 , . . . , Pn ⊢ Q is valid iff ⊢ (P1 & P2 & . . . & Pn ) → Q.

It cannot be proven in all generality with the tools we use in this course. However, we will work out proofs for a few simple cases. The proofs of these results are very simple, but they nicely illustrate the application of SI and TI, and they should make apparent the plausibility of this centrally important theorem.

(a) Given P ⊢ R, show ⊢ P → R.

1    (1) P          A (CP)
1    (2) R          1 SI
     (3) P → R      1,2 CP

In line 2 we apply the given sequent P ⊢ R.

(b) Given ⊢ P → R, show P ⊢ R.

1    (1) P          A
     (2) P → R      TI
1    (3) R          1,2 MP

On the assumption that P → R is a theorem, we can substitute it into the proof as required. It doesn’t have a special name (like MT, HS, etc.), but we can substitute it in wherever we want. Note that we have to cite line 1, since this is the line that the introduced sequent is applied to, and we do not have to cite any previous assumptions or lines for the theorem itself, since a theorem holds without qualification.

The Deduction Theorem gives us a better understanding of a very useful result that we used in the previous chapter: P ⊣⊢ Q if and only if ⊢ P ↔ Q. Here is an informal argument to establish this result:

(a) Given P ⊣⊢ Q, show ⊢ P ↔ Q. The expression P ⊣⊢ Q simply means P ⊢ Q and Q ⊢ P . By the Deduction Theorem, this means that we have ⊢ P → Q and ⊢ Q → P . We can then prove:

     (1) P → Q                 TI
     (2) Q → P                 TI
     (3) (P → Q) & (Q → P )    1,2 &I
     (4) P ↔ Q                 3 Def↔

(b) Given ⊢ P ↔ Q, show P ⊣⊢ Q. From P ↔ Q we get P → Q and Q → P separately, and this gives us P ⊢ Q and Q ⊢ P , which is just the same as P ⊣⊢ Q.

The main use of the Deduction Theorem, for the purposes of this introductory course, is simply the fact that whenever we have an interderivability result, we have an equivalence, and such equivalences can be used freely in transforming expressions into equivalent forms which may help us get to where we want to go in a deduction.

4.6.1 B Exercises

1. Given P, Q ⊢ R, show ⊢ (P & Q) → R.
2. Given ⊢ (P & Q) → R, show P, Q ⊢ R. (These two questions in combination show the Deduction Theorem for the case of two premisses.)
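The Deduction Theorem has a semantic counterpart that can be spot-checked by machine: a sequent is valid exactly when the corresponding conditional is a tautology. A hedged sketch (Python, my own illustration, not part of the text's apparatus), checked here for the MT pattern:

```python
from itertools import product

# Semantic counterpart of the Deduction Theorem, spot-checked for MT:
# P → Q, −Q ⊢ −P is valid iff ((P → Q) & −Q) → −P is a tautology.
def impl(a, b):
    return (not a) or b

rows = list(product([True, False], repeat=2))

# The sequent is semantically valid: whenever both premisses are true,
# the conclusion is true.
sequent_valid = all(
    (not (impl(p, q) and not q)) or (not p) for p, q in rows
)

# The corresponding conditional is a tautology:
conditional_tautology = all(
    impl(impl(p, q) and not q, not p) for p, q in rows
)

print(sequent_valid, conditional_tautology)  # True True
```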

4.6.2 A Turnstile is not an Arrow!

There is a natural tendency to think of P → Q as saying just the same thing as P ⊢ Q. This is a mistake, one which can sometimes be glossed over in an elementary course such as this, but which in a course involving metatheory would be serious. P ⊢ Q and P → Q are different claims. The first says that Q can be derived from P by the rules of natural deduction; it is a claim about what can be derived. But P → Q only says that given P , Q is true. The symbol ⊢ is a metalogical symbol, while → is a logical symbol, meaning (as far as propositional logic is concerned) that it denotes a relation between truth values.

Now, the Deduction Theorem assures us that given P ⊢ Q, it is a theorem that P → Q, and the Deduction Theorem says that this is the same thing as saying that P → Q can be derived with no assumptions left over. It so happens that elementary propositional logic has the property of completeness, meaning that every logically true statement can be proven within the system. Because of completeness, we can sometimes in propositional logic gloss over the fine distinction between “true” and “provable”. But (as Gödel showed) there are logical systems that are incomplete. Hence, once we move beyond elementary propositional logic the distinctions between what can be proven from a given set of rules, and what is true, have to be kept carefully in mind.

4.7 Boolean Algebra

We can greatly simplify many deductions that involve the use of equivalence relations by means of Boolean algebra. For our purposes, we will mean by “Boolean algebra” the manipulation of propositional formulas in a manner very similar to ordinary algebraic manipulations. If all we are doing is trying to go from one expression to another which is equivalent to it, we can write our calculations in a much simpler and more direct way that does not require the full apparatus of natural deduction. This will seem very familiar to those who have some experience with the manipulation of algebraic identities in ordinary mathematics.

Here is a simple example:

−(P → Q) ↔ −(−P ∨ Q)    Imp
         ↔ −−P & −Q     deM
         ↔ P & −Q       DN

This establishes the theorem ⊢ −(P → Q) ↔ (P & −Q). As with regular natural deduction, we can combine steps:

−(P → Q) ↔ −(−P ∨ Q)    Imp
         ↔ P & −Q       deM, DN

Again, be careful that you don’t get ahead of yourself. If you aren’t quite sure of what you are doing, don’t combine steps.
We can substitute equivalent expressions for subformulas within a larger formula either uniformly or nonuniformly:

(−P ∨ Q) & (P → R) ↔ (P → Q) & (P → R)    Imp
                   ↔ P → (Q & R)          Dist→
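Each equivalence used in such chains can be verified on every line of its truth table. An illustrative check (Python; my own sketch, not part of the text's apparatus):

```python
from itertools import product

def impl(a, b):
    return (not a) or b

# −(P → Q) ↔ (P & −Q): the two sides agree on every line.
assert all(
    (not impl(p, q)) == (p and not q)
    for p, q in product([True, False], repeat=2)
)

# (−P ∨ Q) & (P → R) ↔ (P → (Q & R)): likewise.
assert all(
    ((not p or q) and impl(p, r)) == impl(p, q and r)
    for p, q, r in product([True, False], repeat=3)
)
print("both equivalences hold on every line")
```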

4.7.1 B Exercises

1. Use Boolean algebra to demonstrate the following equivalence theorems:

(a) (P ↔ Q) ↔ (−(P → −Q) ∨ −(−Q → P ))
(b) ((Q ∨ R) → P ) ↔ ((Q → P ) & (R → P ))
(c) (P → (Q ∨ R)) ↔ ((P → Q) ∨ (P → R))
(d) (P → (Q & R)) ↔ ((P → Q) & (P → R))
(e) (P ∨ (Q → R)) ↔ ((Q → P ) ∨ (−P → R))

4.8 Filling in the Gaps in an Argument

It is unavoidable that most of this text has to be devoted to explaining the basic techniques of symbolic logic, and we can’t devote as much space as we should to its possible applications. This could lead an attentive student to wonder what is the point of going through all the pain of finding a deductive proof of a conclusion that has already been worked out — especially when the semantic techniques of the next two chapters allow us to check the validity of an argument quickly and efficiently without having to construct a proof at all. The fact remains that when we use natural deduction in “real life” (and despite all appearances to the contrary, there actually are some people who attempt to reason their way through real-life problems) we do not have a conclusion worked out in advance. Rather, we are given some evidence (which we express in premisses) and we try to work out what that evidence implies. Often, for instance, a detective trying to solve a murder mystery wants to find out who dunnit; we may not know the conclusion in advance any more than we know the sum of a column of figures before we add them up. Or else we may have a fairly definite idea of the conclusion, but we may not have all of the premisses required to establish it deductively, and we might have to work backwards to fill in the gaps in the argument. In this section I will introduce a few exercises in which we have to work out natural deduction proofs in which part of the argument is missing.

Example: Suppose we know the following: either Bob or Alice paid the phone bill; if Bob paid the bill he would have the receipt; if Alice paid the bill then she would have the receipt; and Bob does not have the receipt. Who paid the phone bill?

It is pretty obvious that Alice got stuck with the bill. But let’s go through the symbolization and proof to illustrate the general method of attack. First, let’s define our symbols:

A := Alice pays the bill.
B := Bob pays the bill.
R := Bob has the receipt.
S := Alice has the receipt.

Although we do not have a conclusion worked out in advance, we do have some idea of what requirements the conclusion should satisfy. We can put down all of the premisses we have on hand, and try a few steps to see where they lead:

1        (1) B ∨ A     A
2        (2) B → R     A
3        (3) A → S     A
4        (4) −R        A
2,4      (5) −B        2,4 MT
1,2,4    (6) A         1,5 DS

We quickly come to the conclusion that it was Alice who paid the bill. We have therefore established the sequent B ∨ A, B → R, A → S, −R ⊢ A. Note, also, that one of the premisses (line 3) turned out to be irrelevant, since there is no way to extract it from its conditional. This, of course, is just like real life, where we often have to sort out the useful information from the noise. Remember, in this sort of problem we usually don’t know in advance what sequent we are going to be proving; rather, we explore a set of premisses to see where they will lead, and we stop there because this answers the question.

Example: We are given the following facts: If either Paul or Peter did it then either Randa helped or Michael wasn’t entirely innocent. If the weapon was in Paul’s office he did it. On the other hand, if the weapon wasn’t in Paul’s office, then Michael wasn’t entirely innocent. Can we determine whether Randa was involved?

Symbolization:

P := Paul did it
Q := Peter did it
R := Randa helped
M := Michael was innocent
N := The weapon was found in Paul’s office

Write down the premisses:

1    (1) (P ∨ Q) → (R ∨ −M )    A
2    (2) N → P                  A
3    (3) −N → −M                A

There is no way we can draw a definite conclusion about the truth or falsity of R from this. But now suppose that independent evidence shows that Michael has an alibi — he was out of town for a meeting at the time of the incident. Then we can continue:

4          (4) M                                     A
3          (5) M → N                                 3 Trans
2,3        (6) M → P                                 2,5 HS
2,3,4      (7) P                                     4,6 MP
1          (8) (P → (R ∨ −M )) & (Q → (R ∨ −M ))     1 SI (Exercise 4.7.1, 1(b))
1          (9) P → (R ∨ −M )                         8 &E
1,2,3,4    (10) R ∨ −M                               7,9 MP
1,2,3,4    (11) R                                    4,10 DS
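The same detective work can be done semantically by enumerating the models of the premisses. A sketch (Python, my own illustration; model enumeration is not a technique the text has introduced yet):

```python
from itertools import product

# Brute-force the models of the phone-bill premisses:
# B ∨ A,  B → R,  A → S,  −R
models = [
    (a, b, r, s)
    for a, b, r, s in product([True, False], repeat=4)
    if (b or a) and ((not b) or r) and ((not a) or s) and (not r)
]
# In every surviving model Alice paid the bill.
assert models and all(a for (a, b, r, s) in models)
print(models)  # the single model has A true and B, R false
```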


Chapter 5

Propositional Semantics: Truth Tables

5.1 Syntax versus Semantics

The methods of natural deduction that we have learned so far are instances of what are called syntactic procedures. These are procedures defined by rules that tell us the allowable combinations of symbols that may be written down starting from given wffs. For instance, MT says that given P → Q and −Q, we may (if it suits our purposes) then correctly write down −P . Such rules are concerned only with syntax. If you make a mistake when typing in commands to a computer program, it may tell you that you have committed a “syntax error.” This simply means that you have broken the rules that say which combinations of symbols are allowed. The computer has no notion whatsoever of the possible meanings of the symbols being used by the program.

When we do semantics, we are concerned with the interpretation or meaning of the symbols we write down. We commonly interpret the P s and Qs of propositional logic as meaningful statements that can bear truth values, but these symbols could be given other interpretations as well — or no interpretation at all. In propositional logic, the most common interpretation of the propositional symbols is that they represent truth values of propositions; propositional logic, in effect, is a calculus of truth values. In this chapter we introduce truth tables, the first of two main semantic techniques we will learn in this course.

In propositional logic we make the simplifying assumptions that there are only two possible truth values, which we may represent as T (true) and F (false), or 1 and 0. (These are often symbolized as ⊤ and ⊥.) We also assume, at this stage at least, that every proposition has one and only one of these two possible truth values; that is, that every proposition is either definitely T or definitely F. This is called bivalence. Bivalence is an idealization, since we know that in real life there are often various shades, modes, and degrees of truth. We will see later that this assumption also breaks down under certain interesting conditions. Bivalence should not be confused with the theorem derived in the previous chapter called the Law of Excluded Middle, P ∨ −P . (See [7] for an illuminating discussion of this point.) Nevertheless, the logic based on these two somewhat idealized assumptions turns out to be extremely powerful and useful, and is the basis for most or all further and more nuanced elaborations of logic.



5.2 Truth Table Definitions of the Propositional Connectives

I reproduce here the basic truth tables from Chapter Two, for convenience:

Atomic Proposition:

  P
  T
  F

Negation:

  P | −P
  T |  F
  F |  T

Conjunction:

  P  Q | P & Q
  T  T |   T
  T  F |   F
  F  T |   F
  F  F |   F

Disjunction:

  P  Q | P ∨ Q
  T  T |   T
  T  F |   T
  F  T |   T
  F  F |   F

Biconditional:

  P  Q | P ↔ Q
  T  T |   T
  T  F |   F
  F  T |   F
  F  F |   T

Material Implication:

  P  Q | P → Q
  T  T |   T
  T  F |   F
  F  T |   T
  F  F |   T
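For readers who want to experiment, these tables can be encoded directly as truth-value functions. This is only an illustrative sketch (Python is my own choice; the function names are mine, not the text's):

```python
# Truth-value functions for the five connectives (True = T, False = F).
def neg(p):
    return not p          # −P

def conj(p, q):
    return p and q        # P & Q

def disj(p, q):
    return p or q         # P ∨ Q

def impl(p, q):
    return (not p) or q   # P → Q: F only when P is T and Q is F

def bicond(p, q):
    return p == q         # P ↔ Q: T when P and Q agree

# Reproduce the table for material implication in the standard row order:
for p in (True, False):
    for q in (True, False):
        print("T" if p else "F", "T" if q else "F", "T" if impl(p, q) else "F")
```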

5.2.1 How Many Possible Truth Tables Are There?

The five familiar truth tables above are not the only possible truth tables for binary connectives. Consider the general truth table for any binary connective ∗:

  P  Q | P ∗ Q
  T  T |   ?
  T  F |   ?
  F  T |   ?
  F  F |   ?

There are two ways to fill each space in the centre column and four lines per table, meaning that the total number of ways to fill in the centre column will be just 2 × 2 × 2 × 2 = 2^4 = 16. In other words, there are 16 possible truth functions of two propositional variables. While the ones we define above are the ones most commonly used in elementary logic, others are possible, and some (such as exclusive OR) have practical applications in circuit theory.
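The counting argument can be checked by enumeration. A small sketch (Python, my own illustration):

```python
from itertools import product

# The four input rows, in the standard order used in the text.
rows = [(True, True), (True, False), (False, True), (False, False)]

# Each way of filling the centre column with T/F is one binary
# connective, so there are 2 * 2 * 2 * 2 = 16 of them.
tables = list(product([True, False], repeat=4))
print(len(tables))  # 16

# Exclusive OR, mentioned in the text, is the column F T T F:
xor_column = tuple(p != q for (p, q) in rows)
assert xor_column in tables
```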

There are two ways to ﬁll each space in the centre column and four lines per table, meaning that the total number of ways to ﬁll in the centre column will be just 2×2×2×2 = 24 = 16. In other words, there are 16 possible truth functions of two propositional variables. While the ones we deﬁne above are the ones most commonly used in elementary logic, others are possible, and some (such as exclusive OR) have practical applications in circuit theory.

5.2.2

Expressive Completeness

Clearly, some of the truth functions we have deﬁned are redundant, in the sense that they can be written as combinations of other symbols. For instance, we can show that P → Q is equivalent to −P ∨ Q; hence our use of the symbol → is largely for linguistic convenience. Some sets of symbols would not be suﬃcient to produce in combination all possible truth tables. However, it can be shown that combinations of {−, ∨ } and {−, & } are suﬃcient to generate all possible tables. It is also possible to deﬁne a single connective that is expressively complete in this sense, and for this purpose one can use either the Sheﬀer stroke P |Q (deﬁned as −P ∨ −Q) or the Nicod stroke P ↓ Q (deﬁned as −P & − Q). We will examine some of the properties of these symbols later on.

5.2. TRUTH TABLE DEFINITIONS OF THE PROPOSITIONAL CONNECTIVES 65

5.2.3

Evaluating Complex Wﬀs

Evaluate wﬀs as follows: 1. Be sure you have written the input values for all the occurrences of each variable in the same order. 2. Evaluate all negations of atomic formulas ﬁrst; then, 3. starting with the innermost brackets, work outward until you come to the main connective. Arbitrarily complicated wﬀs can be evaluated mechanically by truth tables. Truth tables therefore constitute what computer scientists call an eﬀective procedure — meaning simply a recipe that can be followed in a completely unambiguous manner that is guaranteed to produce a unique result. The practical drawback of truth tables is that the number of lines in a table for a formula containing n propositional variables is 2n . This is a rapidly increasing function; recall the old fable about the wise man who demanded 64 2i grains i=1 of rice from his king in return for a favour. (See [3]. A quick estimation will show why the king may have been tempted to cut oﬀ the wise man’s head.) Large truth tables may therefore take impractically long times to evaluate. Many of the truth table problems encountered in elementary propositional logic can be solved by means of short-cuts, and we will consider these below. There is no guarantee that a short cut exists for a given problem. On the other hand, if in an introductory course you ﬁnd yourself faced with the task of evaluating a formula that would contain 8, 16, 32, or more lines if fully written out, take a careful look at it before beginning blindly to ﬁll in the whole table. There is probably a short-cut! Example: do the full truth table for the wﬀ −(P → (−Q ∨ R)) & P . Write out the formula as neatly as possible, with fairly wide, even spacing between the symbols. Then enter all the possible combinations of truth values for the three letters in the formula, P , Q, and R. (You can think of these as “inputs”.) 
There is nothing sacred about the order in which I have written the inputs below, but it is a standard order that is commonly used in logic, and it would be helpful to stick to this order. Since P appears twice in the formula you will have to enter its input values twice. Whenever a letter appears more than once in a table, it is essential that you write the values in its column of input values in the same order for each instance of the letter; otherwise, the truth table will be meaningless. There will be 8 = 23 lines in the table since there are three variables. Then proceed to evaluate the various subformulas in the order indicated above, ending with the main connective, which in this case is the & symbol. − F T F F F F F F (P T T T T F F F F → T F T T T T T T (− F F T T F F T T Q T T F F T T F F ∨ R)) & P T T F T F F T T T T F T T F F T T T F F F F F F T T F F T F F F

66

CHAPTER 5. PROPOSITIONAL SEMANTICS: TRUTH TABLES We see that the formula is T only in the case in which P and Q are T and R is F.

5.3

Semantic Classiﬁcation of Single Wﬀs

Well-formed formulas of propositional logic can be classiﬁed according to their semantic properties:

5.3.1

Tautology

A wﬀ is said to be tautologous, or to be a tautology, if and only if it comes out T under all possible combinations of input values. Tautologies can also be referred to as logical truths or necessary truths. Example: Law of Excluded Middle In the previous chapter we proved the Law of Excluded Middle as a theorem. The table shows that it is a tautology as well, as it should be. P ∨ − P T T F T F T T F Example: (P → (Q → P )) In the previous chapter we gave several proofs of this formula as a theorem. One can easily see that it is a tautology as well. The only way it could be F would be if P were T and (Q → P ) were F. But if P is T, then (Q → P ) is T and the whole wﬀ is T. Hence, it is not possible for this wﬀ to be false.

5.3.2

Inconsistency

A wﬀ is said to be an inconsistency, or to be inconsistent, if it comes out F for all possible combinations of input values. Such a wﬀ is sometimes loosely called a contradiction, but that term is properly reserved for an inconsistent wﬀ such as P & − P containing only one propositional variable. Example: Law of Non-Contradiction The Law of Non-Contradiction states that it is never the case that a proposition is both true and false; i.e., that (P & − P ) is always false. This is supported by the truth table: P & − P T F F T F F T F

5.3.3

One-to-One Relation Between Inconsistencies and Tautologies

It is easily seen by de Morgan’s Law that the Law of Non-Contradiction is simply the negation of the Law of Excluded Middle. It can be shown in general that for every tautology there is a contradiction, generated by taking the negation of the tautology, and vice versa.

5.3.4 Contingency

A wff is said to be contingent or to be a contingency iff it has occurrences of both T and F in its truth table. Its truth or falsity in a given situation therefore depends on the facts, not on logical form. The simplest example of a contingency is just P; other examples are P & Q, P ∨ Q, and P → Q.

5.3.5 Consistency

A wff is said to be consistent if it is T for at least some of its input values; in other words, a wff is consistent, or is a consistency, iff it is either a tautology or a contingency. Examples: P, P → P, and P → Q.

5.4 Relation of Syntax to Semantics

A logical system is said to be sound iff every theorem that it can generate is a tautology. A logical system is said to be complete iff every tautology can be proven as a theorem. One of the most important things done in metalogic (the logic of logic) is to investigate whether various possible logical systems are sound or complete. We do not attempt to prove the soundness or completeness of propositional logic in this course. However, it can be shown that propositional logic is, in fact, both sound and complete. (See Lemmon [17] for one way of doing this.) This fact has practical significance for us because it tells us that there is a nice one-to-one correspondence between theorems and tautologies: every theorem we can prove using natural deduction will turn out to be a tautology if we do truth table analysis on it, while every tautology can be proven as a theorem.

It may seem too "obvious" to bear mentioning that propositional logic is both complete and sound. However, it turns out that many important systems of logic that are more powerful than propositional logic (in the sense that they can express and derive results about logical or mathematical structure that is invisible to propositional logic) are incomplete. In 1930, Kurt Gödel shocked the mathematical world by proving that any system of logic that contains enough machinery to define the natural numbers is, in fact, incomplete. This tells us that there is no finite set of formal rules from which one can derive all the possible truths about the natural numbers; or, to put it another way, there could never be a computer program that could by itself generate all possible truths about the numbers. (See [10, 19].)

5.5 Semantic Classification of Pairs of Wffs

There are useful semantic properties that can be identified for pairs of wffs.

5.5.1 Implication

Proposition P is said to imply Q iff P → Q is a tautology.
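Implication in this semantic sense can be tested by the same brute-force enumeration. A Python sketch (the helper name `implies` is mine), using the fact that P implies Q iff P → Q is a tautology:

```python
from itertools import product

def implies(p_wff, q_wff, n):
    # P implies Q iff P -> Q is a tautology,
    # i.e. iff (not P) or Q holds on every line.
    return all((not p_wff(*v)) or q_wff(*v)
               for v in product([True, False], repeat=n))

print(implies(lambda p, q: p and q, lambda p, q: p, 2))  # True:  P & Q implies P
print(implies(lambda p, q: p, lambda p, q: p and q, 2))  # False: P does not imply P & Q
```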

5.5.2 Equivalence

One proposition P is equivalent to another, Q, iff P ↔ Q is a tautology.

5.5.3 Contrariety

A proposition P is contrary to Q if they are never both T. They could, however, both be false. Formally, two propositions P and Q are contraries iff −(P & Q) is a tautology. Examples: The propositions P & Q and P & −Q are contraries, since they are both F when P is F. It is easily seen that the negation of their conjunction is a tautology.

5.5.4 Subcontrariety

Two propositions P and Q are subcontraries if they are never both false. They could, however, both be true. Formally, P and Q are subcontrary iff P ∨ Q is a tautology. Examples: The propositions P ∨ Q and P ∨ −Q are subcontraries, since one of them is T even if the other is F; they are both T if P is T. It is also easily seen that their disjunction is a tautology. The concept of subcontrariety is not widely used these days, though it was important in medieval logic.

5.5.5 Inconsistency

Two propositions P and Q are inconsistent iff they never agree in truth value: one or the other is always T, but never both. This is the same thing as saying that −(P ↔ Q) is a tautology.

5.5.6 Consistency

Two propositions P and Q are consistent iff their conjunction P & Q is consistent.

5.5.7 Independence

Two propositions are said to be independent iff none of the above relations apply to them. The simplest example of independent propositions are two atomic propositions, say P and Q.

5.6 Relations Between Semantic Properties of Wffs

It is possible to use natural deduction to establish relationships between the semantic properties of pairs of wffs. Example: Show that if P and Q are contrary then one implies the negation of the other. This suggests that to test the implications of contrariety, we assume that −(P & Q) is true and carry out a simple natural deduction:

that doesn’t mean that it is not valid. 5.7. then the sequent is valid. equivalently. one thing we can do is set up the truth table for the sequent. A sequent is said to be valid if and only if the conclusion is never false when the premisses are true. either a tautology or a contingency. If we can ﬁnd a proof that makes correct use of the rules. −Q}. i. 5.: {P. SEMANTIC CLASSIFICATION OF SETS OF WFFS 1 1 1 1 1 (1) (2) (3) (4) (5) −(P & Q) −P ∨ −Q P → −Q −Q ∨ −P Q → −P A 1 deM 2 Imp 2 Comm 4 Imp 69 Lines 3 and 5 express the required properties of P and Q.1 Equivalence A set of wﬀs is said to be equivalent iﬀ all its members are pair-wise equivalent. E.7 Semantic Classiﬁcation of Sets of Wﬀs Some of the above properties of pairs of wﬀs can be usefully extended to sets of any number of wﬀs. then the sequent is valid.3 Consistency A set of wﬀs is said to be consistent iﬀ it is not inconsistent. Here is an example of a valid sequent (MP): . P → Q.7.4 Independence A set of two or more propositions are independent of each other iﬀ they are independent pairwise. To check the validity of a sequent. but if we can’t ﬁnd a proof. Q. 5. 5. therefore.7. 5. One of the most important uses of truth tables is in testing the validity of argument forms. We need a general procedure for testing the validity of sequents. or. with separate columns for premisses and conclusion.5.7. 5.g. and see if there is any line in the truth table in which the premisses are all T and the conclusion is F. If you can’t come up with a proof.. this is the same thing as saying that its conjunction is consistent.8 Testing Validity of Sequents Using Truth Tables Suppose you are asked to ﬁnd a proof of a sequent such as P. R S.2 Inconsistency A set of wﬀs is said to be inconsistent iﬀ its conjunction is always false. If this is not the case.7.e. it may simply mean that you are unable to think of a route from premisses to conclusion. that doesn’t mean that the sequent is invalid. 
iﬀ the negation of its conjunction is a tautology.
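The procedure just described is easy to automate. The following Python sketch (function names are my own) declares a sequent valid iff no line of its table makes every premiss T while the conclusion is F:

```python
from itertools import product

def is_valid(premisses, conclusion, n):
    # Valid iff there is no line with all premisses T and conclusion F.
    return not any(all(p(*v) for p in premisses) and not conclusion(*v)
                   for v in product([True, False], repeat=n))

imp = lambda a, b: (not a) or b  # the material conditional

# Modus Ponens: P, P -> Q entails Q
print(is_valid([lambda p, q: p, lambda p, q: imp(p, q)],
               lambda p, q: q, 2))  # True

# Affirming the Consequent: P -> Q, Q does not entail P
print(is_valid([lambda p, q: imp(p, q), lambda p, q: q],
               lambda p, q: p, 2))  # False
```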

Here is an example of a valid sequent (MP):

P   P → Q ‖ Q
T     T   ‖ T
T     F   ‖ F
F     T   ‖ T
F     T   ‖ F

(I use a double vertical line to separate the conclusion from the premisses.) In each line in which the conclusion is F, at least one of the premisses is F as well, and the sequent is valid. By contrast, here is the table for Affirming the Consequent:

P → Q   Q ‖ P
  T     T ‖ T
  F     F ‖ T
  T     T ‖ F
  T     F ‖ F

In the third line, both premisses are T while the conclusion is F. This proves that Affirming the Consequent is invalid. You need only find one line in which this happens in order to show invalidity. Note very carefully that we cannot conclude that the argument is valid just because there are some lines (such as the first) in which everything comes up true.

Here is an example that may seem counter-intuitive. Is P, −P ⊢ Q valid or invalid? This is just ExF, which we already know how to prove using RAA. But the truth table is instructive:

P   −P ‖ Q
T    F ‖ T
T    F ‖ F
F    T ‖ T
F    T ‖ F

The sequent is valid since there is no line in which the conclusion is false and all the premisses are T. But the sequent can never be sound, meaning valid with all premisses true. That is just like an invalid computer programme that occasionally outputs the correct answer. The program is valid only if it never makes a mistake, not if it is sometimes but not always right.

Suppose now we have a sequent in which the conclusion is a tautology, such as P → Q ⊢ P → P. Here is the table:

P → Q ‖ P → P
  T   ‖   T
  F   ‖   T
  T   ‖   T
  T   ‖   T

As before, there is no line with the premiss T and the conclusion F, and the sequent is valid. But we need hardly have done the whole table to see this: since the conclusion is a tautology and must always come out T, there can be no line on which it is F, and this is enough to show that the sequent is valid.

We can summarize this as follows: suppose F is a contradictory proposition, T is a tautology, and X is any proposition whatsoever. Then any sequents of the following form are valid:

F ⊢ X    (5.1)
X ⊢ T    (5.2)

5.8.1 Deduction Theorem from the Semantic Viewpoint

From the semantic viewpoint, the Deduction Metatheorem can be expressed as follows: The sequent P1, P2, ..., Pn ⊢ Q is valid iff the conditional (P1 & P2 & ... & Pn) → Q is a tautology. Our method for checking for validity is designed to take advantage of this fact.

5.8.2 Short-cuts

The truth-table check of validity is what logicians call an effective procedure, meaning that it is always guaranteed to produce a definite answer. However, as noted, the size of a truth table increases very rapidly as a function of the number of propositional variables involved; the table for a problem with 6 letters, for instance, will have 2⁶ = 64 lines. It would be very good, therefore, to have short-cuts that would make it unnecessary for us to write out every line of the table in order to check the validity of a sequent.

Recall that to show that a sequent is valid we have to show that there are no lines in the truth table where all the premisses are T and the conclusion F; conversely, to show that a sequent is invalid it is sufficient to find just one line in its table with all premisses T and the conclusion F. Therefore, it is not necessary to write out the whole table. The key idea of the short-cut method is to work backwards from the conclusion to the premisses. What we do, therefore, is set up the truth table for the sequent in the manner suggested above, with the conclusion set off to the right. Then set the conclusion to be F, and see whether this forces any or all of the premisses to be T. Some sequents can be checked in one line; some require more than one; and some may be irreducibly complex and will require a full or nearly full truth table for their analysis. There can be no guarantee that a short-cut can be found for every problem, but it is always a good idea to see if a short-cut can be found before starting to write out a huge truth table. Here are a few examples to illustrate the method.

Example: show that MP is valid by the short-cut method.

P   P → Q ‖ Q
T     F   ‖ F
F     T   ‖ F

The only lines that need concern us are the two lines in which the conclusion is F, and we immediately see that in both of these lines one of the premisses is F. Writing what I have shown here constitutes a complete solution to the problem of testing the validity of MP.
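The short-cut idea itself can be mirrored in code: rather than building all 2ⁿ lines, enumerate only the assignments that make the conclusion F and check whether any of them leaves every premiss T. A Python sketch of my own, applied to the MP example just worked:

```python
from itertools import product

def shortcut_valid(premisses, conclusion, n):
    # Only lines on which the conclusion is F can invalidate a
    # sequent, so those are the only lines we examine.
    candidates = (v for v in product([True, False], repeat=n)
                  if not conclusion(*v))
    # Valid iff every such line falsifies at least one premiss.
    return all(any(not p(*v) for p in premisses) for v in candidates)

imp = lambda a, b: (not a) or b

# MP: P, P -> Q entails Q -- only the two lines with Q = F are checked.
print(shortcut_valid([lambda p, q: p, lambda p, q: imp(p, q)],
                     lambda p, q: q, 2))  # True

# Affirming the Consequent: P -> Q, Q does not entail P.
print(shortcut_valid([lambda p, q: imp(p, q), lambda p, q: q],
                     lambda p, q: p, 2))  # False
```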

Example: show that Affirming the Consequent is invalid by the short-cut method.

Q   P → Q ‖ P
T     T   ‖ F

Set the purported conclusion P to be F; this forces the premiss P → Q to be T regardless of the value of Q, and this means that we can choose the other premiss Q to be T. Hence there is a line with all premisses T and the conclusion F; therefore, the sequent is invalid.

Example: let's check Constructive Dilemma, P ∨ Q, P → R, Q → S ⊢ R ∨ S, which is more complicated. This sequent has 4 propositional variables, and so its full table would have 2⁴ = 16 lines. Set the conclusion to be F. That forces both R and S to be F. But any possible invalidating line has to have P → R and Q → S come out T, which means that P and Q would have to be F. However, this makes the first premiss P ∨ Q F as well. Therefore, there is no invalidating line, and the argument is valid. This can be checked with just one line:

P ∨ Q   P → R   Q → S ‖ R ∨ S
F F F   F T F   F T F ‖ F F F

Example: check (−P ∨ −Q) → R, −R ⊢ P & Q. Set the conclusion to be F. There are three ways in which P & Q can be false. It is only necessary to consider lines on which −R is T, which makes R false. Plugging R = F into the first premiss, we see that all possible combinations of the truth values of P and Q which make the conclusion F also make the first premiss F:

(−P ∨ −Q) → R   −R ‖ P & Q
 F  T  T  F F    T ‖ T F F
 T  T  F  F F    T ‖ F F T
 T  T  T  F F    T ‖ F F F

Therefore, there is no invalidating line, and the argument is valid. There will sometimes be sequents in which you have to check more than one line.

5.9 Using Short-Cuts in Testing for Other Semantic Properties

Short-cut truth tables can sometimes be used to test for semantic properties other than validity as well. It is impossible to run through all of the possible situations that might arise; here is an example which will suggest the sort of approaches that one could take.

Example: Determine whether (P & −P) → (P → (Q ∨ R)) is contingent. The antecedent (P & −P) is always F, and therefore by the truth table for → the formula is always T — i.e., it is a tautology, and therefore not contingent.

(P & −P) → (P → (Q ∨ R))
    F    T

The general approach to this sort of problem is to see what it would take to make the wff T and F. If it can be both T and F, then it is contingent; if it can only be T, it is a tautology; and if it can only be F, it is contradictory.

5.10 Evaluating Long Conjunctions or Disjunctions

By a "long" conjunction or disjunction I simply mean one containing more than two formulas. However, truth table analysis as we have set it up here (and as it is usually done in most elementary texts) deals only with & and ∨ as binary connectives. We accept conjunctions and disjunctions of more than two formulas as wffs, however. In order to evaluate a formula containing a long conjunction or disjunction, we have to bracket off the terms two at a time, and arbitrarily pick one of the connectives as the main connective; which one we pick makes no difference to the outcome, although we cannot prove that fact in all generality in this course.

Example: Evaluate P ∨ Q ∨ R when P and Q are T and R is F, and show that we get the same result regardless of what order we associate the terms.

P ∨ (Q ∨ R)     (P ∨ Q) ∨ R
T T  T T F      T T T  T F

(I've only shown two of the possible ways to associate this formula; the others can be found by applying commutation.) In the left-hand formula, the first ∨ is the main connective; in the right-hand formula, the last ∨ is the main connective. In this course you will rarely if ever have to evaluate a conjunction or disjunction with more than two terms, so this is not something that you will frequently need to worry about.

5.11 Sheffer and Nicod Strokes

At the beginning of this chapter I mentioned that there are other possible truth tables for two propositional variables than the familiar ones we use in elementary logic. The latter are chosen (i) because they are expressively complete, and (ii) because they come reasonably close to capturing the sense of the logical connectives that we use in ordinary language. It is, however, possible to do the logic of propositions with only one symbol, and for this purpose one can use either the Sheffer or the Nicod strokes. As the practice examples at the end of the chapter will convince you, these connectives are almost entirely useless for practical calculations in logic. But their existence does tell us something about how logic works, and they have important applications in circuit theory.

The Sheffer stroke is defined as "not both P and Q", and has the following truth table:

P | Q | P|Q
T   T    F
T   F    T
F   T    T
F   F    T

In circuit theory, this is called a NAND gate ("not-and").

The Nicod stroke is defined as "neither P nor Q," and has the following table:

P | Q | P↓Q
T   T    F
T   F    F
F   T    F
F   F    T

In circuit theory, this is called a NOR gate ("not-or").
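Since the strokes matter mainly for what they show about expressive completeness, it can be instructive to verify the standard definitions in code. In this Python sketch (function names are mine; the definitions of −, &, and ∨ in terms of the Sheffer stroke are the usual ones), each definition is checked on every input line:

```python
from itertools import product

def sheffer(p, q):  # P | Q: "not both P and Q" (a NAND gate)
    return not (p and q)

def nicod(p, q):    # P arrow-down Q: "neither P nor Q" (a NOR gate)
    return not (p or q)

# The usual definitions of the familiar connectives via | alone:
neg  = lambda p: sheffer(p, p)                             # -P    = P|P
conj = lambda p, q: sheffer(sheffer(p, q), sheffer(p, q))  # P & Q = (P|Q)|(P|Q)
disj = lambda p, q: sheffer(sheffer(p, p), sheffer(q, q))  # P v Q = (P|P)|(Q|Q)

for p, q in product([True, False], repeat=2):
    assert neg(p) == (not p)
    assert conj(p, q) == (p and q)
    assert disj(p, q) == (p or q)
print("Sheffer-stroke definitions agree on all input lines")
```

This is the kind of thing the chapter exercises ask you to work out by hand for both strokes.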

5.12 Review Exercises on Chapter 5

A

1. Write the complete truth table for each of the following wffs:
(a) −(P ∨ Q) & −(P & Q)
(b) −(−(P → Q) & −(Q → P))
(c) −(P & (Q & R))
(d) P → (Q → R)

2. Evaluate the following wffs, when P = T and Q = F:
(a) −(P → Q)
(b) −(P ↔ Q)
(c) (−P ↔ Q)
(d) (−P & −Q) ∨ (P & Q)

3. Use truth tables to characterize the following wffs as tautologous, inconsistent, or contingent:
(a) P → P
(b) −P → P
(c) P → (−P → P)
(d) (P ∨ −P) → (P & −P)

4. Use truth tables to classify the following wffs as tautologous, inconsistent, or contingent:
(a) P ∨ (Q & R)
(b) P → (R & S)
(c) P → (Q → P)
(d) (P → R) → (P → (P & R))
(e) (P → (Q → P)) → R
(f) (P ↔ Q) ↔ (Q ↔ −P)
(g) P → (R & −R)
(h) (P → R) → −P
(i) (P → (Q & R)) & (P → (Q & −R))
(j) −(P → Q) & −(Q → P)

B

1. Check the following sequents for validity. Use short-cuts where they are suitable.
(a) P ∨ Q, −Q ⊢ P (DS)
(b) P → Q, −Q ⊢ −P (MT)
(c) P → Q ⊢ P → (P & Q) (Abs)
(d) P → Q, −P ⊢ −Q (Denying the Antecedent)

2. Use truth tables to check the following sequents for validity:
(a) P → Q, Q → R ⊢ P → R (HS)
(b) P ↔ Q, Q ↔ R ⊢ P ↔ R
(c) P → Q, R → Q ⊢ P → R
(d) P → Q ⊢ −Q → P (Conv)
(e) P → Q, R → S, −Q ∨ −S ⊢ −P ∨ −R (DD)
(f) P ∨ Q, P → (R ∨ S), −(R ∨ S) ⊢ Q
(g) P → Q ⊢ P → (P ∨ Q)
(h) P → Q ⊢ P → (P & Q)
(i) P ∨ R, R → S, −S ⊢ P
(j) P → (M & N), N → S, M → R ⊢ −P ∨ M

3. (a) Show how to write −P, P ∨ Q, P & Q, and P → Q in terms of the Sheffer stroke.
(b) Do the same for the Nicod stroke.
(c) Check the following sequent for validity by constructing its full truth table: (P|P)|(Q|Q), P|(R|R) ⊢ R.
(d) Translate the sequent in part (c) into ordinary propositional notation.

C

1. The following questions deal with the analysis of semantic properties.
(a) Show that if P and Q are contrary then −P and −Q are subcontrary.
(b) Show that P and Q are equivalent if and only if P and −Q are contradictory.
(c) Show that −P and −Q are equivalent if and only if P and Q are equivalent.
(d) Show that two wffs are inconsistent iff they are both contraries and subcontraries.
(e) If P, P → R ⊢ R is a valid sequent, is it possible for R to be false? Explain your answer.
(f) If P, Q ⊢ R is invalid, is it impossible for P, Q, and R to all be true? Explain your answer.

Chapter 6

Propositional Semantics: Truth Trees

Truth trees are one of the most efficient ways of checking the semantic properties of propositional formulas. In particular, they give a very easy way of checking the validity of sequents. The basic idea of truth trees is that they give a graphic way of displaying whether or not a set of formulas is inconsistent.

6.1 Branches and Stacks

Truth trees are graphic objects built up out of combinations of two basic structures, stacks and branches. Stacks show the decomposition of conjunctions into their separate conjuncts, and branches show the decomposition of disjunctions into their separate disjuncts.

Basic Stack:        Basic Branch:
P & Q ✓             P ∨ Q ✓
P                    /   \
Q                   P     Q

We place a checkmark beside every formula that has been decomposed. Because any number of propositions can be conjoined, a stack can contain any number of propositions. For instance, we could start with the formula (P & Q) & −R & (P ∨ Q), and break it down as follows:

(P & Q) & −R & (P ∨ Q) ✓
(P & Q)
−R
P ∨ Q

Figure 6.1: A Typical Stack

Similarly, because any number of propositions can be disjoined, we can have branches with any number of stems:

P ∨ Q ∨ R ✓
 /   |   \
P    Q    R

Figure 6.2: A Triple Branch

However, we will rarely need branches with more than two stems in this book. Stacks and branches can be linked together into trees of arbitrary size. For instance, the wff (P & Q) ∨ R can be decomposed as follows:

(P & Q) ∨ R ✓
  /       \
P & Q ✓    R
P
Q

Figure 6.3: A Typical Tree

Decomposition has gone as far as it can when all molecular formulas are broken down into atomic formulas or their negations (which are collectively called literals). As each formula is broken down into its corresponding branch or stack, it is checked off as shown so that we don't forget that we have broken it down. Here is another simple example:

P & Q ✓
P ∨ −Q ✓
P
Q
 /   \
P     −Q
       ×

6.2 Paths

A route going from the initial set of formulas at the top down to one of the atomic formulas at the bottom, while choosing only one side of each branch as one goes, will be called here a path. (In some logic books these are called branches, but this invites confusion with the sense of branch as a forked structure splitting off from a disjunction.)

To arrive at this we first broke down the conjunction (P & Q) into a stack, and then broke down the disjunction (P ∨ −Q) into a branch. It would have been perfectly valid to do the branch first, and then the stack. (Try it.) As a rule, though, it is usually more efficient to do as many stacks as one can first, and then do the branches, since this will minimize the number of branches to be drawn.

What we are doing, in effect, is treating all the formulas along a given path as if they were true, and seeing whether or not this will lead to a contradiction. Testing the branches that sprout from a disjunction is similar to vE in that we go exploring down each branch of the disjunction, one at a time, to see where it will lead. Notice that the path ending in −Q contains contradictory formulas, namely −Q itself and Q. Any path that contains a contradiction is marked off with an × at the end, and declared to be closed. Any path that is not closed is open. Once a path is closed, we do not need to go any farther along it, and you can stop working on that path at that point. Although we will usually have to break formulas all the way down to the literal level, we can stop decomposing formulas along a path as soon as a contradiction is found. In the last tree above, replace Q with (R → S). (Try it.) You will obviously get a closed path without having to break down (R → S) into its literal components.

We can start from any number of initial formulas and break them down as far as we can. If all branches of the resulting tree are closed, then the tree itself is said to be closed, and the set of formulas is inconsistent. If just one branch remains open after decomposition has gone as far as it can go, the tree itself is open.

Truth trees are a kind of partially ordered set or poset, and the paths could be notated in a form similar to the way we note paths in UNIX. (These terms will be explained later in this book; my use of the term path here is analogous to the way it is used in DOS and UNIX, which possess hierarchical file structures.) That is, the two paths in the above tree could be written more abstractly as ((P & Q) & (P ∨ −Q)) \ (P & Q) \ P and ((P & Q) & (P ∨ −Q)) \ (P & Q) \ Q. However, it will not be especially useful to do it this way, since the main purpose of truth trees is to make the implications of a stack of propositions graphically explicit.

6.3 Stacking and Branching Rules

We need to know how to decompose other formulas besides simple conjunctions and disjunctions. The rules for doing this can be divided into stacking rules and branching rules.

Stacking Rules:

P & Q      −−P      −(P ∨ Q)      −(P → Q)
P          P        −P            P
Q                   −Q            −Q

Branching Rules:

−(P & Q)      P ∨ Q      P → Q      P ↔ Q        −(P ↔ Q)
 /    \       /   \      /   \      /   \         /    \
−P    −Q     P     Q    −P    Q    P     −P      P      −P
                                   Q     −Q      −Q     Q

These rules follow from equivalence formulas (such as (P → Q) ↔ (−P ∨ Q)) which can be easily established by natural deduction.

6.3.1 Substitution of Equivalents

To further simplify the use of trees, we can also make it a rule that we can replace formulas with their equivalents, either piecewise or as a whole. Thus, a valid stack could look like this:

(P → Q) ✓
−P ∨ Q

Be sure to check off the formula you have replaced. It would also be a courtesy to our beleaguered marker to jot down, off to the side, the rule you are using (in this case Implication) that justified the replacement. The stacking rules cover all possible cases you could encounter in propositional logic, so strictly speaking replacement of equivalents like this is not necessary. However, in the spirit of the approach in this text we will allow any method that makes the work easier — which might be the case if, say, you have forgotten a stacking or branching rule!

6.4 Checking Semantic Properties of Formulas with Trees

Although the main purpose of truth trees is to reveal inconsistency, all of the major semantic properties of sets of propositional formulas can be checked using truth trees.

6.4.1 Tautology, Contingency, and Consistency

To check whether or not a wff is a tautology, we run the tree for its negation. The formula P is a tautology if and only if the tree for −P closes. For example, to show that (P → (Q → P)) is a tautology, we can run the following tree:

−(P → (Q → P)) ✓
P
−(Q → P) ✓
Q
−P
×

There is only one path, and since it is closed we see that the negation of the formula is inconsistent, meaning that the formula itself is tautologous.

If we ran a tree on the formula (P → (Q → P)) itself, we would get this:

(P → (Q → P)) ✓
 /        \
−P         (Q → P) ✓
            /     \
          −Q       P
All paths on this tree are open. However, this does not show that the formula is a tautology, only that it is consistent; and we already know that if all we know about a formula is that it is consistent, then it could be either a tautology or a contingency. To check whether a formula P is contingent, we have to run trees for both P and −P. The formula is contingent if and only if neither tree closes; that is, a formula is contingent iff the trees for both the formula and its negation are open. For instance, the following trees show that (P → Q) is contingent:

−(P → Q) ✓        P → Q ✓
P                  /   \
−Q                −P    Q

In summary: a set of formulas is inconsistent iff its stack has a closed tree, and is consistent iff its stack has an open tree. To check whether a set of formulas is inconsistent, stack them up and run a tree. The set is inconsistent iff the tree closes. A set of formulas {P1, ..., Pn} is inconsistent if and only if (P1 & ... & Pn) is always false, which is the same thing as saying that −(P1 & ... & Pn) is a tautology.

6.4.2 Equivalence

To check whether or not two formulas P and Q are equivalent, we use the fact that P and Q are equivalent iff (P ↔ Q) is a tautology, and this is the same thing as saying that the tree for −(P ↔ Q) must close. For instance, to demonstrate that −(P & Q) ↔ (−P ∨ −Q) (one of de Morgan's Laws), we run the following tree:

−(−(P & Q) ↔ (−P ∨ −Q)) ✓
    /               \
−(P & Q) ✓          P & Q ✓
−(−P ∨ −Q) ✓        −P ∨ −Q ✓
P                   P
Q                   Q
 /    \              /    \
−P     −Q           −P     −Q
×      ×            ×      ×

Since all paths close, the formula is a tautology. This tree looks relatively complex, but it follows straightforwardly by the application of the branching and stacking rules given above. Of course, we could have also demonstrated the same result by doing two trees, one for −(P & Q) ⊢ (−P ∨ −Q), and one for (−P ∨ −Q) ⊢ −(P & Q). (Try it.)

Checking a set of more than two formulas for equivalence can be a little more complicated, and is in fact not something that one often has to do. But suppose we have to check whether all formulas in the set {P1, P2, P3} are equivalent. We could check each possible pair for equivalence. However, we should keep in mind that equivalence is transitive; that is, P1 ↔ P2, P2 ↔ P3 ⊢ P1 ↔ P3. (This sequent can be proven by an easy application of HS — make sure you try it!)

Therefore, to check three formulas for equivalence, we only need to check two formulas for inconsistency: −(P1 ↔ P2) and −(P2 ↔ P3). In many cases the formulas will be simple enough in structure that it will be obvious whether some pair of them is not equivalent. If that is the case, you are done, since to show that a whole set of formulas is inequivalent it is sufficient to show that any two members of a set are inequivalent.

To check the validity of an interderivability result such as (P → Q) ⊣⊢ (−P ∨ Q) we have to run two trees, one in each direction.

6.5 Checking Validity of Sequents

The most important application of truth trees is in checking the validity of sequents. A sequent P1, ..., Pn ⊢ Q is valid if and only if (P1 & ... & Pn & −Q) is inconsistent. This simply expresses the notion of validity, which is that if the argument (or the sequent that formally represents its propositional structure) is valid, then the premisses cannot lead to the negation of the conclusion. We can therefore check the validity of any propositional sequent P1, ..., Pn ⊢ Q simply by making a stack consisting of the premisses and the negation of the conclusion, and breaking down all of the formulas to see if we arrive at closed paths. The sequent is valid if and only if the whole tree is closed. In other words, if even one path is open, the sequent is invalid, since that would express the fact that it is possible to get from the premisses to the negation of the purported conclusion without contradiction.

Here are some examples of trees for familiar valid sequents:

MP:
P → Q ✓
P
−Q
 /   \
−P    Q
×     ×

HS:
P → Q ✓
Q → R ✓
−(P → R) ✓
P
−R
 /    \
−P     Q
×     /   \
    −Q     R
    ×      ×

These sequents are shown to be valid by the fact that all paths close. Don't forget that the order in which we choose to break down the formulas in our initial stack does not make any difference to the validity of the result. However, as shown in the tree for HS, one often arrives at a simpler tree structure by doing as much stacking as possible first. Also, once a path is closed, no further breakdown is needed.
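The stacking and branching rules are mechanical enough to automate. Here is a minimal Python sketch of a tree-based validity check; the tuple encoding of formulas and all function names are my own, and the ↔ rules are omitted for brevity. A path closes when it contains a formula and its negation, and a sequent is valid iff the tree for the premisses plus the negated conclusion closes everywhere:

```python
# Atomic formulas are strings such as 'P'; molecular formulas are
# tuples: ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g).

def closed(path):
    # A path closes when it contains a formula and its negation.
    return any(('not', f) in path for f in path)

def tree_closes(path):
    # True iff every branch of the tree for this stack closes.
    if closed(path):
        return True
    for i, f in enumerate(path):
        rest = path[:i] + path[i + 1:]
        if not isinstance(f, tuple):
            continue  # literals cannot be decomposed further
        op = f[0]
        if op == 'and':    # stacking rule
            return tree_closes(rest + [f[1], f[2]])
        if op == 'or':     # branching rule
            return tree_closes(rest + [f[1]]) and tree_closes(rest + [f[2]])
        if op == 'imp':    # P -> Q branches to -P | Q
            return (tree_closes(rest + [('not', f[1])]) and
                    tree_closes(rest + [f[2]]))
        if op == 'not' and isinstance(f[1], tuple):
            g = f[1]
            if g[0] == 'not':   # --P stacks to P
                return tree_closes(rest + [g[1]])
            if g[0] == 'or':    # -(P v Q) stacks to -P, -Q
                return tree_closes(rest + [('not', g[1]), ('not', g[2])])
            if g[0] == 'and':   # -(P & Q) branches to -P | -Q
                return (tree_closes(rest + [('not', g[1])]) and
                        tree_closes(rest + [('not', g[2])]))
            if g[0] == 'imp':   # -(P -> Q) stacks to P, -Q
                return tree_closes(rest + [g[1], ('not', g[2])])
    return False  # only literals left and no contradiction: an open path

def valid(premisses, conclusion):
    # Stack the premisses with the negated conclusion and run the tree.
    return tree_closes(list(premisses) + [('not', conclusion)])

print(valid([('imp', 'P', 'Q'), 'P'], 'Q'))   # True  (MP)
print(valid([('imp', 'P', 'Q'),
             ('imp', 'Q', 'R')],
            ('imp', 'P', 'R')))               # True  (HS)
print(valid([('imp', 'P', 'Q'), 'Q'], 'P'))   # False (Affirming the Consequent)
```

As the text notes, the order of decomposition does not affect the verdict; doing the stacking rules first only keeps hand-drawn trees smaller.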

For comparison, here are the trees for some invalid sequents:

P → Q, Q ⊢ P:
P → Q ✓
Q
−P
 /   \
−P    Q

P ∨ Q, Q ∨ R ⊢ R:
P ∨ Q ✓
Q ∨ R ✓
−R
   /       \
  P         Q
 / \       / \
Q   R     Q   R
    ×         ×

In the first example (which is just Affirming the Consequent), neither path closes, showing that the sequent is invalid. In the second example, two paths close, but two remain open; the sequent is therefore invalid. Also, in the second example, you will see that if you decompose Q ∨ R first you will get a slightly simpler tree structure. Strictly speaking, one could stop drawing the tree as soon as even one open path was revealed, since that would be sufficient to demonstrate invalidity.

To check whether or not a formula is a theorem is (by the soundness and completeness of propositional logic) the same thing as checking whether it is a tautology. Thus, for instance, the tree above that checks whether (P → (Q → P)) is a tautology shows that ⊢ (P → (Q → P)).

6.6 Trees or Tables?

Truth trees are usually quicker and simpler to use than truth tables, especially when evaluating very complex arguments with several premisses. But there are cases in which truth tables are simpler to use. For example, let us check the sequent P → Q, Q → R ⊢ R → P. It should be apparent that this sequent is invalid. To demonstrate this fact by means of a truth tree, we would have to construct the following tree:

P → Q ✓
Q → R ✓
−(R → P) ✓
R
−P
   /       \
 −P         Q
 /  \      /  \
−Q   R   −Q    R
          ×

Since the tree is open, the sequent is invalid. But it is still a moderately complex tree. We could also demonstrate that the above sequent is invalid by using a short-cut truth table, as follows:


P → Q   Q → R ‖ R → P
F  T      T   ‖ T F F

While the full table for this sequent has eight lines, only one partial line is needed to demonstrate invalidity: assume the conclusion is false, and work backwards. We can see that it is possible to make both the premisses T when the conclusion is F, regardless of the value of Q. This is clearly less work than the tree. The lesson is, therefore, that both trees and tables should be in the logician’s toolkit, and with experience the logician will get to know which is likely to be easier to try. If there are no more than (say) three or four premisses, and if the tree will involve a lot of branching, it may be easier to use a table.

6.7

B

Exercises

1. Use truth trees to check the following sequents for validity:

(a) P → Q, −P ⊢ −Q (Denying the Antecedent)

(b) P ∨ Q, −P ⊢ Q (Disjunctive Syllogism)

(c) P ∨ Q, P → R, Q → S ⊢ R ∨ S (Constructive Dilemma)

(d) P → R, Q → S, −R ∨ −S ⊢ −P ∨ −Q (Destructive Dilemma)

(e) P → Q ⊢ P → (P & Q) (Absorption)

(f) ⊢ (P → Q) ∨ (Q → R)

(g) P ⊢ (P → Q)

(h) ⊢ −(P ∨ Q) ↔ (−P & − Q) (de Morgan)

(i) ⊢ P ∨ −P (Excluded Middle)

(j) P → (Q & R), −Q ⊢ −P

6.8

Questions for Further Research

1. Set up a general rule for checking the equivalence of a set of any number of wffs.

Chapter 7

Predicate Logic: Symbolization and Translation

7.1 Why We Need Predicate Logic

Logic can be compared to a computer game in which there is a hierarchy of levels: once you have conquered one level, you move on to the next. In this chapter we move to a higher level. It builds upon what we have learned before, but gives us tools with which to study the validity of a very large class of widely-used arguments that so far we do not know how to analyze. Consider the following elementary argument:

All philosophers are absent-minded.
Kent is a philosopher.
∴ Kent is absent-minded.

Intuitively, this argument seems valid. (Whether or not it is sound is another question. I have, in my roughly twenty-year career, known two or three philosophers who were not absent-minded, so I think that in fairness to the profession I would have to say that the first premiss is false.) However, we have no way of representing the validity of this argument using the tools of propositional logic we have learned so far, although it does seem as if it might have something to do with MP. Now consider a slight generalization of this argument:

All philosophers are absent-minded.
There are some philosophers.
∴ There are some who are absent-minded.

This again seems obviously valid, but, again, we have no tools with which to express its validity in any precise way. Arguments like these seem to have a structure suggesting some sort of generalization of propositional structures with which we are already familiar. However, all that propositional logic as such could do for us would be to translate the sentences in these arguments into logically independent symbols, and so both arguments would have the generally invalid form P, Q ⊢ R. To represent the logical structure of these arguments we therefore need the following tools:


1. A way to symbolize properties of individual persons or things (so that we can say something like "Kent is a philosopher" or "Mars is a planet");

2. Ways to express the logic of quantifiers such as "all" and "some."

7.2

Predicate Logic Notation

In the following I introduce the main types of logical structures that occur in predicate translation.

7.2.1

Universe of Discourse

Predicate notation is always assumed to refer to a domain or universe of discourse U. This could be the set of all entities whatsoever, the set of all people, the set of all students in Logic 2003A, the set of all natural numbers, or any other well-deﬁned collection of objects, persons, or entities. Sometimes the universe of discourse will be understood implicitly; sometimes it will need to be speciﬁed explicitly; and sometimes it does not need to be mentioned at all. Usually we will assume that U is not empty.

7.2.2

Terms and Variables

We first need to introduce symbols to denote elements of the universe of discourse.

Proper names are letters which serve the same function as names such as "Susan" or "the planet Mars" in a natural language; that is, they denote individual entities within U. By convention, proper names are usually written using lower-case letters chosen from the beginning or middle of the alphabet. For instance, we could set k := "Kent" and m := "the planet Mars". If there are a large number of proper names, it may be useful to subscript them, as n1, n2, . . ..

Arbitrary constants, also called arbitrary names, are letters that behave much like proper names, or if you like as if they were proper names; but they are supposed to refer to individuals picked from U at random, about which we know nothing that distinguishes them from any other member of U. By contrast, if we use a letter as a proper name, it is assumed that we know something that distinguishes the individual to which the letter refers from all other members of U. Letters for arbitrary constants are, like proper names, usually taken from the beginning to the middle of the alphabet, and, again like proper names, can be subscripted as a1, a2, . . . if it is convenient to do so. Arbitrary constants are very important in predicate natural deduction, and, as we shall see in the next chapter, their use requires some care. Proper names and arbitrary constants are collectively called terms.

Variables are letters which serve as place-holders where proper names or arbitrary constants can be inserted in a formula. They serve a function very similar to that served by variables in algebra. By convention, variables are usually written using lower-case letters from the end of the alphabet, such as x, y, z, t, . . .. As with constants and names, they can be subscripted if it is convenient to do so; for instance, we could have a set of variables of the form x1, x2, . . ..
A variable is not a term, but rather a place-holder where a term can be inserted.

7.2.3 Monadic Predicates

Suppose we want to translate "Mars is a planet" into predicate notation. We now know that we can treat "Mars" as a proper name. We also need a way to say that a term has a property, such as the property of being a planet. To say things like this, we introduce predicate letters. These will usually be written as upper-case letters like A, B, F, G, P, and so forth, and they can, if necessary, be subscripted as well. We define predicate letters by expressions such as P x := "x is a planet". The terminology for properties that apply to single terms is diverse: they can be called predicates, properties, qualities, attributes, monadic predicates, unary relations, or 1-place relations.

7.2.4 Relations

Predicates that apply to two or more terms are called relations (also polyadic relations, n-ary relations, or n-place relations). Here are some examples:

T xy := "x is taller than y"
Bxyz := "y is between x and z"

Proper names or arbitrary constants can be substituted for variables in relation-expressions. For example, if we take (as before) k := "Kent" and p := "Paul", then we can say T pk, meaning "Paul is taller than Kent". Note that the order in which the variables are expressed in a relation is usually crucial to the meaning of the proposition: the proposition T kp is false, while T pk happens to be true. In principle the notation can allow for relations with any number of arguments, although in this course we will rarely consider relations with more than two arguments.

This way of expressing relational concepts seems to be very obvious, but its introduction was a major step in the history of logic. Predicate logic thus allows us to express much more complex relational concepts than could easily be expressed in any natural language. There is a detailed theory of relations, which studies the properties of special classes of relations, such as symmetric relations, equivalence relations, and so forth. We will not do relation theory in this course (except in some C exercises); Lemmon's Beginning Logic [17] contains an excellent introduction to relation theory.

Alternate Notation. Some books write two-place relations Rxy as xRy, which conforms more closely to standard usage in English. (Unless we are Yoda of Star Wars, we would be more likely to say "x is taller than y" than "Taller is x than y.") However, we will stick with the notation in which the variables are to the right of the relation letter (call this Yoda-notation, if you like), since it is more easily generalized.

7.2.5 Propositions and Propositional Functions

Expressions such as P x or T xy are called open sentences or (less correctly) propositional functions, especially in older literature in analytical philosophy. In themselves they have no truth value since they are, in effect, incomplete sentences. We can't evaluate "x is a philosopher" until we know who or what x is supposed to be. Note carefully that an expression like T xa is an open sentence, not a proposition, since it contains one unassigned variable. When we substitute proper names or arbitrary constants for the variables in open sentences we get propositions, which can be presumed to be either T or F, and which can be combined using the familiar connectives that we use in propositional logic. For example, if P x := "x is a philosopher", Gx := "x is a game-theorist", k := "Kent", and p := "Paul", then we can write sentences such as P k & Gp ("Kent is a philosopher and Paul is a game-theorist"), Gk → −P p ("Kent is a game-theorist only if Paul is not a philosopher"), and so forth.

Strictly speaking, calling expressions like T xa or P x "propositional functions" is a misnomer, since a function is not an expression but a mapping from a domain to a range such that every set of input values from the domain maps into one and only one element of the range. Properly speaking, propositional functions map elements of U into the set of all possible propositions; propositions, in turn, map into truth values, and so propositional functions in effect map elements of U into truth values. Despite this complication, it is often convenient to refer to expressions like P x or T xy as propositional functions, just as it is convenient in ordinary mathematics to refer to an expression such as x² as a function of x, even though properly speaking it is just part of the function that takes real numbers into their squares.

7.2.6 Quantifiers

Quantifiers are powerful symbolic tools that allow us to refer to collections where we do not or cannot know all the members in advance. These could be large finite collections where it would be impractical to name all members even if we knew them, or (especially in mathematics) infinite collections where it would be impossible to name all the members even if we knew what they were. The introduction of quantifiers (principally by Gottlob Frege) toward the end of the 19th century was one of the major innovations in the history of logic.

The Universal Quantifier

We always quantify over a domain or universe of discourse U. Suppose that U is the set of students in Logic 2003A in the Spring Term of 2004. Define Sx := "x studies logic". If we had the class list, we could express the proposition "Everyone (in U) studies logic" as

S(Lindsey) & S(Ben) & . . . & S(Craig)   (7.1)

for all of the approximately 55 students in the class. For a large class it is quite inconvenient to write a universal claim this way, and impossible if we do not know everyone's name, or if the membership in the class was unknown or

undetermined, or if it was infinite. We can get around these difficulties by introducing the universal quantifier ∀x, which expresses the concept that everything in the domain of discourse has a certain property, and writing

∀xSx.   (7.2)

This can be read in several ways, the most common of which are the following: "Everyone studies logic." "For all x, Sx." "For any x, Sx." A universal statement is therefore equivalent to a conjunction over all members of the domain of discourse.

If we know in advance that U is the membership of that particular class, then we can write "Everyone studies logic" as in 7.2 above. If U were a larger set, such as the set of all students at a certain university, or the set of all people, then to express the same fact — that everyone in a certain class studies logic — we would have to write

∀x(Lx → Sx),   (7.3)

where Lx := "x is in Logic 2003A". The latter would be read "Everyone in Logic 2003A studies logic" or "If anyone is in Logic 2003A, then that person studies logic."

The Existential Quantifier

Suppose now we wanted to express the claim that someone in Logic 2003A is wearing blue jeans. Let Bx := "x wears blue jeans". If we know in advance that U is a certain logic class, and if we had the class list, then we could write

B(Lindsey) ∨ B(Ben) ∨ . . . ∨ B(Craig).   (7.4)

As before, if the class is large it would be quite inconvenient to write this all out, and impossible if we do not know everyone's name, or if the membership in the class was unknown, or if it was infinite. We get around these difficulties by introducing the existential quantifier ∃x, and write simply

∃xBx.   (7.5)

This can be read in various ways: "There exists something such that it wears blue jeans." "Someone wears blue jeans." "At least one person wears blue jeans." "Someone in Logic 2003A wears blue jeans." If U were a larger set than the membership of Logic 2003A, then we would have to write

∃x(Lx & Bx).   (7.6)

Note carefully: In logic, the word "some" means nothing more than "at least one" or (in the case of continuous quantities such as liquids) "an amount greater than zero." The statement "Some mammals are warm-blooded" is therefore consistent with the possibilities either that there is only one warm-blooded mammal or that all mammals are warm-blooded (and in fact the latter statement happens to be true, although as logicians we do not concern ourselves with the mere facts of biology).

It is important to remember that the meaning of a quantificational statement is always relative to a universe of discourse U, although U does not always need to be mentioned.
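On a finite universe of discourse the quantifiers really are generalized conjunctions and disjunctions, and this can be made concrete in a few lines of Python (an illustrative sketch; the miniature class list and predicates are invented for the example):

```python
# A small finite universe of discourse U.
U = ['Lindsey', 'Ben', 'Craig']

# Predicates are Boolean-valued functions on U.
studies_logic = lambda x: True          # Sx: everyone in this U studies logic
wears_jeans   = lambda x: x == 'Ben'    # Bx: only Ben wears blue jeans

# The universal ∀xSx behaves like the big conjunction S(Lindsey) & S(Ben) & S(Craig);
# the existential ∃xBx behaves like the big disjunction B(Lindsey) ∨ B(Ben) ∨ B(Craig).
forall_S = all(studies_logic(x) for x in U)
exists_B = any(wears_jeans(x) for x in U)

print(forall_S, exists_B)  # True True
```

Note that `exists_B` is true even though only one member of U wears jeans — "some" means nothing more than "at least one."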

Alternate Notation

Quantifiers are often written in enclosing brackets, such as (∀x)F x. Many older logic books write the universal quantifier without the ∀ symbol, so that (x)F x would be read as "for all x, F x". A few books (such as the widely-read Metalogic by G. Hunter [11]) use so-called Stanford notation, wherein universals are represented by ∧ and existentials by ∨. This is inspired by the fact that universals and existentials are generalized "ands" and "ors" respectively.

7.2.7 Order

The predicate logic we study in this book is said to be first-order. This means that the variables in the quantifiers apply only to elements of the universe of discourse U. In second-order logic we quantify over predicates and relations. This introduces further complications that are well beyond the scope of this course.

Bound and Free Variables

Variables in quantificational formulas are either bound or free. Bound variables are those that are linked to a quantifier, while free variables are those that do not appear in a quantifier. In ∀xP x and ∃yT y the variables are bound, and the expressions are propositions (which say, respectively, "Everything is P" and "Something is T"). In P x, the variable x is free, and the expression is an open sentence. In ∃yBayx, a is an arbitrary constant or name, y is bound, but x is unbound; this expression, therefore, is an open sentence. If a quantificational expression contains a free variable, it is an open sentence; it is a proposition only if all variables are bound.

Scope of Quantifiers

It is possible to give a formal recursive definition of a well-formed formula of predicate logic (see Lemmon [17], for instance), but we will not bother to do so here since it is generally obvious which formulas are properly formed and which are not. Here are the main things to keep in mind: quantifiers, like negation signs, apply to the shortest possible string that follows them. Thus, in ∃x(F x & Gx), the quantifier applies to all of (F x & Gx), while in ∃xF x & Gx the quantifier binds only the x in F x, so that the expression is an open sentence with no truth value. Similarly, in −(P a & Qb) the whole expression (P a & Qb) is negated, while in −P a & Qb only the sentence P a is negated. Also, a quantifier only binds variables within its scope, and variables can be used over again if there is no ambiguity about which quantifier they link with. Thus ∃xF x & ∃xGx means exactly the same thing as ∃xF x & ∃yGy.

7.2.8 Categorical Forms

Universal affirmative (A): All A are B.

∀x(Ax → Bx)   (7.7)

Universal negative (E): No A are B.

∀x(Ax → −Bx)   (7.8)

Particular affirmative (I): Some A are B.

∃x(Ax & Bx)   (7.9)

Particular negative (O): Some A are not B.

∃x(Ax & − Bx)   (7.10)

Note carefully that a universal statement does not involve an existence claim. That is, ∀x(Ax → Bx) does not claim that there are any A which are B. It says merely that if there are any As, then they are B. By the truth table for the material conditional →, the conditional in the quantifier is true even if there are no As. Such universals are said to be vacuously true. (Thus, we can truly say "All orcs are nefarious" without, fortunately, there being any orcs in the real world.)

Existential claims such as "Some animals are mammals" do assert existence. The latter statement can also be read "There are (or exist) some animals which are mammals." A negative existential such as "Some animals are not reptiles" also asserts existence, for it is translated as ∃x(Ax & − Rx). If we specifically wanted to express the true statement that "Some but not all animals are mammals," then we would have to write

∃x(Ax & M x) & − ∀x(Ax → M x),   (7.11)

or equivalently

∃x(Ax & M x) & ∃x(Ax & − M x),   (7.12)

where M x := "x is a mammal" and Ax := "x is an animal".

7.2.9 Don't Be Fooled!

Translation from natural languages such as English into predicate notation requires some skill and attention. The translations we have described here so far are not inflexible rules to be followed blindly, but rather guidelines to be used with good judgement. You have to be aware of the nuances of your language and of the intended logic of the statements you are translating.

Here is an example where "and" should not be translated as &: "All dogs and cats are carnivorous." Let M x := "x is carnivorous". We might be tempted to translate this as

∀x((Dx & Cx) → M x).   (7.14)

But this cannot be right, since it literally says, "If anything is both a dog and a cat, then it is carnivorous." And that is not quite what we wanted to say. What we wanted to say, of course, is "All dogs are carnivorous and all cats are carnivorous." This can be written as

∀x(Dx → M x) & ∀x(Cx → M x).   (7.15)
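The point about vacuous truth can be checked directly over a finite domain: with no orcs in U at all, the A-form "All orcs are nefarious" comes out true, while the corresponding I-form is false, since existentials assert existence. A Python sketch (the domain and predicates are invented for illustration):

```python
U = ['fido', 'felix', 'tweety']   # a universe containing no orcs
is_orc       = lambda x: False    # Ox
is_nefarious = lambda x: False    # Nx

# A-form ∀x(Ox → Nx): vacuously true, because the conditional
# Ox → Nx holds for every x when Ox is always false.
a_form = all((not is_orc(x)) or is_nefarious(x) for x in U)

# I-form ∃x(Ox & Nx): false, because an existential asserts existence.
i_form = any(is_orc(x) and is_nefarious(x) for x in U)

print(a_form, i_form)  # True False
```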

In the next chapter we will learn techniques that will enable us to show that this is equivalent to the following:

∀x((Dx ∨ Cx) → M x).   (7.16)

This can be read as "If anything is either a dog or a cat, then it is carnivorous," and that is what we actually wanted to say.

7.2.10 Combinations of Properties

The basic categorical forms can be easily extended to describe situations involving combinations of properties. It is impossible to anticipate all possible cases, but here are a few typical examples that will serve as guides:

1. All philosophers at the University of Lethbridge are either brilliant or witty.
∀x((P x & Lx) → (Bx ∨ W x)).

2. Some philosophers at the University of Lethbridge are brilliant but not witty.
∃x(P x & Lx & Bx & − W x).

3. No philosopher at the University of Lethbridge is neither brilliant nor witty.
∀x((P x & Lx) → − − (Bx ∨ W x)).
By Double Negation, this is of course equivalent to ∀x((P x & Lx) → (Bx ∨ W x)).

7.2.11 Use of "Except"

Propositions of the general form "all A except B are C" are called exceptives. Not all logic texts agree on how they should be translated. Suppose we want to translate into symbols the sentence "All dogs except chihuahuas like the cold," to borrow Lemmon's example (see his Chapter 3, §1). He expresses this as

∀x((Dx & − Cx) → Lx).   (7.17)

Literally, this says that if anything is a dog but not a chihuahua, then it likes the cold. Unfortunately, this does not tell us what is the case if a dog is a chihuahua. Lemmon must have been aware of this, and so I assume that his translation reflected a preference for logical minimalism — that is, translating into symbols in a way that requires as few commitments as possible. This may not be a bad practice in general. However, most logic texts now translate exceptives in such a way as to make the intention as explicit as possible. For instance, to say that all dogs except chihuahuas like the cold is the same thing as saying that if anything is a dog, then it is a chihuahua if and only if it does not like the cold:

∀x(Dx → (Cx ↔ −Lx)).   (7.18)

It can be shown that this is equivalent to

∀x((Dx & − Cx) → Lx) & ∀x((Dx & Cx) → −Lx),   (7.19)

and Lemmon's sentence above would now be translated in this more explicit way.

7.2.12 Use of "Only"

The key to these is to recall that P → Q can be translated not only as "If P then Q," but also as "P only if Q." So we would therefore translate "Only the mongoose likes eating cobras" as

∀x(Cx → M x),   (7.20)

where Cx := "x likes eating cobras" and M x := "x is a mongoose". Negative exceptives can be translated in a similar way. For instance, "No animal except the mongoose likes to eat cobras" could be translated as

∀x(Ax → (M x ↔ Cx)).   (7.21)

Examples such as exceptives illustrate the fact that the aim of symbolization is not to translate the natural language expression literally into symbols, but to analyze and express the implicit logical intention of the natural language expression — if one can be found (since in natural language we are often not very logically precise at all). In order to translate a sentence correctly, we have to figure out its logic, not merely transliterate words into symbols without thinking about their meanings.

7.2.13 Variants of Categorical Forms Using Relations

We can construct innumerable variations of the basic categorical forms using relations. These will often involve multiple quantification. Here are a few illustrative examples, which are, again, in no way meant to exhaust all the possibilities:

1. Some dogs are smarter than some people: ∃x∃y(Dx & P y & Sxy).

2. Any cat is smarter than any dog: ∀x∀y((Cx & Dy) → Sxy).

3. Every A has relation R to some B: ∀x∃y((Ax & By) → Rxy).

Note that the order of quantifiers is crucial to the meaning. Let U be the set of all people, and P xy := "x is the parent of y". Then ∀x∃yP xy says that everyone is the parent of someone, while ∃x∀yP xy says that at least one person is the parent of everyone — obviously a very different claim.

7.3 Exercises

In these exercises, use the following symbolizations:

Ax := "x is an aardvark"
V x := "x is from the Transvaal"
F x := "x is an ant"
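The difference that the order of quantifiers makes can be seen directly on a tiny domain. Here is a hedged Python sketch (the three-person universe and the parent relation are invented purely for illustration):

```python
U = ['alice', 'bob', 'carol']

# Pxy: x is the parent of y — an invented relation for the example.
parent = {('alice', 'bob'), ('bob', 'carol'), ('carol', 'alice')}
P = lambda x, y: (x, y) in parent

# ∀x∃yPxy: everyone is the parent of someone — true for this relation.
forall_exists = all(any(P(x, y) for y in U) for x in U)

# ∃x∀yPxy: someone is the parent of everyone — false here.
exists_forall = any(all(P(x, y) for y in U) for x in U)

print(forall_exists, exists_forall)  # True False
```

Swapping the quantifiers changes which loop is outermost, and with it the truth value — a direct picture of why ∀x∃yP xy and ∃x∀yP xy make very different claims.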

Sx := "x likes to sleep in the sun"
Cx := "x is a cobra"
Dx := "x has a strange diet"
T x := "x is a termite"
Lx := "x is an animal"
M x := "x is a mongoose"
Exy := "x likes to eat y"
T xy := "x wins the Lou Marsh trophy in y"
m := "Mike Weir"
n := "2003"

1. Translate the following sentences into predicate notation:

(a) All aardvarks like eating any ants.

(b) Only mongooses like to eat cobras.

(c) All aardvarks except those from the Transvaal like to sleep in the sun.

(d) No animals except aardvarks like to eat termites.

(e) No mongoose is from the Transvaal.

(f) Mike Weir won the Lou Marsh Trophy in 2003.

2. Translate the following formulas into sensible English:

(a) ∀x((M x & V x) → Dx)

(b) ∃x∃y(Cx & M y & Sx & Sy)

(c) ∀x((M x ∨ Ax) → ∀y(F y → Exy))

(d) ∀x((Cx ∨ M x) → −V x)

Chapter 8

Predicate Logic: Natural Deduction

Natural deduction for first-order predicate logic consists of ordinary natural deduction plus four special rules that allow us to take off quantifiers and put them back on again.

8.1 Duality Relations

Before stating the introduction and elimination rules for quantifiers, it is useful to state the so-called Duality Relations, which relate existentials and universals. Later in this chapter we will see how formally to prove these relations; however, they can be seen intuitively to follow from the basic meanings of the quantifiers.

∀xF x   ↔   −∃x − F x
∃xF x   ↔   −∀x − F x
∀x − F x   ↔   −∃xF x
∃x − F x   ↔   −∀xF x

When used to justify a move in a deduction, an instance of any one of these rules can be referred to simply as Duality.

8.2 Introduction and Elimination Rules

We will introduce the rules for the introduction and elimination of quantifier symbols in order of their difficulty. Examples will be given below.

8.2.1 Universal Elimination (UE)

From ∀xF x we may infer F a, where F x is any propositional function of x (possibly including other bound quantifiers) and a is any term. This rule simply expresses what is naturally implied by a universal claim, which is that it applies to anything in the universe of discourse. (UE is sometimes called "Universal Instantiation," because we use it to write an instance of a universal claim.)
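The Duality Relations stated at the start of this chapter can also be verified semantically by brute force: over a small finite domain, both sides of each biconditional agree for every possible extension of F. A minimal Python check (an illustrative sketch, not part of the text's formal apparatus):

```python
from itertools import product

U = [0, 1, 2]  # a small finite universe of discourse

# Try every possible extension of the predicate F over U.
for extension in product([True, False], repeat=len(U)):
    F = dict(zip(U, extension))

    # ∀xFx ↔ −∃x−Fx
    assert all(F[x] for x in U) == (not any(not F[x] for x in U))
    # ∃xFx ↔ −∀x−Fx
    assert any(F[x] for x in U) == (not all(not F[x] for x in U))
    # ∀x−Fx ↔ −∃xFx
    assert all(not F[x] for x in U) == (not any(F[x] for x in U))
    # ∃x−Fx ↔ −∀xFx
    assert any(not F[x] for x in U) == (not all(F[x] for x in U))

print("all duality relations verified on this domain")
```

Of course, a finite check like this is no substitute for the formal proofs given later in the chapter; it only illustrates why the dualities are intuitively compelling.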



Example: From the statement "All philosophers are witty" and the statement "Paul is a philosopher", we can validly conclude that Paul is witty:

1      (1)  ∀x(P x → W x)    A
2      (2)  P p              A
1      (3)  P p → W p        1 UE
1,2    (4)  W p              2,3 MP

8.2.2

Universal Introduction (UI)

This rule allows us to move from a proposition of the form F a, where a is an arbitrary constant, to a universal ∀xF x. This is valid, however, only if a is genuinely arbitrary. We can assure ourselves of that if we make sure that a was not mentioned in the premisses. (Remember, in deductive logic we have no access to any information that is not explicitly given in the premisses.) UI is called "Universal Generalization" in many logic texts. Here is a valid use of UI.

Example: Symbolize the following argument, and show that it is valid: All horses are mammals; all mammals are warm-blooded; therefore, all horses are warm-blooded. Using obvious definitions of the predicate symbols, we can symbolize this as follows:

46  ∀x(Hx → M x), ∀x(M x → W x) ⊢ ∀x(Hx → W x)

1      (1)  ∀x(Hx → M x)    A
2      (2)  ∀x(M x → W x)   A
1      (3)  Ha → M a        1 UE
2      (4)  M a → W a       2 UE
1,2    (5)  Ha → W a        3,4 HS
1,2    (6)  ∀x(Hx → W x)    5 UI
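Sequent 46 can also be checked semantically: over a finite domain, every way of assigning extensions to H, M, and W that makes both premisses true also makes the conclusion true. Here is a brute-force Python sketch of that model check (my own illustration, not the text's proof method — the deduction above is the official demonstration):

```python
from itertools import product

U = [0, 1]  # a two-element domain is enough to illustrate the check

# Enumerate all extensions of H, M, W over U.
for hs, ms, ws in product(product([True, False], repeat=len(U)), repeat=3):
    H, M, W = dict(zip(U, hs)), dict(zip(U, ms)), dict(zip(U, ws))
    premiss1   = all((not H[x]) or M[x] for x in U)  # ∀x(Hx → Mx)
    premiss2   = all((not M[x]) or W[x] for x in U)  # ∀x(Mx → Wx)
    conclusion = all((not H[x]) or W[x] for x in U)  # ∀x(Hx → Wx)
    if premiss1 and premiss2:
        assert conclusion  # no countermodel on this domain

print("no countermodel found")
```

Unlike the natural-deduction proof, a finite search like this only rules out countermodels on the chosen domain; the deduction establishes validity outright.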

The arbitrary constant a introduced in line 3 does not appear in the premisses. The use of UI in moving from line 5 to 6 is therefore valid. For those who have studied categorical logic, this sequent is the familiar syllogism in Barbara: all A’s are B’s, all B’s are C’s; therefore, all A’s are C’s. Restriction on UI We cannot use UI on a term that was introduced in an assumption.

8.2.3

Fallacy Alert!

If we attempt to universalize on a premiss that contains the arbitrary constant, we risk committing the fallacy of Hasty Generalization. For instance, from the fact that Paul is egregiously bald, we cannot validly infer that everyone is egregiously bald:

1      (1)  Bp      A
1      (2)  ∀yBy    1 UI (?!)

This is invalid because the term p appears in an assumption.



8.2.4

Existential Introduction (EI)

Existential Introduction, like Universal Elimination, is a very safe rule to use, in that we do not have to worry about any special restrictions on the terms involved. EI states that from F a, where a is any term (proper name or arbitrary constant), we may infer ∃xF x. For example, from the statement that Paul is a witty philosopher, we can validly conclude that something is a witty philosopher (or, in other words, that there exists a witty philosopher):

1      (1)  P p & W p        A
1      (2)  ∃x(P x & W x)    1 EI

There are no restrictions on EI.

8.2.5

Existential Elimination (EE)

The general pattern of EE is that we infer a conclusion P from an existential claim ∃xF x (possibly together with other premisses). Just as an existentially quantified statement is a generalization of an OR statement, Existential Elimination is a generalization of vE. Like vE, as well, it is a bit tricky to use, but most of the difficulties are matters of bookkeeping. In vE we have to examine each possible inferential path, starting by temporarily assuming each disjunct and seeing if it leads to the desired conclusion; in EE, we examine a typical inferential path defined by introducing an arbitrary constant. A typical example of the use of EE will make this clearer. Suppose we want to establish the validity of the following argument: All philosophers are wise; there are some philosophers; therefore, something is wise. We proceed as follows:

47  ∀x(P x → W x), ∃xP x ⊢ ∃xW x

1      (1)  ∀x(P x → W x)   A
2      (2)  ∃xP x           A
3      (3)  P a             A (EE)
1      (4)  P a → W a       1 UE
1,3    (5)  W a             3,4 MP
1,3    (6)  ∃xW x           5 EI
1,2    (7)  ∃xW x           2,3,6 EE

The object is to prove the desired conclusion from the existential in line 2, in combination with the assumption at line 1. At line 3 we start a subproof that is similar in function to the subproofs used in vE. In 3 we introduce an arbitrary constant a. It is crucial (as we shall see) that the arbitrary constant introduced here be a new constant that has not appeared in any earlier lines (because that would, in eﬀect, remove its arbitrariness). In eﬀect, we are saying that we will suppose that a is the name of the entity referred to in line 2. In line 4 we can instantiate on line 1 with UE in terms of a because UE can be applied to any term whatsoever. We then do a little elementary propositional logic, and then at line 6 we get the desired conclusion by using EI to go from line 5. However, we can’t stop at this point, because line 6 still depends upon the temporary or working assumption on



line 3. We therefore have to discharge line 3. EE allows us to do this; on the next line, we finally arrive at the conclusion dependent only on lines 1 and 2, as desired. Note carefully the form of the justification given at the end of line 7. The first number, 2, is the line number of the existential statement on which we run the EE. The next number, 3, is the line number of the working assumption; this assumption is discharged at this point. The final number, 6, is the line number at which we arrive at the desired conclusion. This is the general pattern of EE.

This notation may seem a little tricky, but what we are doing is actually very simple, and is a pattern of reasoning that we often use in day-to-day discussions. Here is the same argument cast into informal language: Suppose we accept that all philosophers are wise and that there are some philosophers. Suppose also that a is a typical philosopher. (Note that this is an assumption; we are picking an arbitrary name and supposing that it is attached to one of the philosophers whose existence is claimed in the premisses.) Then a must be wise; therefore, someone is wise.

Restrictions on EE

There are two restrictions on the use of EE.

1. The arbitrary constant cannot appear in a premiss of the argument, or in any previous assumption unless it has been discharged.

2. The arbitrary constant cannot appear in the conclusion of the EE subproof.

8.2.6

Fallacy Alert!

There are several types of fallacies that we can fall into if we do not obey the restrictions on the use of the arbitrary constant.

Fallacies from Constants in a Premiss

Here is a fallacy that results from running an EE with a constant that appears in a premiss:

1      (1)  ∃xF x               A
2      (2)  Ga                  A
3      (3)  F a                 A (EE)
2,3    (4)  F a & Ga            2,3 &I
2,3    (5)  ∃x(F x & Gx)        4 EI
1,2    (6)  ∃x(F x & Gx) (?!)   1,3,5 EE (?)

The mistake here is to use the arbitrary constant a from line 2 in the assumption in line 3. The use of EI in going from 4 to 5 is in itself correct. Intuitively, this “argument” says that from the fact that something is F , and that a is G, we can conclude that the same thing is both F and G. This is as if we said that from the fact that there exists a man, and from the fact that our friend Susan is a woman, that Susan is both woman and man. By contrast, here is a slightly diﬀerent sequent that is, in fact, valid:

48  ∃xFx, Ga ⊢ ∃xFx & ∃xGx

1      (1)  ∃xFx            A
2      (2)  Ga              A
2      (3)  ∃xGx            2 EI
1,2    (4)  ∃xFx & ∃xGx     1,3 &I

Here is an instance of this sequent in words: from the fact that there exists a man, and from the fact that Susan is a woman, we can validly conclude that there exists a man and that there exists a woman.

Fallacies Combining EE with UI

The example given in the "Fallacy Alert!" for UI, involving Hasty Generalization, is obviously fallacious. This "argument" states that from the fact that something is F we can conclude that everything is F, and this is clearly invalid. But slightly more subtle fallacies are possible if we are not careful to obey the prohibition against using UI on constants that appeared in an assumption. Consider the following "derivation":

1      (1)  ∃xFx            A
2      (2)  Fa              A (EE)
2      (3)  ∀xFx            2 UI (?)
1      (4)  ∀xFx  (?!)      1,2,3 EE

From the assumption that something is F we seem again to have arrived at the conclusion that everything is F! In fact, the use of EE at line 4 is correct. The fallacy here lies in having universalized on the assumption in line 2. This shows that the restriction on universalizing on constants that appear in a premiss applies to the assumptions that start a subproof as well as to the main premisses of a sequent.

Fallacies from Constants in the Conclusion of the Sub-Proof

Here is a fallacious example which shows that the arbitrary constant cannot appear in the conclusion of the EE subproof:

1      (1)  ∃xFx            A
2      (2)  Fa              A (EE)
1      (3)  Fa              1,2,2 EE (?)
1      (4)  ∀xFx            3 UI

The mistake was to use a in the conclusion of the subproof at line 3. The final move, from line 3 to 4 using UI, is in itself correct.

8.3 Summary of Introduction and Elimination Rules for Quantifiers

Universal Elimination (UE): From ∀xFx we can infer Fa, where a is any term. Restrictions: none.

Universal Introduction (UI): From Fa, where a is an arbitrary constant, we can infer ∀xFx. Restrictions: the arbitrary constant a cannot have been mentioned in a premiss.

Existential Introduction (EI): From Fa, where a is a term, we can infer ∃xFx. Restrictions: none.

Existential Elimination (EE): We derive a conclusion P from ∃xFx and possibly other premisses, using a subproof in which we make an assumption of the form Fa. Restrictions:

• The arbitrary constant a cannot appear in a previous undischarged premiss.
• The arbitrary constant a cannot appear in the conclusion of the EE subproof.

8.4 Examples

In this section I develop a number of typical quantificational arguments to show the application of the Introduction and Elimination Rules.

49  ∀x(Hx → Mx), ∀x(Mx → −Cx) ⊢ ∀x(Hx → −Cx)

1      (1)  ∀x(Hx → Mx)     A
2      (2)  ∀x(Mx → −Cx)    A
1      (3)  Ha → Ma         1 UE
2      (4)  Ma → −Ca        2 UE
1,2    (5)  Ha → −Ca        3,4 HS
1,2    (6)  ∀x(Hx → −Cx)    5 UI

Here is an instance of this sequent in words: All horses are mammals; no mammals are chitinous; ∴ no horses are chitinous.

50  ∀x((Dx ∨ Fx) → Cx), −Cb ⊢ −Db

1      (1)  ∀x((Dx ∨ Fx) → Cx)    A
2      (2)  −Cb                   A
1      (3)  (Db ∨ Fb) → Cb        1 UE
1,2    (4)  −(Db ∨ Fb)            2,3 MT
1,2    (5)  −Db & −Fb             4 deM
1,2    (6)  −Db                   5 &E

An instance in words:

All dogs and cats are carnivorous; Bessy is not carnivorous; ∴ Bessy is not a dog.

51  ∀x(Rx → Vx), ∃xRx ⊢ ∃xVx

1      (1)  ∀x(Rx → Vx)     A
2      (2)  ∃xRx            A
3      (3)  Ra              A (EE)
1      (4)  Ra → Va         1 UE
1,3    (5)  Va              3,4 MP
1,3    (6)  ∃xVx            5 EI
1,2    (7)  ∃xVx            2,3,6 EE

Here is an instance in words: Whoever wears red has a vivid colour sense; someone is wearing red; ∴ someone has a vivid colour sense.

52  ∀x(Rx → (Vx & Ix)), ∃xRx ⊢ ∃xIx

1      (1)  ∀x(Rx → (Vx & Ix))    A
2      (2)  ∃xRx                  A
3      (3)  Ra                    A (EE)
1      (4)  Ra → (Va & Ia)        1 UE
1,3    (5)  Va & Ia               3,4 MP
1,3    (6)  Ia                    5 &E
1,3    (7)  ∃xIx                  6 EI
1,2    (8)  ∃xIx                  2,3,7 EE

Here is an instance in words: Whoever wears red has a vivid colour sense and imagination; someone here is wearing red; ∴ someone here has imagination.

53  ∀x(Rx → (Vx & Ix)), ∀x(Ix → Gx), ∃xRx ⊢ ∃xGx

1      (1)  ∀x(Rx → (Vx & Ix))    A
2      (2)  ∀x(Ix → Gx)           A
3      (3)  ∃xRx                  A
4      (4)  Ra                    A (EE)
1      (5)  Ra → (Va & Ia)        1 UE
1,4    (6)  Va & Ia               4,5 MP
1,4    (7)  Ia                    6 &E

2        (8)   Ia → Ga    2 UE
1,2,4    (9)   Ga         7,8 MP
1,2,4    (10)  ∃xGx       9 EI
1,2,3    (11)  ∃xGx       3,4,10 EE

Instance in words: Whoever wears red has a vivid colour sense and imagination; whoever has imagination will go far in life; someone is wearing red; ∴ someone will go far in life.

8.5 Distribution Rules for Quantifiers

There are a number of sequents which govern the distribution of quantifiers over formulas. Some require care in their application.

54  ∀x(Fx & Gx) ⊣⊢ ∀xFx & ∀xGx

The proof of this interderivability result is fairly straightforward, and is left as an exercise (see Review Exercises). Note, however, that the following, which is very similar, is valid:

55  ∀xFx ∨ ∀xGx ⊢ ∀x(Fx ∨ Gx)

If everything is red or everything is green, then everything is red or green. (See Review Exercises.) The converse, however, is invalid:

∀x(Fx ∨ Gx) ⊢ ∀xFx ∨ ∀xGx  (?!)

A simple example shows that this is invalid: it is true that all natural numbers are either even or odd, but it is false that either all natural numbers are even or all natural numbers are odd.

56  ∃x(Fx ∨ Gx) ⊣⊢ ∃xFx ∨ ∃xGx

(See Review Exercises.)

57  ∃x(Fx & Gx) ⊢ ∃xFx & ∃xGx

(See Review Exercises.) The converse is invalid: from the fact that there are some numbers which are even and some numbers which are odd, we cannot conclude that there are numbers which are both even and odd.

8.6 Proofs of Duality Rules

The Duality Rules make intuitive sense from the meaning of the quantificational symbols. But they can be proven using the introduction and elimination rules for predicate logic. I will prove the first one here; the remaining proofs are covered in the Review Exercises. Some of these derivations, like many proofs of propositions that seem to be "obvious," are rather subtle.
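The invalid converses noted in 8.5 can be confirmed with the very even/odd countermodel the text describes. Here is a minimal Python sketch (the variable names are mine) evaluating both invalid directions over a fragment of the natural numbers:

```python
domain = range(10)          # a fragment of the natural numbers
F = lambda n: n % 2 == 0    # Fx := x is even
G = lambda n: n % 2 == 1    # Gx := x is odd

# forall x (Fx v Gx) is true, but (forall x Fx) v (forall x Gx) is false:
univ_of_disj = all(F(n) or G(n) for n in domain)
disj_of_univ = all(F(n) for n in domain) or all(G(n) for n in domain)

# (exists x Fx) & (exists x Gx) is true, but exists x (Fx & Gx) is false:
conj_of_exist = any(F(n) for n in domain) and any(G(n) for n in domain)
exist_of_conj = any(F(n) and G(n) for n in domain)

print(univ_of_disj, disj_of_univ)    # True False
print(conj_of_exist, exist_of_conj)  # True False
```

One countermodel is enough to show invalidity; no amount of derivation effort could have succeeded on these converses.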

58  ∀xFx ⊣⊢ −∃x−Fx

(a) ∀xFx ⊢ −∃x−Fx

1      (1)  ∀xFx                A (RAA!)
2      (2)  ∃x−Fx               A (RAA)
3      (3)  −Fa                 A (EE)
1      (4)  Fa                  1 UE
1,3    (5)  Fa & −Fa            3,4 &I
3      (6)  −∀xFx               1,5 RAA
2      (7)  −∀xFx               2,3,6 EE
1,2    (8)  ∀xFx & −∀xFx        1,7 &I
1      (9)  −∃x−Fx              2,8 RAA

(b) −∃x−Fx ⊢ ∀xFx

1      (1)  −∃x−Fx              A
2      (2)  −Fa                 A (RAA)
2      (3)  ∃x−Fx               2 EI
1,2    (4)  ∃x−Fx & −∃x−Fx      1,3 &I
1      (5)  Fa                  2,4 RAA
1      (6)  ∀xFx                5 UI

These elegant proofs are due to Lemmon [17, pp. 123–4]. Note the trick in the first proof of using the starting assumption as the beginning of an RAA. In this proof there are, in effect, two overlapping RAAs. In the second proof, the step from (5) to (6) may seem fishy, since we are using a term a that was introduced in the assumption in line (2). However, since (2) was discharged when we backed out of the RAA, there is nothing special about the constant a and we can safely universalize upon it. Therefore, we are on safe ground. But the way Lemmon emerges from the RAA with a still intact does look a lot like a magician pulling a rabbit out of a hat!

59  ∃xFx ⊣⊢ −∀x−Fx
60  ∀x−Fx ⊣⊢ −∃xFx
61  ∃x−Fx ⊣⊢ −∀xFx

8.7 Rules of Passage

There are a number of quantificational formulas that are known as the Rules of Passage. In the following, P is any proposition whatsoever; it could, for instance, be a predicate formula itself containing quantifiers and bound variables. The rules themselves are called "Rules of Passage" because they allow us to pass P in or out of the scope of a quantifier.

∃x(P & Fx) ⊣⊢ P & ∃xFx    (8.1)
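The Duality sequents 58–61 proved above can also be double-checked semantically: since each side is built from a single predicate F, it suffices to test every interpretation of F over a small finite domain. A Python sketch (function name mine):

```python
from itertools import product

def duality_holds(domain_size=4):
    """Check all four Duality equivalences on every interpretation of a
    one-place predicate F over a small finite domain."""
    domain = range(domain_size)
    for bits in product([False, True], repeat=domain_size):
        F = dict(zip(domain, bits))
        all_F, some_F = all(F.values()), any(F.values())
        all_notF  = all(not v for v in F.values())
        some_notF = any(not v for v in F.values())
        if not (all_F == (not some_notF)        # forall x Fx  iff  not exists x -Fx
                and some_F == (not all_notF)    # exists x Fx  iff  not forall x -Fx
                and all_notF == (not some_F)    # forall x -Fx iff  not exists x Fx
                and some_notF == (not all_F)):  # exists x -Fx iff  not forall x Fx
            return False
    return True

print(duality_holds())  # True
```

This is only a check over finite models, of course; the derivations above are what establish the rules in the proof system.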

∀x(P & Fx) ⊣⊢ P & ∀xFx    (8.2)
∃x(P ∨ Fx) ⊣⊢ P ∨ ∃xFx    (8.3)
∀x(P ∨ Fx) ⊣⊢ P ∨ ∀xFx    (8.4)
∃x(P → Fx) ⊣⊢ P → ∃xFx    (8.5)
∀x(P → Fx) ⊣⊢ P → ∀xFx    (8.6)
∃x(Fx → P) ⊣⊢ ∀xFx → P    (8.7)
∀x(Fx → P) ⊣⊢ ∃xFx → P    (8.8)

Note the switch between universals and existentials in the last two formulas. I will give proofs of the last formula here; the rest of the Rules of Passage are left as Review Exercises.

(a) ∀x(Fx → P) ⊢ ∃xFx → P

1      (1)  ∀x(Fx → P)      A
2      (2)  ∃xFx            A (CP)
3      (3)  Fa              A (EE)
1      (4)  Fa → P          1 UE
1,3    (5)  P               3,4 MP
1,2    (6)  P               2,3,5 EE
1      (7)  ∃xFx → P        2,6 CP

(b) ∃xFx → P ⊢ ∀x(Fx → P)

1      (1)   ∃xFx → P       A
2      (2)   −∀x(Fx → P)    A (RAA)
2      (3)   ∃x−(Fx → P)    2 Duality
2      (4)   ∃x(Fx & −P)    3 Neg→
5      (5)   Fa & −P        A (EE)
5      (6)   Fa             5 &E
5      (7)   −P             5 &E
5      (8)   ∃xFx           6 EI
1,5    (9)   P              1,8 MP
1,5    (10)  P & −P         7,9 &I
1,2    (11)  P & −P         4,5,10 EE
1      (12)  ∀x(Fx → P)     2,11 RAA

8.8 Interderivability ⇔ Equivalence Theorem

Don't forget that any interderivability result in predicate logic, such as the Duality Rules or the Rules of Passage, is translatable into a theorem stating an equivalence, and any such equivalences can be used for substitutions just as with propositional logic. For instance, from the first Rule of Passage listed above, we have

62  ∃x(P & Fx) ↔ P & ∃xFx
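The rule just proved, ∀x(Fx → P) ⊣⊢ ∃xFx → P, can be confirmed semantically by testing every interpretation of F and every truth value of the quantifier-free proposition P over a small domain. A Python sketch (names mine):

```python
from itertools import product

def passage_holds(domain_size=3):
    """Check forall x (Fx -> P)  iff  (exists x Fx) -> P on every
    interpretation of F and every truth value of P."""
    domain = range(domain_size)
    for P in (False, True):
        for bits in product([False, True], repeat=domain_size):
            F = dict(zip(domain, bits))
            lhs = all((not F[d]) or P for d in domain)   # forall x (Fx -> P)
            rhs = (not any(F.values())) or P             # (exists x Fx) -> P
            if lhs != rhs:
                return False
    return True

print(passage_holds())  # True
```

The check makes the quantifier switch vivid: if P is false, the left side says nothing is F, which is exactly what the right side's antecedent must fail to assert.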

8.9 What to Memorize

I do not expect you to memorize the Rules of Passage for this course, although individual examples can be proven using the techniques in this chapter. You should know the Duality Rules, however, since they depend on a very basic understanding of the meaning of the quantificational symbols. And of course you should be able to use UE, UI, EE, and EI. The derived rules, such as the Rules of Passage, the Duality relations, and the Distribution rules, may be used freely to help you solve more complicated problems.

8.10 A Shortcut

It is essential to master the use of the four introduction and elimination rules, because they allow for general procedures that can be applied to any first-order quantificational argument. Under some circumstances to be specified below, however, we can skip the use of these rules and do Boolean operations directly on propositional functions within the scope of quantifiers. This can greatly simplify many derivations. Here are a few simple examples:

∀x∃y−(Fx → Rxy)       ↔  ∀x∃y(Fx & −Rxy)             Neg→
∀x∀y(Fx → (Gx → Hy))  ↔  ∀x∀y((Fx & Gx) → Hy)        Exp
∀x(Fx → (Gx & Hx))    ↔  ∀x((Fx → Gx) & (Fx → Hx))   Dist→

In other words, propositional equivalence relations can be applied to propositional functions so long as they are entirely within the scope of the quantifiers that govern them. The proof of the general validity of this rule requires metalogical techniques that are beyond the scope of this course.

8.11 Fallacy Alert!

A rather common error is the following:

(n)    −∀xFx
(n+1)  −Fa       n UE (?)

The statement −∀xFx is not really a universal statement at all, and hence we cannot apply UE to it. Line (n) is (by Duality) equivalent to ∃x(−Fx). So there is something that is −F, but it might not be the individual a! If we want to use the statement −Fa (as we might, for instance, if we were going to try an EE), we would have to assume it, and then (we hope) discharge it if the EE is successful.

8.12 Fallacy Alert!

It is not possible to derive a particular existential from a universal. For instance, from the statement that all hobbits live in the Shire, we cannot infer that there are some hobbits living in the Shire. The following inference is invalid:

1      (1)  ∀x(Hx → Sx)     A
1      (2)  ∃x(Hx & Sx)     1 ???

This is essentially a consequence of the fact that a universal claim is hypothetical and does not assert existence, while an existential claim does. By comparison, the following pattern of inference is valid:

1      (1)  ∀x(Hx → Sx)     A
1      (2)  Ha → Sa         1 UE
1      (3)  ∃y(Hy → Sy)     2 EI

However, line (3) does not assert the existence of hobbits, whether living in the Shire or not. It merely says that there is something which, if it is a hobbit, lives in the Shire. This is like saying that if I were a hobbit, I would live in the Shire, and this in no way asserts the existence of hobbits!

8.13 Summary of Rules for Predicate Natural Deduction

In this section I summarize the rules for predicate natural deduction.

8.13.1 Basic Rules

Here, in succinct form, are first the Basic Rules of predicate natural deduction.

Universal Elimination (UE)

∀xFx ⊢ Fa, where a is any constant or proper name (term). No restrictions on a.

Universal Introduction (UI)

Fa ⊢ ∀xFx. Restrictions: a must be an arbitrary constant, which means that it must not have been used in an undischarged assumption.

Existential Introduction (EI)

Fa ⊢ ∃xFx. No restrictions on a.
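The point of the Fallacy Alert in 8.12, that a universal claim does not assert existence, can be made concrete with a model containing no hobbits at all. In the Python sketch below (the domain and predicate definitions are illustrative, not from the text), the universal premiss is vacuously true while the existential "conclusion" is false; note that the weaker ∃y(Hy → Sy) does come out true:

```python
# A world with no hobbits at all.
domain = ["gandalf", "mount_doom"]
H = lambda x: False                  # Hx := x is a hobbit
S = lambda x: x == "gandalf"         # Sx := x lives in the Shire (illustrative)

premiss    = all((not H(x)) or S(x) for x in domain)  # forall x (Hx -> Sx): vacuously true
conclusion = any(H(x) and S(x) for x in domain)       # exists x (Hx & Sx): false
weaker     = any((not H(x)) or S(x) for x in domain)  # exists y (Hy -> Sy): true

print(premiss, conclusion, weaker)  # True False True
```

The hypothetical character of the universal is exactly what makes it true in an empty-of-hobbits world.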

Existential Elimination (EE)

EE is a generalized vE, and if you have any questions about how vE works you should review it at this point. The object of EE is to derive a conclusion P from an existential claim. Here is the general pattern of EE:

1        (1)    ∃xFx                         A
2        (2)    (other premisses, if any)    A
3        (3)    Fa                           A (EE)
         ...
2,3      (n)    P
1,2      (n+1)  P                            1,3,n EE

Lines 3 to n are the EE subproof. In the justification of line n+1, the first number is the line where the starting existential claim was introduced, and the second and third numbers are the beginning and end lines of the subproof. Restrictions on the choice of a:

1. The constant a must be arbitrary, which means that it cannot appear in any previous assumption.
2. The constant a cannot appear in the conclusion of the EE subproof.

(See examples in the Review Exercises.)

8.13.2 Duality Rules

I state the Duality Rules here as interderivability results:

∀xFx   ⊣⊢  −∃x−Fx
∃xFx   ⊣⊢  −∀x−Fx
∀x−Fx  ⊣⊢  −∃xFx
∃x−Fx  ⊣⊢  −∀xFx

Any of these can be cited in a justification simply as "Duality".

8.13.3 Rules of Passage

I won't repeat here all of the Rules of Passage, which are listed in 8.7. Any of them can be cited simply by saying "Passage" in the justification column.

8.13.4 Distribution Rules

As described in detail in 8.5, quantifiers can be distributed over their scopes sometimes, but not in all cases, so one has to be careful in using these rules. (See Review Exercises for examples.) Here are the ones that are

valid, together with convenient short-form names that can be used when they are cited in a justification line:

∀x(Fx & Gx)  ⊣⊢  ∀xFx & ∀xGx    Dist∀
∀xFx ∨ ∀xGx   ⊢  ∀x(Fx ∨ Gx)    Dist∀
∃x(Fx ∨ Gx)  ⊣⊢  ∃xFx ∨ ∃xGx    Dist∃
∃x(Fx & Gx)   ⊢  ∃xFx & ∃xGx    Dist∃

Note carefully that some of these results are interderivabilities, while some go only one way. Quantifiers can in some cases be "distributed" over → but the results are more complicated and are best dealt with on an individual basis. (See the Review Exercises.)

8.13.5 Direct Manipulation of Propositional Functions

Don't forget that propositional functions such as Fx & Gy or Hx → Gxy can be manipulated using standard propositional equivalence rules, so long as you stay within the scope of the quantifier. For instance, the following transformation is valid:

∀x∃y−(Px → Qxy) ↔ ∀x∃y(Px & −Qxy)    Neg→    (8.9)

and relations:

p := Plato            Lx := x is a rabbit        Nx := x loves honey
s := Socrates         Dx := x is a dialogue      Ax := x loves asparagus
h := Heidegger        Ox := x is bamboo          Rx := x loves broccoli
r := Rufus            Bx := x is brown           Mx := x is a mammal
k := The Yukon        Ux := x is a bear          Wx := x is warm-blooded
c := the Republic     Hx := x is a person
t := Thumper
b := Being and Time

Lxy := x loves y              Txy := x takes y seriously
Cxy := x came from y          Axy := x is the author of y

A

1. Prove the validity of the following sequents:
(a) Fa & Fb ⊢ ∃xFx
(b) ∀x(Mx → Wx), Ma & Ra ⊢ Wa
(c) ∀x(Fx → Gx), ∃xFx ⊢ ∃xGx
(d) ∀x(Fx → −Gx), ∃xGx ⊢ ∃x−Fx
(e) ∀x−(Fx ∨ Gx) ⊢ −∃x(Fx ∨ Gx)

2. Translate the following arguments into symbols, and prove them valid:
(a) Rufus loves honey. ∴ Something loves honey.
(b) All bears love honey. Rufus is a bear. ∴ Rufus loves honey.
(c) All mammals are warm-blooded. All bears are mammals. Rufus is a bear. ∴ Rufus is warm-blooded.
(d) Plato was the author of the Republic. ∴ Plato was the author of something.
(e) No bears love broccoli. Thumper loves broccoli. ∴ Thumper is not a bear.

B

1. Translate the following arguments into symbols and prove them valid:
(a) Plato was the author of Republic. Republic is a dialogue. ∴ Plato wrote some dialogues.
(b) Rufus is a brown bear. All bears love honey. ∴ Something brown loves honey.
(c) Some bears are brown. All bears are mammals. All bears love honey. ∴ Some brown mammals love honey.
(d) Some bears come from the Yukon. All bears and rabbits are mammals. Bears do not like asparagus. ∴ Some mammals do not like asparagus.

2. Prove the validity of the following sequents:
(a) Dist∀ (all of them)
(b) Dist∃ (all of them)
(c) ∀x(Fx → Gx) ⊢ ∀xFx → ∀xGx
(d) ∃x(Fx → Gx) ⊢ ∀xFx → ∃xGx

3. Prove the Duality relations that were not proven in the Chapter, that is, Sequents 59, 60, and 61.

4. Prove the Rules of Passage, 8.1 through 8.8.

It can be very instructive to try to make up your own examples as well!

C

1. Translate the following arguments into symbols and prove them valid:
(a) No one takes anything written by Heidegger seriously. Being and Time was written by Heidegger. ∴ Socrates does not take Being and Time seriously.

Chapter 9

Predicate Logic With Identity

In this chapter we introduce the use of two simple rules concerning the relation of identity. These rules permit a powerful extension of predicate logic which enables us to express the logical structure of a much larger class of quantificational propositions with a very minimal increase in formal machinery.

By "identity" we simply mean a two-term relation which says that two proper names or constants point to the same object in the universe of discourse. We could write this as

Ixy := x is identical to y

but it is easier, and more in keeping with standard mathematical usage, to express identity as follows:

x = y := x and y point to the same member of U.

We will also allow the use of the following obvious notational convenience:

a ≠ b := −(a = b)

Defining identity is, in fact, a tricky philosophical problem. The definition I have given here is close to being circular, since we probably would have to use the identity relation in order to express the fact that two symbols single out the same individual. Defining identity in a way that is clearly non-circular would probably require the resources of higher-order logic (since we would have to quantify over predicates and relations); one would also likely have to examine how identity is used in physics (where the notion of "identical particle" is very difficult). For our purposes, we can define identity simply as a relation that obeys the introduction and elimination rules, given below.

We now introduce introduction and elimination rules for identity:

Identity Introduction (=I): For any term a the formula a = a is a theorem, and may be introduced at any point in a proof. In symbols: a = a.

Identity Elimination (=E): If s and t are terms, and given a proposition A(s) (that is, a proposition containing s), then if s = t, we can derive A(t) by substituting one or more occurrences of s in A(s) with t.
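The intended semantics of =I and =E can be mirrored in a toy model: names denote members of the universe of discourse, and s = t holds just when s and t denote the same member. The following Python sketch (the model is mine, purely illustrative) shows =I as self-identity and =E as substitution of co-denoting names:

```python
# A tiny model: names denote members of the universe of discourse.
denotation = {"a": "venus", "b": "venus", "c": "mars"}   # a = b holds; a = c fails
bright = {"venus": True, "mars": False}                  # Fx := x is bright

identical = lambda s, t: denotation[s] == denotation[t]
F = lambda name: bright[denotation[name]]

print(identical("a", "a"))   # True: =I, every term is self-identical
print(identical("a", "b"))   # True: a and b point to the same object
print(F("a") == F("b"))      # True: =E, substituting b for a preserves truth
print(identical("a", "c"))   # False
```

Because F is evaluated through the denotation, anything true of the object named by a is automatically true of it under the name b, which is exactly what =E licenses.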

The second rule, =E, is the one we will use most often. It simply allows the substitution of one term symbol for another, similarly to the way we can substitute equals for equals in mathematics.

9.1 Examples

Here are a few elementary examples which illustrate the use of these rules, and demonstrate some basic properties of the identity relation:

63  a = b ⊢ b = a

1      (1)  a = b      A
       (2)  a = a      =I
1      (3)  b = a      1,2 =E

In line (3) we substitute b for a in line (2). Note that in line (3) we cite two lines, one of which is the statement of the identity we are using, while the other is the line into which we make a substitution. This proof demonstrates that identity is a symmetric relation.

64  a = b & b = c ⊢ a = c

1      (1)  a = b & b = c     A
1      (2)  a = b             1 &E
1      (3)  b = c             1 &E
1      (4)  a = c             2,3 =E

This proof demonstrates that identity is a transitive relation.

65  Fp ⊣⊢ ∃x(x = p & Fx)

The next proof demonstrates a triviality: to say that something is Paul, and that thing loves curry, is equivalent to saying that Paul loves curry. The main interest in this proof is that it demonstrates the use of =E. I'll prove one part of it; the converse is left as an exercise.

1      (1)  ∃x(x = p & Fx)    A
2      (2)  b = p & Fb        A (EE)
2      (3)  b = p             2 &E
2      (4)  Fb                2 &E
2      (5)  Fp                3,4 =E
1      (6)  Fp                1,2,5 EE

The next proof shows that the way we use identity here corresponds to our basic notion of identity, which is that things that are presumed identical should have all the same properties.

66  a = b ⊢ Fa ↔ Fb

1      (1)  a = b                     A
2      (2)  Fa                        A (CP)
1,2    (3)  Fb                        1,2 =E
1      (4)  Fa → Fb                   2,3 CP
5      (5)  Fb                        A (CP)
1,5    (6)  Fa                        1,5 =E
1      (7)  Fb → Fa                   5,6 CP
1      (8)  (Fa → Fb) & (Fb → Fa)     4,7 &I
1      (9)  Fa ↔ Fb                   8 Def↔

9.2 Fallacy Alert!

Remember, to say a = b is to say that the symbols a and b point to the same individual in the universe of discourse. Thus the following pattern of reasoning is correct:

a = b ⊢ Fa ↔ Fb    (9.1)

This simply expresses what we mean by identity. But do not attempt the following move:

Fa ↔ Fb ⊢ a = b    (????)    (9.2)

This is invalid because the two individuals a and b do not necessarily have to be the same in order for Fa to be true iff Fb is true. To say Fa ↔ Fb is to say that the propositions Fa and Fb have the same truth value. Suppose Fx is defined as "x is a philosopher." The statement "F(Socrates) ↔ F(Plato)" is true, and yet Socrates was a distinct individual from Plato.

9.3 Definite Descriptions

Phrases such as "the present King of France," "the last person to cross the finish line," "the man in the Moon," or "the first African woman to win the Nobel Peace Prize" are called definite descriptions. They are widely used in ordinary language, and yet it might not be immediately obvious how they should be analyzed logically. To see why this is important, consider the following sentences:

1. "France presently has a bald King."
2. "The present King of France is bald."

The first sentence is false. The second sentence seems very suspicious: it makes sense, but it is not immediately obvious how to assign it a truth value. Some philosophers have gone so far as to declare sentences such as this, in which a definite description points to a nonexistent object, to be meaningless. But that doesn't seem quite right either, since we make deductions using definite descriptions all the time. Consider, for instance, the following elementary argument: Harper is the Prime Minister. Layton is not Harper. ∴ Layton is not the Prime Minister.

Intuitively this seems to be valid, but we don't yet know how to represent its validity formally. As before, we motivate the need for new types of symbols by showing that there are arguments that are clearly valid, but whose validity we do not yet know how to demonstrate. What makes this argument work is clearly the fact that (as of 2006) Harper is the Prime Minister of Canada, and not just any Prime Minister, since there are, at the time of this writing, several countries that have Prime Ministers. A definite description such as "the Prime Minister of Canada" functions grammatically very much like a proper name such as "Mr. Stephen Harper."

In 1905, Bertrand Russell suggested that we should analyse propositions using definite descriptions as combinations of existence claims with uniqueness claims. Thus, we could express "Harper is the Prime Minister" as

Ph & ∀y(Py → (y = h))

In words, this simply says, "Harper is a Prime Minister, and if anyone is a Prime Minister, that person is Harper." This statement needs to be distinguished carefully from the claim that "Harper is a Prime Minister," which would be symbolized simply as Ph.

To say "The Prime Minister is a former Finance Minister," we can write

∃x(Px & Fx & ∀y(Py → (y = x)))

And to say "The present King of France is bald," we can write

∃x(Kx & Bx & ∀y(Ky → (y = x)))

In other words, "There exists a unique bald King of France." Since there is no King of France, this statement is false. We can also express the identity of two descriptions. "The author of Hamlet is the author of Lear" can be expressed as

∃x(Hx & Lx & ∀y((Hy & Ly) → (y = x)))

What about negations of statements using definite descriptions? For instance, how would we translate "Layton is not the Prime Minister"? Intuitively, it seems that to deny this statement should amount to saying that either Layton is not a Prime Minister, or that someone other than Layton is a Prime Minister; either or both of those possibilities would be sufficient to negate the claim that Layton is the Prime Minister. A quick bit of predicate natural deduction shows that this analysis is correct:

−(Pl & ∀y(Py → (y = l)))
  ↔ −Pl ∨ −∀y(Py → (y = l))      deM
  ↔ −Pl ∨ ∃y−(Py → (y = l))      Duality
  ↔ −Pl ∨ ∃y(Py & −(y = l))      Neg→

Russell's method of analysing the logic of definite descriptions has generated a lot of controversy in the literature on philosophy of language. However, it provides a workable logical analysis of definite descriptive claims and, as we shall show next, allows us to demonstrate the validity of arguments involving definite descriptions which should, indeed, turn out to be valid. Russell's method also illustrates a point that we noted earlier, which is that the grammatical structure of a sentence in a particular natural language such as English is not always a good indication of its underlying logical form. In his later years Russell defended his approach against criticism by P. F. Strawson:

My theory of descriptions was never intended as an analysis of the state of mind of those who utter sentences containing descriptions. Mr Strawson gives the name 'S' to the sentence 'The King of France is wise', and he says of me 'The way in which he [Russell] arrived at the analysis was clearly by asking himself what would be the circumstances in which we would say that anyone who uttered the sentence S had made a true assertion'. This does not seem to me a correct account of what I was doing. I was concerned to find a more accurate and analysed thought to replace the somewhat confused thoughts which most people at most times have in their heads. [21, p. 243]

Russell, and indeed many of those who contributed to modern symbolic logic, believed that the job of the logician is not only to describe the logic of ordinary reasoning in a precise way, but to devise techniques that can help us to reason more effectively.

9.3.1 Example

We return to the argument introduced above: Harper is the Prime Minister. Layton is not Harper. ∴ Layton is not the Prime Minister.

Using obvious definitions for the term and predicate symbols, we can prove this argument valid as follows:

1      (1)  Ph & ∀x(Px → (x = h))     A
2      (2)  −(l = h)                  A
1      (3)  Ph                        1 &E
1      (4)  ∀x(Px → (x = h))          1 &E
1      (5)  Pl → (l = h)              4 UE
1,2    (6)  −Pl                       2,5 MT
1,2    (7)  −Pl ∨ ∃y(Py & −(y = l))   6 vI

Note, again, that line (6) only says that Layton is not a Prime Minister. We need to "or" on the extra clause that states that someone else might be Prime Minister as well as Layton. (I thank Heather Rivera for insisting that I clarify this point.)

A Fine Point

Many logic texts write statements such as "Harper is the Prime Minister" in the following more complicated form:

∃x(Px & ∀y(Py → (y = x)) & Ph)    (9.3)

This is equivalent to the simpler form we use here, which can be used whenever we know the proper name of "the such-and-such". This can be proven from the fact that

∃x(Px & (x = h)) ⊢ Ph    (9.4)

This is proven in the Exercises, below.
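Russell's analysis ∃x(Kx & Bx & ∀y(Ky → (y = x))) can be evaluated directly over a finite model, which makes his point vivid: when nothing satisfies the description, the sentence comes out plainly false rather than meaningless. In the Python sketch below, the domain and the particular kings are illustrative inventions of mine, not the text's:

```python
def the_K_is_B(K, B, domain):
    """Russell's analysis: exists x (Kx & Bx & forall y (Ky -> y = x))."""
    return any(K(x) and B(x) and all((not K(y)) or y == x for y in domain)
               for x in domain)

domain = ["louis", "charles", "macron"]
bald = lambda x: x == "louis"

# No present King of France: the sentence is false, not meaningless.
no_king = lambda x: False
print(the_K_is_B(no_king, bald, domain))    # False

# A unique bald king makes it true; two kings destroy uniqueness.
one_king = lambda x: x == "louis"
two_kings = lambda x: x in ("louis", "charles")
print(the_K_is_B(one_king, bald, domain))   # True
print(the_K_is_B(two_kings, bald, domain))  # False: uniqueness fails
```

The three cases correspond to the three ways a definite description can misfire or succeed: no satisfier, exactly one, or more than one.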

9.3.2 Expressing Numerical Relations

It is also possible to express simple numerical relationships using identity.

Exactly (or only) one person knows the secret:

∃x(Kx & ∀y(Ky → (y = x)))

In half-logic: Someone knows the secret, and anyone else who knows the secret is that person.

At most one person knows the secret:

∀x∀y((Kx & Ky) → (x = y))

Here it is in words that are half-way between logic and ordinary English: For all x and y, if x and y know the secret, then x and y are the same person.

Exactly two people know the secret:

∃x∃y(Kx & Ky & −(x = y) & ∀z(Kz → ((z = x) ∨ (z = y))))

In half-logic: there are two distinct people who know the secret, and anyone else who knows the secret is identical to one or the other of those two people.

9.3.3 Superlatives

It is also possible to express simple superlatives using identity. Here's an example, with hopefully obvious notation:

Smith is the tallest person in Logic class:

Ls & ∀y((Ly & −(y = s)) → Tsy)

In half-logic: Smith is in the Logic class, and if there is anyone else in the Logic class who is not Smith, then Smith is taller than that person.
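The "exactly one" formula can be tested against the ordinary notion of counting: over every interpretation of K on a small domain, the formula should hold just in case exactly one individual is K. A Python sketch (function names mine):

```python
from itertools import product

def exactly_one(K, domain):
    """exists x (Kx & forall y (Ky -> y = x)), evaluated by brute force."""
    return any(K[x] and all((not K[y]) or y == x for y in domain)
               for x in domain)

def matches_count(domain_size=4):
    """Compare the formula against a literal count on every
    interpretation of K over the domain."""
    domain = list(range(domain_size))
    for bits in product([False, True], repeat=domain_size):
        K = dict(zip(domain, bits))
        if exactly_one(K, domain) != (sum(bits) == 1):
            return False
    return True

print(matches_count())  # True: the formula agrees with counting in every model
```

The same style of check works for "at most one" and "exactly two", with the counts 0-or-1 and 2 respectively.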

9.4 Review Exercises

Use the following symbol definitions:

a := Aristotle            e := The Ethics
s := The Stagirite¹       r := The Republic
p := Plato                t := The Timaeus
Kx := x knew (knows) The Good
Axy := x was (is) the author of y

¹ Aristotle came from a small town called Stagira, and so he was often called The Stagirite.

A

1. Translate into symbols:
(a) Aristotle was not Plato.
(b) Aristotle was The Stagirite.
(c) Either Aristotle or Plato wrote the Republic.
(d) Plato wrote both the Republic and the Timaeus.

2. Translate into good, idiomatic English:
(a) p = s
(b) −(r = t)
(c) Apr ∨ −Apr

3. Prove the following sequents:
(a) Fa & Ga, a = b ⊢ Fa & Gb
(b) Fa → Ga, a = b ⊢ Fb → Ga
(c) ∀x(Fax → Gbx), a = b ⊢ ∀x(Fax → Gax)
(d) Faa → −(a = a) ⊢ −Faa

4. Translate and prove:
(a) Aristotle wrote the Ethics. Aristotle was The Stagirite. ∴ The Stagirite wrote the Ethics.
(b) If Aristotle wrote the Republic then Aristotle was Plato. However, Aristotle was not Plato. ∴ Aristotle did not write the Republic.

B

1. Translate into symbols:
(a) Plato was the author of the Republic.
(b) Aristotle was not the author of the Republic.

2. Translate into good, idiomatic English:
(a) ∃x((Axr & Axt) & ∀y(Ayr → (x = y)))
(b) Apr & Apt & ∀y(Ayr → (y = p))

3. Prove the following sequent:

∃x(Px & (x = l)) ⊢ Pl

4. Translate the following argument into symbols, and prove the resulting sequent: Plato knew the Good. At most one person knew the Good. Aristotle was not Plato. ∴ Aristotle did not know the Good.

Solutions to Selected Exercises

Chapter 2

Exercises 2.1

A
1. (a) T & C or C & T
   (c) T → −C
   (e) −B → A
   (g) B → C
   (i) −C → −T
   (k) A → −B
   (m) T ∨ −C or −C ∨ T
2. (a) If Bob goes to the store then Alice does not, or: Bob goes to the store only if Alice does not.
   (c) Either Bob doesn't go to the store or Cheryl goes to work.
   (e) If Alice does not go to the store then Bob does.
   (g) Either Alice does not go to the store or Tad does not stay home.
   (i) Tad stays home only if Bob goes to the store, or: Alice does not go to the store only if Bob does.

Exercises 2.2

A
1. (a) F
   (c) T
   (e) T

A
1. (a) If Alice and Bob go to the store and Cheryl goes to work, then Cheryl goes to work only if Tad stays home.
   (c) If Alice goes to the store then if Bob goes to the store Cheryl goes to work, or: If Alice goes to the store then Bob goes to the store only if Cheryl goes to work.
   (e) If Alice goes to the store only if Bob goes too, then Tad doesn't stay home.
2. (b) Bob going to the store does not imply either Cheryl not going to work or Tad staying home.
   (d) Alice goes to the store if and only if Bob does not.

Exercises 2.6

A
1. (a) (B & T) → (A → C)
   (c) C ↔ T
   (e) −T → (A ∨ B)
2. (b) −C ∨ −T
   (d) −(C → T)

Exercises 2.7

A
1. (a) T
   (c) F
   (e) T

Solutions for Chapter 3

Exercises 3.1

A
1. (a) P, P → −Q ⊢ −Q

1      (1)  P         A
2      (2)  P → −Q    A
1,2    (3)  −Q        1,2 MP

(c) P & Q, P → R ⊢ R

121 1 2 1 1,2 (1) (2) (3) (4) P &Q P →R P R A A 1 &E 2,3 MP

(e) − − P 1 1 1

P ∨R (1) (2) (3) −−P P P ∨R A 1 DN 2 vI

B

1. (a) P → (Q & R), P 1 2 1,2 1,2 1,2 (1) (2) (3) (4) (5)

R∨S P → (Q & R) P Q&R R R∨S A A 1,2 MP 3 &E 4 vI

(c) P ∨ Q, Q → R 1 2 3 3 5 2,5 2,5 1,2

P ∨R (1) (2) (3) (4) (5) (6) (7) (8) P ∨Q Q→R P P ∨R Q R P ∨R P ∨R A A A (vE) 3 vI A (vE) 2,5 MP 6 vI 1,3,4,5,7 vE

(e) P → Q, Q → R, −R 1 2 3 4 1,4 1,2,4 1,2,3,4 1,2,3 (1) (2) (3) (4) (5) (6) (7) (8)

−P P →Q Q→R −R P Q R R& −R −P A A A A (RAA) 1,4 MP 2,5 MP 3,6 &I 4,7 RAA


Exercises 3.7

A

2. (a) P, Q ⊢ P → Q

1  (1) P        A
2  (2) Q        A
2  (3) P → Q    1,2 CP

(c) P → (Q ∨ R), −(Q ∨ R) ⊢ −P

1    (1) P → (Q ∨ R)    A
2    (2) −(Q ∨ R)       A
1,2  (3) −P             1,2 MT

(e) (Q → R), −(Q → R) ⊢ R

1    (1) Q → R       A
2    (2) −(Q → R)    A
1,2  (3) R           1,2 ExF

(g) P → (Q → R), (Q → R) → T ⊢ P → T

1    (1) P → (Q → R)    A
2    (2) (Q → R) → T    A
1,2  (3) P → T          1,2 HS

(h) −(P & −Q) ⊢ −P ∨ Q

1  (1) −(P & −Q)    A
1  (2) −P ∨ −−Q     1 deM
1  (3) −P ∨ Q       2 DN

(j) (X ∨ Y) → Q ⊢ (−X & −Y) ∨ Q

1  (1) (X ∨ Y) → Q      A
1  (2) −(X ∨ Y) ∨ Q     1 Imp
1  (3) (−X & −Y) ∨ Q    2 deM

(l) −(X → Y) → −P, P ⊢ X → Y

1    (1) −(X → Y) → −P    A
2    (2) P                A
1    (3) P → (X → Y)      1 Trans
1,2  (4) X → Y            2,3 MP

(n) P & (Q → R) ⊢ (P & −Q) ∨ (P & R)

1  (1) P & (Q → R)           A
1  (2) P & (−Q ∨ R)          1 Imp
1  (3) (P & −Q) ∨ (P & R)    2 Dist

B

1. (a) P → Q, R → S, −Q ∨ −S ⊢ −P ∨ −R (DD)

1      (1) P → Q        A
2      (2) R → S        A
3      (3) −Q ∨ −S      A
4      (4) −Q           A (vE)
1,4    (5) −P           1,4 MT
1,4    (6) −P ∨ −R      5 vI
7      (7) −S           A (vE)
2,7    (8) −R           2,7 MT
2,7    (9) −P ∨ −R      8 vI
1,2,3  (10) −P ∨ −R     3,4,6,7,9 vE

(c) P → Q ⊢ −(P & −Q)

1  (1) P → Q        A
1  (2) −P ∨ Q       1 Imp
1  (3) −P ∨ −−Q     2 DN
1  (4) −(P & −Q)    3 deM

(d) −(P & −Q) ⊢ P → Q

1  (1) −(P & −Q)    A
1  (2) −P ∨ −−Q     1 deM
1  (3) −P ∨ Q       2 DN
1  (4) P → Q        3 Imp

(g) P ∨ (Q & R) ⊢ (P ∨ Q) & (P ∨ R)

1  (1) P ∨ (Q & R)            A
2  (2) P                      A (vE)
2  (3) P ∨ Q                  2 vI
2  (4) P ∨ R                  2 vI
2  (5) (P ∨ Q) & (P ∨ R)      3,4 &I
6  (6) Q & R                  A (vE)
6  (7) Q                      6 &E
6  (8) R                      6 &E
6  (9) P ∨ Q                  7 vI
6  (10) P ∨ R                 8 vI
6  (11) (P ∨ Q) & (P ∨ R)     9,10 &I
1  (12) (P ∨ Q) & (P ∨ R)     1,2,5,6,11 vE

(i) P → (Q → R) ⊢ (P & Q) → R (Exp)

1    (1) P → (Q → R)    A
2    (2) P & Q          A (CP)
2    (3) P              2 &E
1,2  (4) Q → R          1,3 MP
2    (5) Q              2 &E
1,2  (6) R              4,5 MP
1    (7) (P & Q) → R    2,6 CP

(k) (P → Q) & (P → R) ⊢ P → (Q & R)

1    (1) (P → Q) & (P → R)    A
2    (2) P                    A (CP)
1    (3) P → Q                1 &E
1    (4) P → R                1 &E
1,2  (5) Q                    2,3 MP
1,2  (6) R                    2,4 MP
1,2  (7) Q & R                5,6 &I
1    (8) P → (Q & R)          2,7 CP

(m) −(P & −Q) ⊢ P → Q

1  (1) −(P & −Q)    A
1  (2) −P ∨ −−Q     1 deM
1  (3) −P ∨ Q       2 DN
1  (4) P → Q        3 Imp

2. (a) P, Q ⊢ P ↔ Q

1    (1) P                    A
2    (2) Q                    A
2    (3) P → Q                1,2 CP
1    (4) Q → P                2,1 CP
1,2  (5) (P → Q) & (Q → P)    3,4 &I
1,2  (6) P ↔ Q                5 Def↔

(c) P → R, Q → R ⊢ (P & Q) → R

1    (1) P → R          A
2    (2) Q → R          A
3    (3) P & Q          A (CP)
3    (4) P              3 &E
1,3  (5) R              1,4 MP
1    (6) (P & Q) → R    3,5 CP

(e) R, R ↔ P, −Q → −P ⊢ Q

1      (1) R                    A
2      (2) R ↔ P                A
3      (3) −Q → −P              A
3      (4) P → Q                3 Trans
2      (5) (R → P) & (P → R)    2 Def↔
2      (6) R → P                5 &E
2,3    (7) R → Q                4,6 HS
1,2,3  (8) Q                    1,7 MP

(g) P ↔ S, P ∨ Q, P ∨ R, −Q ∨ −R ⊢ S

1        (1) P ↔ S                A
2        (2) P ∨ Q                A
3        (3) P ∨ R                A
4        (4) −Q ∨ −R              A
5        (5) −Q                   A (vE)
2,5      (6) P                    2,5 DS
7        (7) −R                   A (vE)
3,7      (8) P                    3,7 DS
2,3,4    (9) P                    4,5,6,7,8 vE
1        (10) (P → S) & (S → P)   1 Def↔
1        (11) P → S               10 &E
1,2,3,4  (12) S                   9,11 MP

(i) P → (Q ∨ R), (P → Q) → X, (P → R) → Y ⊢ X ∨ Y

1      (1) P → (Q ∨ R)            A
2      (2) (P → Q) → X            A
3      (3) (P → R) → Y            A
1      (4) (P → Q) ∨ (P → R)      1 Dist↔
5      (5) P → Q                  A (vE)
2,5    (6) X                      2,5 MP
2,5    (7) X ∨ Y                  6 vI
8      (8) P → R                  A (vE)
3,8    (9) Y                      3,8 MP
3,8    (10) X ∨ Y                 9 vI
1,2,3  (11) X ∨ Y                 4,5,7,8,10 vE

Solutions for Chapter 4

Exercise 4.1 (A)

1. Identify the rule used in each of the following deductive steps.
(a) MP (b) MT (c) vI (d) HS (e) Imp

Exercise 4.1 (B)

1. State the justifications for each of the following steps.
(a) n Imp (×2) (b) n Imp, DN (c) n Assoc, Dist (d) n Imp, Comm (e) 3 Imp, DN (f) n Imp, Dist

Exercise 4.5

1. (a) ⊢ (P & −P) → Q

1  (1) P & −P           A (CP)
1  (2) Q                1 ExF
   (3) (P & −P) → Q     1,2 CP

(b) ⊢ P → (−P → Q)

1    (1) P                A (CP)
2    (2) −P               A (CP)
1,2  (3) P & −P           1,2 &I
1,2  (4) Q                3 ExF
1    (5) −P → Q           2,4 CP
     (6) P → (−P → Q)     1,5 CP

Alternate proof:

(1) P ∨ −P            ExM
(2) (P ∨ −P) ∨ Q      1 vI
(3) −P ∨ (P ∨ Q)      2 Assoc, Comm
(4) P → (−−P ∨ Q)     3 Imp, DN
(5) P → (−P → Q)      4 Imp

1 (C) 1.5. (a) P ∨ −P .4 CP ExM 1 vI 2 vI 3 Assoc (×2) 4 Imp (×2) (f) −P → P −P P P & −P P (−P → P ) → P A (CP) Q (RAA) 1.4 RAA. DN 1.127 (c) P ∨ (P → Q) (1) (2) (3) Alternate proof: (1) (2) (3) (4) (d) (P → Q) ∨ (1) (2) (3) (4) P ∨ −P (P ∨ −P ) ∨ Q P ∨ (−P ∨ Q) P ∨ (P → Q) ExM 1 vI 2 Assoc 3 Imp −P → (P → Q) − − P ∨ (P → Q) P ∨ (P → Q) TI (Ques.2 (3) 1.5 CP Exercise 4.2 MP 2.3 &I 2. 1(a)) 1 Imp 2 DN (Q → P ) P ∨ −P (P ∨ −P ) ∨ (Q ∨ −Q) (−P ∨ Q) ∨ (−Q ∨ P ) (P → Q) ∨ (Q → P ) ExM 1 vI 2 Assoc (×2).2 (4) 1 (5) (6) Alternate proof: 1 1 1 1 (1) (2) (3) (4) (5) −P → P −−P ∨ P P ∨P P (−P → P ) → P A (CP) 1 Imp 2 DN 3 Idem 1. Comm (×3) 3 Imp (×2) (e) (P → Q) ∨ (Q → R) (1) Q ∨ −Q (2) −P ∨ (Q ∨ −Q) (3) (−P ∨ (Q ∨ −Q)) ∨ R (4) (−P ∨ Q) ∨ (−Q ∨ R) (5) (P → Q) ∨ (Q → R) (−P → P ) → P 1 (1) 2 (2) 1.

Exercise 4.6

1. (a) ⊢ P ∨ −P

1    (1) −(P ∨ −P)                  A (RAA)
2    (2) P                          A (RAA)
2    (3) P ∨ −P                     2 vI
1,2  (4) (P ∨ −P) & −(P ∨ −P)       1,3 &I
1    (5) −P                         2,4 RAA
1    (6) P ∨ −P                     5 vI
1    (7) (P ∨ −P) & −(P ∨ −P)       1,6 &I
     (8) P ∨ −P                     1,7 RAA, DN

This elegant proof is due to Lemmon (his Sequent 44) [17].

Exercise 4.7

1. (a) Given P, Q ⊢ R, show ⊢ (P & Q) → R:

1  (1) P & Q          A (CP)
1  (2) P              1 &E
1  (3) Q              1 &E
1  (4) R              2,3 SI
   (5) (P & Q) → R    1,4 CP

(b) Given ⊢ (P & Q) → R, show P, Q ⊢ R:

1    (1) P              A
2    (2) Q              A
1,2  (3) P & Q          1,2 &I
     (4) (P & Q) → R    TI
1,2  (5) R              3,4 MP

Exercise 4.8

(a) (P ↔ Q) ↔ (−(P → −Q) ∨ −(−Q → P))

(P ↔ Q) ↔ ((P & Q) ∨ (−P & −Q))         Equiv
        ↔ ((P & −−Q) ∨ (−Q & −P))       DN, Comm
        ↔ (−(P → −Q) ∨ −(−Q → P))       Neg→ (×2)

(b) ((Q ∨ R) → P) ↔ ((Q → P) & (R → P))

((Q ∨ R) → P) ↔ (−(Q ∨ R) ∨ P)           Imp
              ↔ ((−Q & −R) ∨ P)          deM
              ↔ ((−Q ∨ P) & (−R ∨ P))    Dist
              ↔ ((Q → P) & (R → P))      Imp (×2)

(c) (P → (Q ∨ R)) ↔ ((P → Q) ∨ (P → R))

(P → (Q ∨ R)) ↔ (−P ∨ (Q ∨ R))           Imp
              ↔ ((−P ∨ −P) ∨ (Q ∨ R))    Idem
              ↔ ((−P ∨ Q) ∨ (−P ∨ R))    Assoc, Comm
              ↔ ((P → Q) ∨ (P → R))      Imp (×2)

(d) (P → (Q & R)) ↔ ((P → Q) & (P → R))

(P → (Q & R)) ↔ (−P ∨ (Q & R))           Imp
              ↔ ((−P ∨ Q) & (−P ∨ R))    Dist
              ↔ ((P → Q) & (P → R))      Imp (×2)

(e) (P ∨ (Q → R)) ↔ ((Q → P) ∨ (−P → R))

(P ∨ (Q → R)) ↔ (P ∨ (−Q ∨ R))            Imp
              ↔ ((P ∨ P) ∨ (−Q ∨ R))      Idem
              ↔ ((−Q ∨ P) ∨ (−−P ∨ R))    Assoc, Comm, DN
              ↔ ((Q → P) ∨ (−P → R))      Imp (×2)

Solutions for Chapter 5

Review Exercises on Chapter 5

A

1. Write the complete truth table for each of the following wffs:

(a) −(−(P → Q) & −(Q → P))

P Q | P → Q | Q → P | −(P → Q) & −(Q → P) | whole wff
T T |   T   |   T   |          F          |    T
T F |   F   |   T   |          F          |    T
F T |   T   |   F   |          F          |    T
F F |   T   |   T   |          F          |    T

(b) (P ∨ Q) & −(P & Q)

P Q | P ∨ Q | −(P & Q) | (P ∨ Q) & −(P & Q)
T T |   T   |    F     |         F
T F |   T   |    T     |         T
F T |   T   |    T     |         T
F F |   F   |    T     |         F

(c) −(P & (Q & R))

P Q R | P & (Q & R) | −(P & (Q & R))
T T T |      T      |       F
T T F |      F      |       T
T F T |      F      |       T
T F F |      F      |       T
F T T |      F      |       T
F T F |      F      |       T
F F T |      F      |       T
F F F |      F      |       T

(d) P → (Q → R)

P Q R | Q → R | P → (Q → R)
T T T |   T   |      T
T T F |   F   |      F
T F T |   T   |      T
T F F |   T   |      T
F T T |   T   |      T
F T F |   F   |      T
F F T |   T   |      T
F F F |   T   |      T

2. Evaluate the following wffs, when P = T and Q = F.

(a) −(P → Q): T
(b) −(P ↔ Q): T
(c) −P ↔ Q: T
(d) (−P & −Q) ∨ (P & Q): F

3. Use truth tables to characterize the following wffs as tautologous, inconsistent, or contingent.

(a) P → P — Tautology.

P | P → P
T |   T
F |   T

(b) −P → P — Contingent.

P | −P | −P → P
T |  F |   T
F |  T |   F

(c) P → (−P → P) — Tautology.

P | −P → P | P → (−P → P)
T |    T   |      T
F |    F   |      T

(d) (P ∨ −P) → (P & −P) — Contradiction.

P | P ∨ −P | P & −P | (P ∨ −P) → (P & −P)
T |    T   |    F   |          F
F |    T   |    F   |          F

4. Check the following sequents for validity. Use short-cuts where they are suitable.

(a) P ∨ Q, −Q ⊢ P — This is valid, since making the conclusion F forces a premiss to be F.
(b) P → Q, −Q ⊢ −P — Valid.
(c) P → Q ⊢ P → (P & Q) — Valid.
(d) P → Q, −P ⊢ −Q — Invalid, since there is a line in the truth table in which the premisses are T and the conclusion F.

B

1. Use truth tables to classify the following wffs as either tautologous, inconsistent, or contingent.

(a) P ∨ (Q & R) — Contingent; to demonstrate this, all we have to do is show that there is at least one line that comes out T, and one that comes out F.
(b) P → (R & S) — Contingent.
(c) P → (Q → P) — Tautology.
(d) (P → R) → (P → (P & R)) — Tautology, since the only way to make the consequent F makes the antecedent F as well.
(e) (P → (Q → P)) → R — Contingent; we know by (c) that the antecedent is always T, so the value depends on R, which can be T or F.
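Classifications like these can be checked by brute force over every valuation. A small sketch (the `classify` helper and its labels are ours, not the text's):

```python
from itertools import product

def classify(wff, letters):
    """Label a wff tautologous, inconsistent, or contingent by
    enumerating every assignment of truth values to its letters."""
    values = [wff(dict(zip(letters, row)))
              for row in product([True, False], repeat=len(letters))]
    if all(values):
        return "tautologous"
    if not any(values):
        return "inconsistent"
    return "contingent"

# (c) P -> (Q -> P) is a tautology; (a) P v (Q & R) is contingent.
print(classify(lambda v: (not v["P"]) or ((not v["Q"]) or v["P"]), ["P", "Q"]))
print(classify(lambda v: v["P"] or (v["Q"] and v["R"]), ["P", "Q", "R"]))
```

Enumerating valuations is exactly the truth-table method, so this agrees with the hand-built tables above on every example.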

(f) (P ↔ Q) ↔ (Q ↔ −P) — Contradiction.

P Q | P ↔ Q | −P | Q ↔ −P | (P ↔ Q) ↔ (Q ↔ −P)
T T |   T   |  F |   F    |          F
T F |   F   |  F |   T    |          F
F T |   F   |  T |   T    |          F
F F |   T   |  T |   F    |          F

(g) P → (R & −R) — Contingent.
(h) (P → R) → −P — Contingent.
(i) (P → (Q & R)) & (P → (Q & −R)) — Contingent.
(j) −(P → Q) & −(Q → P) — Contradiction.

2. Use truth tables to check the following sequents for validity.

(a) P → Q, Q → R ⊢ P → R — Valid.

(b) P ↔ Q, Q ↔ R ⊢ P ↔ R — Valid.
(c) P → Q, R → Q ⊢ P → R — Invalid.
(d) P → Q ⊢ −Q → P — Invalid.
(e) P → Q, R → S, −Q ∨ −S ⊢ −P ∨ −R — Valid.
(f) P ∨ Q, P → R, Q → R ⊢ R — Valid.
(g) P → Q ⊢ P → (P ∨ Q) — Valid, since the conclusion is a theorem.
(h) P → Q ⊢ P → (P & Q) — Valid.
(i) P ∨ R, P → (R ∨ S), −S ⊢ −P — Invalid.
(j) P → (M & N), M → R, N → S, −(R ∨ S) ⊢ −P ∨ M — Valid.

Solutions for Chapter 6

Exercise 6.7
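The truth-table test for sequents can likewise be automated: search for a valuation that makes every premiss true and the conclusion false. A sketch (the function and its encoding of wffs as lambdas are ours):

```python
from itertools import product

def valid(premises, conclusion, letters):
    """A sequent is valid iff no valuation makes every premiss
    true and the conclusion false."""
    for row in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, row))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a counterexample line
    return True

imp = lambda p, q: (not p) or q

# (b) P <-> Q, Q <-> R |- P <-> R is valid; (c) P -> Q, R -> Q |- P -> R is not.
print(valid([lambda v: v["P"] == v["Q"], lambda v: v["Q"] == v["R"]],
            lambda v: v["P"] == v["R"], ["P", "Q", "R"]))
print(valid([lambda v: imp(v["P"], v["Q"]), lambda v: imp(v["R"], v["Q"])],
            lambda v: imp(v["P"], v["R"]), ["P", "Q", "R"]))
```

The short-cut method mentioned in the exercises is just an efficient hand-search for the same counterexample line this loop looks for.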

1(b) P ∨ Q, −P ⊢ Q

    P ∨ Q
    −P
    −Q
   /     \
  P       Q
  ×       ×

The tree closes, therefore the sequent is valid.

1(c) P ∨ Q, P → R, Q → S ⊢ R ∨ S

    P ∨ Q
    P → R
    Q → S
    −(R ∨ S)
    −R
    −S
   /       \
  P         Q
 / \       / \
−P  R    −Q   S
 ×  ×     ×   ×

The tree closes, therefore the sequent is valid.

1(e) P → Q ⊢ P → (P & Q)

    P → Q
    −(P → (P & Q))
    P
    −(P & Q)
   /     \
 −P       Q
  ×      / \
       −P   −Q
        ×    ×

The tree closes, therefore the sequent is valid.

1(g) P ⊢ P → Q

    P
    −(P → Q)
    P
    −Q

The tree is open, therefore the sequent is invalid.

1(j) P → (Q & R), −Q ⊢ −P

    P → (Q & R)
    −Q
    P
   /      \
 −P      Q & R
  ×        Q
           R
           ×

The tree is closed, therefore the sequent is valid.

Solutions for Chapter 7

Section 7.3

1. (a) ∀x∀y((Ax & Fy) → Exy)
   (b) ∀x∀y(Cy → (Exy → Mx)) OR ∀x∀y((Cy & Exy) → Mx)
   (c) ∀x(Ax → (Sx ↔ −Vx))
   (d) ∀x∀y((Lx & Ty) → (Exy ↔ Ax))
   (e) ∀x(Mx → −Vx)
   (f) Tmn

2. (a) Any mongoose from the Transvaal has a strange diet.
   (b) Some cobras and mongooses like to sleep in the sun.
   (c) All mongooses and aardvarks like to eat ants.
   (d) No cobra or mongoose is from the Transvaal.

Solutions for Chapter 8

Section 8.14

A

1. (a) Nr ⊢ ∃xNx

1  (1) Nr      A
1  (2) ∃xNx    1 EI

(b) Apc ⊢ ∃xApx

1  (1) Apc      A
1  (2) ∃xApx    1 EI

(c) ∀x(Ux → Nx), Ur ⊢ Nr

1    (1) ∀x(Ux → Nx)    A
2    (2) Ur             A
1    (3) Ur → Nr        1 UE
1,2  (4) Nr             2,3 MP

(d) ∀x(Ux → −Rx), Rt ⊢ −Ut

1    (1) ∀x(Ux → −Rx)    A
2    (2) Rt              A
1    (3) Ut → −Rt        1 UE
1,2  (4) −Ut             2,3 MT

(e) ∀x(Ux → Mx), ∀x(Mx → Wx), Ur ⊢ Wr

1      (1) ∀x(Ux → Mx)    A
2      (2) ∀x(Mx → Wx)    A
3      (3) Ur             A
1      (4) Ur → Mr        1 UE
2      (5) Mr → Wr        2 UE
1,2    (6) Ur → Wr        4,5 HS
1,2,3  (7) Wr             3,6 MP

2. (a) Fa & Fb ⊢ ∃xFx

1  (1) Fa & Fb    A
1  (2) Fa         1 &E
1  (3) ∃xFx       2 EI

(b) ∀x(Mx → Wx), Ma & Ra ⊢ Wa

1    (1) ∀x(Mx → Wx)    A
2    (2) Ma & Ra        A
1    (3) Ma → Wa        1 UE
2    (4) Ma             2 &E
1,2  (5) Wa             3,4 MP

(c) ∀x(Fx → Gx), ∃xFx ⊢ ∃xGx

1    (1) ∀x(Fx → Gx)    A
2    (2) ∃xFx           A
3    (3) Fa             A (EE)
1    (4) Fa → Ga        1 UE
1,3  (5) Ga             3,4 MP

1,3  (6) ∃xGx           5 EI
1,2  (7) ∃xGx           2,3–6 EE

(d) ∀x(Fx → −Gx), ∃xGx ⊢ ∃x−Fx

1    (1) ∀x(Fx → −Gx)    A
2    (2) ∃xGx            A
3    (3) Ga              A (EE)
1    (4) Fa → −Ga        1 UE
1,3  (5) −Fa             3,4 MT
1,3  (6) ∃x−Fx           5 EI
1,2  (7) ∃x−Fx           2,3–6 EE

(e) ∀x−(Fx ∨ Gx) ⊢ −∃x(Fx ∨ Gx)

1  (1) ∀x−(Fx ∨ Gx)    A
1  (2) −∃x(Fx ∨ Gx)    1 Duality

B

1. (a) ∀x(Ahx → −∃yTyx), Ahb ⊢ −Tsb

1    (1) ∀x(Ahx → −∃yTyx)    A
2    (2) Ahb                 A
1    (3) Ahb → −∃yTyb        1 UE
1,2  (4) −∃yTyb              2,3 MP
1,2  (5) ∀y−Tyb              4 Duality
1,2  (6) −Tsb                5 UE

(b) ∀x(Ux → Nx), Ur & Br ⊢ ∃x(Bx & Nx)

1    (1) ∀x(Ux → Nx)    A
2    (2) Ur & Br        A
2    (3) Ur             2 &E
2    (4) Br             2 &E
1    (5) Ur → Nr        1 UE
1,2  (6) Nr             3,5 MP
1,2  (7) Br & Nr        4,6 &I
1,2  (8) ∃x(Bx & Nx)    7 EI

(c) Apc, Dc ⊢ ∃x(Apx & Dx)

1    (1) Apc             A
2    (2) Dc              A
1,2  (3) Apc & Dc        1,2 &I
1,2  (4) ∃x(Apx & Dx)    3 EI

(d) ∀x(Ux → Nx), ∃x(Bx & Ux), ∀x(Ux → Mx) ⊢ ∃x(Mx & Bx & Nx)

1  (1) ∀x(Ux → Nx)    A
2  (2) ∃x(Bx & Ux)    A
3  (3) ∀x(Ux → Mx)    A
4  (4) Ba & Ua        A (EE)
4  (5) Ba             4 &E
4  (6) Ua             4 &E
3  (7) Ua → Ma        3 UE

3,4    (8) Ma                 6,7 MP
1      (9) Ua → Na            1 UE
1,4    (10) Na                6,9 MP
1,3,4  (11) Ma & Ba & Na      5,8,10 &I
1,3,4  (12) ∃x(Mx & Bx & Nx)  11 EI
1,2,3  (13) ∃x(Mx & Bx & Nx)  2,4–12 EE

(e) ∀x(Ux → −Ax), ∃x(Ux & Cxk), ∀x(Ux → Mx) ⊢ ∃x(Mx & −Ax)

1      (1) ∀x(Ux → −Ax)    A
2      (2) ∃x(Ux & Cxk)    A
3      (3) ∀x(Ux → Mx)     A
4      (4) Ua & Cak        A (EE)
4      (5) Ua              4 &E
1      (6) Ua → −Aa        1 UE
1,4    (7) −Aa             5,6 MP
3      (8) Ua → Ma         3 UE
3,4    (9) Ma              5,8 MP
1,3,4  (10) Ma & −Aa       7,9 &I
1,3,4  (11) ∃x(Mx & −Ax)   10 EI
1,2,3  (12) ∃x(Mx & −Ax)   2,4–11 EE

3. (a) i. ∀x(P & Fx) ⊢ P & ∀xFx

1  (1) ∀x(P & Fx)    A
1  (2) P & Fa        1 UE
1  (3) P             2 &E
1  (4) Fa            2 &E
1  (5) ∀xFx          4 UI
1  (6) P & ∀xFx      3,5 &I

ii. This could be a good exam question!

(b) i. ∃x(P ∨ Fx) ⊢ P ∨ ∃xFx

1  (1) ∃x(P ∨ Fx)    A
2  (2) P ∨ Fa        A (EE)
3  (3) P             A (vE)
3  (4) P ∨ ∃xFx      3 vI
5  (5) Fa            A (vE)
5  (6) ∃xFx          5 EI

(c) i. ∃x(P & Fx) ⊢ P & ∃xFx

1  (1) ∃x(P & Fx)    A
2  (2) P & Fa        A (EE)
2  (3) P             2 &E
2  (4) Fa            2 &E
2  (5) ∃xFx          4 EI
2  (6) P & ∃xFx      3,5 &I
1  (7) P & ∃xFx      1,2–6 EE

ii. P & ∃xFx ⊢ ∃x(P & Fx)

This is left as an exercise. This could be a good exam question!

Completion of (b) i:

5  (7) P ∨ ∃xFx    6 vI
2  (8) P ∨ ∃xFx    2,3–4,5–7 vE
1  (9) P ∨ ∃xFx    1,2–8 EE

(b) ii is left as an exercise; hint: it is basically the reverse of the above.

(d) i. ∀x(P ∨ Fx) ⊢ P ∨ ∀xFx

1  (1) ∀x(P ∨ Fx)    A
1  (2) P ∨ Fa        1 UE
3  (3) P             A (vE)
3  (4) P ∨ ∀xFx      3 vI
5  (5) Fa            A (vE)
5  (6) ∀xFx          5 UI
5  (7) P ∨ ∀xFx      6 vI
1  (8) P ∨ ∀xFx      2,3–4,5–7 vE

ii. Another possible exam question!

(e) i. ∃x(P → Fx) ⊢ P → ∃xFx

1    (1) ∃x(P → Fx)    A
2    (2) P             A (CP)
3    (3) P → Fa        A (EE)
2,3  (4) Fa            2,3 MP
2,3  (5) ∃xFx          4 EI
1,2  (6) ∃xFx          1,3–5 EE
1    (7) P → ∃xFx      2–6 CP

ii. P → ∃xFx ⊢ ∃x(P → Fx)

1    (1) P → ∃xFx           A
2    (2) −∃x(P → Fx)        A (RAA)
2    (3) ∀x−(P → Fx)        2 Duality
2    (4) −(P → Fa)          3 UE
2    (5) P & −Fa            4 Neg→
2    (6) P                  5 &E
2    (7) −Fa                5 &E
2    (8) ∀x−Fx              7 UI
2    (9) −∃xFx              8 Duality
1,2  (10) ∃xFx              1,6 MP
1,2  (11) ∃xFx & −∃xFx      9,10 &I
1    (12) ∃x(P → Fx)        2–11 RAA

(f) i. This is left as an exercise. Hint: it's an EE inside an RAA.

(g) i. ∃x(Fx ∨ Gx) ⊢ ∃xFx ∨ ∃xGx

1  (1) ∃x(Fx ∨ Gx)      A
2  (2) Fa ∨ Ga          A (EE)
3  (3) Fa               A (vE)
3  (4) ∃xFx             3 EI
3  (5) ∃xFx ∨ ∃xGx      4 vI
6  (6) Ga               A (vE)
6  (7) ∃xGx             6 EI
6  (8) ∃xFx ∨ ∃xGx      7 vI
2  (9) ∃xFx ∨ ∃xGx      2,3–5,6–8 vE
1  (10) ∃xFx ∨ ∃xGx     1,2–9 EE

ii. The converse proof is very similar; it is left as an exercise.

3. (a) Sequent 54 is a straightforward application of UI; use UE and UI within each subproof.

(b) Sequent 55 is a fairly easy vE, except that you do EE subproofs inside each vE subproof.

(d) Prove ∃x(Fx & Gx) ⊢ ∃xFx & ∃xGx. Note: this is one that you really should be able to do for the exam!

1  (1) ∃x(Fx & Gx)     A
2  (2) Fa & Ga         A (EE)
2  (3) Fa              2 &E
2  (4) Ga              2 &E
2  (5) ∃xFx            3 EI
2  (6) ∃xGx            4 EI
2  (7) ∃xFx & ∃xGx     5,6 &I
1  (8) ∃xFx & ∃xGx     1,2–7 EE

(e) ∀x(Fx → Gx) ⊢ ∀xFx → ∀xGx

1    (1) ∀x(Fx → Gx)     A
2    (2) ∀xFx            A (CP)
1    (3) Fa → Ga         1 UE
2    (4) Fa              2 UE
1,2  (5) Ga              3,4 MP
1,2  (6) ∀xGx            5 UI
1    (7) ∀xFx → ∀xGx     2–6 CP

Note: a question similar to this could easily be on the exam!

(f) ∃x(Fx → Gx) ⊢ ∀xFx → ∃xGx

1  (1) ∃x(Fx → Gx)     A
1  (2) ∃x(−Fx ∨ Gx)    1 Imp
1  (3) ∃x−Fx ∨ ∃xGx    2 Dist∃ (Sequent 56)
1  (4) −∀xFx ∨ ∃xGx    3 Duality
1  (5) ∀xFx → ∃xGx     4 Imp

C

1. Prove ∃xFx ⊣⊢ −∀x−Fx

(a) ∃xFx ⊢ −∀x−Fx

1    (1) ∃xFx         A
2    (2) ∀x−Fx        A (RAA)
3    (3) Fa           A (EE)
2    (4) −Fa          2 UE
2,3  (5) Fa & −Fa     3,4 &I
1,2  (6) Fa & −Fa     1,3–5 EE
1    (7) −∀x−Fx       2–6 RAA

(b) −∀x−Fx ⊢ ∃xFx

1    (1) −∀x−Fx               A
2    (2) −∃xFx                A (RAA)
3    (3) Fa                   A (RAA)
3    (4) ∃xFx                 3 EI
2,3  (5) ∃xFx & −∃xFx         2,4 &I
2    (6) −Fa                  3–5 RAA
2    (7) ∀x−Fx                6 UI
1,2  (8) ∀x−Fx & −∀x−Fx       1,7 &I
1    (9) ∃xFx                 2–8 RAA

2. Prove ∀x−Fx ⊣⊢ −∃xFx

(a) ∀x−Fx ⊢ −∃xFx

1    (1) ∀x−Fx        A
2    (2) ∃xFx         A (RAA)
3    (3) Fa           A (EE)
1    (4) −Fa          1 UE
1,3  (5) Fa & −Fa     3,4 &I
1,2  (6) Fa & −Fa     2,3–5 EE
1    (7) −∃xFx        2–6 RAA

(b) −∃xFx ⊢ ∀x−Fx

1    (1) −∃xFx            A
2    (2) Fa               A (RAA)
2    (3) ∃xFx             2 EI
1,2  (4) ∃xFx & −∃xFx     1,3 &I
1    (5) −Fa              2–4 RAA
1    (6) ∀x−Fx            5 UI

3. Prove ∃x−Fx ⊣⊢ −∀xFx

(a) ∃x−Fx ⊢ −∀xFx

1    (1) ∃x−Fx        A
2    (2) ∀xFx         A (RAA)
3    (3) −Fa          A (EE)
2    (4) Fa           2 UE
2,3  (5) Fa & −Fa     3,4 &I
1,2  (6) Fa & −Fa     1,3–5 EE
1    (7) −∀xFx        2–6 RAA

(b) −∀xFx ⊢ ∃x−Fx

1    (1) −∀xFx                A
2    (2) −∃x−Fx               A (RAA)
3    (3) −Fa                  A (RAA)
3    (4) ∃x−Fx                3 EI
2,3  (5) ∃x−Fx & −∃x−Fx       2,4 &I
2    (6) Fa                   3–5 RAA
2    (7) ∀xFx                 6 UI
1,2  (8) ∀xFx & −∀xFx         1,7 &I
1    (9) ∃x−Fx                2–8 RAA
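The quantifier dualities proved above (∃xFx ⊣⊢ −∀x−Fx, ∀x−Fx ⊣⊢ −∃xFx, ∃x−Fx ⊣⊢ −∀xFx) can also be sanity-checked semantically over a finite domain, since ∃ behaves like Python's `any` and ∀ like `all`. A sketch (the three-element domain is an arbitrary choice of ours):

```python
# Check the quantifier dualities on every possible extension of a
# one-place predicate F over a small finite domain.
from itertools import product

domain = [0, 1, 2]  # arbitrary three-element domain
for bits in product([True, False], repeat=len(domain)):
    F = dict(zip(domain, bits))  # one possible extension of F
    # Some x is F  iff  not every x is non-F:
    assert any(F[x] for x in domain) == (not all(not F[x] for x in domain))
    # Every x is non-F  iff  no x is F:
    assert all(not F[x] for x in domain) == (not any(F[x] for x in domain))
print("dualities hold on all", 2 ** len(domain), "extensions")
```

Of course a finite check is no substitute for the proofs, but it is a quick way to catch a mis-stated duality before trying to derive it.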

Solutions for Chapter 9

A

1. (a) p = a
   (b) a = s
   (c) Aar ∨ Apr
   (d) Apr & Apt

2. (a) Apr & ∀y(Ayr → (y = p))
   (b) −(Aar & ∀y(Ayr → (y = a))). This is equivalent to −Aar ∨ −∀y(Ayr → (y = a)), and this is equivalent to −Aar ∨ ∃y(Ayr & −(y = a)).

3. (a) Plato was not The Stagirite.
   (b) The Republic is not the Timaeus.
   (c) Plato either did or did not write Republic.

4. (a) The author of the Republic was the author of the Timaeus.
   (b) Plato was the author of the Republic and the Timaeus.

B

1. (a) Aae, a = s ⊢ Ase

1    (1) Aae      A
2    (2) a = s    A
1,2  (3) Ase      1,2 =E

(b) Aar → (a = p), −(a = p) ⊢ −Aar

1    (1) Aar → (a = p)    A
2    (2) −(a = p)         A
1,2  (3) −Aar             1,2 MT

(c) Fa & Ga, a = b ⊢ Fa & Gb

1    (1) Fa & Ga    A
2    (2) a = b      A
1,2  (3) Fa & Gb    1,2 =E

(d) Fa → Ga, a = b ⊢ Fa → Gb

1    (1) Fa → Ga    A
2    (2) a = b      A
1,2  (3) Fa → Gb    1,2 =E

(e) ∀x(Fax → Gbx), a = b ⊢ ∀x(Fbx → Gbx)

1    (1) ∀x(Fax → Gbx)    A
2    (2) a = b            A
1,2  (3) ∀x(Fbx → Gbx)    1,2 =E

(f) Faa → −(a = a) ⊢ −Faa

1  (1) Faa → −(a = a)    A
   (2) a = a             =I
1  (3) −Faa              1,2 MT

3. ∃x(Px & (x = l)) ⊢ Pl

1    (1) ∃x(Px & (x = l))    A
2    (2) Pa & (a = l)        A (EE)
2    (3) Pa                  2 &E
2    (4) a = l               2 &E
2    (5) Pl                  3,4 =E
1    (6) Pl                  1,2–5 EE

Pl ⊢ ∃x(Px & (x = l))

1  (1) Pl                   A
   (2) l = l                =I
1  (3) Pl & (l = l)         1,2 &I
1  (4) ∃x(Px & (x = l))     3 EI

4. Kp, −(p = a), ∀x∀y((Kx & Ky) → (x = y)) ⊢ −Ka

1      (1) Kp                           A
2      (2) −(p = a)                     A
3      (3) ∀x∀y((Kx & Ky) → (x = y))    A
3      (4) (Kp & Ka) → (p = a)          3 UE (×2)
2,3    (5) −(Kp & Ka)                   2,4 MT
2,3    (6) −Kp ∨ −Ka                    5 deM
1,2,3  (7) −Ka                          1,6 DS
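The "at most one person knew the Good" argument can also be checked model-theoretically: on any interpretation where Plato and Aristotle are distinct, the premisses force the conclusion. A brute-force sketch over a two-element domain (the domain and dictionary encoding are ours, purely for illustration):

```python
# Check the argument: Kp; at most one x is K; a and p are distinct;
# therefore -Ka. Enumerate every extension of K over {p, a}.
from itertools import product

domain = ["p", "a"]  # p and a are distinct individuals
for bits in product([True, False], repeat=len(domain)):
    K = dict(zip(domain, bits))
    # "At most one person knew the Good": any two knowers are identical.
    at_most_one = all((not K[x]) or (not K[y]) or x == y
                      for x in domain for y in domain)
    if K["p"] and at_most_one:  # premisses hold (distinctness is built in)
        assert not K["a"]       # conclusion: Aristotle did not know the Good
print("no countermodel on this domain")
```

This does not replace the identity-rule derivation above, but it confirms that the symbolization has no two-element countermodel.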


Bibliography

[1] Benacerraf, P., and Putnam, Hilary (eds.). Philosophy of Mathematics: Selected Readings. Englewood Cliffs, NJ: Prentice-Hall, 1964.
[2] Bergmann, M., Moor, J., and Nelson, J. The Logic Book, Third Edition. New York: McGraw-Hill, 1998.
[3] Birch, David. The King's Chessboard. Illustrations by Devis Grebu. New York: Penguin, 1988.
[4] Boolos, George, and Jeffrey, Richard. Computability and Logic, Second Edition. Cambridge: Cambridge University Press, 1980.
[5] Brown, G. Spencer. The Laws of Form. London: George Allen and Unwin, 1969.
[6] Copi, I., and Cohen, C. Introduction to Logic, Tenth Edition. Upper Saddle River, NJ: Prentice-Hall, 1998.
[7] DeVidi, David, and Solomon, Graham. "On Confusions About Bivalence and Excluded Middle". Dialogue 38(4), 1999, 785–799.
[8] Govier, T. A Practical Study of Argument, Third Edition. Belmont, CA: Wadsworth, 1992.
[9] Halmos, Paul. Naive Set Theory. New York: Van Nostrand Reinhold, 1960.
[10] Hofstadter, Douglas. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books, 1979.
[11] Hunter, Geoffrey. Metalogic. London: Macmillan, 1971.
[12] Hughes, R. I. G. "Quantum Logic." Scientific American 245(4) (October 1981), 202–213.
[13] Jacquette, Dale. Symbolic Logic. Belmont, CA: Wadsworth, 2001.
[14] Jeffrey, Richard. Formal Logic: Its Scope and Limits, Third Edition. New York: McGraw-Hill, 1991.
[15] Kamke, E. Theory of Sets. New York: Dover, 1950.
[16] LeBlanc, Jill. Thinking Clearly: A Guide to Critical Reasoning. New York: W. W. Norton, 1998.
[17] Lemmon, E. J. Beginning Logic. Indianapolis, IN: Hackett, 1978. First published 1965.
[18] Mates, Benson. Elementary Logic, Second Edition. New York: Oxford University Press, 1972.
[19] Rucker, Rudy. Infinity and the Mind: The Science and Philosophy of the Infinite. New York: Bantam Books, 1983.
[20] Russell, Bertrand. Introduction to Mathematical Philosophy. London: Allen & Unwin, 1918.
[21] Russell, Bertrand, and Whitehead, A. N. Principia Mathematica, Second Edition. Cambridge: Cambridge University Press, 1927.
[22] Russell, Bertrand. My Philosophical Development. London: Allen & Unwin, 1959.
[23] Schumm, G. F. A Teaching Companion to Lemmon's Beginning Logic. Indianapolis, IN: Hackett, 1979.
[24] Smullyan, Raymond. What is the Name of This Book? Englewood Cliffs, NJ: Prentice-Hall, 1978.
