
Philosophy Insights

http://www.humanities-ebooks.co.uk

General Editor: Mark Addis

Formal Logic

Mark Jago

‘What makes

an argument

valid?’

For advice on use of this ebook please scroll to page 2

Publication Data

© Mark Jago, 2007

The Author has asserted his right to be identified as the author of this Work in accordance with the Copyright, Designs and Patents Act 1988.

Published by Humanities-Ebooks, LLP, Tirril Hall, Tirril, Penrith CA10 2

Reading Options

* To use the navigation tools, the search facility, and other features of the toolbar,

this Ebook should be read in default view.

* To navigate through the contents use the hyperlinked ‘Bookmarks’ at the left of

the screen.

* To search, expand the search column at the right of the screen or click on the binocular symbol in the toolbar.

* For ease of reading, use <CTRL+L> to enlarge the page to full screen

* Use <Esc> to return to the full menu.

* Hyperlinks appear in Blue Underlined Text. To return from an internal hyperlink

use the ‘previous view’ button and repeat if necessary.

* For a computer generated reading use <View>Read out Loud>

Purchasing this book licenses you to read this work on-screen and to print one copy for

your own use. Copy and paste functions are disabled. No part of this publication may be

otherwise reproduced or transmitted or distributed without the prior written permission

of both the copyright owner and the publisher. Making or distributing copies of this

book constitutes copyright infringement and would be liable to prosecution. Thank you

for respecting the rights of the author.

ISBN 978-1-84760-041-7

Formal Logic

Mark Jago

Bibliographical Entry:

Jago, Mark. Formal Logic. Philosophy Insights. Tirril: Humanities-Ebooks, 2007

A Note on the Author

Mark Jago is a lecturer in the Department of Philosophy at the University of Nottingham, UK and a Junior Research Associate in the Research Group on the Philosophy

of Information at the University of Oxford. He wrote the Wittgenstein guide in the

Philosophy Insights series and has published articles on truth, belief, logic, fiction and

information.

Philosophy Insights: Formal Logic 5

Contents

Introduction 7

1 Logical Reasoning 9

1.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

1.2 Valid Arguments . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

1.3 Valid Forms of Inference . . . . . . . . . . . . . . . . . . . . . . . . 12

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2 Propositional Logic 16

2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.2 Logical Connectives . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.3 The Logical Language . . . . . . . . . . . . . . . . . . . . . . . . . 20

2.4 Construction Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.5 Truth Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.6 Valuations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.1 Logical Entailment . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

3.2 Equivalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

3.3 Equivalence Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . 36

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

4 Proof Trees 41

4.1 Proofs in Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

4.2 The Proof Tree Method . . . . . . . . . . . . . . . . . . . . . . . . . 41

4.3 Examples of Proof Trees . . . . . . . . . . . . . . . . . . . . . . . . 44

4.4 Decidability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

4.5 Valuations From Open Finished Trees . . . . . . . . . . . . . . . . . 46

4.6 Soundness and Completeness . . . . . . . . . . . . . . . . . . . . . . 48

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49


5.1 More Valid Arguments . . . . . . . . . . . . . . . . . . . . . . . . . 51

5.2 Constants, Predicates and Relations . . . . . . . . . . . . . . . . . . 52

5.3 Existence and Generality . . . . . . . . . . . . . . . . . . . . . . . . 54

5.4 Formation Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

5.5 Binary Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

5.6 Semantics for First-Order Logic . . . . . . . . . . . . . . . . . . . . 62

5.7 Satisfaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

6 Identity 70

6.1 The Puzzle of Identity . . . . . . . . . . . . . . . . . . . . . . . . . . 70

6.2 Identity in First-Order Logic . . . . . . . . . . . . . . . . . . . . . . 70

6.3 Expressing At Least, At Most and Exactly . . . . . . . . . . . . . . . 71

6.4 Definite Descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . 74

6.5 Leibniz’s Law and Second Order Logic . . . . . . . . . . . . . . . . 75

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

7.1 Rules for Quantifiers . . . . . . . . . . . . . . . . . . . . . . . . . . 78

7.2 Rules for Identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

7.3 Undecidability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

7.4 Constructing Models from Open Branches . . . . . . . . . . . . . . . 85

7.5 Soundness and Completeness . . . . . . . . . . . . . . . . . . . . . . 88

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

Appendix B. Infinity 92

Answers to Exercises 95

Introduction

Logical reasoning is vital to philosophy. Descartes for one recognized this in his Rules

for the Direction of the Mind (1628), where he writes:

RULE 4: There is need of a method for investigating the truth about things.

RULE 5: . . . we shall be observing this method exactly if we reduce complex

and obscure propositions step by step to simpler ones, and then, by retracing

our steps, try to rise from intuition of all of the simplest ones to knowledge of

all the rest.

Descartes’ aim was first to find principles whose truth he could be certain of and then

to deduce further truths from these. This raises the question, just what counts as reasoning correctly from one proposition to another? This is what we hope to understand

through studying logic. There are different views as to what studying logic should

achieve, including the following:

We should aim to discover logical truths, i.e. sentences that could not possibly be

false and which we can discover to be true a priori.

We should aim to discover valid forms of reasoning to use in our arguments.

We should aim to discover the principles of logical entailment, so that we can

ascertain the facts that are entailed by what we know to be the case.

Fortunately for us, these approaches to logic all turn out to be interchangeable, at

least in the form of logic that we will study here, known as classical logic. Strange

as it sounds at first, there is not one body of doctrine or method that can be labelled

‘logic’. There are disagreements over what principles apply to the notion of logical

entailment and over what counts as a valid argument. These disagreements constitute

the philosophy of logic, which I will not go into in this book. As a rule of thumb,

whenever someone speaks of logic, unqualified as this or that style of logic, they will

mean classical first order logic. This is certainly true in most philosophy classes (at

least, those not dealing with technical subjects such as the philosophy of logic or

mathematics). Classical logic is also adopted as the logic of choice in mathematics

and electronics, although not always in computer science. 1

1 Classical logic is focused on truth, whereas computer scientists are often focused on the kinds of tasks that computers can do and

in particular, what computers can prove. Focusing on proof rather than truth is the province of constructive logics.


The following chapters outline the basic features of classical logic. We begin in chapter 1 with a brief informal outline of some of the features of logical reasoning. In

chapters 2 through to 4, we investigate propositional logic, in which we uncover the

logical relations that hold between propositions of certain kinds. Finally, first order

logic, the big brother of propositional logic, is presented in chapters 5 to 7. A very

short introduction to set theory and the concept of infinity is provided as an appendix.

If you have never come across the notion of a set before, or find symbols such as ‘∈’

or concepts such as countable that you are unfamiliar with in the book, you might

want to begin by reading the appendix.

1. Logical Reasoning

1.1 Preliminaries

In logic, we are concerned with precise forms of reasoning and so we have to be

careful to avoid ambiguity. The English sentence, ‘visiting relatives can be boring’ is

ambiguous, for it can mean either that it can be boring to visit relatives or that the

relatives who are visiting can be boring. If we do not know what the sentence means,

we cannot tell what inferences can correctly be drawn from it. In logic, therefore, we

only deal with unambiguous sentences. We can think of the unambiguous sentences

and propositions of logic as statements, that is, sentences that state something quite

unambiguously. Grammatically, such sentences are in the indicative mood. We can

always tell whether a sentence is indicative by asking, does it make sense to say that

the sentence is true (or false)? Here are some examples.

Indicative Sentences

‘It’s raining somewhere in Scotland right now.’

‘I know that I exist.’

‘If you go outside, you should take an umbrella.’

Non-Indicative Sentences

‘Is it raining?’

‘I hereby name this ship . . . .’

‘If you go outside, take an umbrella.’

Philosophers sometimes use the term ‘proposition’ to mean an unambiguous indicative sentence, capable of being true or false, although ‘proposition’ has also been given other, more technical meanings by philosophers.1 I will use the term ‘proposition’ throughout the first half of the course (when we look at propositional logic)

1 Bertrand Russell took propositions to be abstract entities containing the very things in the world that the corresponding sentence is

about, for example. Other philosophers take propositions to be sets of possible worlds.


and I will always mean no more than an unambiguous indicative sentence (so propositions are simply one type of sentence). In the second half I will talk about sentences

rather than propositions but still mean unambiguous indicative sentences. There is no

real difference between sentences and propositions as I will use the terms; the only

difference is how they are treated by the logic we are looking at.

1.2 Valid Arguments

A valid argument is one in which the conclusion could not possibly be false whilst all

the premises are simultaneously true. Throughout this book, we will take ‘false’ to be

synonymous with ‘not true’ and so an argument is valid just in case the conclusion is

true if all of the premises are simultaneously true. By way of example:

Premise 1: If it’s raining, then I had better take an umbrella.

Premise 2: It’s raining.

Conclusion: Therefore, I had better take an umbrella.

The conclusion follows logically from the premises and so the argument is valid. We

can use propositional logic to show why this is (we will do so in chapter 2). An

example of an invalid argument is

Premise 1: All men are mortal.

Conclusion: Socrates is mortal.

The argument is invalid, even though both its premise and its conclusion are true,

because the truth of the premise does not guarantee the truth of the conclusion. The

premise guarantees something about men; but ‘Socrates’ might be the name of my

pet goldfish. To be sure, goldfish are mortal too but the premise does not guarantee

that this is so. Someone might argue that the laws of physics guarantee that goldfish

are mortal. In this case, the premise plus the laws of physics guarantee the conclusion

but the argument, as stated above, does not say anything about these laws. Someone

living on a different planet (or even in a parallel universe) in which goldfish scientists

have discovered the elixir for immortality (but withheld it from humans) could truly

assert the premise but not the conclusion. In a logically valid argument, on the other

hand, this cannot be the case. The truth of the premises must guarantee the truth of the

conclusion, even in circumstances wildly different from our own.


A valid argument is one in which it is impossible for all premises to be true but the

conclusion false

We can make the argument valid by adding another premise:

Premise 1: All men are mortal.

Premise 2: Socrates is a man.

Conclusion: Socrates is mortal.

Now this is clearly valid. It does not matter if we assert the premises in a parallel

universe in which men are immortal, for even though the conclusion would then be

false, so would premise 1. A valid argument does not have to have a true conclusion.

By way of example, the following argument is just as valid as the last:

Premise 1: All men are secretly women.

Premise 2: Socrates is a man.

Conclusion: Socrates is secretly a woman.

When a valid argument has true premises, it is called a sound argument:

A sound argument is a valid argument with true premises

By definition, a sound argument has a true conclusion. As we shall see below, whether

an argument is sound or not will not be our focus of attention. From the point of view

of studying logic, our interest is only in whether a given argument is valid. It is clearly

not the task of logic to discover whether all men are secretly women! But logic does

tell us that the conclusion in this argument follows logically from the premises, so that

if the premises were true then the conclusion would have to be true as well.

The Substitution Principle

The validity of an argument has nothing to do with what its premises or conclusion

are about. The argument

Premise 1: If it’s raining, then I had better take an umbrella.

Premise 2: It’s raining.

Conclusion: Therefore, I had better take an umbrella.

is valid. It contains the sentences ‘it’s raining’ and ‘I had better take my umbrella’.

If we replace these with ‘interest rates are low’ and ‘house prices will increase’, the

argument is still valid:


Premise 1: If interest rates are low, then house prices will increase.

Premise 2: Interest rates are low.

Conclusion: Therefore, house prices will increase.

We cannot make a valid argument invalid by uniformly substituting one sentence for

another. By saying that the substitution is uniform we mean that, if we replace one

occurrence of a sentence ‘p’ with a new sentence ‘q’, then we replace all occurrences

of ‘p’ with ‘q’. We thus have a separation between the form and the content of an

argument. The form can be given by replacing the sentences in the argument by placeholders p, q, r etc. The form of the above argument is then given by:

Premise 1: If p then q.

Premise 2: p

Conclusion: Therefore, q.

The content is given by providing a sentence to fill the place of each placeholder, such

as:

‘p’ is the sentence ‘it’s raining’.

‘q’ is the sentence ‘I had better take my umbrella’.

Valid arguments are valid in virtue of their form alone, not their content (although content affects whether an argument is sound). The substitution principle then amounts to

the fact that a valid argument, written using ‘p’s and ‘q’s, is valid whatever sentences

we give to take the place of ‘p’ and ‘q’. Of course, some choices of substitutions

can make a sound argument unsound, as in the ‘all men are secretly women’ argument above. Likewise, an unsound argument can become sound under some choice of

sentences, just so long as it is valid.
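The substitution principle can be illustrated with a short computational sketch, anticipating the truth tables of chapter 2. Here argument forms are represented, purely for illustration, as Python functions of the truth values assigned to p and q; a form is valid just in case no assignment makes every premise true while the conclusion is false.

```python
from itertools import product

def material_conditional(p, q):
    # ‘if p then q’, read as: not-(p and not-q)
    return not (p and not q)

def is_valid(premises, conclusion):
    # A form is valid iff no assignment of truth values to p and q
    # makes every premise true while the conclusion is false.
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # found a counterexample
    return True

# The form of the umbrella argument: if p then q; p; therefore q.
print(is_valid([material_conditional, lambda p, q: p],
               lambda p, q: q))  # True, whatever sentences replace p and q
```

Because the check never looks at what p and q say, only at their truth values, any uniform substitution of sentences leaves the verdict unchanged.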

1.3 Valid Forms of Inference

We need a way of analysing any given argument, of any length, to see whether it is

valid. Since there are infinitely many such arguments, we must concentrate on the

way in which any given argument is put together. We can think of an argument as a

sequence of sentences, with the premises written (or uttered) first, the conclusion written (or uttered) last and each intermediate sentence following logically from sentences

written (or uttered) previously in the argument.


To study valid arguments, therefore, we can study valid ways of adding just one

new sentence to our argument. We call these inference rules, which say: if you have

sentences that look like . . . in the argument already, then you can add a sentence that

looks like . . . to the argument. The different inference rules that we use are obtained

by filling in the ‘. . . ’ in different ways.

Modus Ponens

The argument:

Premise 1: If p then q.

Premise 2: p.

Conclusion: Therefore, q.

relies on the inference rule of modus ponens. Sentences of the form ‘if p then q’ are

conditional sentences; p is the antecedent and q is the consequent. Modus ponens tells

us that, if both a conditional sentence and the antecedent of that conditional have been

asserted, then the consequent may also be asserted without fear of contradiction.1

Inference rules are themselves schematic mini-arguments, all of which are valid

and which can be chained together to construct larger valid arguments. We usually

abbreviate the statement of an inference rule by writing its premises above a horizontal

line and its conclusion below it. Using this notation, modus ponens would be written:

If p then q

p

q
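As a sketch of how an inference rule licenses adding a sentence to an argument, here is modus ponens in Python. Representing a conditional as an ('if', antecedent, consequent) tuple is an illustrative choice only, not part of the formal apparatus developed later.

```python
def modus_ponens(argument):
    # Look for a sentence of the form ('if', antecedent, consequent)
    # whose antecedent has also been asserted in the argument;
    # the rule licenses adding the consequent.
    additions = []
    for sentence in argument:
        if isinstance(sentence, tuple) and sentence[0] == 'if':
            _, antecedent, consequent = sentence
            if antecedent in argument:
                additions.append(consequent)
    return additions

argument = [('if', "it's raining", 'I had better take an umbrella'),
            "it's raining"]
print(modus_ponens(argument))  # ['I had better take an umbrella']
```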

Modus Tollens

Closely related to modus ponens is the inference rule modus tollens, which also concerns conditional sentences ‘if p then q’. Consider an argument that begins with the

following premises:

Premise 1: If p then q.

Premise 2: p.

Premise 3: It is not the case that q.

1 Inference rules can also be stated in terms of proofs, rather than in terms of arguments. Modus ponens is then: if you have proved ‘if p then q’ and proved ‘p’, then you can add ‘q’ to the proof.


‘It is not the case that q’ sounds clumsy, so I will abbreviate it as ‘not-q’. What is

wrong with these premises? If we apply modus ponens to the first two premises, we

obtain

Conclusion: Therefore, q.

It cannot be the case that both ‘q’ and ‘not-q’ are true simultaneously and so at least

one of the premises in this argument must be false: it is a logical fact that all three

premises cannot be true together. If both ‘if p then q’ and ‘not-q’ are true, then ‘p’

cannot be true. But if ‘p’ cannot be true, ‘not-p’ must be true. This is the inference

rule of modus tollens:

If p then q

not-q

not-p
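A brute-force search over the four assignments of truth values (anticipating the truth tables of chapter 2) confirms that modus tollens admits no counterexample; the Python rendering is a sketch of that check, not part of the book's formal system.

```python
from itertools import product

def conditional(p, q):
    # ‘if p then q’ as: not-(p and not-q)
    return not (p and not q)

# Modus tollens: premises ‘if p then q’ and ‘not-q’; conclusion ‘not-p’.
# Search every assignment for one making both premises true
# while the conclusion is false.
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if conditional(p, q) and (not q) and not (not p)]
print(counterexamples)  # []: no assignment refutes modus tollens
```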

Reductio Ad Absurdum

Suppose that, during the course of an argument, we assume for the sake of argument that a sentence ‘p’ is true. We then go about reasoning as normal, using this assumption as an additional premise. Later in the argument, we draw two contradictory

conclusions, ‘q’ and ‘not-q’, say:

Assumption: p.

Interim conclusion 1: q.

Interim conclusion 2: Not-q.

This shows us that we should not have made the assumption that ‘p’ is true to begin

with. Why not? ‘q’ and ‘not-q’ cannot both be true. But these sentences follow from

the assumption that p and so ‘p’ cannot possibly be true, given what else we have

assumed or asserted in our argument. If ‘p’ cannot consistently be asserted in our argument, then ‘not-p’ can:

Conclusion: Not-p.

This form of reasoning is known as reductio ad absurdum, because we reduce the

assumption that ‘p’ is true to an absurdity (that both ‘q’ and ‘not-q’ are true). Reductio

ad absurdum plays an important role in logical reasoning. To show that something is

the case, we can begin by assuming for the sake of argument that it is not the case and

deriving a contradiction from this assumption.


1. assume that p is false

2. derive a contradiction from this assumption

3. conclude that p
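The reasoning of this section's example (assume ‘p’, derive both ‘q’ and ‘not-q’, conclude ‘not-p’) can be sketched by brute force over truth values. The premises below are chosen purely for illustration: together they yield ‘q’ and ‘not-q’ whenever ‘p’ is assumed.

```python
from itertools import product

def conditional(p, q):
    # ‘if p then q’ as: not-(p and not-q)
    return not (p and not q)

# Illustrative premises: ‘if p then q’ and ‘if p then not-q’.
# No assignment with p true satisfies both premises, so the assumption
# that p leads to absurdity and we may conclude not-p.
consistent_with_p = [(p, q) for p, q in product([True, False], repeat=2)
                     if p and conditional(p, q) and conditional(p, not q)]
print(consistent_with_p)  # []
```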

Invalid Arguments

Before moving on, we should take a look at some invalid argument forms. When an invalid argument is frequently used, it is called a fallacy of reasoning. A common

fallacy is denying the antecedent, in which one reasons as follows:

Premise 1: If p then q.

Premise 2: Not-p.

Conclusion: Therefore, not-q.

It is easy to see that the premises do not guarantee the truth of the conclusion. Take ‘p’

to be ‘Anna studies hard’ and ‘q’ to be ‘Anna passes her exam’. Anna can pass even if

she does not work hard, if she copies her answers from a good student, say, or bribes

the marker. Either of these cases gives us a counterexample to the above argument.

In a counterexample to an argument, the premises are true but the conclusion is

false. Valid arguments have no counterexamples.
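A brute-force search over truth values (anticipating the truth tables of chapter 2) finds the counterexample to denying the antecedent directly; reading p as ‘Anna studies hard’ and q as ‘Anna passes her exam’, the offending assignment is the one where Anna does not study hard but passes anyway.

```python
from itertools import product

def conditional(p, q):
    # ‘if p then q’ as: not-(p and not-q)
    return not (p and not q)

# Denying the antecedent: premises ‘if p then q’ and ‘not-p’;
# conclusion ‘not-q’. Look for premises true, conclusion false.
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if conditional(p, q) and (not p) and not (not q)]
print(counterexamples)  # [(False, True)]: p false, q true refutes the form
```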

In this chapter, we have taken an informal look at some features of logical reasoning.

In the next chapter, we formalize these ideas by developing a logical language and

defining a formal notion of logical entailment, which captures the notion of a valid

argument just discussed.

Exercises

(a) Every sound argument is valid but not every valid argument is sound.

(b) You can make a valid argument invalid by adding additional premises.

(c) You cannot make an invalid argument valid by removing premises.

2. Propositional Logic

2.1 Introduction

As the name suggests, propositional logic is concerned with logical relations between

propositions. In philosophy, the term ‘proposition’ has been given different technical

meanings but here it will be used in its everyday sense, meaning an unambiguous

indicative sentence, capable of being either true or false. Our method will be to look

at the way that propositions can be constructed from more basic propositions and

how these constructions affect the logical relations that hold between the resulting

propositions. We are not interested in the particular content of the propositions that we

discuss (for example, whether the proposition is about the weather or world politics),

we are interested only in the logical relations between arbitrary propositions and so I

will use the letters ‘p’, ‘q’ and ‘r’ (and ‘p1 ’, ‘p2 ’ and so on, if we need more) to stand

for particular but arbitrary propositions.1

Some Conventions

We need to be clear whether we are asserting a proposition or talking about that proposition. To talk about a proposition—to mention, rather than use it—we usually enclose

it in quotes:

There are 26 letters in the alphabet but 8 letters in ‘alphabet’.

There are always 8 letters in ‘post box’ but can be more or fewer letters in a post

box.

Throughout the book, we shall talk about propositions a lot. This means that we could

end up with quotation marks everywhere. To avoid this, I adopt the following two

conventions.

Convention 1 Lower-case letters from the middle of the alphabet (p, q, r) written in

sans serif font are names for particular propositions of the logical language.

Convention 2 Lower-case letters from the middle of the alphabet (p, q, r) in italic font

are placeholders for particular propositions of the logical language.

1 If you prefer, pick three English indicative sentences (‘snow is white’, ‘grass is green’ and ‘logic is fun’, for example) to read in place

of p, q and r. It doesn’t matter what sentences you pick, as long as they don’t have logical connectives such as ‘not’, ‘and’, ‘or’, ‘if’

in them.


If we think of p, q and r as names for propositions, just as ‘Anna’ and ‘Bob’ are names for

particular people, then the placeholders p, q and r are analogous to ‘somebody’ or the

pronoun ‘one’ (as in ‘doing one’s duty’).

2.2 Logical Connectives

Negation

Denying that p is the case amounts to asserting the negation of p, that is, asserting

that p is not the case, which I will abbreviate as ‘not-p’. The condition we give for the

truth of negated propositions, called their truth condition, is simple:

‘not-p’ is true if and only if ‘p’ is false

Conjunction

Suppose that someone asserts first p and then further asserts that q. She could not then

consistently deny the conjunction of these two propositions, ‘p and q’. In propositional

logic, we can always form a new proposition by connecting two propositions together

using ‘and’. A proposition of the form ‘p and q’ is called a conjunction and ‘p’ and

‘q’ are its conjuncts. The truth-condition for a conjunction is:

‘p and q’ is true if and only if ‘p’ is true and ‘q’ is true

Disjunction

As well as saying that a proposition ‘p’ is true or false, or that both ‘p’ and ‘q’ are

true, one might want to say that either ‘p’ or ‘q’ is true. The proposition ‘p or q’

is called a disjunction and both p and q are its disjuncts. Just as ‘and’ means that

both conjuncts are true, ‘or’ means that one of the disjuncts is true. We need to make

this more precise, as ‘one’ can mean either exactly one or at least one. When a shop

assistant says, ‘that’s one pound’, for example, she means that you need to pay exactly

one pound. On the other hand, if I say that I have a pound, I usually mean that I have

at least a pound to my name.

These two readings of ‘or’ are known as exclusive and inclusive disjunction. On the

exclusive reading, ‘p or q’ means that exactly one out of p, q is true. On the inclusive

reading, ‘p or q’ is true when at least one out of ‘p’, ‘q’ is true (which amounts to

saying that they are not both false). This is the reading that logicians have accepted


for ‘or’ and so, whenever you read ‘disjunction’ (unqualified as either inclusive or

exclusive disjunction), you should assume that the inclusive reading is meant. The

exclusive reading can be recaptured by defining ‘p exclusive-or q’ as ‘p or q, but not

both’.
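Since truth values are all that matter here, the two readings of ‘or’ can be compared directly; the following is a sketch in Python (the formal treatment of the connectives comes in section 2.5), tabulating both readings over the four combinations of truth values.

```python
# Inclusive ‘or’ (at least one disjunct true) versus exclusive ‘or’
# (exactly one disjunct true), on all four combinations of truth values.
rows = []
for p in (True, False):
    for q in (True, False):
        inclusive = p or q   # true when at least one disjunct is true
        exclusive = p != q   # true when exactly one disjunct is true
        rows.append((p, q, inclusive, exclusive))
        print(p, q, inclusive, exclusive)
# The two readings disagree only in the row where both disjuncts are true.
```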

Conditionals

There are a variety of ways of asserting something conditional on something else in

English, including:

‘if p then q’,

‘p only if q’,

‘p if q’,

‘p implies q’ and

‘q, on the condition that p’.

In a conditional sentence ‘if p then q’, p is the antecedent and q is the consequent. Note

that the examples above are all indicative sentences: they say what is the case, provided

that something else is the case as well. They differ from subjunctive conditionals, such

as ‘if she were coming, she would be here by now’ and counterfactual conditionals

(in which the antecedent is actually false), such as ‘if I hadn’t turned up to the lecture

today, you wouldn’t have stayed here long’. In this book, we will look at the indicative

conditional only.1

It is hard to say just what the truth conditions for the indicative conditional ‘if p

then q’ in English are but we shall approximate it as best we can. Asserting ‘if p then

q’ implies that, if you also assert that p, you thereby commit yourself to the truth of q.

We will take our logical version of ‘if p then q’ to mean that it cannot be the case that

p is true but q is false. In our quasi-logical notation:

‘If p then q’ is true if and only if ‘not-(p and not-q)’ is true

‘If p then q’ is false if and only if ‘p’ is true but ‘q’ is false

1 This is because subjunctive and counterfactual conditionals are not truth-functional: the truth-value of ‘if it had been the case that p, then it could have been the case that q’ cannot always be calculated from the truth-values of p and q only.


Because a proposition is true if and only if it is not false, these two definitions amount

to the same. As we shall see in section 2.5 below, this truth-conditional characterization of the conditional has some unintuitive consequences. We shall call the logical

connective defined in this way the material conditional and assume that the material conditional is an approximate rather than an exact translation of the English ‘if . . . then’.2
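As a quick check on this definition, the following Python sketch tabulates ‘not-(p and not-q)’ over the four combinations of truth values; only the row with a true antecedent and a false consequent comes out false, matching the second characterization above.

```python
def conditional(p, q):
    # The material conditional: ‘if p then q’ as not-(p and not-q).
    return not (p and not q)

# Tabulate all four rows; only (True, False) yields False.
table = {(p, q): conditional(p, q)
         for p in (True, False) for q in (True, False)}
print(table)
# {(True, True): True, (True, False): False,
#  (False, True): True, (False, False): True}
```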

The Difference Between ‘only if’ and ‘if’

Both ‘p if q’ and ‘p only if q’ are conditional sentences in English. Which of them

corresponds (roughly) to the material conditional? Suppose that ‘if Anna works hard,

then she will pass her exams’ is true. What can we infer if, in addition, we are told that

‘Anna will pass her exams’ is true? We cannot infer that Anna works hard, because

there are any number of other ways in which she might have contrived to pass her

exams. She might have bribed the marker, or stolen the questions beforehand and

looked up the answers on the internet. The falsity of ‘Anna works hard’, therefore,

is perfectly compatible with the truth of ‘Anna will pass her exams’ and of ‘if Anna

works hard, she will pass her exams’. As a consequence, ‘if Anna works hard, then

she will pass her exams’ does not mean ‘Anna works hard if she passes her exams’. In

fact, it means the converse conditional, ‘Anna passes her exams if she works hard’.

The correct way to re-phrase ‘if Anna works hard, then she will pass her exams’

is ‘Anna works hard only if she passes her exams’. Why is this? Consider a situation

in which Anna works hard but doesn’t pass. Then ‘Anna works hard and not-(Anna

passes her exams)’ is true and so its negation, ‘not-(Anna works hard and not-(Anna

passes her exams))’ must be false. But ‘not-(p and not-q)’ is the definition we gave

of the material conditional, ‘if p then q’ and so, on the assumption that Anna works

hard but fails, ‘if Anna works hard, then she will pass her exams’ comes out false.

Therefore, the only way in which ‘if Anna works hard, then she will pass her exams’

can be true when Anna works hard is when she also passes her exams. This is why ‘if

Anna works hard, then she will pass her exams’ means the same as ‘Anna works hard

only if she will pass her exams’.

‘p only if q’ means if p then q, whereas ‘p if q’ means if q then p

2 Logicians have experimented with different definitions of conditionals but we shan’t look at these here, as the associated logics can

become complicated. See Mares & Meyer (2001) or Read (1988).


Biconditionals

When the truth of p is conditional on the truth of q and vice versa, the biconditional

‘p if and only if q’ is true, often abbreviated to ‘p iff q’. The biconditional means just

what it says: it is the conjunction of ‘if p then q’ and ‘if q then p’. Given what we have

said about the truth conditions of the material conditional, it follows that q cannot be

false if p is true and likewise p cannot be false if q is true. If ‘p if and only if q’ is

true, therefore, p and q must either both be true or both be false. ‘p iff q’ tells us that

p and q share a truth-value.

‘p iff q’ is true if and only if p and q are both true or both false
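This characterization is easy to check mechanically: for truth values, ‘p iff q’ is just equality. The sketch below (anticipating section 2.5) confirms that the biconditional agrees with the conjunction of the two conditionals on every row.

```python
def conditional(p, q):
    # ‘if p then q’ as not-(p and not-q)
    return not (p and not q)

def iff(p, q):
    # ‘p iff q’: p and q share a truth value, so for booleans it is equality.
    return p == q

# The biconditional is the conjunction of ‘if p then q’ and ‘if q then p’.
for p in (True, False):
    for q in (True, False):
        assert iff(p, q) == (conditional(p, q) and conditional(q, p))
print('iff matches (if p then q) and (if q then p) on every row')
```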

We now need to begin formalizing these ideas. We will do this in two stages. The

formal language of propositional logic will be introduced in the next section and then

the meanings of the formal connectives will be defined precisely in section 2.5.

2.3 The Logical Language

The logical language L contains an infinite number of primitive propositions p1, p2, p3 and so on (that is, for each natural number n,

there is exactly one primitive proposition pn ). As we shall need a small number of

primitive propositions only for the majority of this book, I shall wherever possible use

p, q and r in place of p1 , p2 and p3 . To this, we add the logical connectives ‘¬’ for

‘not’, ‘∧’ for ‘and’, ‘∨’ for ‘or’, ‘→’ for the material conditional (approximating ‘if

. . . then’) and ‘↔’ for ‘if and only if’.1

The propositions of our formal language should be entirely unambiguous, so we

need to ensure that ambiguity does not creep in when we build complex propositions

using the connectives. For example,

p∧q∨r

could mean that p is true and so too is either q or r (or both); but it could also mean

that both p and q are true, or else r is true (or both). We use the parentheses ‘(’ and ‘)’

to convey each of these ways of disambiguating p ∧ q ∨ r, so that

1 ‘¬’ is a unary connective as it allows us to build a new proposition from a single proposition, whereas ‘∧’, ‘∨’, ‘→’ and ‘↔’ are

binary connectives, taking two propositions and forming a new one. Logical connectives are also known as logical constants,

Boolean connectives and Boolean constants (they are constant in the sense that they never change their truth-functional meaning;

they are named ‘Boolean’ after George Boole (1815–1864), the English mathematician who discovered that logical reasoning,

suitably formalized, behaved like algebraic equations).


p ∧ (q ∨ r)

and (p ∧ q) ∨ r

can be seen to be distinct propositions. In the next section, these ideas are made precise

by giving a formal definition of what counts as a well-formed proposition of the logical

language L.

Formation Rules

The formation rules for L state precisely what counts as a well-formed proposition

of the language. L is just a set of sentences, so these rules give us membership con-

ditions for L, as follows.

(a) If p is a primitive proposition, then p ∈ L.1

(b) If p ∈ L then (¬p) ∈ L.

(c) If p ∈ L and q ∈ L then:

i) (p ∧ q) ∈ L,

ii) (p ∨ q) ∈ L,

iii) (p → q) ∈ L and

iv) (p ↔ q) ∈ L.

(d) Nothing else is a member of L.

Condition (d) rules out ‘rogue’ elements finding their way into the language. Condition

(b) says that L is closed under ‘¬’ and condition (c) says that L is closed under ‘∧’,

‘∨’, ‘→’ and ‘↔’, that is, we have all the negations, conjunctions, disjunctions, con-

ditionals and biconditionals in the language that we could have. Using the formation

rules above, we can see that p ∧ q ∨ r is not well-formed, whereas both (p ∧ q) ∨ r and

p ∧ (q ∨ r) are. Since I will deal with well-formed propositions only from now on, you

can assume that ‘proposition’ means well-formed proposition.
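As an aside (not part of the book's formal development), the formation rules translate directly into a short recursive check. The Python sketch below is illustrative only: it assumes a nested-tuple representation of propositions, with primitives as strings and connectives as tagged tuples.

```python
# Illustrative representation (an assumption, not the book's notation):
# a primitive is a string such as 'p'; complex propositions are tagged
# tuples like ('not', p) or ('and', p, q).
BINARY = {'and', 'or', 'implies', 'iff'}

def well_formed(prop):
    """Mirror formation rules (a)-(d): primitives are in L, L is closed
    under 'not' and the four binary connectives, and nothing else is in L."""
    if isinstance(prop, str):                    # clause (a): primitives
        return True
    if not isinstance(prop, tuple):              # clause (d): nothing else
        return False
    if len(prop) == 2 and prop[0] == 'not':      # clause (b): negation
        return well_formed(prop[1])
    if len(prop) == 3 and prop[0] in BINARY:     # clause (c): binary connectives
        return well_formed(prop[1]) and well_formed(prop[2])
    return False                                 # clause (d) again

# (p ∧ q) ∨ r is well-formed; a one-argument 'and' is not.
print(well_formed(('or', ('and', 'p', 'q'), 'r')))   # True
print(well_formed(('and', 'p')))                     # False
```

The recursive shape of the function matches the recursive shape of the formation rules: each clause of the definition becomes one branch of the code.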

Propositions that are not primitives are complex propositions. Note that whereas

p, q and r are always primitive propositions, the placeholders p, q and r range over

all propositions in the language. We are entitled to make statements such as ‘let p

be the proposition p ∧ (q → ¬r)’ (in the same way that mathematicians say ‘let x be

the number 200’). p ∧ q is therefore a placeholder for any conjunction, p ∨ q for any

disjunction and so on.

1 ‘p ∈ L’ abbreviates ‘p is a member of the set L’. There is a brief introduction to sets in the appendix.


Binding Conventions

By enclosing all newly-formed propositions in parentheses we rule out structural am-

biguity (i.e. ambiguity caused by an unclear structure, as in the example p ∧ q ∨ r from

above). Much of the effort of adding parentheses is unnecessary in avoiding ambiguity.

We do not need any parentheses in

(¬(¬p))

for example, as ¬¬p is perfectly unambiguous. We may therefore remove parentheses

from the ‘official’ version as long as an ambiguity cannot arise.

Even with this relaxation of the official formation rules, a proliferation of parentheses

can make propositions hard to read. To avoid this, we can adopt conventions, whereby

parenthesis-free propositions are always read in a certain way. First of all, we will

adopt the convention that

¬p ∧ q

means (¬p) ∧ q rather than ¬(p ∧ q). Similarly, ¬p ∨ q is read as (¬p) ∨ q, ¬p → q

as (¬p) → q and ¬p ↔ q as (¬p) ↔ q. This way, we never need to write (¬p); we can

always write ¬p on its own. In other words, ¬ takes precedence over ∧, ∨, → and ↔.1

We will also adopt the conventions that ∧ takes precedence over ∨, ∨ over → and →

over ↔:

Negation ¬ highest precedence

Conjunction ∧

Disjunction ∨

Material conditional →

Biconditional ↔ lowest precedence

Finally, we write p ∧ q ∧ r to abbreviate (p ∧ q) ∧ r, p ∨ q ∨ r to abbreviate (p ∨ q) ∨ r

and so on for → and ↔. That is, we assume that connectives of the same order of

precedence bind to the left when the parentheses are not written explicitly. As the

order of the p and q in p → q is vital, I will always write the parentheses explicitly in

conditional propositions.

Examples

p ∧ q ∨ r always means (p ∧ q) ∨ r. If we want to express the other reading, we must write the parentheses explicitly: p ∧ (q ∨ r).

1 A connective taking precedence over another means that the former ‘sticks’ to propositions more strongly than the others.


p ∧ q → p ∨ q always means (p ∧ q) → (p ∨ q).

¬p ↔ q ∨ r always means (¬p) ↔ (q ∨ r).

(p ∧ q) ∨ r and p ∧ (q ∨ r) exhibit a structural difference, which is why we have to use parentheses in the second but not the first

case. This structural difference is best thought of in terms of how each proposition was

put together, according to the formation rules. The former proposition is a disjunction,

whose disjuncts are p ∧ q and r (∨ is said to be the main connective of the proposition).

It was constructed by first forming (p ∧ q) and then by forming ((p ∧ q) ∨ r) (and then

dropping the parentheses, as they are all unnecessary). p ∧ (q ∨ r) on the other hand is

a conjunction, formed by first forming the disjunction (q ∨ r) and then p ∧ (q ∨ r).

We can represent this construction history of a proposition as a construction tree,

as follows. The tree on the left shows the construction history of p ∧ q ∨ r, whereas the

tree on the right shows that of p ∧ (q ∨ r).

p   q                      q   r
 \ /                        \ /
  ∧     r               p    ∨
   \   /                 \  /
    \ /                   \/
     ∨                    ∧

By looking at the proposition’s construction tree, we get a much clearer idea of its

structure (and hence of its meaning).

Subformulas

To build p ∧ q ∨ r, we first built p ∧ q and so p ∧ q is a subformula of p ∧ q ∨ r, just as

the construction tree

p   q
 \ /
  ∧


for p ∧ q is a sub-tree of (i.e. appears within) the construction tree for p ∧ q ∨ r. The

subformulas of a proposition p are all the propositions constructed on the way to

constructing p, including p itself and all primitives that occur in p. We write ‘sub(p)’

to denote the set of all subformulas of a proposition p. For any proposition, we can

calculate sub(p) using the following rules:

The only subformula of a primitive proposition p is p itself:

sub(p) = {p} when p is a primitive

The subformulas of ¬p are just the subformulas of p plus ¬p:

sub(¬p) = sub(p) ∪ {¬p}

The subformulas of p ∧ q, p ∨ q, p → q or p ↔ q are just the subformulas of p,

plus the subformulas of q, plus the proposition itself:

sub(p ∧ q) = sub(p) ∪ sub(q) ∪ {p ∧ q}

sub(p ∨ q) = sub(p) ∪ sub(q) ∪ {p ∨ q}

sub(p → q) = sub(p) ∪ sub(q) ∪ {p → q}

sub(p ↔ q) = sub(p) ∪ sub(q) ∪ {p ↔ q}

This is a recursive definition of the set sub(p). In cases of complex propositions, it

tells us what sub(p) is in terms of less complex propositions, say q and r. We then

need to determine the subformulas of q and r and so we apply the definition again,

which will tell us the subformulas of these propositions in terms of yet less complex

ones. When we are left with finding the subformulas of primitive propositions only,

the first clause of the definition gives us our answer and we are done.
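Since the definition of sub(p) is recursive, it can be transcribed almost line-for-line into code. The Python sketch below is an illustration (not from the text), reusing an assumed nested-tuple representation of propositions with primitives as strings.

```python
# Illustrative representation: primitives are strings; complex
# propositions are ('not', p) or (conn, p, q) for the binary connectives.
def sub(prop):
    """Recursively compute the set of subformulas, mirroring the
    three clauses of the definition of sub(p) in the text."""
    if isinstance(prop, str):            # primitive: sub(p) = {p}
        return {prop}
    if prop[0] == 'not':                 # sub(not-p) = sub(p) plus not-p itself
        return sub(prop[1]) | {prop}
    # binary case: sub(p) union sub(q), plus the proposition itself
    return sub(prop[1]) | sub(prop[2]) | {prop}

# Subformulas of (p ∧ q) ∨ r: p, q, r, p ∧ q, and the whole proposition.
expr = ('or', ('and', 'p', 'q'), 'r')
print(len(sub(expr)))   # 5
```

The base clause handles primitives and each recursive clause adds the proposition itself to the union of its parts' subformulas, exactly as in the definition.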

We have finished with our formal definition of the language L and now need to look

at what the propositions in it mean. We do this by giving a meaning to each kind of

complex proposition (negation, conjunction, disjunction, conditional or biconditional)

in terms of its truth-conditions (as we did informally in section 2.2).

The truth value of a complex proposition p is determined by the ways of assigning truth values to the primitive propositions that occur in p. We shall use ‘T’

to stand for the truth-value true and ‘F’ for the value false. The five basic truth tables

that we need are:


NEGATION        CONJUNCTION         CONDITIONAL
 p | ¬p          p  q | p ∧ q        p  q | p → q
 F |  T          F  F |   F          F  F |   T
 T |  F          F  T |   F          F  T |   T
                 T  F |   F          T  F |   F
                 T  T |   T          T  T |   T

DISJUNCTION      BICONDITIONAL
 p  q | p ∨ q    p  q | p ↔ q
 F  F |   F      F  F |   T
 F  T |   T      F  T |   F
 T  F |   T      T  F |   F
 T  T |   T      T  T |   T

The way to read the truth table for ¬, for example, is: when p is false, ¬p is true

and when p is true, ¬p is false. The truth-table for ¬ thus corresponds to the truth-

condition we gave in words for negations: ¬p is true iff p is false. Many logicians (but

not all!) regard these truth tables as giving meaning to the logical connectives. We can

use truth tables:

to determine whether a proposition is a logical truth or a logical falsehood;

to determine whether a set of sentences is satisfiable (i.e. whether the sentences

can be simultaneously true);

to determine whether two propositions are logically equivalent;

to determine whether one proposition follows from another; and

to determine the validity of an argument.

We shall look at the first three of these now and the last two in the following chapter. In the truth tables above for ∧, ∨, → and ↔, the first two columns are identical:

they always take the form FF, FT, TF, TT. In the table for ¬, we used the pattern F, T.

If we had three primitive propositions, we would use FFF, FFT, FTF, FTT, TFF, TFT,

TTF, TTT. In general, in a truth table with n primitive propositions, we need 2^n rows,

beginning with n ‘F’s and ending with n ‘T’s. The rightmost column should alternate

between ‘F’ and ‘T’ every row, the column to the left of this alternates every two rows

and so on (so that the leftmost column contains 2^n/2 ‘F’s, followed by an equal number

of ‘T’s).
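This row-ordering can be generated mechanically. As an illustrative sketch (not from the text), the standard library function `itertools.product` over ‘F’ and ‘T’ produces exactly the pattern described: the rightmost column alternates fastest and the leftmost slowest.

```python
from itertools import product

def rows(n):
    """All 2**n truth-value rows for n primitives, in the standard order:
    the rightmost column alternates every row, and the leftmost column is
    2**n / 2 'F's followed by 2**n / 2 'T's."""
    return [''.join(row) for row in product('FT', repeat=n)]

print(rows(2))        # ['FF', 'FT', 'TF', 'TT']
print(len(rows(3)))   # 8, i.e. 2**3
```

`product('FT', repeat=n)` advances its rightmost position first, which is precisely the F-before-T ordering used in the tables above.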


We build a truth table for a complex proposition p by first building a truth table for

each of its subformulas, starting from the most simple and building up to the most

complex, the proposition p itself. This would be a laborious task if we had to draw

each table separately but fortunately we can combine all of this information into one

table. To build an all-in-one truth table for any proposition p, the strategy is as follows.

1. Across the top of the left-hand side of the table, list each primitive proposition that

occurs in p.

2. Beneath this, fill in each combination of ‘F’s and ‘T’s, beginning with an ‘F’ be-

neath each primitive proposition and ending each column with a ‘T’.

3. Write out the proposition p across the top of the right-hand side of the table. Leave

some space between each symbol.

4. Starting with the smallest subformulas of p (i.e. those nearest the top of p’s con-

struction tree), fill in the column under the main connective of those subformulas

with ‘F’s and ‘T’s according to the truth table for the connective in question.

5. Repeat the previous step until there is a column of ‘F’s and ‘T’s under each con-

nective. Now highlight the column under p’s main connective, as this is the infor-

mation that we are looking for.

p  q | p ∧ q   ∨   (¬p  ∨  ¬q)
F  F |   F     T     T   T   T
F  T |   F     T     T   T   F
T  F |   F     T     F   T   T
T  T |   T     T     F   F   F

The column under the main connective (here, the first ‘∨’) gives the truth value of the whole proposition on each row; the other columns record the values of its subformulas. When drawing a truth table by hand, it is best to highlight the column under the main connective by circling it after drawing the table.

Tautologies and Logical Falsehoods

Looking at the highlighted column in the truth table for p ∧ q ∨ (¬p ∨ ¬q), we can see

that each row has a ‘T’ in it. This means that p ∧ q ∨ (¬p ∨ ¬q) is true regardless of

how truth values are assigned to p and q. It is therefore a logical truth or tautology.


Tautologies have only ‘T’s in the main column of their truth table

It is easy to see why p ∧ q ∨(¬p ∨¬q) is a tautology: it is true if both p and q are true, or

else if at least one of them is false. Clearly, one of these cases has to be the case.1 What

happens if we consider the proposition (p ∧ q) ∧ (¬p ∨ ¬q) instead of p ∧ q ∨ (¬p ∨ ¬q)

(i.e. we change it from a disjunction to a conjunction)? The truth table becomes:

p  q | (p ∧ q)   ∧   (¬p  ∨  ¬q)
F  F |    F      F     T   T   T
F  T |    F      F     T   T   F
T  F |    F      F     F   T   T
T  T |    T      F     F   F   F

On every row, the entire proposition comes out false. We say that the proposition is unsatisfiable,

since none of the available assignments satisfies it (i.e. makes it true). Looking at the

columns under p ∧ q and ¬p ∨ ¬q, we see that rows that make the former true make

the latter false and vice versa. Each conjunct contradicts the other. A proposition does

not have to be a conjunction of two incompatible conjuncts to be unsatisfiable. It is

unsatisfiable if and only if there is no possible way of assigning truth values to its

primitives that makes it true.

Unsatisfiable propositions have only ‘F’s in the main column of their truth table

It is easy to check that the negation of a tautology is unsatisfiable and similarly the

negation of an unsatisfiable proposition is tautologous. But of course this does not

mean that any proposition is either a tautology or unsatisfiable. Satisfiable propositions

are those that are made true by at least one line of their truth table.

Satisfiable propositions have at least one ‘T’ in the main column of their truth table
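These three cases can be checked mechanically by evaluating a proposition under every assignment of truth values. In the illustrative Python sketch below (not from the text), a proposition is represented as a function from a valuation (a dictionary from primitive names to booleans) to a truth value; the representation and names are assumptions for the example.

```python
from itertools import product

def classify(prop, prims):
    """Return 'tautology', 'unsatisfiable' or 'satisfiable' by evaluating
    prop under every one of the 2**n valuations of the primitives prims."""
    values = [prop(dict(zip(prims, vs)))
              for vs in product([False, True], repeat=len(prims))]
    if all(values):
        return 'tautology'       # only 'T's in the main column
    if not any(values):
        return 'unsatisfiable'   # only 'F's in the main column
    return 'satisfiable'         # at least one 'T'

# p ∧ q ∨ (¬p ∨ ¬q) is a tautology; (p ∧ q) ∧ (¬p ∨ ¬q) is unsatisfiable.
taut = lambda v: (v['p'] and v['q']) or (not v['p'] or not v['q'])
unsat = lambda v: (v['p'] and v['q']) and (not v['p'] or not v['q'])
print(classify(taut, ['p', 'q']))    # tautology
print(classify(unsat, ['p', 'q']))   # unsatisfiable
```

The `all`/`any` tests correspond directly to the three slogans above about the main column of the truth table.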

2.6 Valuations

Each way of assigning truth values to primitive propositions (each row of a truth table) is called a valuation.

A valuation is an assignment of truth values to primitive propositions.

1 In fact, exactly one has to be the case and so we would still have a tautology if we replaced the main disjunction by an exclusive

disjunction.


We can think of a valuation as a function, a ‘black box’ that takes a primitive propo-

sition as its input and returns a truth-value as its output. ‘V’ (a curly ‘V’), ‘V1’, ‘V2’

and so on denote particular valuations and ‘V (p) = T’ means that V assigns the value

true to the primitive proposition p.2 Each row of a truth table gives us a valuation for

the primitives that appear in the table. If Vn corresponds to row n of a truth table, then

V1 (p) = T if and only if a ‘T’ appears under p on row 1 of the left-hand side of the

truth table. If we are considering n primitive propositions, there will be 2^n valuations

available.

Satisfaction

Suppose V1 corresponds to row 1 of the truth table for p. If there is a T under p’s main

connective in row 1, then V1 satisfies p, that is, the way that V1 assigns truth values

to the primitives in p is such that p comes out true. We abbreviate ‘V satisfies p’

as ‘|=V p’. Note that the symbol ‘|=V ’ is not part of the language L of propositional

logic and so ‘|=V p ∧ q’, for example, is not a sentence of L. Rather, ‘|=V ’ is part of the

metalanguage that we use to talk about L. Our metalanguage is English, augmented

by special symbols such as ‘|=V ’.

We can give a formal definition of ‘|=V ’, for any valuation V, corresponding to the

truth tables for the logical connectives. It is a recursive definition, due to the Polish

logician Alfred Tarski (1902–1983) and proceeds as follows. We have a base clause

for primitive propositions:

|=V p iff V (p) = T

We then have clauses for handling negations, conjunctions, disjunctions, conditionals

and biconditionals, as follows:

|=V ¬p iff ⊭V p

|=V p ∧ q iff |=V p and |=V q

|=V p ∨ q iff |=V p or |=V q

|=V p → q iff ⊭V p or |=V q

|=V p ↔ q iff |=V p and |=V q, or else

⊭V p and ⊭V q

(‘⊭V p’ means that V does not satisfy p). We can re-phrase our earlier definitions as:

Tautologies are satisfied by every valuation, satisfiable propositions by at least

one valuation and unsatisfiable propositions by none

2 Note that this only makes sense when p is a primitive. V (p ∧ q) is not a well-formed expression of our meta-language (the language

in which we talk about the logic).


When |=V p for every valuation V, I will drop the ‘V ’ subscript and write ‘|= p’ as

an abbreviation of ‘p is a tautology’.
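Tarski's clauses can be transcribed directly into a recursive evaluator. The sketch below is illustrative (not from the text), assuming a nested-tuple representation of propositions with primitives as strings, and a valuation V given as a dictionary from primitive names to True/False.

```python
def satisfies(V, prop):
    """|=V p, following the recursive Tarski-style clauses in the text."""
    if isinstance(prop, str):                 # base clause: V(p) = T
        return V[prop]
    op = prop[0]
    if op == 'not':                           # |=V not-p iff not |=V p
        return not satisfies(V, prop[1])
    left, right = satisfies(V, prop[1]), satisfies(V, prop[2])
    if op == 'and':                           # both conjuncts satisfied
        return left and right
    if op == 'or':                            # at least one disjunct satisfied
        return left or right
    if op == 'implies':                       # not-p, or q
        return (not left) or right
    if op == 'iff':                           # same truth value
        return left == right
    raise ValueError('unknown connective: %r' % (op,))

V = {'p': True, 'q': False}
print(satisfies(V, ('implies', 'q', 'p')))        # True
print(satisfies(V, ('iff', 'p', ('not', 'q'))))   # True
```

The base clause looks the primitive up in V; each recursive clause is a direct transcription of the corresponding ‘iff’ clause above.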

Testing for Satisfiability

A set of propositions is satisfiable if and only if there is at least one valuation which

satisfies every proposition in the set. I will use the Greek capital letter ‘Γ’ to denote

an arbitrary set of propositions and will write ‘|=V Γ’ to abbreviate ‘V satisfies (every

sentence in) Γ’:

|=V Γ if and only if |=V p for every proposition p ∈ Γ

We can test for the satisfiability of a set of propositions Γ using a combined truth table

for each proposition in Γ (so long as Γ is finite). When we do this, we have to list

every primitive proposition that appears in any proposition in Γ on the left-hand side.

If a row of the table places a ‘T’ in each of the main columns on the right-hand side,

then Γ is satisfiable.

Example Show that the set Γ = {p ∧ q ∨ r, ¬r, r ∨ q} is satisfiable. We construct the

following combined truth table:

p  q  r | p ∧ q ∨ r | ¬r | r ∨ q
F  F  F |     F     |  T |   F
F  F  T |     T     |  F |   T
F  T  F |     F     |  T |   T
F  T  T |     T     |  F |   T
T  F  F |     F     |  T |   F
T  F  T |     T     |  F |   T
T  T  F |     T     |  T |   T   ← all three satisfied
T  T  T |     T     |  F |   T

The highlighted row gives a valuation V that makes each proposition in Γ true, i.e. the

valuation under which

V (p) = T V (q) = T V (r) = F

Note that this V also satisfies the proposition (p ∧ q ∨ r) ∧ (¬r ∧ (r ∨ q)), the conjunc-

tion of all the propositions in Γ. This holds in general: a finite set of propositions is

satisfiable if and only if the conjunction of all members of the set is satisfiable.1

1 This does not hold for infinite sets because we are not allowed to form infinitely long propositions in our formal language L .
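The combined-truth-table test amounts to searching every valuation for one that satisfies all members of Γ. The illustrative sketch below (not from the text) uses an assumed nested-tuple representation of propositions and handles only ¬, ∧ and ∨, which suffices for the example above.

```python
from itertools import product

def holds(V, prop):
    """Evaluate a nested-tuple proposition under the valuation V (a dict).
    Only 'not', 'and' and 'or' are handled in this small sketch."""
    if isinstance(prop, str):
        return V[prop]
    if prop[0] == 'not':
        return not holds(V, prop[1])
    a, b = holds(V, prop[1]), holds(V, prop[2])
    return a and b if prop[0] == 'and' else a or b

def satisfiable_set(gamma, prims):
    """Search every valuation for one satisfying every member of gamma;
    return the first such valuation, or None if the set is unsatisfiable."""
    for vs in product([False, True], repeat=len(prims)):
        V = dict(zip(prims, vs))
        if all(holds(V, p) for p in gamma):
            return V
    return None

# Γ = {p ∧ q ∨ r, ¬r, r ∨ q} from the example above.
gamma = [('or', ('and', 'p', 'q'), 'r'), ('not', 'r'), ('or', 'r', 'q')]
print(satisfiable_set(gamma, ['p', 'q', 'r']))
# {'p': True, 'q': True, 'r': False} -- the row marked in the table
```

The search visits the 2^n valuations in the same order as the rows of the truth table, so it reports the first satisfying row it meets.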


Exercises

Question 1 True or false? If true, explain why. If false, give a counterexample (in

English).

(a) A valuation assigns a truth value to each proposition in the logical language L.

(b) ‘|=V Γ’ means that |=V p for every p ∈ Γ.

(c) ‘p → ¬p’ is unsatisfiable.

(d) If a proposition is not a tautology, then its negation is.

(e) If a proposition is unsatisfiable, then its negation is a logical truth.

(f) If a set of sentences is satisfiable, then each of its members must be.

(g) If each member of a set is satisfiable, then the set is too.

Question 2 Using truth tables, determine which of the following are tautologies. For

any that are not, give a valuation which does not satisfy the sentence.

(a) (p → ¬q) → ¬(p → q)

(b) (p → ¬q) → ¬p

(c) (p → q) → ¬(q → p)

(d) ¬(q → p) → (p → q)

(e) (p → (q → r)) → (p ∧ q → r)

(f) (p ∧ q → r) → (p → (q → r))

(g) (p ∨ q → r) → (p → r)

(h) (p → r) → (p ∨ q → r)

(i) (p → q) ∨ (q → p)

Question 3

(a) Translate the following statements into the propositional language.

p: Either we will both work hard, or else you and I will both fail.

q: You work hard, and either I work hard or else we’ll fail.


(b) Using p and q from part (a), determine whether p → q is a tautology. If not, give

a valuation which does not satisfy it.

(c) Is q → p is a tautology? If not, give a valuation which does not satisfy it.

3. Entailment and Equivalence

3.1 Logical Entailment

An argument is valid if and only if the conclusion could not be false whilst all the

premises are true. More precisely, an argument in propositional logic is valid if and

only if no valuation that satisfies the set of premises fails to satisfy the conclusion.

If a valuation satisfies the premises of a valid argument then it must satisfy the con-

clusion as well. When an argument is valid, we say that the conclusion is a logical

consequence of the premises and that the premises entail the conclusion. The logical

consequences of a set of propositions are then all the propositions that are satisfied by

every valuation that satisfies the set of premises.

To test whether a conclusion p is entailed by a finite set of premises Γ, we use a truth

table to check every possible valuation (given the primitive propositions that occur in

Γ and p).

Example Above, we considered the set {p ∧ q ∨ r, ¬r, r ∨ q}, which we tested and found

to be satisfiable. Does the subset {p ∧ q ∨ r, ¬r} entail r ∨ q? We need to check every

row of the truth table that satisfies both premises:

p  q  r | p ∧ q ∨ r | ¬r | r ∨ q
F  F  F |     F     |  T |   F
F  F  T |     T     |  F |   T
F  T  F |     F     |  T |   T
F  T  T |     T     |  F |   T
T  F  F |     F     |  T |   F
T  F  T |     T     |  F |   T
T  T  F |     T     |  T |   T   ← both premises true here
T  T  T |     T     |  F |   T

There is only one row in which both p ∧ q ∨ r and ¬r have a T in their column and this

row also has a T in the column for r ∨ q, so the set {p ∧ q ∨ r, ¬r} entails r ∨ q. Again, it

is cumbersome to have to write such statements out in full and so I will write ‘Γ |= p’

to abbreviate ‘Γ entails p’ (note that the ‘|=’ has no ‘V ’ subscript, as our formulation

of entailment makes reference to every valuation):

Γ |= p if and only if every valuation that satisfies Γ also satisfies p


To keep the notation as simple as possible, I will write ‘p, q |= r’ rather than using the

proper set-theoretic notation, ‘{p, q} |= r’, and ‘Γ, p |= q’ instead of ‘Γ ∪ {p} |= q’.1

One possible source of confusion here is the symbolism ‘|= p’. Does it mean that ev-

ery valuation satisfies p, or that p is a logical consequence of an empty set of premises,

i.e. ∅ |= p? In fact, it means both, since being a tautology and being a logical con-

sequence of the empty set amount to the same thing. This is because the empty set is

trivially satisfied by any valuation V . Given any valuation V whatsoever, there can be

no proposition in ∅ that is not satisfied by V , since there are no propositions in ∅!

If every valuation satisfies p, then every valuation that satisfies ∅ also satisfies p—

trivially!—and so ∅ |= p, which I’ll abbreviate to ‘|= p’. This reveals a further fact

about entailment. If p is a tautology, then it is entailed by every set of premises Γ, not

just ∅. Suppose it were not the case: then there must be some valuation that satisfies

Γ but does not satisfy p. This cannot be the case, since every valuation whatsoever

satisfies p.

If p is a tautology, then Γ |= p for every set of propositions Γ

If Γ does not entail p (abbreviated ‘Γ ⊭ p’), then there must be at least one valuation

which satisfies Γ but does not satisfy p. Such a valuation is a counterexample to the

purported entailment.

A counterexample to ‘Γ |= p’ is a valuation V such that |=V Γ but ⊭V p

If V is a counterexample to ‘Γ |= p’, then |=V Γ ∪ {¬p}. The converse also holds: any valuation that satisfies Γ ∪ {¬p}

is a counterexample to ‘Γ |= p’. If there are no such valuations, so that Γ ∪ {¬p} is

unsatisfiable, then there can be no counterexamples to ‘Γ |= p’:

Γ |= p if and only if the set Γ ∪ {¬p} is unsatisfiable

In fact, some logicians define ‘|=’ in terms of unsatisfiable sets. If a set Γ of sentences

is unsatisfiable then, for any set ∆ of sentences whatsoever, Γ ∪ ∆ must also be unsatis-

fiable. In particular, if Γ is unsatisfiable then Γ∪{¬p} must be too, for any proposition

p. It follows that Γ |= p. This shows that any proposition whatsoever is entailed by an

unsatisfiable set.


Γ |= p whenever Γ is unsatisfiable

This is an interesting quirk of our logical notion of entailment. As a special case, p |= q

whenever p is an unsatisfiable proposition. The intuitive reason for this is that, since

p cannot possibly be true, q cannot possibly be less true than p.
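This reduction of entailment to unsatisfiability can be put to work directly: to test Γ |= p, search for a valuation satisfying Γ ∪ {¬p}. The Python sketch below is illustrative (not from the text), with propositions as nested tuples (primitives as strings) and valuations as dictionaries.

```python
from itertools import product

def holds(V, prop):
    """Evaluate a nested-tuple proposition under the valuation V (a dict)."""
    if isinstance(prop, str):
        return V[prop]
    op = prop[0]
    if op == 'not':
        return not holds(V, prop[1])
    a, b = holds(V, prop[1]), holds(V, prop[2])
    return {'and': a and b, 'or': a or b,
            'implies': (not a) or b, 'iff': a == b}[op]

def entails(gamma, p, prims):
    """Gamma |= p iff Gamma together with not-p is unsatisfiable."""
    augmented = list(gamma) + [('not', p)]
    for vs in product([False, True], repeat=len(prims)):
        V = dict(zip(prims, vs))
        if all(holds(V, q) for q in augmented):
            return False    # this V is a counterexample valuation
    return True

# {p ∧ q ∨ r, ¬r} |= r ∨ q, as in the truth-table example above.
gamma = [('or', ('and', 'p', 'q'), 'r'), ('not', 'r')]
print(entails(gamma, ('or', 'r', 'q'), ['p', 'q', 'r']))   # True
```

Note that `entails([], p, …)` tests whether p is a tautology, matching the observation that ∅ |= p and ‘|= p’ amount to the same thing.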

Entailment and the Material Conditional

You might have noticed that the truth table we gave for → sounds a lot like the con-

ditions we laid down for p logically entailing q. If a valuation satisfies p → q, then it

cannot satisfy p but not q. p logically entails q, on the other hand, if every valuation

that satisfies p also satisfies q. We also said that the closest English translation of ‘→’

is ‘if . . . then’ and we use an ‘if . . . then’ clause to define entailment in our logic.

One difference between p → q and ‘p |= q’ is that the former is a proposition of

the logical language L, whereas the latter belongs to the metalanguage (it abbreviates

‘p entails q’). The main difference, however, is that ‘p |= q’ states a logical relation

between p and q, whereas p → q can be satisfiable even if there is no logical relation

between p and q at all. If p → q is a tautology, on the other hand, then there must be

a logical relation between p and q, for p → q is a tautology whenever p |= q. To see

why this is so, assume that p |= q and take any valuation V. If it satisfies p then it must

satisfy q too, hence it also satisfies p → q. If it does not satisfy p, then p → q is satisfied

automatically (a conditional with a false antecedent is true). The converse also holds: if p → q is a

tautology then p |= q (for similar reasoning to that just given). We can summarize this

result as:

p |= q if and only if |= p → q

We can see from the truth table for → that the material conditional inherits the slightly

counter-intuitive features of entailment discussed above: p → q is a tautology when-

ever p is unsatisfiable or q is a tautology because in both cases, q cannot possibly be

less true than p.

|= p → q whenever |= ¬p or |= q

3.2 Equivalence

Two propositions are equivalent when it is impossible for a valuation to satisfy one

but not the other. We abbreviate ‘p is equivalent to q’ as ‘p ≡ q’. Be careful not to


confuse ‘≡’ with the symbol ‘=’ for identity: ‘p = q’ means that p is the very same

proposition as q but propositions can be equivalent without being the same. p ∧ q and

q ∧ p are equivalent, as both are true if and only if both of their conjuncts are true but

are not identical propositions.

We can test whether two propositions p and q are equivalent by comparing their

truth tables. If the column under p’s main connective matches the column under q’s

main connective, then p and q are logically equivalent; otherwise they are not equiva-

lent.

Equivalent propositions have identical main columns in their truth tables
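This truth-table comparison is easy to mechanize. In the illustrative sketch below (not from the text), propositions are represented as functions from valuations to truth values; the names and representation are assumptions for the example.

```python
from itertools import product

def equivalent(f, g, prims):
    """p ≡ q iff no valuation satisfies one but not the other, i.e. their
    truth-table columns agree on every row."""
    for vs in product([False, True], repeat=len(prims)):
        V = dict(zip(prims, vs))
        if f(V) != g(V):
            return False    # a valuation telling the two apart
    return True

# p ∧ q is equivalent to q ∧ p (commutativity), but not to p ∨ q.
conj = lambda V: V['p'] and V['q']
flipped = lambda V: V['q'] and V['p']
disj = lambda V: V['p'] or V['q']
print(equivalent(conj, flipped, ['p', 'q']))   # True
print(equivalent(conj, disj, ['p', 'q']))      # False
```

Comparing outputs row by row is exactly the procedure of matching the two main columns described above.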

Just as with the ‘|=’ symbol, ‘≡’ is not part of the logical language L. It abbreviates

‘is equivalent to’, so is part of the metalanguage (i.e. English, augmented by symbols

such as ‘|=’ and ‘≡’) that we use to talk about L. ≡ is a relation that holds between

any two propositions, which has the following properties:1

Reflexivity Every proposition is equivalent to itself.

Symmetry If p is equivalent to q, then q is equivalent to p.

Transitivity If p and q are equivalent and q and r are equivalent, then p and r are

equivalent.

There are a number of important cases in which propositions of a certain form are

always equivalent. As mentioned above,

p ∧ q ≡ q ∧ p

This is called the commutativity law for ∧. A quick glance at the truth tables shows

that ∨ is commutative as well:

p ∨ q ≡ q ∨ p

The associativity law says that

p ∧ (q ∧ r) ≡ (p ∧ q) ∧ r

and that

p ∨ (q ∨ r) ≡ (p ∨ q) ∨ r

Finally, the distributivity laws concern how ∧ and ∨ interact. ∧ distributes over ∨:

p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r)


This equivalence allows us to transform a conjunction (with a disjunction as a conjunct) into an equivalent disjunction (whose disjuncts are conjunctions). A quick truth table

check shows why this can be done: both propositions say that p is true and at least one

of q and r is true. Similarly, ∨ distributes over ∧:

p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r)

This equivalence allows us to transform a disjunction (with a conjunction as a disjunct)

into an equivalent conjunction (whose conjuncts are disjunctions). Both propositions

say that either p is true or both q and r are true.1 To summarize:

Commutativity:   p ∧ q ≡ q ∧ p
                 p ∨ q ≡ q ∨ p
Associativity:   p ∧ (q ∧ r) ≡ (p ∧ q) ∧ r
                 p ∨ (q ∨ r) ≡ (p ∨ q) ∨ r
Distributivity:  p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r)
                 p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r)

Equivalence and Biconditionals

Just as entailment can be captured within propositional logic using →, equivalence

can be captured using ↔. As before, we need to be clear that p ↔ q does not mean

that p and q are equivalent. ‘I am currently typing at my desk’ and ‘it is not raining’

are, as it happens, currently both true and so their biconditional is also true. The two

sentences are clearly not equivalent, for it could easily have been raining right now.

If p and q are equivalent, then p ↔ q is a tautology, for if p is true then q must be

too but if p is false then q is as well. Either kind of valuation satisfies p ↔ q and so

|= p ↔ q. The converse conditional holds as well: if p ↔ q is a tautology, then p and

q are equivalent, for then p and q must have the same truth value under every possible

valuation.

p ≡ q if and only if |= p ↔ q

Schemes involving the placeholders p, q and r which always result in equivalent propositions when these placeholders are uniformly replaced by propositions are called equivalence schemes.

1 These laws are used in algebra and set theory as well. In algebra, both addition + and multiplication × are commutative and

associative, as are union ∪ and intersection ∩ in set theory.


The commutativity, associativity and distributivity schemes above are examples of equivalence schemes.

Double Negation

Double negations always cancel out to give us an equivalent proposition:

¬¬p ≡ p

Glancing at the truth table, we can see that p is true iff ¬p is false and ¬p is false iff

¬¬p is true. In fact, it is easy to check that even-length sequences of ¬s always cancel

to give us an equivalent proposition:

p ≡ ¬¬p ≡ ¬¬¬¬p ≡ ¬¬¬¬¬¬p ≡ · · ·

The De Morgan Laws

The De Morgan laws state the logical relationship between conjunction and disjunc-

tion:

De Morgan for ∧: ¬(p ∧ q) ≡ ¬p ∨ ¬q

De Morgan for ∨: ¬(p ∨ q) ≡ ¬p ∧ ¬q

An equivalent formulation of the De Morgan laws is:

p ∧ q ≡ ¬(¬p ∨ ¬q)

p ∨ q ≡ ¬(¬p ∧ ¬q)

To see that these two equivalences are really the same as the former pair, note that if

p ≡ q then ¬p ≡ ¬q. Using this fact together with De Morgan ∧, we get ¬¬(p ∧ q) ≡

¬(¬p ∨ ¬q) and then, by eliminating the double negation, we get p ∧ q ≡ ¬(¬p ∨ ¬q).

This equivalence allows us to transform any proposition into an equivalent proposition

which does not contain any occurrence of ∧. To do this, we replace every conjunction

of the form p ∧ q in the original proposition with ¬(¬p ∨ ¬q). We repeat the process until

no conjunction remains. Every proposition that we obtain in this process is equivalent

to the one we started with.

For example, (p ∨ q) ∧ r is of the form p ∧ q, where p is the disjunction p ∨ q and q is the proposition r.

Copying this information into the equivalence scheme p ∧ q ≡ ¬(¬p ∨ ¬q) gives us ¬(¬(p ∨ q) ∨ ¬r).


In a similar way, we can obtain p ∨ q ≡ ¬(¬p ∧ ¬q) from De Morgan ∨. This equiv-

alence gives us a way to eliminate disjunctions from any proposition, by repeatedly

replacing any disjunction p ∨ q with ¬(¬p ∧ ¬q).
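This replacement procedure is a simple recursive rewrite. The sketch below (illustrative, not from the text) eliminates every ∨ from a nested-tuple proposition using the equivalence p ∨ q ≡ ¬(¬p ∧ ¬q); primitives are strings and connectives are tagged tuples, an assumed representation.

```python
def eliminate_or(prop):
    """Return an equivalent proposition containing no 'or', by repeatedly
    applying p ∨ q ≡ ¬(¬p ∧ ¬q) from the leaves upward."""
    if isinstance(prop, str):
        return prop
    if prop[0] == 'not':
        return ('not', eliminate_or(prop[1]))
    left, right = eliminate_or(prop[1]), eliminate_or(prop[2])
    if prop[0] == 'or':
        # p ∨ q becomes ¬(¬p ∧ ¬q)
        return ('not', ('and', ('not', left), ('not', right)))
    return (prop[0], left, right)   # other connectives pass through

print(eliminate_or(('or', 'p', ('not', 'q'))))
# ('not', ('and', ('not', 'p'), ('not', ('not', 'q'))))
```

Because each rewrite step replaces a proposition with an equivalent one, the final ∨-free proposition is equivalent to the original, as the text observes.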

Conditionals

When I introduced the logical connectives in section 2.2, I said that ‘if p then q’ means

that q cannot be false whilst p is true. This gives us the following equivalence:

p → q ≡ ¬(p ∧ ¬q)

This is not so surprising. But the De Morgan laws show us that ¬(p ∧ ¬q) ≡ ¬p ∨ q

and hence

p → q ≡ ¬p ∨ q

This is surprising, because ¬p ∨ q doesn’t appear to be conditional at all. We can gloss

p → q as ‘q is at least as true as p’ and there are three ways in which q is at least as true

as p: if p is false and q is true, or if p and q are both false, or if they are both true. If

p is false or q is true, then one of these cases obtains and so ‘q is at least as true as p’

is true iff ¬p ∨ q is true. Some logicians take this as evidence that p → q is not a good

translation of the English indicative conditional, ‘if p then q’. Whatever one’s view on

this, it is a fact that p → q ≡ ¬p ∨ q.

The converse of a conditional p → q is q → p and the contraposition of the former

is ¬q → ¬p. Conditionals are not equivalent to their converses, since ¬p ∨ q is clearly

not equivalent to ¬q ∨ p, but each conditional is equivalent to its contraposition.

p → q is equivalent to its contraposition, ¬q → ¬p

To see why, note that ¬q → ¬p is equivalent to ¬¬q ∨ ¬p and, by the equivalence of ¬¬q and

q and the fact that ∨ is commutative, this is equivalent to ¬p ∨ q. In symbols:

p → q ≡ ¬p ∨ q ≡ q ∨ ¬p ≡ ¬¬q ∨ ¬p ≡ ¬q → ¬p
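Since every sentence here is truth-functional, the whole chain of equivalences can be checked by brute force over the four valuations of p and q. This is only an illustrative sketch; the function name implies is a choice made here:

```python
from itertools import product

def implies(p, q):
    # the material conditional: false only when p is true and q is false
    return (not p) or q

for p, q in product([True, False], repeat=2):
    assert implies(p, q) == ((not p) or q)         # p → q ≡ ¬p ∨ q
    assert implies(p, q) == implies(not q, not p)  # p → q ≡ ¬q → ¬p
# the converse is not equivalent: it disagrees when p is true and q is false
assert implies(True, False) != implies(False, True)
print('contraposition checked on all four valuations')
```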

Biconditionals

The biconditional p ↔ q asserts a two-way material conditional: q is conditional on p

and vice versa. We can then analyze a biconditional as the conjunction of two material

conditionals:

p ↔ q ≡ (p → q) ∧ (q → p)


Given the gloss on → above, p ↔ q should be glossed as ‘p is at least as true as q and q is at least as true as p’, which

implies that p and q must have the same truth value. We have already seen that this is

the correct analysis of p ↔ q.
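The same brute-force check over the four valuations works for the biconditional; a minimal sketch:

```python
from itertools import product

for p, q in product([True, False], repeat=2):
    both_ways = ((not p) or q) and ((not q) or p)  # (p → q) ∧ (q → p)
    assert both_ways == (p == q)  # true exactly when p and q match
print('p ↔ q agrees with (p → q) ∧ (q → p) on every valuation')
```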

Exercises

Question 1 True or false? If true, explain why. If false, give a counterexample (in

English).

(a) ‘|=’ belongs to the formal language of propositional logic, L.

(b) A true proposition is entailed by any proposition.

(c) If Γ |= p then, for all valuations V, either |=V p or ⊭V Γ.

(d) ‘Γ |= p’ makes sense only when Γ is finite.

(e) Any unsatisfiable set entails any proposition.

(f) Every proposition entails infinitely many propositions.

Question 2 For this question, let ⊤ be an arbitrary tautology and ⊥ an arbitrary

logical falsehood, so that ⊤ has a ‘T’ and ⊥ an ‘F’ in every row of its truth table.

(a) Draw a truth table for (⊤ → p) ∨ (q → ⊥).

(b) Find a short conditional proposition equivalent to (⊤ → p) ∨ (q → ⊥), using only

p, q, → and parentheses. Check the equivalence using a truth table.

(c) Find a proposition equivalent to ((⊤ → p) ∨ (q → ⊥)) → ⊥, which contains ¬, ∨,

p, q and parentheses only.

(d) Find a simple conjunction equivalent to the previous answer.

Question 3 A proposition is in negation normal form iff any negation symbol ap-

pears only to the immediate left of a primitive proposition. Thus ¬p ∧ q is in negation

normal form, whereas ¬¬q and ¬(p ∨ q) are not. For each proposition below, find an

equivalent proposition in negation normal form.

(a) ¬¬p

(b) ¬p → q


(c) ¬(p → q)

(d) ¬(p ∨ q ∧ r)

(e) ¬((p → q) ∨ (q → q))

A proposition is in disjunction normal form iff it contains only primitive

propositions, ¬, ∧ and ∨ (and no parentheses). For example, p ∧ q ∨ ¬p ∧ ¬q is in

disjunction normal form.

(a) Use a truth table to find all valuations that satisfy p ∧ q ∨ ¬p ∧ ¬q. What is the

connection between these valuations and the arrangements of ∧s and ∨s in this

proposition?

(b) Using the connection you found in part (a) between valuations and propositions

in disjunctive normal form, for each proposition in question 4, find an equivalent

proposition in disjunction normal form.

4. Proof Trees

4.1 Proofs in Logic

Suppose we want to check whether the proposition

p1 ∧ (p1 → p2 ) ∧ (p2 → p3 ) ∧ · · · ∧ (p8 → p9 ) ∧ (p9 → p10 )

entails p10 . Call the long proposition ξ . It contains 10 primitive propositions and so

the truth table will contain 2¹⁰ = 1,024 lines! But we can easily see that ξ does entail

p10 .1 In this chapter, we will look at a set procedure for testing whether a conclusion

is entailed by a set of premises. The procedure, called the proof tree method, consists

of a set of instructions for building trees from the premises and the conclusion. These

trees could be constructed by someone who doesn’t know the meaning of the logical

sentences involved, or by a computer, just by applying the rules in the correct way.

The proof tree method is algorithmic, in other words, in the same way that a computer

program is. The guiding idea is that such proof methods don’t rely on our intuitions,

which can often lead us astray in logic and mathematics.

The fact (from section 3.1) that we will use throughout this chapter is:

Γ |= p iff Γ ∪ {¬p} is unsatisfiable

To check whether a set Γ of premises entails a conclusion p, it is sufficient to check

whether the set Γ ∪ {¬p} is unsatisfiable (if so, Γ entails p; if not, it doesn’t). The

tree method works by breaking down long propositions into shorter ones so that any

implicit contradictions in Γ ∪ {¬p} (as there must be if Γ ∪ {¬p} is unsatisfiable) will

become explicit sooner or later. An explicit contradiction on any branch of the tree

(that is, propositions p and ¬p somewhere along the branch) shows us that the set of

propositions on that branch is unsatisfiable and so we close the branch by writing a ×

1 There is a quick way of checking this. Suppose that ξ is true but p10 isn’t. Since ξ is true, all of its conjuncts are true, one of which

is p9 → p10 . And since p10 is false, p9 must be too. Then, since p8 → p9 is true, p8 must be false, as must p7 and p6 and so on.

But p1 can’t be false, as it’s a conjunct of ξ and so our assumption must have been wrong: p10 can’t possibly be false if ξ is true.

In other words, ξ entails p10 .


beneath it, telling us not to bother with that branch any more. The tree as a whole is

finished when all of its branches are closed or when we run out of rules to apply. Then

we use the following fact to decide whether Γ entails p:

Trees with all branches closed have an unsatisfiable set at their root

If all branches close, then Γ |= p. If the tree finishes with at least one open branch

(when we run out of rules to apply), then p is not a logical consequence of Γ.

Rules For Proof Trees

Here are the rules for ∧, ∨, → and ↔ that we use to build proof trees:

p ∧ q X: add p and q to the branch (non-branching)

p ∨ q X: split the branch, adding p to the left branch and q to the right

p → q X: split the branch, adding ¬p to the left branch and q to the right

p ↔ q X: split the branch, adding p and q to the left branch, ¬p and ¬q to the right

¬(p ∧ q) X: split the branch, adding ¬p to the left branch and ¬q to the right

¬(p ∨ q) X: add ¬p and ¬q to the branch (non-branching)

¬(p → q) X: add p and ¬q to the branch (non-branching)

¬(p ↔ q) X: split the branch, adding p and ¬q to the left branch, ¬p and q to the right

There is a pattern here: for each connective, we have one rule for positive occurrences

(the first group) and one rule for negative occurrences (the second group), in which the

sentence has a ¬ in front of it. The rules for negation itself are slightly different:

p, ¬p: close the branch by writing × beneath it

¬¬p X: add p to the branch (non-branching)

These rules are schematic. The p ∧ q rule applies to propositions of the form

p ∧ q, not just to the sentence p ∧ q! The rule is applied as follows. If we find a propo-

sition on an open branch whose main connective is a ∧, that does not already have a X

next to it, then we can add the proposition to the left of the main ∧ and the proposition

to its right to the tree. We also add a X to the right of the original proposition to say

that we’ve applied a rule to it and needn’t bother with that proposition again. If the

main connective is a ∨, on the other hand, we apply the p ∨ q rule by splitting the


branch into two, writing the proposition to the left of the ∨ on the left branch and the

proposition to its right on the right branch. The rule for ∨ is called a branching rule,

whereas the rule for ∧ is non-branching. The same procedure applies for propositions

whose main connective is a → or ↔, according to the rules above.

For propositions whose main connective is a ¬, we look to see what the main con-

nective is after the first ¬ (i.e. the connective that would appear in the second node

down of the proposition’s construction tree). If this is a ∧, we apply the ¬(p ∧ q) rule

and similarly for ∨, →, ↔ and ¬. We also have to keep an eye out for contradictory

pairs p and ¬p on the branch. If we find one, we close the branch by writing a ×

beneath it. Notice that we only apply rules to propositions on open branches that don’t

already have a X to their right. The method to test whether Γ |= p is then as follows.

(a) Write each proposition in Γ (the premises) at the top of the tree (the root), followed

by the negation of the conclusion, ¬p.

(b) Working from the top of the tree, find a (complex) proposition on an open branch

that doesn’t have a X next to it and apply the appropriate rule to extend the tree,

then check off the proposition. If no rule can be applied to any proposition, stop.

(c) Add a × to the bottom of any branch with a contradictory pair on it.

(d) Are all branches closed? If so, stop. If not, go back to step (b).

Stopping at step (d) indicates that Γ |= p (because all branches have closed), whereas

stopping at step (b) indicates that p is not a logical consequence of Γ, as the tree has

finished with open branches. When applying a rule to a proposition in step (b):

Add the results to every open branch on which the proposition occurs

If we apply a rule to a proposition at the ? node in the schematic tree below, for

example, then we have to write the results of applying the rule to the bottom of each

branch marked ↑.

[schematic tree: a proposition sits at the ? node, below which the tree splits; each open branch beneath it is marked ↑]


Example 1 Test whether q → r, r → ¬p, p → q |= ¬p.

We begin the tree by listing the premises and ¬¬p, the negation of the conclusion ¬p.1

We then apply the tree rules:

1. q→r X premise

2. r → ¬p X premise

3. p→q X premise

4. ¬¬p X ¬conclusion

5. p from 4, ¬¬

6. ¬p q from 3, →

×

7. ¬r ¬p from 2, →

×

8. ¬q r from 1, →

× ×

The annotations on the right tell us which line(s) the new node of the tree was obtained

from and the rule used to do so. I’ll abbreviate the rules (using the connectives that

appear in them) as ∧, ∨, ¬∧, ¬∨ and so on. We can apply the applicable rules in

any order we like, although we should try to keep the tree as small as possible, so:

Apply any non-branching rule before a branching rule

When we do have to use branching rules, try to find one that will allow one of the

newly-created branches to close immediately, as in the tree above.

Apply rules that allow branches to be closed immediately, where possible

1 It is tempting to add p rather than ¬¬p to the top of the tree, since we know from section 3.2 that ¬¬p ≡ p. But the method requires

us to add the negation of the conclusion which, in this case, is ¬¬p. We shouldn’t use what we know about logical equivalence to

help us build trees; we are only allowed to use the tree rules. This is what makes the tree method an algorithmic proof system.


Example 2 Test whether p1 ∧ (p1 → p2 ) ∧ (p2 → p3 ) ∧ (p3 → p4 ) |= p4 .

1. p1 ∧ (p1 → p2 ) ∧ (p2 → p3 ) ∧ (p3 → p4 ) X premise

2. ¬p4 ¬conclusion

3. p1 from 1, ∧

4. (p1 → p2 ) ∧ (p2 → p3 ) ∧ (p3 → p4 ) X

5. (p1 → p2 ) X

from 4, ∧

6. (p2 → p3 ) ∧ (p3 → p4 ) X

7. (p2 → p3 ) X

from 6, ∧

8. (p3 → p4 ) X

9. ¬p1 p2 from 5, →

×

10. ¬p2 p3 from 7, →

×

11. ¬p3 p4 from 8, →

× ×

It is clear that we should apply the ∧ as many times as possible (because we want to

apply non-branching rules before branching rules), giving us lines 3 to 8. After that,

we have a choice of which line to apply the → rule to. As p1 and ¬p4 are already in

the main branch, applying the → to either line 5 or line 8 allows us to close one of the

newly created branches immediately.
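The four-step method really is algorithmic: it can be sketched directly as a small program. The nested-tuple encoding, the function names, and the omission of the ↔ rules are all simplifications made for this sketch, not part of the method as stated:

```python
# A minimal propositional tableau ("proof tree") sketch. Formulas are nested
# tuples: ('atom', 'p'), ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B).

def expand(formula):
    """Return a list of branches (each a list of formulas to add), or None
    when the formula is a literal and no rule applies."""
    op = formula[0]
    if op == 'and':
        return [[formula[1], formula[2]]]                   # non-branching
    if op == 'or':
        return [[formula[1]], [formula[2]]]                 # branching
    if op == 'imp':
        return [[('not', formula[1])], [formula[2]]]        # ¬p | q
    if op == 'not':
        inner = formula[1]
        iop = inner[0]
        if iop == 'not':
            return [[inner[1]]]                             # ¬¬ rule
        if iop == 'and':
            return [[('not', inner[1])], [('not', inner[2])]]
        if iop == 'or':
            return [[('not', inner[1]), ('not', inner[2])]]
        if iop == 'imp':
            return [[inner[1], ('not', inner[2])]]
    return None                                             # literal

def closes(branch):
    """True iff every branch of the tableau grown from this set closes."""
    literals = [f for f in branch if expand(f) is None]
    for lit in literals:                     # contradictory pair p, ¬p?
        if lit[0] == 'not' and lit[1] in literals:
            return True
    for i, f in enumerate(branch):           # apply a rule to one formula
        results = expand(f)
        if results is not None:
            rest = branch[:i] + branch[i + 1:]
            return all(closes(rest + new) for new in results)
    return False                             # finished open branch

def entails(premises, conclusion):
    return closes(list(premises) + [('not', conclusion)])

p1, p2, p3, p4 = [('atom', s) for s in ('p1', 'p2', 'p3', 'p4')]
xi = ('and', p1, ('and', ('imp', p1, p2), ('and', ('imp', p2, p3), ('imp', p3, p4))))
print(entails([xi], p4))  # True: reproduces Example 2
```

The recursion mirrors the tree: a non-branching rule adds both results to the current branch, a branching rule requires every sub-branch to close, and a branch with no applicable rules and no contradictory pair is a finished open branch.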

4.4 Decidability

To recap, the tree test gives us an answer to the question, ‘does Γ entail p?’ A closed

tree means that the answer is ‘yes’, whereas a finished open tree counts as a ‘no’. For

the tree method to be an effective test for entailment, it must always give us either

a ‘yes’ or a ‘no’ answer eventually. This will be the case just so long as every tree


finishes (either open or closed) after a finite number of steps. Ensuring that this is the

case is a matter of showing that it is decidable whether Γ entails p and that the tree

test constitutes a decision procedure for propositional logic. To establish that the tree

test for entailment in propositional logic does always halt within a finite number of

steps, we need to do some meta-logical reasoning, that is, reasoning about the logic

rather than within the logic.

To establish that every tree finishes sooner or later, note that every rule (with the

exception of the rule for closing a branch) breaks down the input proposition into

smaller propositions. Call a proposition active iff it is on an open branch and hasn’t

been checked off. Suppose that no active proposition contains more than n connectives

(ignoring any ¬ immediately to the left of primitive propositions) and that there are

m such active propositions in the tree. If n > 0 then a rule can be applied to each

of these propositions. In doing so, we check off each of these propositions and add

only propositions that are guaranteed to be less complex to the tree. After doing so,

no active proposition can contain more than n − 1 connectives. If (n − 1) > 0 we can

repeat, after which no active proposition can contain more than n − 2 connectives.

By repeating this process n times, every remaining active proposition must be either

a primitive or a negated primitive and, since each pass applies rules to only finitely many propositions and there are just n passes,

the tree is finite. Any branch that cannot be closed immediately is finished, since no

more rules can apply to it. Therefore, the tree is finished, after a finite number of

rule applications. This establishes that any tree test for entailment in propositional

logic finishes after a finite number of steps, which is exactly what we need for the

decidability result.

The tree method gives us a way of constructing a valuation that satisfies the set at

its root from any finished open branch. If the tree generated from Γ ∪ {¬p} does not

close, then Γ 6|= p. Recall from section 3.1 that a valuation for the set Γ ∪ {¬p} is a

counterexample to the purported entailment from Γ to p.

A counterexample to ‘Γ |= p’ is a valuation that satisfies Γ ∪ {¬p}

To find a valuation for any finite set ∆ of propositions, construct a tree beginning with

the propositions in ∆ (be careful here: since we are not checking whether a conclusion

is entailed by some premises, we do not negate either sentence). If ∆ is satisfiable,


there will be at least one open branch. To construct a valuation that satisfies ∆, pick

any finished open branch and:

(a) assign ‘T’ to all primitive propositions that appear on their own on the branch;

(b) assign ‘F’ to all primitive propositions that appear negated on the branch;

(c) assign ‘T’ to every other primitive proposition.

As part of the completeness proof in section 4.6 below, I give a proof that |=V p for

every proposition p on the branch. As a consequence, V satisfies ∆.

Example 3 Use a tree to find a valuation that satisfies {¬q ∨ p, ¬¬(p → q)}.

1. ¬q ∨ p X sentence 1

2. ¬¬(p → q) X sentence 2

3. p→q X from 2, ¬¬

4. ¬q p from 1, ∨

5. ¬p q ¬p q from 3, →

× ×

The left-hand open branch gives us a valuation V which sets V (p) = V (q) = F. It

is easy to check that V satisfies every proposition on this branch: ¬p, ¬q, p → q,

¬¬(p → q) and ¬q ∨ p. We could just as easily have used the right-hand open branch

to construct our valuation, setting V (p) = V (q) = T.

Example 4 Construct a tree to test whether ¬q ∨ p |= ¬(p → q). If not, find a counterexample.

We construct the same tree as in the previous example. Since it does not close, the

entailment does not hold. The valuation constructed from either open branch gives us

a counterexample.
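Because there are only finitely many valuations, counterexamples like this one can also be found by brute force, without building a tree. The nested-tuple encoding and function names below are assumptions of this sketch:

```python
from itertools import product

def holds(s, v):
    """Evaluate a nested-tuple formula under the valuation v (a dict)."""
    op = s[0]
    if op == 'atom':
        return v[s[1]]
    if op == 'not':
        return not holds(s[1], v)
    if op == 'or':
        return holds(s[1], v) or holds(s[2], v)
    if op == 'imp':
        return (not holds(s[1], v)) or holds(s[2], v)

def counterexample(premises, conclusion, atoms):
    """A valuation satisfying every premise and the negated conclusion, if any."""
    for row in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, row))
        if all(holds(s, v) for s in premises) and not holds(conclusion, v):
            return v
    return None  # no counterexample: the entailment holds

p, q = ('atom', 'p'), ('atom', 'q')
premise = ('or', ('not', q), p)        # ¬q ∨ p
concl = ('not', ('imp', p, q))         # ¬(p → q)
print(counterexample([premise], concl, ['p', 'q']))  # {'p': True, 'q': True}
```

The valuation found, V(p) = V(q) = T, is exactly the one read off the right-hand open branch in Example 3.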


So far, I have assumed without any justification that a closed tree shows that the

premises entail the conclusion and that a finished open tree shows that the conclu-

sion is not entailed. To be sure that this is so, we need to check that the tree rules are

sound and complete:

Sound: Any proposition proved from a set of premises by a closed tree is a logical

consequence of those premises.

Complete: Any logical consequence of a set of premises can be proved from them by

a closed tree.

In establishing that the proof tree test is sound and complete, we are involved in the

meta-theory of propositional logic. ‘Γ ` p’ abbreviates ‘there is a proof of p from

Γ’ (or ‘Γ proves p’ for short), meaning that there is a closed tree generated from

Γ ∪ {¬p}.1 We can then state these properties of the tree test in symbols:

Soundness: If Γ ` p then Γ |= p

Completeness: If Γ |= p then Γ ` p

Soundness

If the tree test were unsound, then it would be possible to close all branches of a tree

even though the set at its root is satisfiable. The soundness requirement amounts to:

If all branches close, then the set at the root is unsatisfiable.

To show that the tree test is sound, we need to show that each of the tree rules preserves

satisfiability. This means that, if the set of propositions on the branch is satisfiable

before the rule is applied, then it is satisfiable after it is applied as well. In the case

of branching rules, the propositions on at least one of the newly created branches must

be consistent if they were on the original branch. It is easy to see that all of our rules

conform to this requirement.

Completeness

Completeness means that the tree method has sufficient ‘proving power’ to prove everything

that it ought to prove. One formulation of this is:

1 Be careful: ‘ p ` q’ is not a proposition of the logical language! ‘`’, just like ‘|=’, is a symbol of the meta-language.


If the set at the root is unsatisfiable, then the tree will close

We establish this by proving the contrapositive:

If a tree has a finished open branch, then the set at its root is satisfiable

In section 4.5, we saw how to construct a valuation from any open branch. Let ∆ be

the set of propositions found along one open branch of a finished open tree and V be

the valuation constructed from this branch. We will show that |=V ∆, from which it

will follow that V satisfies the set at the root of the tree (as it is a subset of ∆).

It is clear from the way that V is constructed that, for every primitive p, |=V p if

p ∈ ∆ and |=V ¬p if ¬p ∈ ∆. Now we assume that V satisfies every proposition in

∆ less complex than some arbitrary complex proposition q and show that, given this

assumption, V must satisfy q as well. Suppose that q is a conjunction q1 ∧ q2 ; then

q1 ∈ ∆ and q2 ∈ ∆ (otherwise the branch from which we started would not have been

finished). By our assumption, |=V q1 and |=V q2 and hence |=V q. Similarly, if q is a

disjunction q1 ∨ q2 , then either q1 ∈ ∆, in which case |=V q1 , or else q2 ∈ ∆ in which

case |=V q2 . It follows that |=V q. All other cases (→, ↔, ¬∧, ¬∨, ¬→ and ¬↔)

follow one of these patterns. If q is a negated conjunction ¬(q1 ∧ q2 ), for example, then

either ¬q1 ∈ ∆ and so |=V ¬q1 or else ¬q2 ∈ ∆ in which case |=V ¬q2 . Either way,

|=V q.

We made no assumption about the complexity of q. We have therefore shown that, for

any proposition p ∈ ∆ of any complexity whatsoever, if V satisfies all propositions less

complex than p, then V satisfies p too. Since we know that V satisfies all primitives

and negated primitives in ∆, it follows that V satisfies ∆ as a whole. This kind of

proof is known as an inductive proof. In general, to show that property P holds of all

numbers n, we show that the assumption that P holds for all k < n entails that P holds

for n as well.1

Exercises

Question 1 Test each of the purported entailments below with a tree. If the entailment

does not hold, give a valuation as a counterexample.

(a) ¬q → ¬p, q → r |= ¬p ∨ r

(b) ¬p ∧ (q ∨ r) |= (¬p ∧ q) ∨ r


(c) ¬p ∨ q, ¬q ∨ r, ¬r ∨ s |= ¬p ∨ s

(d) ¬(p ∧ q) |= ¬p ∧ ¬q

(e) ¬p ∨ ¬q |= ¬(p ∨ q)

Question 2 Use trees to show that each of the following is a tautology:

(a) p → (q → r) → (p → q) → (p → r)

(b) (p → q) → (p → r) → p → (q → r)

Question 3 Use a tree to find a valuation V which satisfies the following set of propo-

sitions:

q ∧ s ∨ r, p ∨ q, q ↔ ¬r, p → q ∧ r

Question 4 Give the soundness proof in full. You need to consider the case for each

of the tree rules and show that that rule preserves satisfiability.

Question 5 (Harder) For this question, assume that ¬, ∧ and ∨ are the only logical

connectives in the language. A Hintikka Set H is a set of propositions such that:

If ¬p ∈ H then p ∉ H

If ¬¬p ∈ H then p ∈ H

If p ∧ q ∈ H then p ∈ H and q ∈ H

If p ∨ q ∈ H then p ∈ H or q ∈ H

If ¬(p ∧ q) ∈ H then ¬p ∈ H or ¬q ∈ H

If ¬(p ∨ q) ∈ H then ¬p ∈ H and ¬q ∈ H

Show that every Hintikka set is satisfiable.

5. First Order Logic

5.1 More Valid Arguments

Many of the valid arguments that we would like to express in logic cannot be formal-

ized in propositional logic. ‘If all men are mortal and Socrates is a man, then Socrates

is mortal’ is a valid argument, because the conclusion could not possibly be false if

both premises are true. We cannot fully analyze this argument in propositional logic

because its validity does not depend only on relationships between propositions. The

three primitive propositions in the argument are:

p: All men are mortal

q: Socrates is a man

r: Socrates is mortal

It is clear that the premise p ∧ q does not entail r on the definition of ‘entails’ that

we have given for propositional logic, as there is a valuation V under which V (p) =

V (q) = T but V (r) = F. To analyze the argument as being valid, we need to break

inside these propositions to capture more of the information that they convey. The

name ‘Socrates’ appears in both ‘Socrates is a man’ and ‘Socrates is mortal’ but this

fact is lost when we represent these propositions as p and q. In a similar way, the

inference from ‘Socrates is a man’ to ‘someone is a man’ is valid (for someone called

‘Socrates’ must exist in order for ‘Socrates is a man’ to be true). In order to capture

this validity, we need to break inside these sentences to see that ‘Socrates’ is a name

and can thus be replaced by the pronoun ‘someone’ without affecting the truth of the

claim.

First-order logic allows us to break inside the structure of sentences (for this reason,

it is sometimes called ‘predicate logic’). To do this, we need the following elements:

Names such as ‘Socrates’.

Predicates such as ‘is a man’.

Quantifiers such as ‘everyone’ and ‘someone’.

The next two sections go into more detail about how these are dealt with in first-order

logic.


Consider the following subject-predicate sentences:

Tony Blair is Prime Minister,

I am a philosopher,

1 is a number,

this post box is red.

Here, the predicates ‘is Prime Minister’, ‘am a philosopher’, ‘is a number’ and ‘is

red’ say what is being predicated and the noun phrases ‘Tony Blair’, ‘I’, ‘1’ and ‘this

post box’ say what it is being predicated of. In our formal language, we use a stock

of names or constants c1 , c2 , c3 and so on and a stock of predicates P1 , P2 , P3 and so

on (to help distinguish constants from predicates, I will use lower-case roman letters

for constants and upper-case roman letters for predicates). We will usually only need

a few constants and so I will use a, b and c rather than c1 , c2 and c3 . We can form

sentences such as P1 a, which we will read as ‘a is P1 ’. We might translate ‘Tony

Blair’ as b and ‘is Prime Minister’ as P1 , in which case P1 b translates as ‘Tony Blair

is Prime Minister’.1 We also have English sentences that relate one entity to another,

for example:

Tony Blair is older than me,

3 is a larger number than 2,

this post box is larger than that post box.

Here, older than and larger than are relations. In first-order logic, relational sentences

have the form:

Rab

where a and b are constants. The difference between this relational sentence and the

subject-predicate sentences above is that the predicate R is followed by two constants

rather than one. Each predicate in the logical vocabulary has a fixed number of slots

which get filled up with constants to make a complete sentence. In English, one-place

predicates such as ‘— is Prime Minister’ and ‘— is red’ denote properties and two-

place predicates such as ‘— is taller than —’ denote binary (or dyadic) relations. We

1 As above, I am using the convention that sentences in sans serif are surrounded by implicit quotation marks. This is why the English

sentences are surrounded by explicit quotation marks whereas the logical ones are not.


also have three-place predicates, such as ‘— likes — more than —’. In fact, we can

have n-place predicates for any n we like, denoting n-ary relations, although we shall

concentrate on properties and binary relations only here.

Strictly speaking, we should tag each predicate in our logical vocabulary with a

number n, as in Pⁿ, so that we know how many constants to place after it to make a

well-formed sentence (n is called the arity of P). The official way of setting out the

logical vocabulary is to include each of the following predicates:

P¹₁ P¹₂ P¹₃ · · ·

P²₁ P²₂ P²₃ · · ·

P³₁ P³₂ P³₃ · · ·

· · ·

In this way, we never get confused about how many constants should appear after

each predicate, for Pⁿ always needs to be followed by n constants. In first-order logic,

we have atomic sentences rather than primitive propositions (which, unlike primitive

propositions, do have internal structure). An atomic sentence is a predicate with an

arity of n, followed by n constants. In the official language, the following are all well-

formed atomic sentences:

P¹a Q²ab R³abc

This official notation is hard to write down by hand and can quickly become hard to

read. As a compromise, we will make sure we know the arity of each predicate we use

in advance, so that we do not have to write it as a superscript on the predicate itself.

And just as I used p, q and r rather than p1 , p2 , p3 , . . . to name primitive propositions,

I will use P and Q to name one-place predicates and R and S to name two-place

predicates (and resort to subscripts only when we need more). With this in mind, the

following are all atomic sentences:

Pa Qb Rac

We build complex sentences in just the same way as we did in propositional logic,

using the logical connectives ¬, ∧, ∨, → and ↔. The following are all well-formed

complex sentences:

Pa ∧ Rab → Qb Sac ∨ (Qa ↔ Qc) ¬Pc ∨ ¬(Qb ∧ Pa)


When we want to talk about arbitrary sentences of first-order logic, I will use the

placeholders A, B and C.1 Thus, A ∧ B is schematic for conjunctions, of which the

sentence Pa ∧ Rab is a particular instance.

Atomic sentences and logical connectives alone do not give us the kind of logical

power we need to capture the validity of the arguments ‘everyone is mortal, therefore

Socrates is mortal’ and ‘Socrates is a man, therefore someone is a man.’ To capture

these, we need to use the quantifiers, ‘everyone’ and ‘someone’. In English, ‘everyone

is mortal’ means the same as ‘for all people x, x is mortal’ and ‘someone is mortal’

means ‘for some person x, x is mortal’. In general, we have a universal quantifier, ‘for

all x’ and an existential quantifier ‘for some x’, where the ‘x’ ranges over the set of

entities that we are talking about, called the domain.

In these examples, the ‘x’s are variables. When they are not part of a quantifier

phrase ‘for all x’ or ‘for some x’, they work like the pronouns ‘she’, ‘he’ or ‘it’. For

example, ‘for all x, x is mortal’ could be re-phrased as ‘for any thing you care to

choose, it will be mortal’. The language contains a denumerable number of variables

x1 , x2 , x3 and so on. Just as in the case of constants, we will typically need only a

small number of variables and so I will use x, y and z. In first-order logic, we write a

universal quantifier as the symbol ∀ followed by one variable, thus ∀x, ∀y and ∀z are

universal quantifiers. We write an existential quantifier as an ∃ followed by a single

variable, so that ∃x, ∃y and ∃z are existential quantifiers.

A quantifier on its own is not a complete sentence.1 We form universally quantified

sentences as follows. Take any sentence A of the logical language that contains a con-

stant c and replace every occurrence of c in A with a variable x, taking care that x did

not appear anywhere in the original A. Enclose the resulting sentence in parentheses

and write ∀x in front of it. What we have just written is a universally quantified sen-

tence. To form an existentially quantified sentence, we do exactly the same except that

we write ∃x at the front of the sentence. For example, the quantifier-free sentence

Pa → Rab

1 In English, one may answer the question, ‘who did their homework?’ with ‘everyone’; but the reply is really short for ‘everyone did

their homework’.


becomes the universally quantified ∀x(Px → Rxb)

or the existentially quantified

∃x(Pa → Rax)

In these examples, we universally generalized over a and existentially generalized

over b. We can universally and existentially generalize (in that order) to produce the

sentence:

∃y(∀x(Px → Rxy))

Quantifiers always take higher precedence than any logical connective, so that ∀xA ∧ B

unambiguously means the same as (∀xA) ∧ B. If we want the scope of the quantifier

to extend to the entire sentence, we need to write ∀x(A ∧ B). This is why the rule

for forming universally and existentially quantified sentences tells us to enclose the

sentence in parentheses before writing the quantifier in front of it. This is not always

strictly necessary, however, for

∃y∀x(Px → Rxy)

is perfectly unambiguous without enclosing ∀x(Px → Rxy) in parentheses. There is

no need to add parentheses around atomic sentences, quantified sentences or negated

sentences when forming a new quantified sentence.

It is useful to have a notation for denoting the quantified sentence built from an

arbitrary sentence A. A[c/x] is the sentence just like A but with all occurrences of c re-

placed by an x. ‘A[c/x]’ is not itself a sentence of the logical language but a schematic

way of referring to an open sentence.1 For example, (Pa → Rab)[a/x] is the open sen-

tence Px → Rxb. The universal generalization over c of a sentence A (containing one

or more occurrences of c, but not containing x) can be written ∀xA[c/x] and the exis-

tential generalization as ∃xA[c/x]. Similarly, we can write the instance of the universal

generalization ∀xB in which xs are replaced by cs as B[x/c].
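The bracket notation is easy to mechanize. In the sketch below, sentences are nested tuples, e.g. ('pred', 'P', ['a']) for Pa; this encoding and the function name are assumptions of the example:

```python
# Sentences as nested tuples, e.g. ('pred', 'P', ['a']) for Pa and
# ('imp', A, B) for A → B.
def subst(a, c, x):
    """A[c/x]: the result of replacing every occurrence of c in A with x."""
    op = a[0]
    if op == 'pred':
        return ('pred', a[1], [x if t == c else t for t in a[2]])
    if op == 'not':
        return ('not', subst(a[1], c, x))
    if op in ('and', 'or', 'imp', 'iff'):
        return (op, subst(a[1], c, x), subst(a[2], c, x))
    if op in ('forall', 'exists'):
        return (op, a[1], subst(a[2], c, x))
    return a

A = ('imp', ('pred', 'P', ['a']), ('pred', 'R', ['a', 'b']))  # Pa → Rab
open_sentence = subst(A, 'a', 'x')                            # Px → Rxb
universal = ('forall', 'x', open_sentence)                    # ∀x(Px → Rxb)
print(universal)
```

Applying subst with b instead of a, and prefixing an existential quantifier, yields ∃x(Pa → Rax) in the same way.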

Translating into English

We will assume that we are talking about people (so that ‘something’ becomes ‘some-

one’ and ‘everything’ becomes ‘everyone’). If we translate P as ‘— is a man’ and R

as ‘— knows —’ and suppose that a names Anna and b names Bob, then:

1 An open sentence is one containing a variable x that is not in the scope of a quantifier ∀x or ∃x: see below on quantifier scope. x is

then said to be a free variable. In this book, open sentences are not counted as well-formed sentences.


Pb translates as ‘Bob is a man’.

Rab translates as ‘Anna knows Bob’ (which also means ‘Bob is known by Anna’).

∀xRxb translates as ‘everyone knows Bob’.

∀xRax translates as ‘Anna knows everyone’.

∃x(Px ∧ Rxb) translates as ‘some man knows Bob’.

∀x(Px → Rxb) translates as ‘every man knows Bob’.

∃x∀yRxy translates as ‘someone knows everyone’.

∃y∀x(Px → Rxy) translates as ‘someone is known by all men’.

Notice how ‘some man . . . ’ is written ∃x(Px ∧ . . .) whereas ‘every man . . . ’ is written

∀x(Px → . . .). This is because ‘some man is . . . ’ means something is both a man and

. . . , whereas ‘every man is . . . ’ does not mean everything is a man and . . . , for this

would imply that there are no women in existence! Rather, ‘every man is . . . ’ means

everything that is a man is . . . , which we can further analyze as ‘for every x, if x is a

man then x is . . . ’. As a general rule of thumb, use → with universal quantifiers and ∧

with existential quantifiers when translating English into logic.
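The rule of thumb can be checked concretely by evaluating both readings over a small finite domain. The domain and the extensions of P (‘is a man’) and R (‘knows’) below are invented purely for the illustration:

```python
# A made-up domain of three people; is_man gives the extension of P and
# knows the extension of R. All of these names are invented for the example.
domain = ['anna', 'bob', 'carl']
is_man = {'anna': False, 'bob': True, 'carl': True}
knows = {('bob', 'bob'), ('carl', 'bob')}

# 'every man knows Bob': ∀x(Px → Rxb) — use → with the universal quantifier
every_man_knows_bob = all((not is_man[x]) or ((x, 'bob') in knows) for x in domain)

# the wrong reading ∀x(Px ∧ Rxb) says everything is a man who knows Bob
everything_is_a_man = all(is_man[x] and ((x, 'bob') in knows) for x in domain)

# 'some man knows Bob': ∃x(Px ∧ Rxb) — use ∧ with the existential quantifier
some_man_knows_bob = any(is_man[x] and ((x, 'bob') in knows) for x in domain)

print(every_man_knows_bob, everything_is_a_man, some_man_knows_bob)
# True False True: Anna falsifies the ∧ reading but not the → reading
```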

Quantifier Scope

In our previous example ∃y∀x(Px → Rxy), we have two quantifiers, ∀x and ∃y and two

variables, x and y. It is obvious which variable is bound by which quantifier:

∃y ∀x (Px → Rxy)   [binding lines omitted: ∀x binds both occurrences of x, ∃y binds the y]

However, the same variable can be used with more than one quantifier in a single

sentence. In the sentence

∃xPx ∧ ∃xQx

for example, we have two different quantifiers, each binding its own variable x; we

just happen to have used the same variable x with each quantifier. The sentence means

‘something is P and something is Q’ where the second something might or might not

be the same as the first something. The quantifiers bind the variables as follows:

∃xPx ∧ ∃xQx   [binding lines omitted: each ∃x binds only the x immediately following it]


The sentence ∃xPx ∧ ∃yQy has exactly the same meaning, because it has exactly the

same binding pattern as ∃xPx ∧ ∃xQx:

∃xPx ∧ ∃yQy

What really matters when we read quantified sentences is not which variables appear

where but which variable occurrences are bound to which quantifier. It is the binding

pattern that matters. We could thus represent the previous two sentences in a uniform

way:

∃•P• ∧ ∃•Q•

Similarly, ∃y∀x(Px → Rxy) can be represented as:

∃• ∀• (P• → R••)

This lets us know precisely what is going on. Of course, it would not be practical to

write all of the binding lines in every sentence and so we need to use variables, as we

have done above. Keep the binding lines in mind when working out what a quantified

sentence means.

Formation rules for first-order sentences are simple. All of our previous rules for mak-

ing new sentences using ¬, ∧, ∨, → and ↔ are still in use. The only new rule is

that, if A is a sentence that has one or more occurrences of a constant c in it, then both

∀xA[c/x] and ∃xA[c/x] are well-formed sentences too, provided that x does not already

appear in A. We define the language L of first order logic as follows:1

(a) If R is an n-ary predicate and c1 , . . . , cn are constants, then Rc1 · · · cn ∈ L. Rc1 · · · cn

is an atomic sentence or atom.

(b) If A ∈ L then ¬A ∈ L.

1 In re-using the symbol ‘L ’ it becomes ambiguous between the propositional and the first order language. To avoid this, L is the

propositional language in chapters dealing with propositional logic and the first order language in chapters dealing with first order

logic, such as the present one.


(c) If A ∈ L and B ∈ L, then (A ∧ B) ∈ L,

(A ∨ B) ∈ L,

(A → B) ∈ L and

(A ↔ B) ∈ L.

(d) If A ∈ L and A contains occurrences of c but no occurrences of x, then ∀x(A[c/x])

∈ L and ∃x(A[c/x]) ∈ L.

(e) Nothing else is a member of L.

Note that (a) includes the case in which n = 1, i.e. in which we form atoms such as

Pa.

First-order logic allows us to describe relational structures, i.e. a set of entities with

various relations holding between them. These elements can be concrete objects such

as you, me and the Eiffel Tower, or they could be abstract entities, such as numbers or

sets. This set of elements is called the domain, which cannot be empty. We repre-

sent binary relations by putting an arrow between two entities whenever the former is

related to the latter, like this:

[Diagram: four elements inside a circle (the domain), with arrows between them; a
diagonal two-way arrow links the top-left and bottom-right elements.]

Here, the diagonal two-way arrow means that there is an arrow from the top-left
element to the bottom-right element and an arrow back again, so that a two-way
arrow abbreviates a pair of arrows, one in each direction.


If the relation being represented is the admires relation, then the diagram in the cir-

cle shows a situation in which everyone admires someone or other. If we translate

‘admires’ as R in our formal language, then ∀x∃yRxy translates ‘everyone admires

someone (or other)’. Note how the English ‘everyone admires someone’ is ambigu-

ous. It can mean that everyone admires someone or other, as in the diagram above,

but it can also mean that everyone admires someone in particular, as in the following

diagram:

[Diagram: several elements inside a circle, each with an arrow pointing to a single
central element.]

In this case, there is an arrow from every element to the element in the centre. We can

express this in logic as ∃y∀xRxy. The difference between ‘everyone admires someone

(or other)’ and ‘everyone admires someone (in particular)’ is just the order of the

quantifiers. We could express ‘everyone admires someone (in particular)’ as the more

convoluted ‘there is someone admired by everyone’. In this sentence, the order of the

quantifiers ‘someone’ and ‘everyone’ mirrors the order of the quantifiers ∃y, ∀x in the

logical translation.

Note that the first diagram in the circle does not represent a situation in which there

is someone admired by everyone. Although the bottom right element is admired by all

the other elements, it does not admire itself (as there is no loop from the bottom right

element to itself). If someone is admired by everyone, then that universally admired

person must admire himself or herself as well.

Properties of Relations

If every element in the domain has a loop

attached to it, that relation is reflexive. Examples of reflexive relations are is the same

height as (as everyone is the same height as themselves) and was born on the same

day as. If every arrow is a two-way arrow (i.e. wherever there is an arrow from one
element to another, there is also an arrow back), the relation is symmetric. Examples of
symmetric relations are is married to and is a dancing partner of. The relation shown
below is both reflexive and symmetric:

[Diagram: three elements (bottom left, top and right), each with a loop, joined by
two-way arrows.]

Note how the loops on each element do not need to be written as two-way arrows.

Symmetry requires only that, if there is an arrow from x to y, then there is an arrow back from

y to x. If x = y, then a loop from x to x satisfies both the antecedent and consequent at

once. A path in a diagram is made up by linking several arrows. In the diagram above,

there is a path from the bottom left element to the top element, via the element on the

right. We will say that a path has shortcuts when there is an arrow directly from any x

on the path to any y reachable from x. In the diagram below, there are infinitely many

paths between a and c.

[Diagram: elements a, b and c, with arrows from a to b, from b to c and directly from
a to c, plus a loop on b; listed beneath are the paths a→c, a→b→c, a→b→b→c,
a→b→b→b→c and so on.]

Every one of these paths has shortcuts, for there is a direct arrow from a to c. When

every path has shortcuts, the relation represented by the arrows is transitive. Transi-

tivity says that, if x and y are related and y and z are related, then x and z are related

directly. Examples of transitive relations are is an ancestor of and is larger than. We

can express these properties of relations in first-order logic as follows:


Reflexive ∀xRxx

Symmetrical ∀x∀y(Rxy → Ryx)

Transitive ∀x∀y∀z(Rxy ∧ Ryz → Rxz)

When we negate these properties, we get non-reflexive, non-symmetrical and non-

transitive relations. In the diagram of a non-reflexive, non-symmetrical and non-transi-

tive relation, not all elements have a loop to themselves, not all arrows are two-way

arrows and not all paths have shortcuts. This does not mean that no elements have

loops or that no arrows are two-way arrows or that no paths have shortcuts.

A relation whose diagram lacks any loops from an element directly back to itself

is an irreflexive relation. Is married to is an irreflexive relation, since no one can be

married to themselves. A relation such as is taller than whose diagram has no two-

directional arrows is an asymmetrical relation and a relation such as is one inch taller

than whose diagram has no shortcuts in it is an intransitive relation. We can express

all of these properties of relations as follows:

Non-reflexive ¬∀xRxx

Irreflexive ∀x¬Rxx

Non-symmetrical ¬∀x∀y(Rxy → Ryx)

Asymmetrical ∀x∀y(Rxy → ¬Ryx)

Non-transitive ¬∀x∀y∀z(Rxy ∧ Ryz → Rxz)

Intransitive ∀x∀y∀z(Rxy ∧ Ryz → ¬Rxz)

Be careful about where the ¬ goes in the case of asymmetry and intransitivity!
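Over a finite domain, each of these properties can be checked mechanically by treating a relation as a set of ordered pairs. The following is an illustrative sketch (not from the book; the relations taller and same_height are invented examples):

```python
# A relation R over domain D is represented as a set of ordered pairs (x, y).
def reflexive(R, D):
    """Every element is related to itself (a loop on every element)."""
    return all((x, x) in R for x in D)

def symmetric(R, D):
    """Wherever there is an arrow from x to y, there is one back from y to x."""
    return all((y, x) in R for (x, y) in R)

def transitive(R, D):
    """Every path has shortcuts: Rxy and Ryz imply Rxz."""
    return all((x, z) in R for (x, y) in R for (y2, z) in R if y == y2)

def irreflexive(R, D):
    """No element is related to itself (no loops at all)."""
    return all((x, x) not in R for x in D)

def asymmetric(R, D):
    """No arrow is ever matched by an arrow back."""
    return all((y, x) not in R for (x, y) in R)

D = {1, 2, 3}
taller = {(3, 2), (3, 1), (2, 1)}        # 'is taller than': asymmetric, transitive
same_height = {(x, x) for x in D}        # reflexive, symmetric, transitive
```

Running the checks confirms the classifications given above: `taller` is irreflexive, asymmetric and transitive, while `same_height` is reflexive, symmetric and transitive.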

Equivalence Relations

Any relation that is reflexive, symmetrical and transitive is an equivalence relation.

Relations whose English expression contains ‘same’, such as is the same height as,

are usually equivalence relations: everyone is the same height as themselves, if x is

the same height as y then y is the same height as x and, if x and y and y and z are the

same height, then x and z must be too. But not all relations expressed using ‘same’ are

equivalence relations: is judged by Bob to be the same height as is not an equivalence

relation. Suppose Bob cannot distinguish between the heights of any two adjacent
lines in a series of gradually lengthening lines, but can tell that the leftmost line
is shorter than the rightmost. Then the relation is judged by Bob to be the same
height as is not transitive, so cannot

be an equivalence relation.

An equivalence relation partitions the domain into a number of equivalence classes.

The equivalence relation is the same height as partitions the domain into classes, such

that everyone in a particular class is the same height as one another and no two indi-

viduals in distinct equivalence classes are the same height. In terms of our diagrams,

any two entities within the same equivalence class have an arrow between them but no

arrow travels from one equivalence class to another.
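Computing the partition induced by an equivalence relation is straightforward over a finite domain. A sketch (illustrative code with invented data, in which 1 and 2 are the same height and 3 is not):

```python
def equivalence_classes(R, D):
    """Partition domain D by equivalence relation R (a set of ordered pairs)."""
    classes = []
    for x in sorted(D):
        for cls in classes:
            # x joins an existing class iff it is related to that class's
            # representative; by transitivity it is then related to every member.
            if (x, cls[0]) in R:
                cls.append(x)
                break
        else:
            classes.append([x])   # x starts a new equivalence class
    return classes

D = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}   # 'same height as' on D
print(equivalence_classes(R, D))   # [[1, 2], [3]]
```

Every element lands in exactly one class, and arrows never cross class boundaries, just as described above.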

A little reflection on what quantified sentences mean shows that the truth table method

will not work for giving a semantics for first-order logic. We would need as many lines

as we have entities in our domain which can be infinitely large.1 We are going to give

the Tarski definition of the truth of a sentence.2 In propositional logic, we used the idea

of valuations (or rows in a truth table) as ways of representing how the facts might have

been different from the way they actually are. By thinking in terms of valuations, it

is easy to see how logical truths and contingent truths come apart. We make a similar

move in first-order logic but work with models in the place of valuations.

We associate each model M with a non-empty domain D, which is just the set of

entities that we want to quantify over, such as the set of all natural numbers or the

set of all people in the world. Models are defined relative to the vocabulary of our

language, that is, the set of constants, variables, predicates, logical connectives and

quantifier symbols from which we build the sentences of our first-order language.

Interpreting Constants

Models interpret the vocabulary by linking various bits of syntax to the domain in

various ways. In a model M, every constant c picks out some object or other in the

domain. This entity is the interpretation of the constant c in M, written ‘cM ’ (with the

model’s name as a superscript on the constant). If our domain is the set of philosophers

and M interprets c as Descartes, then cM = Descartes. Constants play the role that

1 Worse still, if we are quantifying over the real numbers, the domain is not even countable. This is explained in the appendix.

2 Like the definition of satisfaction in propositional logic (section 2.6), this definition is due to Alfred Tarski.


proper names play in natural language and so our example was one in which the

constant c names (and so denotes) Descartes. Note here that c is a bit of syntax, part of

our formal vocabulary, whereas Descartes is a great dead philosopher and not part of

a language at all. The important point is that the interpretation given by the model forges a
link between our syntax and objects in the domain of the model.

There is a notational complication here, for the only way that I can describe ele-

ments of the domain is by writing down their names (or perhaps by using individu-

ating descriptions, such as ‘the father of modern philosophy’). It is important that we

do not confuse these names (or descriptions) with what is being named (or described).

This is particularly tricky when we want to use the same symbols in our formal lan-

guage to name an entity as we use in English. We use the numeral ‘1’ as a name for the

number 1 and we might also want to re-use that numeral in our formal language. But

we cannot say that 1M = 1 because this would mean that M assigns the number one

to the number 1, which does not set up a language-world link at all. Setting ‘1’M = 1

sets up the required language-world link but the proliferation of quotation marks is

likely to make our meta-language hard to read.

To avoid this problem, I will extend the convention from above: constants (and

predicates) in our formal vocabulary will be written in sans serif, with quotation marks

implicitly present. Names written in italic roman font belong to the metalanguage and

denote elements of the domain of the model. So, the sans-serif 1 is a numeral whereas
the italic 1 is a number and, more generally, the sans-serif a is a constant whereas the
italic a is an element of the domain. A set of such elements {a, b, c} can be used as the
domain of a model, whereas a set of constants {a, b, c} cannot.1 In this way, we can
happily write ‘1M = 1’ to mean that the numeral 1 names the number 1 in our model M.

Interpreting Predicates

We want one-place predicates to pick out some property or other, such as being red or

being a prime number. In the domain {1, 2, 3, 4}, 2 and 3 are primes whereas 1 and 4

are not. The property being a prime number neatly divides the domain into two: those

that are and those that are not prime numbers. A neat way of interpreting the predicate

Prime in a model M is: we set PrimeM = {2, 3}. Suppose we also set

1M = 1

2M = 2

1 That is, unless we want to talk about and quantify over names, as linguists might.


3M = 3

4M = 4

Now we can see what Prime(2) means in the model M. M interprets Prime as the set
{2, 3} (this is the extension of Prime in M ) and 2 as the number 2 and (no surprises
here) we find that the latter is a member of the former. The interpretation given by
M not only tells us what Prime(2) means in that model, it also gives us a way to
establish whether M satisfies Prime(2). In our example, M satisfies Prime(2) because
2M ∈ PrimeM.

You might think that we are doing something blatantly circular here, by taking M

to assign the set of primes in the domain to the predicate Prime and then saying that

what Prime means is just that set. This would be circular if we were trying to define

the meaning of ‘is prime’ in English but this is not what we are trying to do. The

model assigns some set to each one-place predicate. If the model interprets a predicate

P as the set X, then members of X are what count as Ps in that model. It is perfectly

possible for another model to assign a set containing 1 and 4 to Prime but then Prime

in that model does not mean what we mean by a prime number.

We also want two-place predicates to denote binary relations holding between mem-

bers of the domain. Earlier, we represented binary relations by drawing arrows from

one member of the domain to another. We can represent the same information using
ordered pairs ⟨x, y⟩ to mean that there is an arrow from x to y. The information
contained in the diagram

[Diagram: elements a, b and c, with arrows a→b, a→c, b→b and b→c.]

is the set

{⟨a, b⟩, ⟨a, c⟩, ⟨b, b⟩, ⟨b, c⟩}

In fact, the diagram is just a graphical representation of these ordered pairs. Ordered

pairs (as the name suggests) differ from sets in that their elements are ordered. {a, b}

is the same set as {b, a}, yet ⟨a, b⟩ is not the same ordered pair as ⟨b, a⟩. A model M

interprets each two-place predicate R in the vocabulary as a set of ordered pairs.


We can generalize the notion of ordered pairs to ordered n-tuples (n-tuples from

now on). An n-tuple ⟨d1, . . . , dn⟩ is a finite sequence of n elements from the domain in

a fixed order.1

A model M interprets each n-place predicate in the vocabulary as a set of n-tuples
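As a quick illustrative sketch (not from the book), the requirement that an n-place predicate's interpretation be a set of n-tuples over the domain amounts to saying that RM is a subset of the n-ary Cartesian product of D with itself:

```python
from itertools import product

D = {"a", "b"}
# D × D: all ordered pairs over the domain. Any interpretation of a two-place
# predicate must be a subset of this set (and of D^n for an n-place predicate).
pairs = set(product(D, repeat=2))
R = {("a", "b"), ("b", "b")}    # a sample interpretation R^M
print(R <= pairs)               # True: R^M ⊆ D²
```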

5.7 Satisfaction

Recall that an atomic sentence consists of an n-place predicate followed by n constants.
An atomic sentence Pc1 · · · cn is satisfied by a model M if and only if the n-tuple
formed by the interpretations of c1 through to cn, in that order, is one of the n-tuples
in the extension of P in M. We can write the former n-tuple as ⟨c1M, . . . , cnM⟩ and so
we have:

M satisfies Pc1 · · · cn iff ⟨c1M, . . . , cnM⟩ ∈ PM

I will write ‘|=M A’ to abbreviate ‘M satisfies A’. We can then recursively define ‘|=M ’ in
much the same way we did in section 2.6. We already have the case of atomic sentences:

|=M Pc1 · · · cn iff ⟨c1M, . . . , cnM⟩ ∈ PM

M

Next, assume that ‘|=M ’ has been defined for sentences of a certain complexity and

consider sentences that contain just one more connective than any of these, which are

satisfied as follows:

|=M ¬A iff ⊭M A

|=M A ∧ B iff |=M A and |=M B

|=M A ∨ B iff |=M A or |=M B

|=M A → B iff ⊭M A or |=M B

|=M A ↔ B iff |=M A and |=M B, or else ⊭M A and ⊭M B

I will come to the clauses for ∀xA and ∃xA below.

1 Ordered pairs are then just two-tuples and one-tuples are just the elements of the domain themselves.

1 Note that these n-tuples themselves are not elements of the domain. But if ⟨d1, . . . , dn⟩ ∈ RM, then d1 through to dn must be
elements of the domain. Mathematically, the n-ary Cartesian product of sets X1 through to Xn, written X1 × · · · × Xn, is the set of
all n-tuples ⟨x1, . . . , xn⟩ where x1 ∈ X1, x2 ∈ X2 and so on, and xn ∈ Xn. When each set X1, . . . , Xn is the same set X, we can
abbreviate X1 × · · · × Xn as Xn. We can then say that, if the domain is D and P is an n-place predicate, then its interpretation PM
is a subset of Dn, the n-ary Cartesian product of D.


Example

Take the domain D to be {a, b, c} and our vocabulary to be {P, R, a, b, c, ¬, ∧, ∨} only

(i.e. we will not use all the logical connectives or any quantifiers for now). We can

build a model M that interprets the language as follows:

aM = a

bM = b

cM = c

PM = {a, c}

RM = {⟨a, b⟩, ⟨a, c⟩, ⟨b, b⟩, ⟨b, c⟩}

[Diagram: elements a, b and c; arrows depict RM and a shaded area around a and c
depicts PM.]

The arrows show the extension of R and the shaded area shows the extension of P. M

satisfies the following sentences (and many more, of course):

|=M Pa

|=M Rab

|=M (Pa ∧ Rab) ∧ ¬Pb

|=M Rbb ∨ Pb
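The recursive satisfaction clauses can be implemented directly for a finite model. The following sketch (illustrative only; the tuple encoding of sentences is invented) checks the sentences just listed against the example model M:

```python
# The example model: a^M = a, b^M = b, c^M = c; P^M = {a, c};
# R^M = {(a,b), (a,c), (b,b), (b,c)}. Sentences are nested tuples,
# e.g. ('and', ('P', 'a'), ('R', 'a', 'b')) for Pa ∧ Rab.
const = {'a': 'a', 'b': 'b', 'c': 'c'}
P = {'a', 'c'}
R = {('a', 'b'), ('a', 'c'), ('b', 'b'), ('b', 'c')}

def sat(s):
    """Return True iff the model satisfies sentence s."""
    op = s[0]
    if op == 'P':
        return const[s[1]] in P           # |=M Pc iff c^M ∈ P^M
    if op == 'R':
        return (const[s[1]], const[s[2]]) in R
    if op == 'not':
        return not sat(s[1])
    if op == 'and':
        return sat(s[1]) and sat(s[2])
    if op == 'or':
        return sat(s[1]) or sat(s[2])
    raise ValueError(f"unknown operator: {op}")

print(sat(('P', 'a')))                                                    # True
print(sat(('and', ('and', ('P', 'a'), ('R', 'a', 'b')), ('not', ('P', 'b')))))  # True
print(sat(('or', ('R', 'b', 'b'), ('P', 'b'))))                           # True
```

Each clause of the recursive definition corresponds to one branch of `sat`, and the base case uses exactly the membership condition for atoms given above.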

In the example above (with the vocabulary extended to include ∀ and ∃ as well as

variables x1 , x2 and so on), M should satisfy each of the following sentences:

∃xRxx ∀x(Px ∨ Rxx) ¬∀x∃yRxy

But what should our general definition of ‘|=M ∀xA’ and ‘|=M ∃xA’ be? The seem-

ingly obvious, yet wrong, account of quantification is as follows:


|=M ∀xA iff for every constant c, |=M A[x/c]

|=M ∃xA iff for some constant c, |=M A[x/c]

This is called the substitutional account of quantification. It says that universally quan-

tified sentences are satisfied iff all of their instances are satisfied and that existentially

quantified sentences are satisfied iff at least some of their instances are.1 The problem

for the substitutional account is that there can be at most countably many constants

in our language but we may have an uncountable domain (e.g. the real numbers). We

cannot possibly name all of the real numbers and so a universally quantified sentence

can well be false even though every substitution instance of it is true. What we can do,

for any particular element d of the domain, is pick just one new constant c to use as a

name for d. The model in which we do this is Mdc . This model is just like M except

that d now has a name, c. The correct account of the quantifiers is as follows.

The correct account: Let D be the domain of M . Then:

|=M ∀xA iff for every d ∈ D, |=Mdc A[x/c]

|=M ∃xA iff for some d ∈ D, |=Mdc A[x/c]

The definition says that M satisfies ∀xA iff, for any thing you pick and name c, the

resulting model satisfies A[x/c]. M satisfies ∃xA, on the other hand, iff something can

be named c such that the resulting model satisfies A[x/c].
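When the domain is finite, the effect of naming each element in turn can be obtained by iterating through the domain's elements directly. The following sketch (illustrative code, not from the book) checks the three quantified sentences from the earlier example against that model:

```python
# The example model again: domain {a, b, c}, P^M = {a, c},
# R^M = {(a,b), (a,c), (b,b), (b,c)}.
D = {'a', 'b', 'c'}
P = {'a', 'c'}
R = {('a', 'b'), ('a', 'c'), ('b', 'b'), ('b', 'c')}

# ∃xRxx: something is R-related to itself (b is, via the loop (b, b)).
ex_Rxx = any((d, d) in R for d in D)

# ∀x(Px ∨ Rxx): everything is either P or related to itself.
all_P_or_Rxx = all(d in P or (d, d) in R for d in D)

# ¬∀x∃yRxy: not everything is related to something (c relates to nothing).
not_all_ex = not all(any((d, e) in R for e in D) for d in D)

print(ex_Rxx, all_P_or_Rxx, not_all_ex)   # True True True
```

The `all(...)` and `any(...)` calls mirror the ∀ and ∃ clauses: pick each d ∈ D in turn and check the instance.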

Interrelating ∀ and ∃

Existential and universal quantifiers are the dual of one another. This means that the

following equivalences hold:

∀x¬A ≡ ¬∃xA and ∃x¬A ≡ ¬∀xA

Using these equivalences and what we know about ‘≡’ from sections 3.2 and 3.3, we

can get the following equivalences:

∀xA ≡ ¬∃x¬A and ∃xA ≡ ¬∀x¬A

As these equivalences have the form of definitions (of ∀x in terms of ¬ and ∃x and of

∃x in terms of ∀x and ¬), we could work with a vocabulary that contains just ∀ or just

∃ as the primitive quantifier-forming symbol, if we so wanted.
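The duality can be spot-checked by brute force: for A = Px on a three-element domain, every choice of extension for P makes ∀x¬Px and ¬∃xPx agree. A sketch (an exhaustive check on one small case, not a proof):

```python
from itertools import product

# Check ∀x¬Px ≡ ¬∃xPx over every possible extension of P on a 3-element domain.
D = ['a', 'b', 'c']
ok = True
for bits in product([False, True], repeat=len(D)):
    P = {d for d, b in zip(D, bits) if b}     # one of the 8 extensions of P
    lhs = all(d not in P for d in D)          # ∀x¬Px
    rhs = not any(d in P for d in D)          # ¬∃xPx
    ok = ok and (lhs == rhs)
print(ok)   # True
```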

1 Recall that an instance of a quantified sentence is the result of stripping off the outermost quantifier and replacing the resulting
unbound variable with a constant.


A sentence is valid when it is guaranteed to be true by logic alone, i.e. when it is

satisfied by all models (and so the valid first-order sentences have the same status as

the tautologies of propositional logic). We write ‘|= A’ as an abbreviation of ‘A is

valid’:

|= A iff |=M A for every model M

Given a set Γ of sentences, |=M Γ iff |=M A for every A ∈ Γ. Entailment is defined

in the way it was in propositional logic (except we now have models in place of val-

uations): a set Γ of sentences (the premises) entails a sentence A (the conclusion) iff

every model that satisfies Γ also satisfies A. In symbols:

Γ |= A iff, for every model M, if |=M Γ then |=M A

As before, we could also think of ‘|= A’ as an abbreviation of ‘∅ |= A’, i.e. ‘A follows

logically from the empty set of premises’. If Γ does not entail A then there must

be a model M which satisfies Γ but not A. M is a counter model to the purported

entailment.

If |=M Γ but ⊭M A, then M is a counter model to ‘Γ |= A’

Exercises

Question 1 Say whether each of the following claims is true or false.

(a) A model assigns a subset of the domain to each two-place predicate.

(b) The domain of a model must be non-empty.

(c) Rab |= ∃x Rax

(d) ∃xA |= A[x/c] provided that c is a constant that does not appear anywhere in A.

Question 2 Translate the following into the logical language, using a one-place pred-

icate M for ‘is a man’, a two-place predicate A for ‘admires’ and a, b, c and d as names

for Anna, Bob, Cath and David, respectively. Assume that the domain is the set of all

people.

(a) Only men admire Anna.


(c) Cath admires Bob only if Anna admires David.

(d) Everyone admires someone.

(e) Someone is admired by everyone.

(f) Anyone who admires anyone admires Anna.

Question 3 Accurately translate the following into colloquial English, using the trans-

lation scheme from question 2. (Your answers should sound like everyday English, not

‘for all x, there is a y’ etc.) Again, assume that the domain is the set of all people.

(a) ∀x(Mx → Axa)

(b) ∀x∃yAyx

(c) ∃x∀yAyx

(d) ∀x(Mx ∧ Axc → Acx)

Question 4 Find a model that interprets the two-place predicate R as both a symmet-

rical and an asymmetrical relation.

Question 5 A sentence is in prenex normal form iff the only symbols appearing to the

left of any quantifier in the sentence are themselves quantifiers. ∀x∃y¬Rxy is in prenex

normal form, for example, whereas ∀x¬∃yRxy is not. For each sentence below, find an

equivalent sentence in prenex normal form.

(a) ∀xPx ∧ Qa

(b) ¬∀xPx ∨ Qa

(c) ∃xPx → Qa (careful with this one!)

(d) ∀xPx ∧ ∃xQx (be careful not to change the quantifier binding pattern)

(e) ∀xPx ↔ Qa

Question 6 Find a sentence in prenex normal form equivalent to ∃x(Px ∧ ¬∃yRxy) → ∀x∃ySxy.

6. Identity

6.1 The Puzzle of Identity

Identity is a perplexing notion. In one sense, it is very simple: identity is the rela-

tion that holds between every thing and itself, for everything is identical to itself and

nothing else. We are interested in strict or numerical identity, not similarity. Identi-

cal twins might be very similar but they cannot be numerically identical since twins

are two distinct individuals. It is wrong to say that identity ever holds between two

things, because if they are two things then they cannot be identical to one another. If

identity holds between x and y, then x is the same thing as y.1

An identity statement ‘a = b’ is informative, as it can tell us something new. But

what can it tell us? Not that two things are identical (for this is always false) and not

that a and b are each self-identical (for this is completely uninformative). What ‘a = b’

tells us is that anything true of a is true of b. This principle is known as Leibniz’s law,

after the German philosopher and mathematician Gottfried Wilhelm Leibniz (1646–

1716).

Leibniz’s Law: If a = b then, for any property P, Pa iff Pb.

Since identity is a relation and relations require relata (i.e. entities to relate), ‘a = a’

can be true only if a exists and if a exists, then a = a. So a exists if and only if a = a.

We thus have a way of understanding what ‘exists’ means and when a statement ‘a

exists’ is true in terms of identity. Rather than treating ‘a exists’ as a subject-predicate

sentence (suggesting that there could be entities that lack existence), we can analyze

it as the claim that a = a. On some views, a = a does not merely entail that a exists;

rather, it is what it is for a to exist.

We treat identity as a relation and add an identity predicate ‘=’ to the logical vocabu-

lary such that, if a and b are constants, then a = b is a well-formed atomic sentence of

L. We will write the negation of an identity statement as a ≠ b, rather than as ¬a = b.

1 Because of this, you might question whether identity is a relation at all. It is, albeit a special one.


If our domain D is {d1, d2, . . .}, then the extension of the two-place identity predicate
should be the set

{⟨d1, d1⟩, ⟨d2, d2⟩, . . .}

It does not follow that any predicate P that has this extension stands for identity. Sup-

pose that D is a set of 365 people, none of whom share a birthday with anyone else in

the domain. Then the extension of the predicate ‘— was born on the same day of the

year as —’ on D is the set

{⟨d1, d1⟩, ⟨d2, d2⟩, . . . , ⟨d365, d365⟩}

Now look at the extension of ‘is identical to’ on D: it is the very same set! We want

to say that these predicates pick out a different relation (being born on the same day

as and identity, respectively). Even if by chance no one in our domain shares a birth-

day, it is still the case that there could have been someone other than d1 who shares

d1 ’s birthday. The problem is that models cannot make this kind of distinction and so

this particular model cannot distinguish between being born on the same day as and

genuine identity. To avoid this problem, we do not assign an extension to the identity

predicate. Instead, we add a special clause defining satisfaction for identity statements:

|=M a = b iff aM = bM

The identity sign ‘=’ on the right is part of the metalanguage, abbreviating the English

‘is identical to’ and so does not belong to the logical language L, whereas the identity

sign ‘=’ on the left does belong to L. This definition says exactly what we expect

of an identity predicate: a model M satisfies a = b if and only if M assigns one and

the same entity to both a and b. Identity is clearly reflexive, symmetrical and transitive

and so an equivalence relation.2

We can use the identity predicate to express ‘there are at least n Ps’, ‘there are at most

n Ps’ and ‘there are exactly n Ps’.

2 In fact, identity is the smallest equivalence relation there is, for the equivalence classes that it partitions the domain into are all
singletons. Suppose that R is interpreted as an equivalence relation by M. If |=M a = b, then |=M Rab (semantically, if d1 = d2
then ⟨d1, d2⟩ ∈ RM) but the converse does not hold. It is in this sense that no equivalence relation can be smaller than identity.


At Least

We have seen how to say that there exists at least one P, for this just means that

something is P, i.e. ∃xPx. ‘There are at least two Ps’ can be paraphrased as ‘x is P, y is
P and x and y are distinct’:

∃x∃y(Px ∧ Py ∧ x ≠ y)

Note that ∃x∃y(Px ∧ Py) will not do, as this is satisfied in models in which there is

only one P.1 Similarly, ‘there are at least three Ps’ is expressed as ∃x∃y∃z(Px ∧ Py ∧
Pz ∧ x ≠ y ∧ x ≠ z ∧ y ≠ z). In general, ‘there are at least n Ps’ is expressed as:

∃x1∃x2 · · · ∃xn(Px1 ∧ Px2 ∧ · · · ∧ Pxn ∧
x1 ≠ x2 ∧ x1 ≠ x3 ∧ · · · ∧ x1 ≠ xn ∧
x2 ≠ x3 ∧ x2 ≠ x4 ∧ · · · ∧ x2 ≠ xn ∧
. . .
xn−1 ≠ xn)

Although this looks complicated, nothing difficult is going on. The first line says that

there are n things, all of which are Ps. The second line says that x1 is distinct from all

the others, the third that x2 is distinct from all the others and so on, such that each of

the n Ps are distinct.

At Most

We paraphrase ‘there is at most one P’ as ‘if x and y are Ps, then x and y are the same’:

∀x∀y(Px ∧ Py → x = y)

Think of the universal quantifier as saying, ‘pick any object you like from the domain

(and replace it afterwards)’. This sentence says: if you pick a P on the first go and

a P on the second go, then you picked the same thing each time. Because this is an

implication, it allows the case when there are no Ps. This is what we want: ‘you’ll get

at most 50% in the test’ does not rule out getting less! If there are at most two Ps and

we make three selections from the domain, picking a P each time, then we must have

picked the same object at least twice. ‘There are at most two Ps’ is therefore translated

as:

∀x∀y∀z(Px ∧ Py ∧ Pz → x = y ∨ x = z ∨ y = z)

1 Pa ∧ Pb is an instance of ∃x∃y(Px ∧ Py) and there is nothing to stop a and b from being interpreted as the same member of the

domain.


‘Exactly n’ is the conjunction of ‘at least n’ and ‘at most n’. ‘There is exactly one P’

is translated as:

∃x(Px ∧ ∀y(Py → y = x))

An even simpler way of expressing ‘there is exactly one P’ is:

∃x∀y(Py ↔ y = x)

This means that there is something you can pick from the domain such that, on every

selection you make, you either pick that thing or else something that is not a P. This is

possible only when that thing is the only P in the domain. ‘There are exactly two Ps’

is translated as:

∃x∃y(Px ∧ Py ∧ x ≠ y ∧ ∀z(Pz → z = x ∨ z = y))

The three leftmost conjuncts say that there are at least two Ps and the rightmost con-

junct says that any P has to be one of these ones. ‘There are exactly three Ps’ is

translated as:

∃x∃y∃z(Px ∧ Py ∧ Pz ∧ x ≠ y ∧ x ≠ z ∧ y ≠ z ∧ ∀w(Pw → w = x ∨ w = y ∨ w = z))
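These numeric translations can be verified by brute force on a small domain: the 'exactly two' sentence is satisfied precisely when P's extension has two members. An illustrative sketch (not from the book):

```python
from itertools import product

# ∃x∃y(Px ∧ Py ∧ x≠y ∧ ∀z(Pz → z=x ∨ z=y)) over a 3-element domain.
D = ['a', 'b', 'c']

def exactly_two(P):
    return any(
        x in P and y in P and x != y
        # ∀z(Pz → z=x ∨ z=y): every P is one of the two witnesses
        and all(z == x or z == y for z in D if z in P)
        for x in D for y in D
    )

# Check all 8 possible extensions of P: the sentence holds iff |P| = 2.
for bits in product([False, True], repeat=3):
    P = {d for d, b in zip(D, bits) if b}
    assert exactly_two(P) == (len(P) == 2)
print("checked all 8 extensions of P")
```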



A definite description is a noun phrase such as ‘the present King of France’. Definite

descriptions pose the following puzzle. How is it that the sentence, ‘the present King

of France is bald’ is meaningful, even though there is at present no king of France?

Given that there is no such person, is it true, false or meaningless to say that he is bald?

How should we decide? The problem gets worse when we factor in the law of excluded

middle. Everyone is either bald or is not and so either the present King of France is

bald, or else the present King of France has hairs on his head. But neither proposition

is true. Does this constitute a counterexample to the law of excluded middle?

According to Bertrand Russell’s famous analysis of definite descriptions, the sen-

tence ‘the present King of France is bald’ asserts that there is a unique person who is

King of France at present and he is bald, i.e. that:

(i) someone is King of France at present;

(ii) no one else is King of France at present; and

(iii) that person, whoever it may be, is bald.

We already know how to translate each of these into our first-order language. Using K

for ‘— is King of France at present’ and B for ‘— is bald’, we get:

∃x(Kx ∧ ∀y(Ky → y = x) ∧ Bx)

As we saw above, this is equivalent to:

∃x∀y((Ky ↔ y = x) ∧ Bx)

Russell’s analysis gives us a way out of our problem. Definite descriptions are mean-

ingful, on Russell’s view, in just the same way that any existential statement is mean-

ingful. Our translation of ‘the present King of France is bald’ entails ∃xKx. But since

this is false, so is ‘the present King of France is bald’. According to Russell’s analysis,

therefore, definite descriptions that do not pick out any entity are meaningful but false.

‘The present King of France is bald’ would also be false if there were several present

Kings of France, even if they were all bald. For although there would then be a present

King of France, he would not be the unique present King of France. ‘The present King

of France is not bald’, which we translate as:

∃x∀y((Ky ↔ y = x) ∧ ¬Bx)


is false as well, since this too entails that there is a present King of France. What about

the law of excluded middle? We translate ‘the present King of France is either bald or

not bald’ as

∃x∀y((Ky ↔ y = x) ∧ (Bx ∨ ¬Bx))

i.e. there is exactly one present King of France and he is either bald or not bald. This

sentence entails ∃x∀y(Ky ↔ y = x). Since this is false, our translation of ‘the present

King of France is either bald or not bald’ is false too. Our translation is not of the form

A ∨ ¬A, so can be false without contradicting the law of excluded middle.
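Because the translation is an ordinary first-order sentence, its behaviour can be checked mechanically in small finite models. The following sketch is my own illustration, not the book's: it evaluates the translation ∃x∀y((Ky ↔ y = x) ∧ (Bx ∨ ¬Bx)) with K as the set of kings and B as the set of bald things.

```python
# Brute-force evaluation of ∃x∀y((Ky ↔ y = x) ∧ (Bx ∨ ¬Bx)) in a tiny
# model. Domain elements are 0..n-1; K and B are subsets of the domain.
# The function name and the toy models are illustrative, not from the text.
def sentence_true(domain, K, B):
    return any(
        all(((y in K) == (y == x)) and ((x in B) or (x not in B))
            for y in domain)
        for x in domain
    )

domain = range(3)
print(sentence_true(domain, K=set(), B={0, 1, 2}))   # no king: False
print(sentence_true(domain, K={0}, B=set()))         # exactly one king: True
print(sentence_true(domain, K={0, 1}, B={0, 1}))     # two kings: False
```

The sentence comes out true only in the model with exactly one king, just as Russell's analysis predicts.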

Above, we met Leibniz’s law, formulated as: if a = b then, for any property P, Pa iff

Pb. ‘Property’ is used in a very wide sense here, such that any set of entities forms

a property. This is the sense of ‘property’ that we used in section 5.6 when we inter-

preted one-place predicates in terms of sets. Semantically, the quantifier ‘for any P’

is quantifying over sets, in fact, over all subsets of the domain. Viewed in this way,

Leibniz’s law says that, if a = b, then any set that a is a member of, b is a member of

too.

Note that we cannot express Leibniz’s law fully in first-order logic, as we cannot

express ‘for any property P’. To quantify over properties in this way we need to use

second order logic, which contains second order quantifiers ∀X and ∃X, in which the

second order variable X ranges over all subsets of the domain. Leibniz’s law can then

be expressed as:

∀x∀y(x = y → ∀X(Xx ↔ Xy))

Second order logic is more expressive than first order logic, as there are sentences

(such as the statement of Leibniz’s law above) that can be expressed in second order

but not first order logic. Another example is the structure of the natural numbers,

which we can think of as the result of applying the successor (or +1) function

without end, starting from 1:

1 → 2 → 3 → 4 → 5 → 6 → 7 → ···

This structure cannot be captured using first order sentences alone, because first or-

der sentences cannot rule out the possibility of many ‘rogue’ duplicates of the same

structure:


1 → 2 → 3 → 4 → 5 → 6 → 7 → ···

• → • → • → • → • → • → • → ···

• → • → • → • → • → • → • → ···

But why, if second order logic is so much more expressive than first order logic, have

we only looked at the latter in this book? The answer is that there can be no sound

and complete proof system for second order logic. Neither the tree test nor any other

algorithmic test can capture all of the truths about entailment in second order logic

(without mistakenly classifying some falsity as a truth).

Although we cannot fully express Leibniz’s law in first order logic, we can capture

a restricted version using the first order scheme:

∀x∀y(x = y → (Px ↔ Py))

Any instance of this scheme, obtained by replacing the predicate placeholder ‘P’ with

a predicate of L, results in a valid sentence. We thus have a valid instance of Leibniz’s

law for every one-place predicate P in the language but there may be an uncountable

number of subsets of the domain.1 We have not captured the ‘for every property’ of

our original formulation completely.
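The validity of each instance of the scheme can be illustrated by brute force over one small domain: however we interpret P as a subset of the domain, the instance comes out true. This is only a sketch over a three-element domain (the helper names are mine), not a proof of validity:

```python
from itertools import chain, combinations

# Check ∀x∀y(x = y → (Px ↔ Py)) under every interpretation of P as a
# subset of a small domain. Identity is Python's == on domain elements.
def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

domain = [0, 1, 2]
valid_in_every_interpretation = all(
    all(x != y or ((x in P) == (y in P)) for x in domain for y in domain)
    for P in map(set, powerset(domain))
)
print(valid_in_every_interpretation)  # True
```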

Exercises

Question 1 Translate the following into the logical language, using the same trans-

lation scheme as in question 1, chapter 5. Assume that the domain is the set of all

people.

(a) At most two people admire Bob.

(b) Somebody admires exactly one man.

(c) The man who admires Cath doesn’t admire David.

(d) Bob admires no one but himself.

(e) Somebody admires exactly two people.

(f) The person who admires Cath isn’t Cath.

1 In fact, if the domain is infinite, then it has an uncountable number of subsets. This follows from Cantor’s theorem, which states that the cardinality of the power set of a set X, infinite or not, is always greater than the cardinality of X.


Question 2 Accurately translate the following into colloquial English, using the same

translation scheme as above. Again, assume that the domain is the set of all people.

(a) ∃x(Axx ∧ ∀y(Ayy → y = x) ∧ ¬Aax)

(b) ∀x(Axx → ∃y(Axy ∧ x ≠ y))

(c) ∃x∀y(Aya ∨ Ayb → Axy)

(d) ∃x∃y∀z(x ≠ y ∧ (z = x ∨ z = y))

(e) ∃x∀y(My ↔ y = x)

(f) ¬∃x∃y∀z(Mz → z = x ∨ z = y)

7. Proof Trees for First Order Logic

Recap: Rules for the Logical Connectives

The basics of proof trees in first order logic are just the same as the basics of proof

trees for propositional logic (section 4.2). All of the rules that we encountered there

are still usable.

A ∧ B X: add A and B to the branch.
¬(A ∧ B) X: branch into ¬A | ¬B.
A ∨ B X: branch into A | B.
¬(A ∨ B) X: add ¬A and ¬B.
A → B X: branch into ¬A | B.
¬(A → B) X: add A and ¬B.
A ↔ B X: branch into A, B | ¬A, ¬B.
¬(A ↔ B) X: branch into A, ¬B | ¬A, B.
¬¬A X: add A.
A together with ¬A on a branch: close the branch with ×.

Here, A and B can be any first order sentences, atomic or complex. These rules do not

allow us to deal with quantified sentences or identity, which we will look at below.

The rules for negated quantified sentences ¬∀xA and ¬∃xA are obvious, given how ∀x

and ∃x are interrelated:

¬∀xA X ¬∃xA X

∃x¬A ∀x¬A


∀xA means that every member of the domain satisfies A and so, whatever constant c

we replace x with in A, adding A[x/c] to the branch cannot make the set of sentences

on the branch unsatisfiable if they were satisfiable to begin with. A first suggestion for

the rule for ∀xA is thus:

Incorrect attempt 1:

∀xA X

c is any constant

A[x/c]

Read this proposed rule as: given a sentence ∀x . . . x . . . , pick any constant c you like

and add . . . c . . . to the branch. Remember that ‘A[x/c]’ is schematic, so we never write

‘A[x/c]’ in a tree. ‘A[x/c]’ means ‘the sentence just like A, but with all xs replaced by

cs’ and that is what we would write in the tree if we used this rule.

This rule does not guide us much in building a tree. Remember that the proof tree

method is supposed to be algorithmic, so that a computer lacking any logical intuition

can carry out a proof simply by applying the rules. The problem with the proposed

rule is that a computer might never guess the right constant to instantiate ∀xA with.

Since we have infinitely many constants in the vocabulary, the computer might never

close a tree which could be closed if only the right constant were chosen. By way of

example, ∀xPx |= Pa and so the tree generated from ∀xPx and ¬Pa should close. The

correct tree is:

1. ∀xPx premise

2. ¬Pa ¬conclusion

3. Pa from 1, ∀

×

The ‘∀’ annotation on the right of line 3 indicates that Pa was added by applying the

∀ rule, instantiating ∀xPx by a. The proposed rule permits this move without in any

way suggesting that we should choose a as our constant. The tree hints at a more

constructive rule: instantiate ∀xA with a constant that already appears somewhere on

the branch. We will call a constant old when it is already on the branch, so the second

proposal is:


Incorrect attempt 2:

∀xA X

c is an old constant

A[x/c]

This rule is sound (as it allows fewer instantiations of ∀xA than the previous proposal).

The problem now is that we cannot show that ∀xPx |= ∃xPx. Our tree begins:

1. ∀xPx premise

2. ¬∃xPx X ¬conclusion

3. ∀x¬Px from 2, ¬∃

But then we are stuck, for there is no constant on the branch to instantiate either

universal sentence with. The correct rule is a half-way-house between these two pro-

posals: we instantiate a universally quantified sentence with an old constant wherever

possible; but if there are no constants on the branch, choose a new one. We can then

finish the tree we could not complete before as follows:

1. ∀xPx premise

2. ¬∃xPx X ¬conclusion

3. ∀x¬Px from 2, ¬∃

4. Pa from 1, ∀

5. ¬Pa from 3, ∀

×

Notice how we have not checked off either ∀-sentence here. Checking off a sentence

means that we cannot use it anymore. But a sentence ∀xA can be instantiated with as

many constants as we have on our branch (or a new one, if there are none), so we

never check it off. Our rule for ∀xA is therefore:

∀xA

c is old, if possible;

new otherwise

A[x/c]


1. Rab X premise

2. ¬∃xRax ¬conclusion

3. ∀x¬Rax from 2, ¬∃

4. ¬Rab from 3, ∀

×

The rule for ∃xA

Suppose we find ∃xA on a branch, on which the constants c1 , . . . , cn have already ap-

peared. We know that A must be true of some individual, but which one? We cannot

say that it is c1 , or c2 or so on, because A might not be true of that particular individ-

ual. It might be any of the individuals denoted by c1 through to cn , or none of them.

In order to see what to do with ∃xA in a tree, we need to consult the semantics for

∃-sentences. The definition of satisfaction tells us that, if a model M satisfies ∃xA,

then we can take a new constant c and assign it to some element d of the domain;

there must be at least one way of doing this which satisfies A[x/c].

In section 7.4 below, we will look at how to construct a model from an open branch. When

we do this, we give an interpretation of the constants that appear in that branch only.

As a consequence, constants that do not appear in the branch can be chosen as new

constants. When we instantiate ∃xA in a tree, therefore, we can use any constant not

already appearing in any sentence on the same branch as ∃xA. Our rule for ∃xA is thus:

∃xA X

c is new to the branch

A[x/c]

Read the rule this way: given a sentence ∃x . . . x . . . , look along the branch and find a

constant c that does not appear anywhere on it. Check off the sentence and add . . . c . . .

to the branch.

As there are no premises here, we begin the tree with the negation of the conclusion only:


1. ¬(∀x¬Px ↔ ¬∃xPx) X                 ¬conclusion

2.    ∀x¬Px          ¬∀x¬Px X
                                      from 1, ¬↔
3.    ¬¬∃xPx X       ¬∃xPx X

4.    ∃xPx X         ∃x¬¬Px X         from 3, ¬¬ and 2, ¬∀

5.    Pa             ¬¬Pa             from 4, ∃

6.    ¬Pa            ∀x¬Px            from 2, ∀ and 3, ¬∃
       ×
7.                   ¬Pa              from 6, ∀
                      ×

The tree closes, demonstrating that ∀x¬Px ↔ ¬∃xPx is valid. The annotations on the

right of lines 4 and 6 describe what happens in that line from left to right.

1. ∃x(Px ∧ Qx) X                      premise

2. ¬(∃xPx ∧ ∃xQx) X                   ¬conclusion

3. Pa ∧ Qa X                          from 1, ∃

4. Pa
                                      from 3, ∧
5. Qa

6.    ¬∃xPx X        ¬∃xQx X          from 2, ¬∧

7.    ∀x¬Px          ∀x¬Qx            from 6, ¬∃

8.    ¬Pa            ¬Qa              from 7, ∀
       ×              ×


The tree closes, demonstrating that ∃x(Px ∧ Qx) |= ∃xPx ∧ ∃xQx. The strategy here was

to apply all non-branching rules before any branching rules, to keep the tree as small

as possible. The annotations on lines 7 and 8 apply to both the left-hand and the right-hand branches, as we applied the same rule to each branch.

We want rules for positive and negative identity sentences a = b and a ≠ b. The rule

for negative occurrences is simple:

a ≠ a

×

We can immediately close any branch which denies that a is identical to itself.1 The

rules for positive identity sentences are also simple. They are a form of Leibniz’s law:

a=b a=b

···a··· ···b···

···b··· ···a···

The left-hand rule says: given a = b and any other sentence with a in it on a branch,

extend the branch by writing the second sentence, with every occurrence of a replaced

by a b, at the bottom of the branch. The right-hand rule is similar: given a = b and

any sentence containing bs, add that sentence with all bs replaced by as to the branch.

Note how we do not check either premise off: this is because a = b can be re-used together with other sentences that contain a (and other rules might well apply to

· · · a · · ·). This rule applies to atomic sentences a = b only: we cannot apply the rule to

quantified sentences such as ∀x∃y x = y.
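In programming terms, the positive identity rules behave like textual substitution on atomic sentences. A minimal sketch, with a hypothetical helper name of my own:

```python
# Sketch of the identity rule as substitution on an atomic sentence:
# given a = b, every occurrence of a may be rewritten as b (and vice versa).
def substitute(sentence, old, new):
    """Rewrite all occurrences of constant `old` to `new`."""
    return sentence.replace(old, new)

print(substitute("Pa", "a", "b"))    # Pb
print(substitute("Rab", "a", "b"))   # Rbb
```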

1 You might think that there is more to say about negative identity sentences. But on reflection, a ≠ b tells us very little. We cannot infer from a ≠ b and b ≠ c that a ≠ c, for example.


1. Pa ∧ Qa premise 1

2. a=b premise 2

3. ¬(Pb ∧ Qb) ¬conclusion

4. Pb ∧ Qb from 1, 2, =

×

We do not need to apply the ¬∧ rule to line 3 or the ∧ rule at line 4 as the tree can be

closed immediately after applying the rule for identity.

7.3 Undecidability

In section 4.4, we saw that the tree method is an effective test for entailment in propo-

sitional logic: whenever we test whether Γ |= p, we get a ‘yes’ or a ‘no’ answer sooner

or later. This is not so in first order logic. If the language contains just one two-place

predicate, the entailment relation is undecidable. This means that there are trees be-

ginning with first order sentences that never terminate. Here is one such tree:

1. ¬¬∀x∃yRxy X ¬conclusion

2. ∀x∃yRxy from 1, ¬¬

3. ∃yRay X from 2, ∀

4. Rab from 3, ∃

The tree is not finished, as line 2 has an unchecked sentence on it (remember that

we do not check off ∀-sentences when instantiating them). We have not instantiated

∀x∃yRxy with b yet, so we can carry on as follows:


⋮

5. ∃yRby X from 2, ∀

6. Rbc from 5, ∃

Now we have another new constant, c. We can use this to instantiate ∀x∃yRxy from line

2. But then we will instantiate another ∃-sentence and generate another new constant,

and then another and so on, forever. The branch will never close as we always generate

a new constant and so can always apply the rule for ∀-sentences.
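The runaway behaviour just described can be simulated directly. This sketch (all names are mine) mimics the alternation of the ∀ and ∃ rules on ∀x∃yRxy: each round instantiates with the newest constant, and the ∃-rule then introduces another, so the branch grows forever.

```python
from itertools import count

# Simulate the non-terminating branch for ∀x∃yRxy. Each ∀-instantiation
# with constant c adds ∃yRcy; the ∃-rule then introduces a brand-new
# constant, which feeds the next ∀-instantiation.
def expand(steps):
    fresh = count(0)                  # generates c0, c1, c2, ...
    constants = [f"c{next(fresh)}"]   # the first instantiation needs one
    branch = []
    for _ in range(steps):
        c = constants[len(branch)]    # next constant awaiting the ∀ rule
        new = f"c{next(fresh)}"       # ∃-rule: a new constant every time
        branch.append(f"R{c}{new}")
        constants.append(new)
    return branch

print(expand(4))  # ['Rc0c1', 'Rc1c2', 'Rc2c3', 'Rc3c4']
```

However many steps we run, there is always one more constant waiting, so the simulated branch never finishes.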

The immediate reaction is to look for an alternative proof method for first order

logic. But this is not a deficiency of just our tree rules: no automatic proof procedure

is a decision procedure for first order entailment. The proof of this is fairly hard, as

we need a way of talking about any possible proof system, rather than just showing

that this or that way of proving things does not give us a decision procedure.1 The

lack of an automatic decision procedure is not so bad as it sounds at first, because we

can still show that the tree method is complete: this means that, if Γ |= A, then the

tree beginning with Γ ∪ {¬A} is guaranteed to close after finitely many steps. What

we cannot say is that every tree will either close or else run out of applicable rules.

Given a tree that has not closed yet, there is no general method for telling whether the

tree will close or not (because if we could know this in advance, for any tree, then we

would have a decision procedure for first order entailment).

We can construct a model from a finished open branch in much the same way as we

constructed a valuation from a finished open branch in propositional logic (section 4.5). But

first, we had better say what we mean for a branch to be finished.

Finished branches A branch is finished iff:

(i) it is closed; or

(ii) all non-atomic sentences on the branch have been checked off; or

(iii) the only unchecked non-atomic sentences on the branch are ∀-sentences, each of

which has been instantiated by every constant on the branch.

1 See Jeffrey (2006). Boolos et al (2002) gives more of the technical background.


Although we never check off ∀-sentences, they are no longer of use to us once we

have instantiated them with every old constant on the branch.

Trees Without Identity

To construct a model M, start with a finished open branch. List all of the constants

that occur in atomic sentences on the branch; suppose that c1 is the first in our list,

c2 is the second and so on and that cn is the last constant in our list. Our domain can

contain any kind of entity that we like. For convenience, we will take it to be the set of

the first n natural numbers, {1, 2, . . . , n}. Next, we need to provide an interpretation of

each constant and each predicate appearing on the branch. For the constants, we set:

c1^M = 1, c2^M = 2, . . . , cn^M = n

Next, for every predicate P on the branch, we add the tuple

⟨c1^M, . . . , cm^M⟩

to the extension of P iff Pc1 · · · cm is on the branch. Given the way that we have interpreted constants from our list as numbers, this means that we add the tuple ⟨1, . . . , m⟩ to P^M iff Pc1 · · · cm is on the branch. As part of the completeness proof in section 7.5

below, I give a proof that |=M A for any sentence A on the branch. As a consequence,

M satisfies the set of sentences at the root of the tree.
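The construction just described is mechanical enough to sketch in code. Assuming, for simplicity, one-letter predicate and constant names (the function name is mine, not the book's), a model can be read off a list of atomic sentences like this:

```python
# Read a model off the atomic sentences of an open branch. Each atom is
# a string like "Pa" or "Raa": first character the predicate, the rest
# its argument constants. Constants are interpreted as 1, 2, ..., n.
def model_from_branch(atoms):
    constants = sorted({c for a in atoms for c in a[1:]})
    denote = {c: i + 1 for i, c in enumerate(constants)}   # c1 -> 1, ...
    extensions = {}
    for a in atoms:
        pred, args = a[0], a[1:]
        extensions.setdefault(pred, set()).add(
            tuple(denote[c] for c in args))
    return denote, extensions

denote, ext = model_from_branch(["Pa", "Raa"])
print(denote, ext)  # {'a': 1} {'P': {(1,)}, 'R': {(1, 1)}}
```

This is exactly the model D = {1}, a^M = 1, P^M = {1}, R^M = {⟨1, 1⟩} built in the example that follows.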

1. ¬¬∃x∀y(Rxy ∧ Px) X                 ¬conclusion

2. ∃x∀y(Rxy ∧ Px) X                   from 1, ¬¬

3. ∀y(Ray ∧ Pa)                       from 2, ∃

4. Raa ∧ Pa X                         from 3, ∀

5. Raa
                                      from 4, ∧
6. Pa

The tree does not close and so ¬¬∃x∀y(Rxy ∧ Px) is satisfiable. We build a model M

that satisfies the sentence as follows. We have only the one constant a on the branch,


and so we set:

D = {1}
a^M = 1
P^M = {1}
R^M = {⟨1, 1⟩}

It is easy to check that |=M Pa, |=M Raa and so |=M Raa ∧ Pa. What about the other sentences on the branch? |=M ∀y(Ray ∧ Pa) because |=M^d_c Rac ∧ Pa for all d ∈ {1} (where M^d_c is M extended so that the new constant c denotes d).1 Similarly, |=M ∃x∀y(Rxy ∧ Px) because |=M^d_c ∀y(Rcy ∧ Pc) for some d ∈ {1}, and hence |=M ¬¬∃x∀y(Rxy ∧ Px).

Trees With Identity

In trees that contain the identity predicate, we have to be a bit more subtle. We begin

with the list of constants on the open branch as before. We then sort these constants into groups, so that ci and cj are in the same group if and only if ci = cj is on the branch.

If we have n groups of constants, the domain D = {1, 2, . . . , n}. We then proceed just as before, with the proviso that we interpret ci and cj as the same number (i.e. ci^M = cj^M) if and only if ci and cj belong to the same group.
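Sorting constants into groups in this way is just computing equivalence classes under the identities on the branch. A minimal sketch (the helper names are mine) using a union-find-style pass:

```python
# Group constants by the identity sentences on a branch: constants
# linked by a chain of identities end up in the same group.
def group_constants(constants, identities):
    parent = {c: c for c in constants}
    def find(c):
        while parent[c] != c:
            c = parent[c]
        return c
    for a, b in identities:          # merge the two groups for a = b
        parent[find(a)] = find(b)
    groups = {}
    for c in constants:
        groups.setdefault(find(c), []).append(c)
    return sorted(groups.values())

print(group_constants(["a", "b", "c"], [("c", "a")]))  # [['a', 'c'], ['b']]
```

With the branch of Example 7 below, this yields exactly the two groups {a, c} and {b}.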

Example 7 Construct a tree to test whether |= ¬(∃x∃y(Px ∧ y = a) ∧ Pa).

The tree (below) does not close and so ¬¬(∃x∃y(Px ∧ y = a) ∧ Pa) is satisfiable. We

build a model M that satisfies the sentence as follows. We have three constants on the

branch, a, b and c. But we also have c = a on the branch and so we group the constants

as follows:

Group 1: a, c Group 2: b

As we have two groups, we set

D = {1, 2}
a^M = 1
b^M = 2
c^M = 1
P^M = {1, 2}


1. ¬¬(∃x∃y(Px ∧ y = a) ∧ Pa) X        ¬conclusion

2. ∃x∃y(Px ∧ y = a) ∧ Pa X            from 1, ¬¬

3. ∃x∃y(Px ∧ y = a) X
                                      from 2, ∧
4. Pa

5. ∃y(Pb ∧ y = a) X                   from 3, ∃

6. Pb ∧ c = a X                       from 5, ∃

7. Pb
                                      from 6, ∧
8. c = a

9. Pc                                 from 4, 8, =

Soundness

Recall that we write ‘Γ ` A’ in the metalanguage to abbreviate ‘the tree generated by

Γ ∪ {¬A} closes’. The tree test is sound if and only if: Γ ` A only if Γ |= A. In terms

of trees, soundness of the tree test means that:

Satisfiable sets of sentences generate open trees

To prove that this is so, we need to show that all tree rules preserve satisfiability: if the

sentences on a branch are satisfiable before a rule is applied, then they are satisfiable

after it is applied too. We saw in section 4.6 that the rules for the logical connectives

are sound and the rules for ∀, ¬∀, ¬∃, = and ¬ = are all clearly sound too. The

only questionable rule is the ∃ rule, for the inference from ∃xA to A[x/c] is invalid.

Nevertheless, the ∃ rule preserves satisfiability. Suppose that the set ∆ of sentences

on a branch are satisfiable and include ∃xA but that c does not appear anywhere on

this branch. Then we can build a model M which satisfies ∆ as we did above. Since


|=M ∃xA and c is not interpreted in M, |=M^d_c A[x/c] for some d in the domain of M. Hence |=M^d_c ∆ ∪ {A[x/c]} and so adding A[x/c] to the branch preserves satisfiability.1

Completeness

The tree test is complete if and only if: Γ |= A only if Γ ` A. In terms of trees, this

means that:

Unsatisfiable sets generate closed trees

As in section 4.6 we will prove this by proving that:

The set of sentences on a finished open branch are satisfiable

This entails that the set at the root of a finished tree with an open branch is satisfi-

able, which is the contrapositive of (and so equivalent to) what we are trying to prove.

Let ∆ be the set of sentences on a finished open branch and let M be the model

constructed from the open branch in the way described above. We need to show that

|=M ∆. It is clear that, if A ∈ ∆ is atomic, then |=M A (since a^M = b^M if a = b ∈ ∆ and ⟨c1^M, . . . , cn^M⟩ ∈ P^M if Pc1 · · · cn ∈ ∆). The argument from section 4.6 shows that

M satisfies all negated sentences, conjunctions, disjunctions, conditionals and bicon-

ditionals of a certain complexity in ∆, provided that M satisfies all less complex

sentences in ∆. What about quantified sentences ∃xA and ∀xA? Assume that M sat-

isfies all sentences in ∆ that are less complex than ∃xA and that ∃xA ∈ ∆. We would

have checked off ∃xA and so, for some constant c, A[x/c] ∈ ∆ and, since A[x/c] is less

complex than ∃xA, |=M A[x/c] by our hypothesis. But A[x/c] |= ∃xA, hence |=M ∃xA.

Now assume that M satisfies all sentences in ∆ that are less complex than ∀xA and

that ∀xA ∈ ∆. Since we instantiated ∀xA with every constant in the branch, |=M A[x/c]

for every c that appears in ∆. Because the domain of M contains no more elements

than there are constants in ∆, it follows that |=M ∀xA.

Exercises

Question 1 Using trees, test the following. If the entailment does not hold, construct

a counter model (state what the domain of the model is and what interpretation it gives

to P, Q and those constants that appear in your tree).

1 The ∃ rule differs from the other rules in that not every model that satisfies the set of sentences on a branch before the rule is

applied will satisfy the set of sentences on the branch after the rule is applied. This is because ∃xA 6|= A[x/c], even if c does not

appear anywhere in A.


(b) ∃xPx |= ∀xPx

(c) ∃x(Px ∧ Qx) |= ∃xPx ∧ ∃xQx

(d) ∃xPx ∧ ∃xQx |= ∃x(Px ∧ Qx)

Question 2 Translate each of the following into first order logic (using the translation

scheme from the questions in the previous chapter) and use a tree to show that the

entailment holds:

(a) ‘The man who admires Cath also admires Anna’ entails ‘someone admires Anna’.

(b) ‘The person who admires Cath is a man’ entails ‘only men admire Cath’.

(c) ‘The person who admires Cath is a man’ entails ‘at most one person admires Cath’.

∃x∀y(Py ↔ y = x), ∃x∃y(x ≠ y ∧ Px ∧ Rxy)

‘Bob’s father’ and ‘Bob’s father’s father’ are all terms. We can extend our formal

language by saying that all constants are terms and, if t is a term, then ft and gt are

terms. If t1 and t2 are terms, then Pt1 and t1 = t2 are well-formed atomic sentences.

There are no extra tree rules for terms. Using trees, test whether:

(a) ∀x fx = gx, Pfa |= ∃xPgx

(b) ∀x gfx = x |= ∀x∀y(fx = y → gy = x)

First order Hintikka sets correspond to the sentences on a finished open branch of a

proof tree. Complete the following conditionals which, in addition to the conditions

given in question 5, chapter 4, define a first order Hintikka set H:

(a) If ∀xA ∈ H then —?

(b) If ∃xA ∈ H then —?

(c) If ¬∀xA ∈ H then —?

(d) If ¬∃xA ∈ H then —?


A set is just a collection of objects. The collection is abstract: objects do not have to

be arranged in space or time in any particular way to form a set. For example, there is a

set consisting of Tony Blair, the Emperor Augustus and the number 17. If a, b and c

are objects, we write the set that contains them (and no further objects) as

{a, b, c}

Here, a, b and c are called the elements or members of the set. Sets themselves are

considered to be objects, so that a set can be an element of another set, for example,

the set

{a, b, {a, c}}

What makes a set the set it is, rather than some other set, is just its members, so that

there cannot be two distinct sets that have exactly the same members. Thus, ‘{a, c, b}’,

‘{a, b, c}’ and ‘{a, b, b, c}’ are different ways of writing the very same set. But note

that {a, b, {c}} is not the same set as {a, b, c}, for the former contains the set {c}

whereas the latter does not. We count the collection of no objects at all as a set. This

is called the empty set and is written ‘∅’ (some authors use ‘{}’).1

I will use capital roman letters ‘X’, ‘Y’, ‘Z’ to name sets, the symbol ‘∈’ to abbreviate ‘is a member of’ and ‘∉’ to abbreviate ‘is not a member of’. The union of two sets

X and Y , X ∪Y , is the set that contains all the members of X and all the members of Y

(and no more). An object a ∈ X ∪Y , therefore, if and only if either a ∈ X or a ∈ Y . For

example, if X = {a, b, c} and Y = {b, c, d} then X ∪Y = {a, b, c, d}. The intersection

of X and Y , X ∩Y , is the set that contains all of the objects that are both members of

X and members of Y . An object a ∈ X ∩Y if and only if a ∈ X and a ∈ Y . If X and Y

are as above, then their intersection X ∩Y = {b, c}.

X is a subset of Y , written ‘X ⊆ Y ’, if and only if every member of X is also a

member of Y. {a} is a subset of {a, b}, as is {b}. Note that X ⊆ X and ∅ ⊆ X for

every X. The set of all subsets of X, including ∅ and X itself, is the power set of X,

written ℘(X).2
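These operations correspond directly to Python's built-in set operations, which make a convenient sandbox for checking the definitions (the examples here are my own):

```python
from itertools import chain, combinations

# Union, intersection and subset on the example sets X and Y above.
X, Y = {"a", "b", "c"}, {"b", "c", "d"}
print(X | Y == {"a", "b", "c", "d"})   # union:        True
print(X & Y == {"b", "c"})             # intersection: True
print({"a"} <= X, set() <= X)          # subset tests: True True

# The power set ℘(X); frozenset is used so the subsets can themselves
# be members of a set.
def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

print(len(powerset(X)))  # 8, i.e. 2^3
```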

1 There can only be the one empty set. If there were two distinct empty sets, then at least one object must belong to one but not the

other. But since no objects belong to the empty set, this cannot be the case.

2 The power set of X can also be written ‘2^X’.


Appendix B. Infinity

Infinity is a confusing topic but for our purposes we only need the concept of the

members of a set being countable. This means that we could, if we had enough time,

patience, paper and so on, write out a list containing all members of the set. When

we could do this, the set is countable (or enumerable or denumerable; these are both

synonyms for ‘countable’); if we could not do this, even in theory, then the set is

uncountable. Clearly all finite sets are countable, so from now on I shall reserve the

term ‘countable’ for countably infinite sets.

The set of all natural numbers N = {0, 1, 2, . . .} is countable: if we start at 0 and

each time we write a number n down, we next write n + 1, then any particular natural

number will be added to the list eventually.1 The formal definition of a countable

set is that its members can be paired off one-to-one with the natural numbers. The

cardinality of a set X, written ‘|X|’, is a measure of X’s size. When two sets have

the same cardinality, they are called equinumerous and all members of one set can

be paired off one-to-one with all members of the other (in fact, some approaches

define cardinality in terms of such one-to-one pairings). If X is finite, then |X| is the

number of entities in X. The infinite cardinal numbers are ℵ0 , ℵ1 and so on. The set

of natural numbers has a cardinality of ℵ0 and so, by definition, do all countable sets.

The following examples illustrate these ideas.

The set of even natural numbers is countable

That is, |{0, 2, 4, . . .}| = |N| = ℵ0 . To see this, pair each natural number n with the even number 2n; this pairing is one-to-one and covers every even number, so the two sets are equinumerous.

The set Z of integers is countable

We can pair off the non-negative members of Z (0, 1, 2, . . .) with the even members of N and

negative members of Z (−1, −2, −3, . . .) with odd members of N.
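The pairing just described can be written down explicitly. A small sketch (the function name is mine) sending each natural number to the integer it is paired with:

```python
# Even naturals pick out the non-negative integers, odd naturals the
# negative ones: 0, 1, 2, 3, 4, ... pairs with 0, -1, 1, -2, 2, ...
def integer(n):
    """The integer paired with natural number n."""
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

print([integer(n) for n in range(7)])  # [0, -1, 1, -2, 2, -3, 3]
```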

1 Sometimes 0 is not included in N but reasons for thinking it to be in any way unnatural are historical rather than mathematical. The

natural numbers less 0 are sometimes called the counting numbers but since computer scientists always count from 0, this is not

wholly appropriate! From the point of view of how many natural numbers there are, it does not matter whether we include 0 or not.


The set of rational numbers is countable

Rational numbers are numbers that can be expressed as a fraction n/d. Enumerate them as follows: for each natural number m, write down every fraction n/d where n ≤ m and 1 ≤ d ≤ m (crossing out any fraction already written down as you go). This enumeration contains every non-negative rational number; the negative rationals can then be interleaved just as the negative integers were above.
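The stage-by-stage enumeration can be sketched directly, using Python's Fraction type to detect repeats (the function name is mine):

```python
from fractions import Fraction

# At stage m, list every fraction n/d with n <= m and 1 <= d <= m,
# skipping any value that has already appeared at an earlier stage.
def rationals(stages):
    seen, order = set(), []
    for m in range(1, stages + 1):
        for n in range(0, m + 1):
            for d in range(1, m + 1):
                q = Fraction(n, d)
                if q not in seen:
                    seen.add(q)
                    order.append(q)
    return order

print(rationals(3))  # 0, 1, 1/2, 2, 1/3, 2/3, 3, 3/2
```

Every fraction n/d appears by stage max(n, d), so every non-negative rational eventually turns up in the list.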

The Real Numbers Are Uncountable

Even if we only consider the real numbers between 0 and 1, we find more of them than we could possibly enumerate. Real numbers between 0 and 1 can be written as ‘0.’

followed by an infinite number of digits (0 to 9), like this:

0.d1 d2 d3 d4 d5 d6 · · ·

Suppose we could enumerate all real numbers. Let’s write dnm to mean the nth digit of

the mth number in our list. Now, let’s construct a real number r as follows: the first

digit after the decimal place is not d11 , the second is not d22 , the third is not d33 and so

on. Suppose this number is the mth in our enumeration. Then its mth digit after the

decimal point must be dmm . But we constructed r so that its mth digit is not dmm , hence

r cannot be in our enumeration. No enumeration can contain all of the real numbers

between 0 and 1 (and so certainly cannot contain all real numbers).
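The diagonal construction can be illustrated on a finite chunk of a supposed enumeration (all names here are mine): build a number whose mth digit differs from the mth digit of the mth listed real, so it differs from every entry in the list.

```python
# Diagonalisation: change each diagonal digit by adding 1 mod 10, so the
# result differs from the mth row at its mth digit.
def diagonalise(digit_rows):
    return [(row[m] + 1) % 10 for m, row in enumerate(digit_rows)]

rows = [
    [1, 4, 1, 5],   # digits of the 1st listed real
    [7, 1, 8, 2],   # 2nd
    [3, 3, 3, 3],   # 3rd
    [0, 0, 0, 0],   # 4th
]
d = diagonalise(rows)
print(d)  # [2, 2, 4, 1]
print(all(d[m] != rows[m][m] for m in range(len(rows))))  # True
```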

The Continuum Hypothesis

This clever style of proof is known as a diagonal argument and was found by Georg Cantor in the 1890s. It establishes that |R|, the cardinality of the real numbers,

is greater than |N|, the cardinality of the natural numbers. |R| is equal to the cardi-

nality of the power set of the natural numbers, i.e. 2ℵ0 . Cantor could not, however,

determine whether there is a cardinal number greater than |N| but smaller than |R|. He

hypothesized that there is not, i.e. that:

2ℵ0 = ℵ1

This is known as the continuum hypothesis. The generalized version of the continuum

hypothesis is that, for any ordinal number α,

2ℵα = ℵα+1

Neither statement can be proved or disproved from the standard axioms of set theory: Gödel and Cohen showed that the continuum hypothesis is independent of those axioms.


Two longer modern introductions to formal logic are:

Greg Restall, Logic: An Introduction (Abingdon: Routledge, 2006).

Peter Smith, An Introduction to Formal Logic (Cambridge: Cambridge University

Press, 2003).

Texts on logic do not all agree on the notation used. Restall and Smith use ‘⊃’ for the

material conditional, for example (this is quite common). Restall also uses ‘∼’ and

‘&’ for negation and conjunction.

Richard Jeffrey, Formal Logic: its Scope and Limits, 4th edition, ed. John Burgess

(Cambridge: Hackett, 2006).

This is a concise introduction to logic which covers the undecidability of first order

logic and the incompleteness of second order logic well, using register machines. A

more advanced text on incompleteness and (un)computability in general is:

George Boolos, John Burgess and Richard Jeffrey, Computability and Logic, 4th edi-

tion, ed. John Burgess (Cambridge: Cambridge University Press, 2002).

Good introductions to relevant logic (which avoids the counterintuitive consequences

of entailment that we met in chapter 3.3) are:

E. D. Mares and R. K. Meyer, ‘Relevant Logics’ in L. Goble, ed., The Blackwell Guide

to Philosophical Logic (Oxford: Blackwell 2001).

Stephen Read, Relevant Logic (Oxford: Blackwell, 1988).

A standard text on the philosophy of logic is:

Haack, Susan. Philosophy of Logics (Cambridge: Cambridge University Press, 1978).

And a good introduction to how logic applies to philosophy in general is:

A. C. Grayling, An Introduction to Philosophical Logic, 3rd edition (Oxford: Black-

well, 1997).

Answers to Exercises

Chapter 1

(a) True. The argument: snow is green, therefore snow is green is valid, because the

conclusion must be true if the premise is, as they are the same! But the premise is

false, so the argument is not sound.

(b) False. If the truth of premises p1 to pn guarantees the truth of the conclusion, then

the truth of premises p1 to pn plus pn+1 to pn+m must also guarantee the truth of

that same conclusion (since p1 to pn remain true).

(c) False. If removing premises could make a particular invalid argument valid, then adding those same premises back to the resulting valid argument would make it invalid.

But as we saw in the previous question, this is not possible.

Chapter 2

Question 1

(a) False. Valuations assign truth values only to the primitive propositions.

(b) True.

(c) False. Using a truth table, you can check that p → ¬p is satisfied by a valuation V

that sets V (p) = F (in other words, p → ¬p is true when p is false).

(d) False. The primitive proposition p is not a tautology, but neither is ¬p.

(e) True. If p has an ‘F’ in every row of its truth table, then ¬p will have a ‘T’ in every

row of its truth table.

(f) True (by definition of a satisfiable set of sentences).

(g) False. p is satisfiable and so is ¬p, but the set {p, ¬p} is not.


Question 2

[Truth tables for propositions (a)–(h) not reproduced: only their unlabelled T/F rows survive in this copy.]

(d)–(g) are tautologies. Valuations that do not satisfy the remaining propositions are

as follows:

(a) V (p) = V (q) = F or V (p) = F, V (q) = T

(b) V (p) = T, V (q) = F

(c) V (p) = V (q) = F or V (p) = V (q) = T

(h) V (p) = T, V (q) = V (r) = F

Question 3

uw = ‘you will work hard’ iw = ‘I will work hard’

uf = ‘you will fail’ if = ‘I will fail’

(a) p = uw ∧ iw ∨ uf ∧ if, q = uw ∧ (iw ∨ uf ∧ if)

(b) p → q is not a tautology. A valuation V does not satisfy p → q when V (uf) = V (if) = T and V (uw) = F.

(c) q → p is a tautology.
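Assuming the translations in (a), both claims can be confirmed by enumerating all sixteen valuations (a sketch; the helper names are mine):

```python
from itertools import product

ATOMS = ["uw", "iw", "uf", "if"]     # 'if' is only a dict key here, so the
                                     # Python keyword causes no trouble

def tautology(prop):
    return all(prop(dict(zip(ATOMS, values)))
               for values in product([False, True], repeat=4))

p = lambda v: (v["uw"] and v["iw"]) or (v["uf"] and v["if"])
q = lambda v: v["uw"] and (v["iw"] or (v["uf"] and v["if"]))

print(tautology(lambda v: (not q(v)) or p(v)))   # True:  q -> p is a tautology
print(tautology(lambda v: (not p(v)) or q(v)))   # False: p -> q is not, e.g.
print(p({"uw": False, "iw": False, "uf": True, "if": True}))   # True, while
print(q({"uw": False, "iw": False, "uf": True, "if": True}))   # False
```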

Chapter 3

Question 1

(a) False. ‘|=’ belongs to the meta-language (English augmented with logical sym-

bols).

(b) False. ‘Bertrand Russell was a philosopher’ is a true proposition but is not entailed

by ‘Bertrand Russell was an Earl’, for example.


(c) True.

(d) False. Since a valuation can satisfy infinitely many propositions, ‘|=V Γ’ and hence ‘Γ |= p’ make perfect sense.

(e) True.

(f) True. p entails p and p ∧ p and p ∧ p ∧ p and so on. Since there is no limit to

the length of such conjunctions, there must be infinitely many such propositions

entailed by p.

(g) True.

Question 2

(a) p q (⊤ → p) ∨ (q → ⊥)

F F F T T

F T F F F

T F T T T

T T T T F

(c) ¬(¬q ∨ p) ≡ ((⊤ → p) ∨ (q → ⊥)) → ⊥.

(d) q ∧ ¬p ≡ ¬(¬q ∨ p).
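Both equivalences can be confirmed by comparing truth tables; in this sketch ⊤ and ⊥ are modelled as the Boolean constants, and the helper names are mine:

```python
from itertools import product

TOP, BOT = True, False
impl = lambda a, b: (not a) or b

def equivalent(f, g):
    """Same truth value on every row of the (p, q) truth table."""
    return all(f(p, q) == g(p, q) for p, q in product([False, True], repeat=2))

lhs = lambda p, q: not ((not q) or p)                         # ~(~q v p)
rhs_c = lambda p, q: impl(impl(TOP, p) or impl(q, BOT), BOT)  # ((T->p) v (q->⊥)) -> ⊥
lhs_d = lambda p, q: q and not p                              # q & ~p

print(equivalent(lhs, rhs_c))   # True: answer (c)
print(equivalent(lhs_d, lhs))   # True: answer (d)
```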

Question 3

(a) This is already in negation normal form.

(b) p ∧ ¬q.

(c) p.

(d) ¬p ∧ (¬q ∨ ¬r).

(e) p ∧ ¬q ∧ q. Note that, since this is unsatisfiable, p ∧ ¬p or ⊥ are just as good

answers.
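Negation normal form conversion is mechanical: drive negations inward using double negation elimination and De Morgan's laws. A minimal sketch over a not/and/or language, with formulas encoded as nested tuples (a representation of my own, not the book's):

```python
# Formulas: an atom is a string; compounds are ("not", A), ("and", A, B), ("or", A, B).
def nnf(f):
    """Push negations inward until they sit only on atoms."""
    if isinstance(f, str):
        return f
    op = f[0]
    if op in ("and", "or"):
        return (op, nnf(f[1]), nnf(f[2]))
    g = f[1]                                   # f is ("not", g)
    if isinstance(g, str):
        return f                               # negated atom: already NNF
    if g[0] == "not":
        return nnf(g[1])                       # double negation elimination
    dual = "or" if g[0] == "and" else "and"    # De Morgan
    return (dual, nnf(("not", g[1])), nnf(("not", g[2])))

# ~(p v (q & r)), a formula whose NNF matches answer (d):
print(nnf(("not", ("or", "p", ("and", "q", "r")))))
# -> ('and', ('not', 'p'), ('or', ('not', 'q'), ('not', 'r')))
```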


Question 4

(a) p q p ∧ q ∨ ¬p ∧ ¬q

F F F T T

F T F F F

T F F F F

T T T T F

Intuitively, each disjunct describes a valuation that satisfies the proposition. One

valuation sets p and q to T, the other sets both to F.

(b) Propositions (b), (c) and (e) are already in disjunctive normal form. The remaining

answers are:

(a) ¬p ∨ q and (d) ¬p ∧ ¬q ∨ ¬p ∧ ¬r.
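The intuition in (a), that each disjunct of a DNF describes a satisfying valuation, also gives a recipe: read one conjunction of literals off each true row of the truth table. A sketch (the helper name is mine):

```python
from itertools import product

def dnf_from_table(prop, atoms):
    """One conjunct of literals per satisfying row of the truth table."""
    disjuncts = []
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if prop(v):
            disjuncts.append(" & ".join(a if v[a] else "~" + a for a in atoms))
    return " | ".join(disjuncts) if disjuncts else "bottom"

# p <-> q comes out as the DNF from (a): (~p & ~q) or (p & q)
print(dnf_from_table(lambda v: v["p"] == v["q"], ["p", "q"]))
# -> ~p & ~q | p & q
```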

Chapter 4

Question 1

q→r X ¬((¬p ∧ q) ∨ r) X ¬q ∨ r X

¬(¬p ∨ r) X ¬r ∨ s X

¬p ¬(¬p ∨ s) X

¬¬p X q∨r X

¬r ¬¬p X

¬(¬p ∧ q) X ¬s

p ¬r

p

¬¬q X ¬p q r

× × ¬p q

q ×

¬¬p ¬q

× × ¬q r

¬q r ×

× ×

¬r s

× ×


¬(¬p ∧ ¬q) X ¬¬(p ∨ q) X

¬p ¬q ¬p ¬q

× × × ×

q p

Any valuation that sets V (p) ≠ V (q) gives a counterexample to both (d) and (e).

Question 2

(a) ¬ p → (q → r) → (p → q) → (p → r) X

p → (q → r) X

¬ (p → q) → (p → r) X

p→q X

¬(p → r) X

p

r

¬p q

×

¬p q→r X

×

¬q r

× ×


(b) ¬ (p → q) → (p → r) → (p → (q → r) X

(p → q) → (p →r) X

¬ p → (q → r) X

p

¬(q → r) X

q

¬r

¬(p → q) X p→r X

p ¬p r

¬q × ×

×

Question 3 q∧s∨r X

p∨q X

q ↔ ¬r X

p → q∧r X

q ¬q

¬r ¬¬r

q∧s X r q∧s X r

×

q q

¬p q∧r X

s s

× q

p q

¬p q∧r X r

× ×

×

q

p q

r

×

×


The open branch shows that a valuation V satisfying {q ∧ s ∨ r, p ∨ q, q ↔ ¬r, p → q ∧ r} sets: V (p) = F, V (q) = T, V (r) = F, V (s) = T.

Question 4 We need to show that, if all branches close, then the set at the root is

unsatisfiable. This amounts to showing that each rule that can be applied preserves

satisfiability in the tree: if the set of sentences on the branch was satisfiable before

the rule was applied, then the set of sentences on at least one of the new branches is

satisfiable, too. Suppose the set of sentences on the branch before the rule applied is

∆, which we will assume to be satisfiable. We proceed on a case-by-case basis.

If the ¬¬ rule is applied, then ¬¬p ∈ ∆, hence ∆ ∪ {p} must be satisfiable too and

the resulting branch is satisfiable.

If the ∧ rule is applied, then p ∧ q ∈ ∆, hence ∆ ∪ {p, q} must also be satisfiable.

So the resulting branch is satisfiable too.

If the ¬∧ rule is applied, then ¬(p ∧ q) ∈ ∆, hence either ∆ ∪ {¬p} or ∆ ∪ {¬q}

must be satisfiable. So at least one of the new branches must also be satisfiable.

If the ∨ rule is applied, then p ∨ q ∈ ∆, hence either ∆ ∪ {p} or ∆ ∪ {q} must be satisfiable. So at least one of the new branches must also be satisfiable.

If the ¬∨ rule is applied, then ¬(p∨q) ∈ ∆, hence ∆∪{¬p, ¬q} must be satisfiable.

So the resulting branch is satisfiable too.

If the → rule is applied, then p → q ∈ ∆, hence either ∆ ∪ {¬p} or ∆ ∪ {q} must be satisfiable. So at least one of the new branches must also be satisfiable.

If the ¬ → rule is applied, then ¬(p → q) ∈ ∆, hence ∆ ∪ {p, ¬q} must be satisfi-

able. So the resulting branch is satisfiable too.

If the ↔ rule is applied, then p ↔ q ∈ ∆, hence either ∆ ∪ {p, q} or ∆ ∪ {¬p, ¬q}

must be satisfiable. So at least one of the new branches must also be satisfiable.

If the ¬ ↔ rule is applied, then ¬(p ↔ q) ∈ ∆, hence either ∆ ∪ {p, ¬q} or ∆ ∪

{¬p, q} must be satisfiable. So at least one of the new branches must also be

satisfiable.

Since we assumed that ∆ is satisfiable, the rule for closing branches cannot be applied

and so the proof is complete.
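The soundness argument can be watched in action on a toy implementation. The sketch below (tuple-encoded formulas and my own helper names, covering only the ¬¬, ∧, ¬∧, ∨ and ¬∨ rules) builds a tableau recursively and compares its verdict with brute-force satisfiability:

```python
from itertools import product

# Formulas as tuples: ("not", A), ("and", A, B), ("or", A, B); atoms are strings.
def neg(f):
    return f[1] if not isinstance(f, str) and f[0] == "not" else ("not", f)

def literal(f):
    return isinstance(f, str) or (f[0] == "not" and isinstance(f[1], str))

def closed(branch):
    """True iff a tableau from these sentences closes."""
    for f in branch:
        if literal(f):
            continue
        rest = [g for g in branch if g is not f]
        if f[0] == "and":
            return closed(rest + [f[1], f[2]])
        if f[0] == "or":
            return closed(rest + [f[1]]) and closed(rest + [f[2]])
        g = f[1]                      # f = ("not", g) with g compound
        if g[0] == "not":
            return closed(rest + [g[1]])
        if g[0] == "and":             # ~(A & B): branch on ~A, ~B
            return closed(rest + [neg(g[1])]) and closed(rest + [neg(g[2])])
        return closed(rest + [neg(g[1]), neg(g[2])])   # ~(A v B)
    return any(neg(f) in branch for f in branch)       # only literals left

def atoms(f):
    return {f} if isinstance(f, str) else set().union(*(atoms(s) for s in f[1:]))

def ev(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == "not":
        return not ev(f[1], v)
    if f[0] == "and":
        return ev(f[1], v) and ev(f[2], v)
    return ev(f[1], v) or ev(f[2], v)

def satisfiable(sentences):
    names = sorted(set().union(*(atoms(f) for f in sentences)))
    return any(all(ev(f, dict(zip(names, vals))) for f in sentences)
               for vals in product([False, True], repeat=len(names)))

# Soundness (and completeness) in miniature: a tree closes exactly
# when the root set is unsatisfiable.
for root in ([("and", "p", ("not", "p"))],
             [("or", "p", "q"), ("not", "p"), ("not", "q")],
             [("or", "p", "q"), ("not", "p")]):
    print(closed(root), satisfiable(root))
```

On each root set, `closed` returns True exactly when `satisfiable` returns False.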


Question 5 Notice how each condition in the definition of a Hintikka set corresponds

to a tree rule. This is intentional, so that the sentences on any finished open branches

in a tree will form a Hintikka set. To show that all Hintikka sets are satisfiable, we

use the technique from the completeness proof in section 4.6. Suppose that H is a

Hintikka set. We will construct a valuation V and show that |=V H. To begin, for

every primitive proposition p, we set:

V (p) = T if p ∈ H,
V (p) = F if ¬p ∈ H,
V (p) = T otherwise.

For our inductive hypothesis, assume that, for every proposition q with |q| < n (for

some arbitrary number n), if q ∈ H then |=V q. We show that the same holds for a

proposition p ∈ H with |p| = n, by considering the kinds of proposition that p could

be:

If p is a double-negated proposition ¬¬r, then |r| < n and, since H is a Hintikka

set, r ∈ H. Hence, by hypothesis, |=V r and so |=V p.

If p is a conjunction p1 ∧ p2 , then both |p1 | < n and |p2 | < n and so p1 ∈ H and

p2 ∈ H. By the inductive hypothesis, |=V p1 and |=V p2 and so |=V p.

If p is a negated conjunction ¬(p1 ∧ p2 ), then |¬p1 | < n, |¬p2 | < n and either

¬p1 ∈ H, in which case |=V ¬p1 (by hypothesis) or else ¬p2 ∈ H, in which case

|=V ¬p2 . Either way, |=V p.

If p is a disjunction p1 ∨ p2 , then |p1 | < n, |p2 | < n and either p1 ∈ H, in which

case |=V p1 or else p2 ∈ H, in which case |=V p2 . Either way, |=V p.

If p is a negated disjunction ¬(p1 ∨ p2 ), then |¬p1| < n, |¬p2| < n and both ¬p1 ∈ H

and ¬p2 ∈ H. By hypothesis, |=V ¬p1 and |=V ¬p2 , hence |=V p.

Chapter 5

Question 1

(a) False. It assigns a set of ordered pairs to each two-place predicate (subsets of the

domain are assigned to one-place predicates only).


(b) True.

(c) True. If a is related to b in some way, then a must be related to something in that

way.

(d) False. Counterexample: someone is prime-minister and the name ‘Elvis’ does not

appear in ‘—is prime minister’, yet ‘Elvis is prime minister’ is not true.

Question 2

(a) ∀x(Axa → Mx) (d) ∀x∃yAxy

(b) ∃x(Axa ∧ ¬Axb) (e) ∃x∀yAyx

(c) Acb → Aad (f ) ∀x∀y(Axy → Axa)

Question 3

(a) Every man admires Anna.

(b) Everyone is admired by someone.

(c) Someone is admired by everyone.

(d) Cath admires all her male admirers (i.e. all men who admire her).

Question 4 For RM to be both symmetrical and asymmetrical, RM must satisfy both ∀xy(Rxy → Ryx) and ∀xy(Rxy → ¬Ryx). Since the consequents of these conditionals are not jointly satisfiable, this is possible only when neither antecedent is ever true, i.e. when RM does not relate anything to anything. Hence setting RM = {} will suffice.
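This can be checked exhaustively: of all sixteen relations on a two-element domain, only the empty one is both symmetrical and asymmetrical. A sketch (helper names are mine):

```python
from itertools import product, combinations

D = [1, 2]
pairs = list(product(D, D))

def symmetric(R):
    return all((y, x) in R for (x, y) in R)

def asymmetric(R):
    return all((y, x) not in R for (x, y) in R)

# Enumerate every subset of D x D and keep those satisfying both conditions.
both = [R for n in range(len(pairs) + 1)
          for R in map(set, combinations(pairs, n))
          if symmetric(R) and asymmetric(R)]
print(both)   # [set()]
```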

Question 5

(a) ∀xPx ∧ Qa ≡ ∀x(Px ∧ Qa)

(b) ¬∀xPx ∨ Qa ≡ ∃x¬Px ∨ Qa ≡ ∃x(¬Px ∨ Qa)

(c) ∃xPx → Qa ≡ ¬∃xPx ∨ Qa ≡ ∀x¬Px ∨ Qa ≡ ∀x(¬Px ∨ Qa) ≡ ∀x(Px → Qa)

(d) ∀xPx ∧ ∃xQx ≡ ∀xPx ∧ ∃yQy ≡ ∀x(Px ∧ ∃yQy) ≡ ∀x∃y(Px ∧ Qy)


≡ ∃x(Px → Qa) ∧ ∀x(Qa → Px)

≡ ∃x((Px → Qa) ∧ ∀y(Qa → Py))

≡ ∃x∀y((Px → Qa) ∧ (Qa → Py))

≡ ∃x∀y(Px ∧ ¬Rxy) → ∀z∃wSzw

≡ ∀x∃y(Px ∧ ¬Rxy → ∀z∃wSzw)

≡ ∀x∃y∀z∃w(Px ∧ ¬Rxy → Szw)

Chapter 6

Question 1

(a) ∀x∀y∀z(Axb ∧ Ayb ∧ Azb → x = y ∨ y = z ∨ x = z)

(b) ∃x∃y(My ∧ Axy ∧ ∀z(Mz ∧ Axz → y = z)) or ∃x∃y∀z(Mz ∧ Axz ↔ y = z)

(c) ∃x(Mx ∧ Axc ∧ ∀y(My ∧ Ayc → y = x) ∧ ¬Axd)

(d) ∀x(Abx → x = b)

(e) ∃x∃y∃z(Axy ∧ Axz ∧ ∀w(Axw → w = y ∨ w = z))

(f) ∃x(Axc ∧ ∀y(Ayc → y = x) ∧ x ≠ c)

Question 2

(a) The person who admires herself isn’t Anna.

(b) Nobody likes only themselves (i.e. everybody who likes themselves also likes

somebody else, too).

(c) Someone admires everyone who admires Anna or Bob.

(d) There exist exactly two people.

(e) There exists exactly one man.

(f) At least three men exist.
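Numeric-quantifier formulas like (d) and (f) can be evaluated directly over small finite domains, with Python's any and all standing in for ∃ and ∀ (an illustrative sketch, not the book's method; the function names are mine):

```python
def exactly_two(dom):
    # ∃x∃y(x ≠ y ∧ ∀z(z = x ∨ z = y)), as in answer (d)
    return any(any(x != y and all(z == x or z == y for z in dom)
                   for y in dom) for x in dom)

def at_least_three(dom, M=lambda _: True):
    # ∃x∃y∃z(Mx ∧ My ∧ Mz ∧ x ≠ y ∧ y ≠ z ∧ x ≠ z), as in answer (f)
    return any(M(x) and M(y) and M(z) and x != y and y != z and x != z
               for x in dom for y in dom for z in dom)

print(exactly_two({1}), exactly_two({1, 2}), exactly_two({1, 2, 3}))  # False True False
print(at_least_three({1, 2}), at_least_three({1, 2, 3}))              # False True
```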


Chapter 7

Question 1

(a) The entailment holds: (b) The entailment does not hold:

∀xPx ∃xPx X

¬∃xPx X ¬∀xPx X

∀x¬Px ∃x¬Px X

Pa Pa

¬Pa ¬Pb

×

A counter model for (b) is: D = {1, 2}, aM = 1, bM = 2, PM = {1}
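The counter model can be checked directly: interpreting P as {1} over the domain {1, 2} makes ∃xPx true and ∀xPx false, so the entailment fails:

```python
D = {1, 2}
P = {1}          # the extension of P in the counter model

print(any(x in P for x in D))   # True:  ∃xPx holds in the model
print(all(x in P for x in D))   # False: ∀xPx fails in the model
```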

(c) The entailment holds: (d) The entailment does not hold:

∃x(Px ∧ Qx) X ∃xPx ∧ ∃xQx X

¬(∃xPx ∧ ∃xQx) X ¬∃x(Px ∧ Qx) X

Pa ∧ Qa X ∃xPx X

∃xQx X

Pa

Qa Pa

Qb

¬∃xPx X ¬∃xQx X

∀x¬(Px ∧ Qx)

∀x¬Px ∀x¬Qx

¬(Pa ∧ Qa) X

¬Pa ¬Qa

¬(Pb ∧ Qb) X

× ×

¬Pa ¬Qa

×

¬Pb ¬Qb

×


Question 2

(a) ‘The man who admires Cath also admires Anna’: ∃x(∀y(My ∧ Ayc ↔ x = y) ∧ Axa)

‘Someone admires Anna’: ∃xAxa The tree is shown on the left.

(b) ‘The person who admires Cath is a man’: ∃x(∀y(Ayc ↔ x = y) ∧ Mx)

‘Only men admire Cath’: ∀x(Axc → Mx) The tree is shown on the right.

¬∃xAxa X ¬∀x(Axc → Mx) X

Aba Ma

¬Aba Aac ↔ a = a X

×

¬(Abc → Mb) X

Abc

¬Mb

Abc ↔ a = b X

Aac ¬Aac

a = a a ≠ a

×

Abc ¬Abc

a = b a ≠ b

×

Mb

×


∃x(∀y(Ayc ↔ y = x) ∧ Mx) X

¬∀x∀y(Axc ∧ Ayc → x = y) X

∃x¬∀y(Axc ∧ Ayc → x = y) X

∀y(Ayc ↔ y = a) ∧ Ma X

∀y(Ayc ↔ y = a)

Ma

¬∀y(Abc ∧ Ayc → b = y) X

∃y¬(Abc ∧ Ayc → b = y) X

¬(Abc ∧ Adc → b = d) X

Abc ∧ Adc X

b ≠ d

Abc

Adc

Abc ↔ b = c X

Adc ↔ d = c X

Abc ¬Abc

b = c b ≠ c

×

Adc ¬Adc

d = c d ≠ c

×

b=d

×


Question 3 ∃x∀y(Py ↔ y = x) X

∃x∃y(x ≠ y ∧ Px ∧ Rxy) X

∀y(Py ↔ y = a)

Pa ↔ a = a X

∃y(b ≠ y ∧ Pb ∧ Rby) X

b ≠ c ∧ Pb ∧ Rbc X

b ≠ c ∧ Pb X

Rbc

b ≠ c

Pb

Pb ↔ b = a X

Pc ↔ c = a X

Pa ¬Pa

a = a a ≠ a

×

Pb ¬Pb

b = a b ≠ a

×

Pc ¬Pc

c = a c ≠ a

b=c

×

A model M that satisfies {∃x∀y(Py ↔ y = x), ∃x∃y(x ≠ y ∧ Px ∧ Rxy)} is, reading off the open branch: D = {1, 2}, aM = bM = 1, cM = 2, PM = {1}, RM = {(1, 2)}


Question 4

Pfa ¬∀x∀y(fx = y → gy = x) X

¬∃xPgx X

∃x¬∀y(fx = y → gy = x) X

∀x¬Pgx

¬∀y(fa = y → gy = a) X

fa = ga

∃y¬(fa = y → gy = a) X

Pga

¬(fa = b → gb = a) X

¬Pga

× fa = b

gb ≠ a

gfa = a

gb = a

×

Question 5

(a) If ∀xA ∈ H then, for every constant c that appears in any sentence in H, A[x/c] ∈ H.

(b) If ∃xA ∈ H then, for some constant c, A[x/c] ∈ H.

(c) If ¬∀xA ∈ H then ∃x¬A ∈ H.

(d) If ¬∃xA ∈ H then ∀x¬A ∈ H.

Humanities Insights

The following Insights are available or forthcoming at:

http://www.humanities-ebooks.co.uk

History

The British Empire: Pomp, Power and Postcolonialism

The Holocaust: Events, Motives, Legacy

Methodism and Society

Southern Africa

Chatwin: In Patagonia

Conrad: The Secret Agent

Eliot, George: Silas Marner

Eliot, T S: ‘The Love Song of J Alfred Prufrock’ and The Waste Land

Faulkner: The Sound and the Fury

Gaskell, Mary Barton

Hardy: Tess of the d’Urbervilles

Heaney: Selected Poems

Hopkins: Selected Poems

Hughes: Selected Poems

Lawrence: The Rainbow

Lawrence: Sons and Lovers

Lawrence: Women in Love

Morrison: Beloved

Shakespeare: Hamlet

Shakespeare: Henry IV

Shakespeare: Richard II

Shakespeare: Richard III

Shakespeare: The Tempest

Shelley: Frankenstein

Wordsworth: Lyrical Ballads

English Renaissance Drama: Theatre and Theatres in Shakespeare’s Time

Fields of Agony: English Poetry and the First World War

An Introduction to Critical Theory

An Introduction to Rhetorical Terms

Philosophy Insights

American Pragmatism

Business Ethics

Ethics

Existentialism

Formal Logic

Heidegger

Informal Logic and Critical Thinking

Islamic Philosophy

Marxism

Meta-Ethics

Philosophy of History

Philosophy of Language

Philosophy of Mind

Philosophy of Religion

Philosophy of Sport

Sartre: Existentialism and Humanism

Wittgenstein

General Titles

An Introduction to Feminist Theory

An Introduction to Critical Theory

An Introduction to Rhetorical Terms

Aesthetics

Austen: Pride and Prejudice

Blake: Songs of Innocence & Experience and The Marriage of Heaven & Hell

Eliot: Four Quartets

Fielding: Tom Jones

Lawrence: Selected Poems

Mental Causation

Plato

Plato’s Republic

Renaissance Philosophy

Shakespeare: Macbeth

Shakespeare: Romeo and Juliet

Wonder
