
Methods of PPE 1

Part Logic
Course notes
2019

Prof. dr. Lieven Decock

Lecturer Dr. Trijsje Franssen


Contents
1. Introduction
1.1. What is logic?
1.2. Logic as a formal discipline
1.3. Formal logic and human reasoning
2. The language of propositional logic
Exercises 2.1, Exercises 2.2
3. Validity
3.1. Validity and fallacies in arguments
3.2. Validity and fallacies in formal arguments
3.3. Validity and inference rules in propositional logic
3.4. Summary of valid inference rules
4. Truth tables
Exercises 4.1, Exercises 4.2, Exercises 4.3
5. Natural deduction in propositional logic
5.1. Introduction
5.2. Conjunction
Exercises 5.1
5.3. Elimination of implication
Exercises 5.2
5.4. Introduction of implication
Exercises 5.3, Exercises 5.4
5.5. Disjunction
Exercises 5.5
5.6. Introduction of negation
Exercises 5.6, Exercises 5.7
5.7. Elimination of negation

Exercises 5.8
5.8. Elimination of double negation
5.9. Double implication
5.10. Proof strategy
Exercises 5.9
6. The language of predicate logic
6.1. Introduction
6.2. Names and predicates
6.3. Relations
Exercises 6.1, Exercises 6.2
6.4. Quantifiers
Exercises 6.3, Exercises 6.4
6.5. Multiple quantifiers
6.6. The domain of predicate logic
Exercises 6.5
6.7. Well-formed formulas in the language of predicate logic
Exercises 6.6, Exercises 6.7
7. Natural deduction in predicate logic
7.1. Introduction of the existential quantifier
Exercises 7.1
7.2. Elimination of the universal quantifier
Exercises 7.2, Exercises 7.3
7.3. Introduction of the universal quantifier
Exercises 7.4, Exercises 7.5
7.4. Elimination of the existential quantifier
Exercises 7.6, Exercises 7.7, Exercises 7.8

Chapter 1
Introduction

1.1 What is logic?

Logic is the art of reasoning. It is a formal discipline in which correct forms of reasoning are studied.
It has its origin in Ancient Greece and the word ‘logic’ is derived from the Greek word ‘logos’, which
means something like “ground”, “opinion”, “word”, “speech”, “reason”, “plea”, “discourse”. The
philosopher Aristotle (384-322 BC) was the first to present a theory of valid forms of argumentation
in his Analytica priora, to wit, the theory of syllogisms. A well-known example of a syllogism is the
following argument:

All humans are mortal.
All Greeks are humans.
Hence, all Greeks are mortal.

Until the middle of the 19th century, logic was mainly the study of syllogisms, and was a compulsory
part of any university education from the Middle Ages onwards.
Logic is not the only discipline in which reasoning is studied. Another philosophical discipline is
epistemology or theory of knowledge. In epistemology the central questions are ‘What is knowledge?’,
and ‘How can we know something?’ Epistemologists are interested in the grounds and limits of human
knowledge. Logic has an essential role in epistemology, as it answers the question how one can reliably
derive new knowledge from existing knowledge. By means of valid arguments, new conclusions can
be derived. In the beginning of the 20th century, epistemologists concentrated in particular on
scientific knowledge, and the philosophical discipline in which the grounds and limitations of scientific
knowledge are studied is called philosophy of science. Since the middle of the 20th century, psychologists

and A.I. researchers started to study human knowledge empirically, and the interdisciplinary field of
cognitive science emerged. Cognitive scientists study all aspects of human knowledge: the
perceptual apparatus and the processing of information in the brain, but also thinking and reasoning. Both
in philosophy of science and in cognitive science, logic plays an important role. Logic (and
epistemology) can also be compared to, and to some extent opposed to, rhetoric. Rhetoric studies how
arguments can be used to convince people. It provides a toolbox of tricks to change people’s
minds in a particular way. As such, it is useful in public places such as the political arena or courts of
law. In logic, however, one is less interested in the practical use of arguments than in the intrinsic truth
of statements and the intrinsic validity of arguments. Logic can be forceful only because the laws of
thought are compelling.

1.2 Logic as a formal discipline

In the second half of the 19th century, logic underwent a major change. In addition to the already
mentioned syllogisms, other types of arguments were studied. Arguments involving connectives
between sentences were developed into propositional logic. The theory of syllogisms was expanded and
became part of predicate logic. Important in this development was that logic became a formal discipline.
A first important step was the publication of The Laws of Thought (1854) by George Boole. Boole was
the first to give a formal or mathematical account of the intrinsic laws of reasoning.1 A quarter of a
century later, the theory which is now called first order logic, and comprises propositional logic and
predicate logic, was developed independently by the German mathematician Gottlob Frege in Jena in
the famous Begriffsschrift (1879) and by the American philosopher Charles Sanders Peirce in Harvard.
It is remarkable that these results were not fully appreciated when they were discovered and both
Frege and Peirce were ‘re-discovered’ in the middle of the 20th century. Their work was the basis
however for the seminal work in logic that gave logic its prominent place in modern science and
philosophy, viz. Principia mathematica (1910-1913) by the philosophers/mathematicians Bertrand Russell and

1 Boole’s work is still influential and is used in the so-called Boolean algebra in computer science. Computers operate according to the
laws of logic and can rightly be called ‘logical’ devices. The very idea of the contemporary computer, the Turing machine, was put
forward by Alan Turing in 1936 in a paper in which he proved an important logical theorem (the undecidability of predicate logic).

Alfred North Whitehead. In the 20th century the formal discipline was further developed by eminent
logicians such as Alonzo Church, Alfred Tarski, and Kurt Gödel. Moreover, larger logical frameworks
were developed. Saul Kripke’s seminal work on the notions ‘possible’ and ‘necessary’ gave rise to modal
logic, arguably the most important extension of predicate logic.

1.3 Formal logic and human reasoning

Logic is often believed to represent the way human beings reason. However, several experiments in
psychology show that human reasoning is less than perfect. A well-known experiment is the Wason
test. In this experiment participants are shown four cards as below, and are asked which cards have to
be turned to check whether the following statement is true: “If a card shows an even number on one
face, then its opposite face is red.”

Whereas the correct answer is that the card with the number 8 and the orange card must be turned,
most participants in this experiment typically answer that only the card with the number 8 should be
turned. However, one should also turn the orange card, since it could have an even number on its
other side, but this is often overlooked. From a logical point of view, this is a relatively easy task, but
many people already experience difficulties. If the task is modified, however, it becomes much easier.

If, in a different experimental set-up, participants are asked whose identity card should be checked in
order to enforce the rule that only people aged 18 or over are allowed to drink alcohol, then most participants
indicate both the man with the glass of beer in his hand, and the seemingly underage girl whose drink
is not visible. Human beings seem more capable of making correct judgments if the argument is less
abstract or if the reasoning task is less formal. Some psychologists2 argue that human reasoning is not
based on the laws of logic, and that the meaning of the statements always plays a role in human
reasoning. However, one might argue that participants in the experiments can easily understand what
they did wrong, and that people can be taught how to reason correctly. A better assessment of the
psychological experiments that highlight the difficulties humans face in reasoning tasks is to say that
the norms of human reasoning are quite clear, but that human performance is often weak. The use of
the formal framework of modern logic is a means to avoid the typical human fallacies in abstract
reasoning.

2 The explanation of the Wason test by Cosmides and Tooby is famous and controversial. They argue that human beings
do not reason according to the laws of logic, but that they have an inbuilt cheater-detection device in the brain, which has developed
in the course of the evolution of the human species.

Chapter 2
The language of propositional logic

In logic, we use a formal language to make our statements precise, so that we can unambiguously assess
the validity of arguments in which they occur. Natural language is often too vague, imprecise, and
ambiguous for the assessment of the validity of arguments. Natural language is the language that is
commonly used in normal conversations. For all practical purposes, we will assume that the natural
language is English.
The simplest formal language is the language of propositional logic, the logic that is based on
propositions. For all practical purposes, in this course we can equate propositions with full sentences in
the natural language.3 We can distinguish atomic sentences from complex sentences. Atomic sentences are
sentences such as ‘The stock market crashed in 1929’, ‘Hoover was the 31st president of the USA’,
… Complex sentences contain one or more connectives that combine several atomic sentences.
Connectives that combine various sentences are expressions such as ‘… and …’, ‘… or …’, ‘if … then
…’, ‘… if and only if …’, or ‘neither … nor…’. A special connective is negation ‘not …’ or ‘it is not
the case that …’. Other connectives are possible, but the above mentioned ones are most commonly
used in propositional logic. By means of the connectives, complex sentences such as ‘Hoover was the
31st president of the USA and the stock market crashed in 1929’ can be built.
In the formal language of propositional logic, there are four standard logical constants. These are four of
the connectives we have introduced. The formal notations for the logical constants are ∧ (… and …),
∨ (… or …), → (if …, then …), and ¬ (not …).4 Moreover, in the language of propositional logic,
we introduce a propositional letter for each atomic sentence in the natural language. The letters used to
this end are p, q, r, s, and t. If more letters are needed, each of these letters can be indexed by a natural
number, and hence we can have p1, p2, p3, …, q1, q2, …, r1, r2, …, etc.

3 Philosophers and linguists often distinguish between sentences and propositions. For them, a sentence is a string of phonemes
in case it is a spoken sentence, or a string of written letters on a page in case it is a written sentence. Each of these sentences is said to express
a proposition; i.e. a meaningful claim that can be true or false. Hence, sentences are concrete and propositions are abstract entities. We
will neglect the distinction between propositions and sentences.
4 Sometimes alternative notations are used, e.g. ~ instead of ¬; ⊃ or ⇒ instead of →; · or & instead of ∧.

More complex sentences in the language of propositional logic are constructed by means of logical
constants and propositional letters. By means of ∧, we can form a conjunction of two atomic sentences,
e.g. p ∧ q. With ∨ we obtain a disjunction, e.g. q ∨ r. The sentence formed by relating an atomic sentence
p to an atomic sentence q by means of → is called the implication p → q. The sentence p before →
is called the antecedent, and the sentence q after → is called the consequent. Putting the negation sign ¬
before an atomic sentence p yields the negation ¬p. Lengthier sentences in the language of
propositional logic can be obtained by repeating this process over and over again with sentences that
have been built. However, some caution is due. If we choose the atomic sentence p as the antecedent
of an implication, and the disjunction q ∨ r as its consequent, we would obtain the expression p → q
∨ r. The same sentence can be obtained if one takes the implication p → q as the left side of a
disjunction with r on the other side of the ∨ sign. The meaning is different in the two interpretations.5
In order to disambiguate, we use brackets to indicate which parts belong together. The first case can
thus be written as p → (q ∨ r), and the second as (p → q) ∨ r. By means of the following definitions
we can characterize all the well-formed formulas in the language of propositional logic.

Definition The language of propositional logic L consists of the following symbols:


i. Propositional letters: p, q, r, s, t, p1, p2, …, q1, …
ii. Logical constants: ¬, ∧, ∨, and →.
iii. Brackets (, ).

Definition Well-formed formulas in the language of propositional logic L are sequences of symbols
of the language L that are obtained as follows:
i. Propositional letters are well-formed formulas.
ii. If α is a well-formed formula, then ¬ α is a well-formed formula.
iii. If α and β are well-formed formulas, then (α ∧ β), (α ∨ β), and (α → β) are well-formed
formulas.
iv. No other formulas than the ones obtained by repeated application of rules i, ii, and iii, are well-
formed formulas in the language of propositional logic.

5 This can be made very clear by means of truth tables, see below.
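The recursive definition above can be mirrored directly in code. The following sketch is our own illustration, not part of the course material; the nested-tuple representation of formulas is an assumption of the sketch.

```python
# A minimal sketch of a well-formedness checker mirroring clauses
# i-iv of the definition. Representation (our own assumption):
# a propositional letter is a string such as "p" or "p1"; a negation
# is ("not", f); a binary formula is ("and", f, g), ("or", f, g),
# or ("imp", f, g).

LETTERS = {"p", "q", "r", "s", "t"}

def well_formed(f):
    # Clause i: propositional letters (possibly indexed) are well-formed.
    if isinstance(f, str):
        return f in LETTERS or (len(f) > 1 and f[0] in LETTERS and f[1:].isdigit())
    # Clause ii: the negation of a well-formed formula is well-formed.
    if isinstance(f, tuple) and len(f) == 2 and f[0] == "not":
        return well_formed(f[1])
    # Clause iii: conjunctions, disjunctions, and implications.
    if isinstance(f, tuple) and len(f) == 3 and f[0] in ("and", "or", "imp"):
        return well_formed(f[1]) and well_formed(f[2])
    # Clause iv: nothing else is well-formed.
    return False
```

For instance, the tuple encoding of ((¬p ∧ q) → ¬(r ∨ ¬q)) passes the check, while a bare string "x" or a truncated tuple does not.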

Definition A well-formed formula γ is
i. a negation, when γ has the form ¬ α.
ii. an implication, when γ = α → β, and for which α is the antecedent and β the consequent.
iii. a conjunction, when γ = α ∧ β, and for which α and β are the conjuncts.
iv. a disjunction, when γ = α ∨ β, and for which α and β are the disjuncts.

Note that in these definitions, we used the letters α, β, and γ. These are not propositional letters as p,
q, r, …, but they are letters that stand for any well-formed formula in the language of propositional
logic. Propositional letters stand for sentences in the natural language such as ‘Caesar was murdered
in the Senate on the Ides of March’, or ‘It is raining today’. The letters α, β, and γ are letters used in
the metalanguage of propositional logic, i.e. the language that describes the structure of the language
of propositional logic. These letters can stand for sentences such as (p ∨ (q ∧ r)), or (r → ¬q), but
also for propositional letters such as p, q, r, …, since the definition stipulates that propositional letters
are well-formed formulas in the language of propositional logic.
The definition of well-formed formulas is a recursive definition. This means that the definition can
be used iteratively to construct ever more complex sentences by using the result as the input in a next
step. The definition is also completely syntactical. We have a
mechanical method for systematically manipulating formal symbols that has well-formed formulas as
a guaranteed outcome of the process. Consider the formula ((¬p ∧ q) → ¬(r ∨ ¬q)). We can construct
the formula in the following way:

1 p FAP
2 ¬p FN(1)
3 q FAP
4 (¬p ∧ q) FC(2,3)
5 ¬q FN(3)
6 r FAP
7 (r ∨ ¬q) FD(6,5)
8 ¬(r ∨ ¬q) FN(7)
9 ((¬p ∧ q) → ¬(r ∨ ¬q)) FI(4,8)

In every step we can introduce a new atomic sentence as a new formula (Formula Atomic Proposition:
FAP). In every step in the series we can negate any of the earlier well-formed formulas, as is done in
lines 2, 5, and 8. The lines are indicated by FN (Formula Negation) and the number of the line that is
negated. By means of FC, FD, and FI, respectively, we can form conjunctions, disjunctions, and
implications. The first number indicates the line whose formula is on the left (α), and the second
number the line whose formula is on the right (β). By means of this purely mechanical process, every
well-formed formula can be constructed.
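The mechanical character of this process can be illustrated in code. The sketch below is our own illustration; "~", "&", "v", and "->" are plain-text stand-ins for the negation, conjunction, disjunction, and implication symbols.

```python
# A sketch of the FAP/FN/FC/FD/FI construction process: each helper
# appends one numbered line, as in the derivation table above.

lines = []

def fap(p):
    """FAP: introduce an atomic proposition; returns its line number."""
    lines.append(p)
    return len(lines)

def fn(i):
    """FN: negate the formula on line i."""
    lines.append("~" + lines[i - 1])
    return len(lines)

def combine(op, i, j):
    """FC/FD/FI: combine the formulas on lines i (left) and j (right)."""
    lines.append("(" + lines[i - 1] + " " + op + " " + lines[j - 1] + ")")
    return len(lines)

# Reconstruct ((~p & q) -> ~(r v ~q)) step by step, as in the table:
n1 = fap("p")                 # 1  p            FAP
n2 = fn(n1)                   # 2  ~p           FN(1)
n3 = fap("q")                 # 3  q            FAP
n4 = combine("&", n2, n3)     # 4  (~p & q)     FC(2,3)
n5 = fn(n3)                   # 5  ~q           FN(3)
n6 = fap("r")                 # 6  r            FAP
n7 = combine("v", n6, n5)     # 7  (r v ~q)     FD(6,5)
n8 = fn(n7)                   # 8  ~(r v ~q)    FN(7)
n9 = combine("->", n4, n8)    # 9               FI(4,8)
print(lines[n9 - 1])          # prints ((~p & q) -> ~(r v ~q))
```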
The given formulas are the standard form of formulas in propositional logic. Often abbreviations of
the formulas are used to reduce the number of brackets in the formulas. In the remainder of the
course, we will drop the left and right brackets, if any, of a conjunction, disjunction, or implication.
E.g., the formula ((p → q) ∨ r) will be written in the abbreviated form (p → q) ∨ r.6 Moreover, we can
drop the inner brackets in nested conjunctions or nested disjunctions for most purposes (except in
natural deduction, see below). E.g., the formula ((p ∧ q) ∧ (r ∧ s)) can be written as p ∧ q ∧ r ∧ s by
dropping both the outer and the inner brackets. Also in the formula p → (q ∨ (r ∨ s)), the inner brackets
can be dropped, and hence we obtain p → (q ∨ r ∨ s). Note that we still need the remaining brackets to
make clear that the formula is an implication. Note moreover that we can only do this for nested
conjunctions and disjunctions separately, and not for formulas with a mix of conjunctions and
disjunctions. E.g. in the formula p ∧ (q ∨ (r ∧ s)) no brackets can be dropped, since the conjunctions
and disjunction should be kept apart; the formula p ∧ q ∨ r ∧ s is hopelessly ambiguous.
A special abbreviation is the connective “… if and only if …”, which is commonly used in natural
language. We might have introduced this connective as an extra logical constant ↔, as is often done.
However, it is easier to regard a sentence α ↔ β as the abbreviation of the well-formed formula (α →
β) ∧ (β → α). Logical equivalence or double implication is the conjunction of two implications. In the
remainder of the course we will use the symbol ↔, but the sentences containing it have to be construed
as abbreviations of a longer sentence in the language of propositional logic.

6 In some textbooks the brackets around conjunctions and disjunctions are dropped within an implication, e.g. (p → (q ∧ r)) would
then be written as p → q ∧ r. We will not adopt this convention in this course.

Exercises 2.1

Which of the following formulas are well-formed formulas and/or abbreviated well-formed formulas
in the language of propositional logic? Make clear whether it is a well-formed formula, an abbreviated
formula, or neither (i.e. a formula that cannot occur in the language of propositional logic).
1. (p ∧ q ∧ r)
2. ((p ∧ q) ∧ r)
3. ¬p → q
4. pq
5. (p ↔ q) ∧ (r ∨ p)
6. ((p ∨ ¬q) ∧ (r → q)
7. p → (q ∨ r1 ∨ r)
8. (p → (q ∨ (r1 ∨ r2)))
9. p → ¬¬¬(r ∨ s)
10. p ∨ q → ¬(r ∨ s)

With the formal language in place, we can translate complex sentences in the natural language into the
language of propositional logic. To this end, we first have to identify the atomic sentences in the
complex sentences in the translation key, and subsequently connect them in the appropriate way by
means of the given logical constants.

Example 1
“If the interest rate goes very low, the housing market goes up and a real estate bubble is created.”
Translation key:
p: The interest rate goes very low.
q: The housing market goes up.
r: A real estate bubble is created.
Translation:
p → (q ∧ r)
Example 2
“Napoleon and Kutuzov were generals.”

Translation key:
p: Napoleon was a general.
q: Kutuzov was a general.
Translation:
p ∧ q

Observe that in the second example we have to twist the meaning of the complex sentence slightly in
order to obtain two atomic sentences. This will often be the case in translations. Natural language is
more vague and ambiguous than the formal language of logic, and the reason why we translate is just
to eliminate the imprecision.
One instance in which the precision in the formal language is quite clear is the case of disjunctions. In
natural language we use the ambiguous connective “…or…”. When someone utters the sentence “I will
vote for the libertarians or for the liberals”, it is not clear whether she says she might vote for both (if
there are simultaneously different elections, e.g., for the national parliament and the European
parliament), or whether she will vote for only one of them. In case she wants to vote for only one of
them, she could use the phrase “either …, or…” to make it clear. If she doesn’t use this phrase the
sentence remains a bit ambiguous. In logic, we may disambiguate the two forms of disjunction. The
standard disjunction ∨ that is used in propositional logic implies that either one of the disjuncts is the
case or that both disjuncts together are the case. The exclusive disjunction is represented by the symbol
⊻. The sentence p ⊻ q expresses that p can be the case, or that q can be the case, but that p and q
cannot together be true. If, as in the example, someone wants to say that she will vote either for the
libertarians, or for the liberals, she should use the exclusive disjunction. In this course we will seldom
use the symbol for exclusive disjunction. We might even drop the symbol ⊻ altogether and translate
sentences p ⊻ q by the equivalent sentence (p ∨ q) ∧ ¬(p ∧ q).
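That the exclusive disjunction agrees with this longer sentence in every case can be checked by brute force. A sketch of ours, with Python booleans standing in for truth values:

```python
# Check that exclusive disjunction (p != q) has the same truth value
# as the inclusive rendering (p or q) and not (p and q) in all four
# combinations of truth values.

from itertools import product

for p, q in product([True, False], repeat=2):
    exclusive = p != q                        # exclusive disjunction
    translated = (p or q) and not (p and q)   # the equivalent sentence
    assert exclusive == translated
print("equivalent in all four rows")
```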
In some cases, we will lose part of the meaning of the sentences. We have only four logical constants
at our disposal, but natural language contains more connectives than these four. Many connectives in
English such as “but”, “although”, “therefore”, “however”, “because”, “meanwhile”, “moreover”,
“furthermore”, express more than mere conjunction, disjunction, or implication. For several of these
connectives, ‘richer’ logical frameworks have been developed in which temporal, causal, or modal
information, can be translated. In propositional logic, however, we lack these extra means of
expression, and hence, the disambiguation and precisification of natural language is also to some
extent an impoverishment of the language.

Nevertheless, when translating sentences in natural language one should strive for the most precise
translation possible. One might very well imagine that in example 2, one could take as translation key
‘p: Napoleon and Kutuzov were generals’, whereby the translation of the sentence would simply be p.
Technically speaking, this is not wrong, but it does bring about an unacceptable loss of information,
as the word ‘and’ is not translated. The connectives should all be translated where possible. In
particular, one should take special care to translate all the relevant negations in the sentence (as
students often neglect this at the exams).

Exercises 2.2

Translate the following sentences in the language of propositional logic:

1. If the UK leaves the EU, the value of the pound will drop, and inflation will go up.
2. Unless an unforeseen event happens, the Democrats will win the 2018 elections.
3. If the refugee crisis is under control, Angela Merkel will win the German elections, but if the
refugee crisis is not under control, she won’t.
4. If the Scottish had voted yes in the independence referendum, Scotland would be a new country.
5. If the Liberal party wins the election, but the Social Democrats lose, the coalition cannot be
continued.
6. The Senate will vote on the Health Care Act, only if they have enough votes.
7. If Syria uses chemical weapons, the USA will attack, but the USA doesn’t want to be involved.
8. If you study, you will pass the exam, but only if you do the exercises.
9. If it rains, there are lots of raindrops in the air, and if moreover the sun shines, a rainbow will
appear.
10. If the British decide on a hard Brexit, the value of the pound will decrease, and real estate sales
will go down, but if the British do not decide on a hard Brexit, political instability will follow,
and the value of the pound will decrease.

Chapter 3
Validity

3.1 Validity and fallacies in arguments

In logic, we are interested in deriving conclusions from knowledge we have already acquired. A
deduction is valid if we can derive the conclusion in a legitimate way from several
premises by means of specified deduction rules. If we know that the Sun is shining and that if the Sun
shines, solar energy is added to the grid, then we can conclude that solar energy is added to the grid.

Example
The Sun is shining.
If the Sun is shining, then solar energy is added to the grid.
Hence, solar energy is added to the grid.

Intuitively we accept that this is a correct argument. However, as we saw in the introduction with the
Wason test, many people make mistakes in arguments. The following error is quite common:

Example
If the defendant was present at the crime scene at the time of the murder, her DNA material will be
found at the crime scene.
The defendant’s DNA material was found at the crime scene.
Hence, the defendant was present at the crime scene at the time of the murder.

As several prisoners serving undeserved jail sentences will testify, this is not impeccable reasoning. It
may be the case that the defendant’s DNA material was put at the crime scene in a set-up, or, as was
the case with the mysterious Phantom of Heilbronn, that DNA material from a worker in the factory
producing the cotton swabs used for the DNA tests contaminated the swabs.

Arguments that may have incorrect conclusions, often in a subtle way, are called fallacies. A fallacy is a
type of reasoning whose conclusion does not follow from the premises. Well-known fallacies are:

- argumentum ad auctoritatem/verecundiam: invoking some authority (religious text, famous writer,
politician, historical figure, …) to buttress a claim; e.g., “As Aristotle said, stones move to their
natural place.”
- argumentum ad populum: making a popular but unfounded claim; e.g. “Building a wall at the border
will stop immigration.”
- argumentum ad baculum (argument of the cudgel/stick): claim that is also a threat; often the negative
consequences of the claim are pointed out; e.g. “We’ll conquer the fort, or I will have to shoot you
for desertion.”
- argumentum ad hominem: insult; contradicting a claim by defaming the opponent; e.g. “You
climatologists believe in climate change, because you get research grants that way.”
- argumentum ad temperantiam: assuming that the middle ground is correct; e.g. “Some believe that Earth
is 6000 years old and others believe it is 5 billion years old, so let’s say that it is 100 million years
old.”
- argumentum ad nauseam: repeating a claim so often that people start to believe it
- argumentum ad lapidem (appeal to the stone): rejecting a claim as absurd without precise analysis
- petitio principii: circular reasoning, begging the question
- non-sequitur: drawing a conclusion that is in no way related to the premises
- post hoc ergo propter hoc (after this; hence because of this): unwarranted ascription of cause/effect
structure to a temporal sequence of events; false cause
- ignoratio elenchi: red herring; deviating from the topic
- false dilemma: providing only two options from a broader range of options
- equivocation: using ambiguous words or vague meanings in an argument
- false analogy
- hasty generalization
- selective use of facts

The list is not complete, but it gives some of the most common fallacies in arguments7 and illustrates
that arguments can be deceptive.

3.2 Validity and fallacies in formal arguments

In logic, we are interested in a particular way that arguments can be valid or fallacious. We study
whether certain arguments are valid or fallacious on the basis of the ‘form’ of the argument. In many
of the above mentioned fallacies, semantic and pragmatic factors are involved. Semantic factors are
related to the ‘meaning’ of the terms involved. Equivocation and false analogies, for instance, are only
possible by using imprecise or ambiguous meanings of the terms. Pragmatic factors are related to the
way the argument is ‘used’ in a conversation. Arguments ad baculum and ad hominem rely on pragmatic
factors; they are meant to evoke responses in the conversational situation. In many
cases, however, formal factors alone suffice to determine whether an argument is valid or fallacious.
In all cases in which the formal structure of the argument can be represented, we are able to assess
the logical validity of the argument.
With the following definitions we can make the notions of argument, validity, and fallacy precise:

Definition An argument consists of a set of sentences, which are the premises, and a sentence that is
the conclusion.

Definition An argument is valid if and only if it is not the case that all the premises are true and that
the conclusion is false.

Definition An argument is logically valid if it cannot be the case that the conclusion is false if all the
premises are true, merely on the basis of the logical form of the argument.

Definition An argument is fallacious if it is the case that all the premises are true and the conclusion is
false.

7 Not all the fallacies are related to the argumentative structure. E.g. the base-rate fallacy is a well-known error in reasoning, but is related
to the typical human problems with calculating probabilities, and not to problems with the rules of argumentation.

Definition An argument is logically fallacious if the form of the argument is such that under a particular
interpretation of the premises and the conclusion, the premises are all true, and yet the conclusion is
false.

3.3 Validity and inference rules in propositional logic

The language of propositional logic is an excellent tool to study the validity of formal arguments. If
we translate an argument into the language of propositional logic, we exhibit the formal structure of
the argument unambiguously. Semantic and pragmatic factors are filtered out, and what remains is the
bare formal structure of the argument. We will see that the logical validity of arguments that are
translated into the language of propositional logic can always be assessed.
We consider again the examples, and translate them into the language of propositional logic.

Example
The Sun is shining.
If the Sun is shining, then solar energy is added to the grid.
Hence, solar energy is added to the grid.
Translation key
p: The Sun is shining.
q: Solar energy is added to the grid
Translation
p, p → q / q

For translations of an argument, it is important to distinguish the different premises and translate
them separately. In this case, the premises are p and p → q. In the translation, the different premises
are separated by means of commas. The conclusion is q. In the translation of an argument, the
inference is represented by means of the symbol /. When translating arguments in natural language,
one often finds words such as “hence”, “therefore”, “since”, “as”, etc. These words are typically used to
indicate that an inference is being made.

Arguments of this form are always valid. If we have an argument of the form α, α → β / β, it is
always logically valid in propositional logic. This valid formal inference rule is called modus ponens.
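The validity of such argument forms can be checked mechanically, anticipating the truth tables of Chapter 4: a form is logically valid exactly when no assignment of truth values makes all premises true and the conclusion false. The sketch below is our own illustration; the function names and the representation of premises as Python functions are assumptions of the sketch.

```python
# Brute-force validity check for propositional argument forms.
# Material implication a -> b is rendered as (not a) or b.

from itertools import product

def implies(a, b):
    return (not a) or b

def valid(premises, conclusion, n_vars):
    """Try every assignment; return False on the first counterexample."""
    for vals in product([True, False], repeat=n_vars):
        if all(prem(*vals) for prem in premises) and not conclusion(*vals):
            return False  # premises all true, conclusion false
    return True

# Modus ponens: p, p -> q / q is valid.
assert valid([lambda p, q: p, lambda p, q: implies(p, q)],
             lambda p, q: q, 2)
```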

Example
If the defendant was present at the crime scene at the time of the murder, her DNA material will be
found at the crime scene.
The defendant’s DNA material was found at the crime scene.
Hence, the defendant was present at the crime scene at the time of the murder.
Translation key:
p: The defendant was present at the crime scene at the time of the murder.
q: The defendant’s DNA material was found at the crime scene.
Translation
p → q, q / p

Arguments of this type are fallacious. This does not mean that every argument of this form will be
invalid, i.e. have true premises and a false conclusion. It does imply, as is the case here, that there can
be cases in which the two premises are true and the conclusion is false. The argument α → β, β / α is
a formal fallacy in propositional logic, and it is known as an ex consequentia argument, or as “affirming
the consequent”. If we have a conditional sentence α → β, and we affirm that β is the case, we cannot
be sure that α is the case. From a formal point of view, we are not in a position to say anything
whatsoever about the antecedent α.
There are two other well-known inference rules related to conditional sentences, one valid and one
fallacious. The valid form is α → β, ¬ β/ ¬ α. If, in the first example, we know that if the Sun shines,
solar energy is added to the grid and we know that no solar energy is added to the grid, we can conclude
that the Sun is not shining. This valid inference rule is known as modus tollens. The fallacious rule is
known as negating the antecedent, and has the form α → β, ¬α / ¬β. It is easy to see that this is a
fallacious inference rule. Consider the following conditional statement: “If I win the lottery, then I will
buy a new house.” Affirming the antecedent, i.e. asserting that I win the lottery, allows us to conclude
by modus ponens that I will buy a new house. However, even if I don’t win the lottery, there
are endlessly many ways to get the money to buy a new house. No conclusion should be drawn from
the fact that the antecedent is false. Hence, negating the antecedent is a fallacious inference rule.

In view of the validity of modus ponens and modus tollens, we see that the conditional sentences p → q
and ¬q → ¬p will yield the same valid conclusions in combination with the statements p, q, ¬p, and
¬q. Hence we obtain a new deductive rule, transposition. The following inference is always valid:
α → β / ¬β → ¬α.

A valid deductive rule related to disjunction is the constructive dilemma. Actually, we can formulate it in
a restricted and a more general form. The restricted form is α ∨ β, α → γ, β → γ / γ. Suppose we have
the following scenario. Two friends have to execute an unpleasant task and decide to toss a coin. If it
is heads, the friend who chose heads will execute the task; if it is tails, the friend who chose tails
will execute the task. We thus have the following argument:

Example
The coin will be heads or the coin will be tails.
If the coin is heads, the task will be executed.
If the coin is tails, the task will be executed.
Hence, the task will be executed.

The more general form is α ∨ β, α → γ, β → δ / γ ∨ δ. It is intuitively clear that this must be a valid
deductive rule. However, in many cases it will be less useful than the restricted rule, since we end up with
a disjunction in the conclusion rather than a single unambiguous sentence that is said to be the case.
The more general form of the constructive dilemma can be useful as an intermediate step in a longer
deductive chain, though.

Another valid deductive rule, relating disjunction and negation, is the disjunctive
syllogism. It has the form α ∨ β, ¬α / β. Again, this is a very intuitive inference rule. If we toss
a coin, we have the following valid inference:

Example
The coin will come up heads or tails.
The coin does not come up heads.
Hence, the coin will come up tails.

As we will demonstrate later, the disjunctive syllogism is equivalent to a far less intuitive inference
rule, to wit, ex falso. The rule was known and used in mediaeval logic courses, but never failed to
puzzle the students of the time. The form of the inference rule is α, ¬α / β. If some sentence in the
language of propositional logic is asserted together with its negation, then we can conclude anything.
One could say that once one starts to contradict oneself, there is no end to the nonsense that can be stated
in consequence. In view of the definition of validity, it is relatively easy to see why the rule must be
valid. An argument is valid if the conclusion is necessarily true whenever all the premises are true. Since the
premises cannot be true together, or, in other words, are inconsistent, the conclusion becomes irrelevant:
there cannot be a case in which all the premises are true and in which the conclusion is false, simply
because there cannot be a case in which all the premises are true. This being said, contemporary
students in logic courses also often experience some form of embarrassment with this valid inference rule.
E.g., the following is a valid inference:

Every even number is the sum of two prime numbers.
Not every even number is the sum of two prime numbers.
Hence, rabbits can fly.

De Morgan’s laws are another pair of well-known rules. They express a relation between negation,
conjunction, and disjunction. We have the following equivalences:

¬(α ∨ β) ↔ (¬α ∧ ¬β)
¬(α ∧ β) ↔ (¬α ∨ ¬β)

The fact that we have two equivalences here implies that in any statement we can replace the
expression on the left side of one of the two laws with the expression on the right side and vice versa.
De Morgan’s laws can be used as substitution rules in propositional logic. It is rather obvious that the
expressions to the left and right of the equivalence signs are equivalent. It is clear that the expression “It is
not the case that the US will attack North Korea and Iran” can be replaced by “The US will not attack
North Korea or the US will not attack Iran.” Similarly, the statement “It is not the case that the US
will attack North Korea or Iran” is equivalent to, and substitutable for, “The US will not attack North
Korea and the US will not attack Iran.” Note that De Morgan’s laws are only valid if disjunction is
used in its standard role in propositional logic, and not as an exclusive disjunction.
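The two equivalences can also be checked mechanically by running through all truth value distributions, in the spirit of the truth tables of the next chapter. The short Python sketch below does exactly that; it is only an illustration, not part of the formal apparatus of the course.

```python
from itertools import product

# Brute-force check of De Morgan's laws over every truth value
# distribution for two sentences (True plays the role of 1, False of 0).
for a, b in product([True, False], repeat=2):
    assert (not (a or b)) == ((not a) and (not b))  # Law 1
    assert (not (a and b)) == ((not a) or (not b))  # Law 2
print("De Morgan's laws hold under all four distributions")
```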

Other substitution rules are the rules of associativity and commutativity of conjunction and
disjunction. The associativity rule states that the place of brackets within a sequence of conjunctions
or disjunctions is irrelevant from a logical point of view:

Associativity of conjunction: ((α ∧ β) ∧ γ) ↔ (α ∧ (β ∧ γ))
Associativity of disjunction: ((α ∨ β) ∨ γ) ↔ (α ∨ (β ∨ γ))

The commutativity rule states that the order of the conjuncts and disjuncts is irrelevant from a logical
point of view:

Commutativity of conjunction: (α ∧ β) ↔ (β ∧ α)
Commutativity of disjunction: (α ∨ β) ↔ (β ∨ α)

If we apply these rules repeatedly, we can shift the brackets and the conjuncts (disjuncts) within a
conjunction (disjunction) wherever we want. The abbreviation rule for well-formed formulas in the
previous chapter is thus well motivated.

The elimination of double negation is another inference rule. It can be formulated both as an inference
rule ¬¬α / α and as a substitution rule ¬¬α ↔ α. The rule is related to the law of the excluded middle
α ∨ ¬α. If we accept the law of the excluded middle as a logical law that is always true, and we know that
¬¬α is the case, then by application of the disjunctive syllogism we see that α must be the case:

α ∨ ¬α
¬¬α
Hence, α

The elimination of double negation and the law of the excluded middle are also related to the
principle of bivalence we will encounter in chapter 4. A way of understanding the law of the excluded
middle α ∨ ¬α, or, that α is the case or not the case, is to say that α must be either true or false. Hence,
the law of the excluded middle says that if ¬¬α is true, then ¬α is false, and hence α must be true.

The elimination of double negation, the law of the excluded middle, and the principle of bivalence all
seem very intuitive, and yet they are the most controversial rules, laws, and principles in propositional logic.
In standard logic, they are accepted. However, they are not accepted by intuitionists. Intuitionism
is a position in logic and the philosophy of logic that builds on the work of the Dutch logician and
mathematician Brouwer.
concerning mathematical knowledge. He believed that a theorem can only be true if we have
constructed a genuine mathematical proof from basic claims. A claim is false if we have derived a
contradiction from the claim. However, if we don’t have a genuine proof for a mathematical theorem,
and if we have not derived a contradiction, then, according to Brouwer we are not entitled to say
anything about it. E.g., we cannot say of Goldbach’s conjecture, which is the claim that every even
number is the sum of two prime numbers, and which is one of the oldest and most famous unproven
conjectures in mathematics, that it is either true or false. Brouwer would also claim that if we derive a
contradiction from the negation of the Goldbach conjecture, we would not be able to assert the
Goldbach conjecture, as we still would not have a genuine constructive proof of the theorem.
Intuitionism is a minority position among logicians, and for the remainder of the course we will stick
to standard logic.

3.4 Summary of valid inference rules

Modus ponens: α → β, α / β
Modus tollens: α → β, ¬β / ¬α
Transposition: α → β / ¬β → ¬α
Constructive dilemma (restricted): α ∨ β, α → γ, β → γ / γ
Constructive dilemma (general): α ∨ β, α → γ, β → δ / γ ∨ δ
Disjunctive syllogism: α ∨ β, ¬α / β
Ex falso: α, ¬α / β
De Morgan’s Law 1: ¬(α ∨ β) ↔ (¬α ∧ ¬β)
De Morgan’s Law 2: ¬(α ∧ β) ↔ (¬α ∨ ¬β)
Associativity of conjunction: ((α ∧ β) ∧ γ) ↔ (α ∧ (β ∧ γ))
Associativity of disjunction: ((α ∨ β) ∨ γ) ↔ (α ∨ (β ∨ γ))
Commutativity of conjunction: (α ∧ β) ↔ (β ∧ α)
Commutativity of disjunction: (α ∨ β) ↔ (β ∨ α)
Elimination of double negation: ¬¬α / α
Law of the excluded middle: / α ∨ ¬α
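All of these rules can be verified, and the fallacies refuted, by brute force once truth tables are introduced in the next chapter. As a preview, here is a Python sketch in which an argument form over two sentence letters is represented by functions from truth values to truth values; the helper names `valid` and `imp` are our own, and the arrow is treated as material implication, which chapter 4 will justify.

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form with two sentence letters is valid iff no
    distribution makes every premise true and the conclusion false."""
    for a, b in product([True, False], repeat=2):
        if all(p(a, b) for p in premises) and not conclusion(a, b):
            return False  # this distribution is a counterexample
    return True

imp = lambda x, y: (not x) or y  # material implication x -> y

# Modus ponens: a -> b, a / b
assert valid([lambda a, b: imp(a, b), lambda a, b: a], lambda a, b: b)
# Modus tollens: a -> b, not-b / not-a
assert valid([lambda a, b: imp(a, b), lambda a, b: not b], lambda a, b: not a)
# Affirming the consequent: a -> b, b / a  (a fallacy)
assert not valid([lambda a, b: imp(a, b), lambda a, b: b], lambda a, b: a)
# Ex falso: a, not-a / b  (vacuously valid: the premises are never both true)
assert valid([lambda a, b: a, lambda a, b: not a], lambda a, b: b)
print("all rule checks passed")
```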

Chapter 4
Truth tables

Hitherto we have interpreted the sentences of propositional logic by translating them into natural
language, in this case English. For every propositional letter in the language of propositional logic,
we have a sentence in the natural language. Logical constants correspond to the grammatical
conjunctions in the natural language, such as ‘and’, ‘or’, ‘but’, … Subsequently, various rules of
deduction and the characterization of valid arguments were presented. In this chapter, we will present a
formal characterization of logical validity, namely semantic validity. The term ‘semantical’ is derived
from the Greek ‘semantikos’, which means ‘significant’. Semantics is the study of meaning. In formal
logic, however, meaning is explained in a very particular way. The semantical value (meaning) of a
sentence is its truth value.
Sentences can be true or false, and hence they can have two truth values: 1 (for true) and 0 (for false).
The truth value of complex sentences can be computed if we know the truth values of the atomic
sentences in them. If we consider the sentence ‘Napoleon was the Emperor of France and he won the
Battle of Waterloo’, we have the sentence p, ‘Napoleon was the Emperor of France’, which has the
truth value 1, and the sentence q, ‘Napoleon won the Battle of Waterloo’, which has the truth value 0.
Since both sentences have to be true for a complex sentence with the conjunction ‘and’ to be
true, and one of them is false, we conclude that the complex sentence p ∧ q has the truth value 0.
In the example we have tacitly assumed two basic principles of propositional logic:

(i) The principle of bivalence: each sentence (or proposition) is either true or false. There are exactly
two truth values and each sentence has one of them.
(ii) Frege’s principle: the truth value of a complex sentence is completely determined by the truth
values of the atomic sentences it contains.

The second principle is also called the principle of the compositionality of meaning, and since meaning
is equated with truth in propositional logic, the principle of the compositionality of truth. The
principles are not uncontroversial though, and it is possible to argue against adopting them.
As for the first principle, we might assume that there could be truth value gaps. In a claim about future
events such as ‘The successor of Donald Trump as president of the USA will be a Democrat’, we have
a sentence whose truth value is at least unknown and arguably not yet determined. Hence,
one might argue that the sentence is neither true nor false. A similar objection concerns borderline
cases for vague concepts. It might be the case that a plum is borderline yellow/orange. One could
argue that the sentence ‘This plum is yellow’ has no determinate truth value. It is possible to develop
alternative logical systems that treat these cases differently. It is possible to develop logics with truth
value gaps8 or with extra truth values.9 In the remainder, we will disregard these possible objections,
and we will always presuppose the principle of bivalence.
Also Frege’s principle has some drawbacks. For example, consider the complex sentences ‘Imogen
got pregnant and married’ and ‘Imogen married and got pregnant’. In propositional logic the two
sentences are identical from a semantical point of view. They contain the same propositions (‘Imogen
got pregnant’; ‘Imogen got married’) and the same logical constant (‘and’), and hence, in view of
Frege’s principle and the way truth values are determined for the logical constant ‘and’, both sentences
will always have the same truth value. In common parlance, however, the order of presenting the
sentences does play a role, and only one of them is deemed to be true in a particular (awkward or
joyous) situation. Below in this chapter, we will see that Frege’s principle of the compositionality of
truth is somehow unnatural in the case of implication.
In order to compose the truth values of complex sentences, we need the composition rules for the
various logical constants. These are given in so-called truth tables for the logical constants. In a truth
table the various distributions of truth values over the composing parts are considered, and for each
distribution of truth values, the truth value of the composite sentence is given. For each of the logical
constants, a truth table can thus be given.
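Frege’s principle makes the truth value of any complex sentence computable from the truth values of its parts. The Napoleon example above can be rendered as a minimal Python illustration (the variable names are ours, and the sketch is not part of the formal system):

```python
# Truth values of the atomic sentences (True plays the role of 1, False of 0):
p = True    # 'Napoleon was the Emperor of France'
q = False   # 'Napoleon won the Battle of Waterloo'

# By compositionality, the value of the conjunction is computed
# purely from the values of p and q:
print(int(p and q))  # 0: the complex sentence is false
```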

8 A theory with truth value gaps and special rules of composition of truth values for complex sentences is the theory of supervaluation,
first developed by Bas van Fraassen.
9 The best known logic with extra truth values is Łukasiewicz’s three-valued logic, in which an extra truth value ½ is added between 0
and 1. A recent proposal for treating vagueness has been proposed by Igor Douven and Lieven Decock, in which infinitely many truth
values between 0 and 1 are determined for the borderline region.

For conjunction, we have the following truth table:

α  β  α ∧ β
1  1  1
1  0  0
0  1  0
0  0  0

For any two sentences α and β in propositional logic, the conjunction of α and β is true in case both α
and β are true, and false in all other cases. Since α and β can each be true or false, there are four truth
value distributions to consider. The truth table is rather obvious, and the following example makes it
clear: the compound sentence ‘The Chancellor of Germany is a Christian-democrat and the president
of France is a socialist’ is true in case both the sentences ‘The Chancellor of Germany is a Christian-
democrat’ and ‘The president of France is a socialist’ are true, and the compound sentence is false in
case either of the two sentences is false.
The truth table is constitutive of the ‘meaning’ of the logical constant ‘and’ in propositional logic. As
the above example concerning Imogen getting pregnant and married illustrates, the formal
characterization of the meaning of the logical constant is slightly restrictive. The temporal and causal
order, which is often deemed important in natural language, is filtered out in the formalization by
means of the truth table. One could thus say that propositional logic is more precise than natural
language, but has less expressive power. Extra expressive power could be added, but then one would
need a more complex logical system than the first order propositional logic we are studying.
The truth table for negation is also obvious. If a particular sentence, e.g. ‘Interest rates are below 1%’
is true, its negation ‘It is not the case that interest rates are below 1%’ is false and vice versa. Hence,
for any sentence α, we have the following truth table:

α  ¬α
1  0
0  1

Disjunction is less straightforward. Consider the sentence ‘I will go to Brussels or to The Hague
tomorrow.’ It is obvious that in case I am not going to either of the two cities, the compound

disjunction is false. If I go to Brussels and not to The Hague or vice versa, the sentence is obviously
true. However, if it happens to be the case that I go to The Hague and subsequently travel further to
Brussels, it is less clear whether the composite disjunction is true. The problem is that disjunction is
ambiguous. We may disambiguate disjunction into the ‘standard’ (inclusive) disjunction, which yields the
truth value 1 whenever at least one of the two sentences is true, so that both sentences in the disjunction
may be true at the same time, and the exclusive disjunction, which only yields the truth value 1 in case
exactly one of its composing sentences is true. This disambiguation into two logical constants results in
two different truth tables. The first truth table is for the standard disjunction ∨, as it will be used in this course:

α  β  α ∨ β
1  1  1
1  0  1
0  1  1
0  0  0

The nonstandard exclusive disjunction ⊽ has the following truth table:

α  β  α ⊽ β
1  1  0
1  0  1
0  1  1
0  0  0
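The two disjunction tables can be reproduced in Python, where the inclusive reading corresponds to `or` and the exclusive reading can be modelled by inequality of truth values; this is a sketch for illustration only, not the course’s notation.

```python
from itertools import product

# Print both disjunction tables side by side (1 ~ True, 0 ~ False).
print("a b  inclusive  exclusive")
for a, b in product([1, 0], repeat=2):
    inclusive = int(bool(a) or bool(b))  # standard disjunction
    exclusive = int(a != b)              # exclusive disjunction
    print(f"{a} {b}      {inclusive}          {exclusive}")
```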

The least obvious truth table is the one for implication. Consider the case for two sentences p and q:

p  q  ¬p  ¬q  p → q
1  1  0   0   ?
1  0  0   1   ?
0  1  1   0   ?
0  0  1   1   ?

To determine the truth values for all the truth value distributions, we proceed line by line. We will
argue by means of two valid deductive forms we have encountered in Chapter 3 in the part on
deduction, to wit, modus ponens and modus tollens, and two invalid deductive forms (fallacies), to wit, ex
consequentia and negation of the antecedent (see sections 3.3 and 3.4).10
In the first line we have that the antecedent p is true, and the consequent q is true: ‘If you are pretty
(p=1), you smell nice (q=1)’. It is natural to associate this line with the validity of modus ponens, since it
is the only line in which both p and q are true. If the implication p → q is true, and the antecedent p
is true, q should be true and is in fact true.
The second line is also motivated by modus ponens, but then in a negative way. Suppose the implication
p → q were true. From the truth table we read that the antecedent p is true (you are pretty), and hence,
by force of modus ponens, q has to be true. However, in the truth table q is in fact false (it is not the case
that you smell nice), and hence the (only) supposition we made that the implication is true must be
wrong. Hence, in the second line, the truth value of the implication is 0.
We move first to the fourth line. It is natural to associate this line with modus tollens. In the truth table
q has the value 0, and hence, ¬q has the truth value 1 (it is not the case that you smell nice). Hence, if
we assume that p → q , by force of modus tollens, ¬p should be true, or, as is in fact the case, p should
be false (it is not the case that you are pretty). The fourth line is the only line in which the validity of
modus tollens could be established, and hence we conclude that the truth value of the implication in this
line should be 1.
Unfortunately, the negative use of modus tollens does not take us to line 3, but takes us back to line 2,
and yields the same conclusion as the negative use of modus ponens, namely that the truth value of the
implication in line 2 should be 0. In order to determine the truth value of the third line, we will need
to start from the fact that ex consequentia is an invalid deductive form. As we saw in Chapter 3,
this is a formal fallacy: if we have a conditional sentence α → β, and we affirm that β is the case, we
cannot be sure that α is the case.
cannot be sure that α is the case. Now suppose that p → q is true, and that q is true, then there must
at least be one case in which p could be 0: even though you smell nice (q=1) it is still possible that it
is not the case that you are pretty (p=0), otherwise ex consequentia would indeed be a valid argument.
In the truth table, the lines for which q is true are lines 1 and 3. In line 1, the implication is also true,
and p is also true. Hence, line 1 cannot account for the fact that ex consequentia is invalid. The only
remaining option is stipulating that p → q is true in line 3. We obtain that p → q is true and that q is

10 Recall that according to modus ponens, if p→q and p are both true, then q is also true. According to modus tollens, on the
other hand, if p→q and ¬q are both true, then ¬p is also true.

true. By ex consequentia we should conclude that p is 1, which it isn’t, and hence we have a case in the
truth table that explains why ex consequentia is indeed an invalid argument. The argument can be
repeated for negation of the antecedent, again yielding that the implication in line 3 should be 1.
The reasoning behind the truth table of implication is very compelling in view of
the deductive rules in propositional logic. Yet the upshot is less than intuitive. The implication ‘If
Napoleon wins the Battle of Waterloo, then Talleyrand is the French ambassador who negotiated the
peace agreement in Vienna’ is true, only because its antecedent is false. It is quite irrelevant whether
Talleyrand negotiated the peace agreement or not.11 It gets even more counterintuitive. The
implication ‘If Napoleon won the Battle of Waterloo, then 2 + 2 = 5’ is a true implication. Again, it
suffices that the antecedent is false. Many people would hesitate to assign truth values to implications
with false antecedents, but in view of the principle of bivalence, which states that every well-formed
sentence in propositional logic must have a truth value, this is not an option.
There is still another way of looking at implication. Sometimes implication is believed not to be a
primitive logical constant, but a mere shorthand notation; an arbitrary implication α → β is then
replaced by ¬α ∨ β. In chapter 5 we will prove the equivalence of both expressions by means of an
alternative logical method, natural deduction. We can compose the truth table for this expression, and
we obtain the following composite truth table:

α  β  ¬α  ¬α ∨ β
1  1  0   1
1  0  0   0
0  1  1   1
0  0  1   1

Hence we have the following general truth table for implication:


α  β  α → β
1  1  1
1  0  0
0  1  1
0  0  1

11 As a matter of fact, Talleyrand did negotiate the peace agreement in Vienna after Napoleon’s defeat, and he did earlier negotiate the
peace of Tilsit for Napoleon, but fell out with Napoleon afterwards.
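The claim that the implication α → β can be treated as shorthand for ¬α ∨ β can be tested by computing the corresponding column in Python; `imp` is our own helper name, and the sketch is an illustration rather than part of the formal system.

```python
from itertools import product

# Material implication a -> b modelled as (not a) or b.
imp = lambda a, b: (not a) or b

# Rows in the same (a, b) order as the truth tables above.
table = [(a, b, int(imp(a, b))) for a, b in product([1, 0], repeat=2)]
print(table)  # [(1, 1, 1), (1, 0, 0), (0, 1, 1), (0, 0, 1)]
```

The printed rows match the general truth table for implication given above.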

This form of implication is also called material implication. In material implication, the truth value of the
complex sentence is determined only on the basis of the truth values of the antecedent and
consequent. This leads to rather counterintuitive examples. The sentence ‘If Mill wrote On Liberty,
then the square root of 25 is 5’ is a well-formed implication, whose antecedent and consequent are
both true, so that the implication is true. This is a bit startling, as many people assume that the two
sentences should be related in some way. For material implication, no causal, temporal, or necessary
connection whatsoever between the antecedent and consequent is required. This is to some extent a
deviation from the way implication is used in natural language. The logician C.I. Lewis proposed that
in addition to material implication a stronger form of implication should be introduced in logic and
made some proposals for strict implication, in which there should be a necessary connection between
the antecedent and the consequent. This notion can be developed, but requires a more complex logical
framework than first order propositional logic.
The last logical constant we consider is double implication. As we explained earlier, double implication
can be rewritten as the conjunction of two implications. We can compose the following truth table
for the propositions p and q:

p  q  p → q  q → p  (p → q) ∧ (q → p)
1  1  1      1      1
1  0  0      1      0
0  1  1      0      0
0  0  1      1      1

and hence, the general truth table for double implication for arbitrary sentences α and β in the language
of propositional logic is:

α  β  α ↔ β
1  1  1
1  0  0
0  1  0
0  0  1
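That double implication, read as the conjunction of the two single implications, coincides on truth values with plain equality can again be checked by brute force. This is an illustrative sketch; `imp` is our helper for material implication.

```python
from itertools import product

imp = lambda a, b: (not a) or b  # material implication

# a <-> b, read as (a -> b) and (b -> a), agrees with equality of
# truth values on every distribution:
for a, b in product([True, False], repeat=2):
    assert (imp(a, b) and imp(b, a)) == (a == b)
print("double implication coincides with equality of truth values")
```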

Observe that in the previous two truth tables, we haven’t computed the truth values of sentences in a
single step. This procedure can be extended for the computation of the truth value of any sentence α
in propositional logic. The general strategy is first to identify all the atomic sentences in the complex
sentence α, and write down all the truth value distributions for the atomic sentences. If there is a single
atomic sentence, this yields two truth value distributions, p = 1 and p = 0. If there are two atomic
sentences p and q, there are four truth value distributions: (1, 1), (1, 0), (0, 1), and (0, 0). In general, if
there are n atomic sentences, there are 2ⁿ truth value distributions. Subsequently, following the order
in which the sentence is built up from the atomic sentences by means of the logical constants, we can
compute the truth function of every sentence in the series, ending with the truth function of the
sentence α itself. The truth function of a sentence is the (mathematical) function that takes truth value
distributions as its arguments and yields truth values as its values. Consider the sentence ¬p ∨ (r → q). The
resulting truth table can be computed as follows:

p  q  r  ¬p  r → q  ¬p ∨ (r → q)
1  1  1  0   1      1
1  1  0  0   1      1
1  0  1  0   0      0
1  0  0  0   1      1
0  1  1  1   1      1
0  1  0  1   1      1
0  0  1  1   0      1
0  0  0  1   1      1

From this table we can read off for which truth value distributions the sentence is true and for which
it is false. In this case the sentence is true for every truth value distribution
except (1, 0, 1), in which p and r are true and q is false.
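The same table can be generated mechanically. The sketch below, an illustration only, enumerates all 2³ distributions for ¬p ∨ (r → q) in the (p, q, r) order used above.

```python
from itertools import product

def sentence(p, q, r):
    # ¬p ∨ (r → q), with r → q read as (not r) or q
    return (not p) or ((not r) or q)

for p, q, r in product([1, 0], repeat=3):
    print(p, q, r, int(bool(sentence(p, q, r))))
# The only line ending in 0 is the distribution (1, 0, 1).
```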

Exercises 4.1

Construct the truth table for the following sentences:

1. p → ((p ∧ ¬q) ∨ r)
2. ¬(q ⊽ p) → (¬p ↔ ¬q)
3. p → ((p ∧ ¬r) ∨ q)
4. q ↔ ¬¬q
5. ((p → q) → p) → p
6. ((p → q) ∧ (r → s)) ↔ ((q → p) ∨ (s → r))
7. ((p → q) ∧ (q ∨ t)) → (p ∨ t)
8. (q ⊽ p) ↔ (q ∨ p)
9. (p ∨ q) → ((p → r) ∨ (q → r))
10. (p → q) ↔ (p ∧ ¬q)
11. (p ∧ (q ∨ r)) ↔ ((p ∨ q) ∧ (p ∨ r))
12. (p → q) ↔ (¬p ∧ q)
13. (¬q ∧ ((r ∨ p) → q)) → ¬p
14. ¬(p ∧ q) ∨ (p ↔ ¬q)
15. (p → (q → r)) → (q → (p → r))
16. p ∨ (q ∨ (p ∨ r))
17. (p → r) → (p → (q → r))
18. ((p ∧ q) → r) ∨ (r → (p ∨ r))
19. ((p → q) ∧ (q ∨ r)) → (p ∨ r)
20. ((p ∧ q) → r) ↔ (p → (q → r))
21. ((p ∨ q) → (p ∧ ¬r)) ∨ (¬q → r)
22. (p ∨ (q ∧ r)) ↔ ((p ∨ q) ↔ (p ∨ r))
23. (p → (q ∨ r)) ∧ (¬q ∧ ¬r)
24. ((p ∨ q) → r) ↔ ((p → r) ∧ (q → r))
25. (p → (q ∧ r)) → (q ∨ r)
26. ((p → q) ∧ r) ↔ (¬p ∧ (r → q))
27. ((p → (q → r)) ∧ (q → (r → p))) → (r → (p → q))
28. (p ∧ (q ∨ r)) → ((p ∧ q) ∨ (p ∧ r))
29. (¬q ∧ ((r ∨ p) → q)) → ¬p
30. ((p → q) ∧ (q → r) ∧ (r → s)) ↔ (¬s → ¬p)
31. ((p → (q → r)) ∧ ¬r) → (p ↔ q)
32. ¬((p → q) ∧ (r → s)) ↔ ((p ∧ r) ∧ ¬(q ∨ s))
33. (p ⊽ q) ↔ ((p ∨ q) ∧ ¬(p ∧ q))

The semantic method of constructing truth tables for sentences gives us information about the truth
conditions under which the sentences are true or false. In his famous Tractatus Logico-Philosophicus,
written in the trenches during World War I, the philosopher Ludwig Wittgenstein gave a philosophical
account of how one could understand truth tables. We could consider the truth value distributions as
different states-of-affairs that possibly could be the case in the world. If we have the sentences p for
‘Inflation is going up’, q for ‘The situation in the Middle East is stable’, and r for ‘Europe’s oil reserves
are large’, we can consider eight possible states-of-affairs. Each sentence can be true or false, and each
combination of truth values describes a possible situation the world could be in. We could, as it were,
be representing eight different scenarios for the world.
Some sentences in the language of propositional logic are peculiar. It can be the case that the truth
function of a sentence yields the value 1 for every truth value distribution. On Wittgenstein’s
interpretation, this would mean that the sentence is true independent of any state-of-affairs in the real
world. In other words, the sentence doesn’t give us any information about what is the case in the
world. Sentences that are true under all truth value distributions have been called tautologies or logical
truths. The traditional meaning of the term ‘tautology’ is ‘sentence devoid of meaning’, and in logic,
this is understood as ‘being vacuously true’. Other sentences have a truth function that for every truth
value distribution has the value 0. In other words, whatever the state the world is in, the sentence is
false. Sentences of this form are called contradictions. Sentences that are true in some conditions, i.e.
under some truth value distributions, and false in others, are called contingencies.

Definition A sentence α is a tautology or logical truth if α is true for every truth value distribution.

Definition A sentence α is a contradiction if α is false for every truth value distribution.

Definition A sentence α is a contingency in case it is neither a tautology, nor a contradiction.
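The three definitions suggest an obvious mechanical test: evaluate the sentence under every truth value distribution and inspect the resulting column. A Python sketch follows; the function names are ours, and the examples are the standard p ∨ ¬p and p ∧ ¬p.

```python
from itertools import product

def classify(sentence, n):
    """Classify a sentence, given as a function of n truth values."""
    column = [bool(sentence(*row)) for row in product([True, False], repeat=n)]
    if all(column):
        return "tautology"       # true under every distribution
    if not any(column):
        return "contradiction"   # false under every distribution
    return "contingency"         # true under some, false under others

print(classify(lambda p: p or not p, 1))        # tautology
print(classify(lambda p: p and not p, 1))       # contradiction
print(classify(lambda p, q: (not p) or q, 2))   # contingency
```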

Exercises 4.2

Determine whether the sentences in the exercises 4.1 are tautologies, contradictions, or contingencies.

So far, we have considered truth tables for single sentences in propositional logic. The method of
truth tables can also be used to determine the semantical validity of arguments in propositional logic.

Definition An argument is semantically valid if and only if, for all truth value distributions under which
all of the premises are jointly true, also the conclusion is true.

Definition A counterexample to an argument is a truth value distribution that assigns the truth value 1
to all the premises, and the truth value 0 to the conclusion.

We can illustrate this by means of the argument (p ∨ q) → r / ¬p ∨ (¬q → r).

p  q  r  p ∨ q  (p ∨ q) → r  ¬p  ¬q  ¬q → r  ¬p ∨ (¬q → r)
1  1  1  1      1            0   0   1       1
1  1  0  1      0            0   0   1       1
1  0  1  1      1            0   1   1       1
1  0  0  1      0            0   1   0       0
0  1  1  1      1            1   0   1       1
0  1  0  1      0            1   0   1       1
0  0  1  0      1            1   1   1       1
0  0  0  0      1            1   1   0       1

The strategy goes as follows. Determine the truth functions for all the premises and for the conclusion.
Check for each truth value distribution whether all the premises are true (i.e. whether it has the value 1
in all the relevant columns). In the example this is the case for the truth value distributions in lines 1, 3,
5, 7, and 8 of the table. If, as in these lines, all the premises are indeed true, check whether the conclusion
has the value 1 for each of these truth value distributions. If it does not have the value 1, this particular
truth value distribution is a counterexample. If, after going through all the truth value distributions, no
counterexample has been found, we can conclude that the argument is valid. In the example above, no
counterexample was found, so the argument is semantically valid.
Note that tautologies can be considered as a special kind of valid argument, namely arguments without
premises. If we consider the sentence α as the conclusion of an argument without premises, we have
to consider all the truth value distributions (they are the ones under which all the premises are true),
and check whether the conclusion is true.
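The search for counterexamples described above can be automated; the following Python sketch (the names are ours) checks the worked example and one invalid argument form from chapter 3.

```python
from itertools import product

def counterexamples(premises, conclusion, n):
    """All distributions making every premise true and the conclusion
    false; the argument is semantically valid iff the list is empty."""
    return [row for row in product([True, False], repeat=n)
            if all(p(*row) for p in premises) and not conclusion(*row)]

imp = lambda a, b: (not a) or b  # material implication

# The worked example (p ∨ q) → r / ¬p ∨ (¬q → r): no counterexamples.
print(counterexamples([lambda p, q, r: imp(p or q, r)],
                      lambda p, q, r: (not p) or imp(not q, r), 3))  # []
# The invalid form p → q / ¬p → ¬q: one counterexample.
print(counterexamples([lambda p, q: imp(p, q)],
                      lambda p, q: imp(not p, not q), 2))  # [(False, True)]
```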

Exercises 4.3

Check whether the following arguments are valid. If the argument is not valid, give the
counterexamples:
1. p → q / ¬q → ¬p
2. p → q / ¬p → ¬q
3. p → (p ∧ q), r → s / p → s
4. (p ∧ q) → r / p → (q → r)

Chapter 5
Natural deduction
in propositional logic

5.1 Introduction

In propositional logic, in addition to the semantical method explained in chapter 4, we have another
formal method, natural deduction, to prove the validity of arguments. The method relies on some
inference rules we have discussed in the third chapter, and a formal method to infer ever more
conclusions from a set of premises until a required conclusion is reached. A proof by natural deduction
consists of a sequence of numbered sentences. The first sentences are the premises of the argument.
The last line in the proof is the conclusion. The intermediate lines are obtained by
applying the inference rules to lines with lower numbers in the proof. For a valid proof, we will
use the following notation:

α1, α2, … αn ├ β.

Here α1 until αn are the premises in the proof. β is the conclusion of the proof. The symbol├ is called
‘turnstile’. Note that this is a specification of the notation / we introduced in the previous chapter.
The expression α1, α2, … αn / β is the general translation of an argument in which α1, α2, … αn are
the premises and β is the conclusion, whereas α1, α2, … αn ├ β is the translation of the same argument
with the additional claim that the conclusion can be derived from the premises by means of natural
deduction. If we assert this sentence, it means we have found a list of sentences that connect the
conclusion to the premises by means of acceptable inference rules. We will introduce the method step
by step, and will introduce the inference rules, i.e. the introduction and elimination rules for the various
logical constants.

5.2 Conjunction

The introduction rule for conjunction IÙ is given by the following proof schema:

1 …
.
m α
.
n β
.
o αÙβ IÙ(m,n)

The rule states that if, in a proof, we have the sentence α in line m, and the sentence β in line n, we
can write down a new line with the conjunction α Ù β at any place o after line n in the proof.12
It is obvious that, if we have derived both that α and that β are the case, we can derive that their
conjunction must also be the case. Also β Ù α can be derived. We can stipulate that the order of the
numbers m and n in IÙ(m,n) indicates that the expression on line m is on the left of the Ù-sign and the
expression on line n on the right in line o.

12 Here the letters m, n, o as well as the dots in between stand for random line numbers.

The elimination rule for conjunction EÙ is given by the following proof schema:

1 …
.
m αÙβ
.
n α EÙ(m)

The rule states that if we have derived that it is the case that α Ù β, then we can conclude that α is the
case. We can also conclude that β is the case. From a formal point of view, we could easily distinguish
the left- and the right- elimination rule for conjunction. For practical reasons, we will use the
expression EÙ(m) both for the left and right elimination rule for conjunction.
By means of these two rules we can derive proofs in natural deduction. The following example is a
case in point. If we want to derive that p Ù r, q ├ q Ù r, we have the following proof in natural
deduction:

1 pÙr Prem
2 q Prem
3 r EÙ(1)
4 qÙr IÙ(2,3)

In lines 1 and 2, we start with the premises (Prem). By means of two inference steps, we arrive at the
conclusion, and thus complete the proof. In line 3, we write EÙ(1) to indicate that we have used the
elimination rule for conjunction EÙ on line 1. In line 4, we write IÙ(2,3) to indicate that we have
applied the introduction rule for conjunction IÙ on lines 2 and 3.
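The bookkeeping in such a proof (every line records a formula, the rule used, and the earlier lines it cites) can be made explicit in code. The following Python sketch is our own minimal illustration, not part of the official system: it only handles premises and the two conjunction rules (so no assumptions), using the ad hoc ASCII rule names 'Prem', 'I&' and 'E&', and it accepts the example proof above.

```python
# Formulas: atoms are strings; a conjunction is a tuple ('and', left, right).
# A proof is a list of (formula, rule, refs); line numbers start at 1.

def check(proof):
    """Verify a proof that only uses Prem, I& and E& (our ad hoc names for
    the introduction and elimination rules for conjunction)."""
    for i, (formula, rule, refs) in enumerate(proof, start=1):
        get = lambda n: proof[n - 1][0]        # formula on an earlier line
        assert all(n < i for n in refs), f"line {i} cites a later line"
        if rule == 'Prem':
            continue
        if rule == 'I&':                       # from m: a and n: b, conclude a & b
            m, n = refs
            assert formula == ('and', get(m), get(n)), f"bad I& at line {i}"
        elif rule == 'E&':                     # from m: a & b, conclude a or b
            (m,) = refs
            assert get(m)[0] == 'and' and formula in get(m)[1:], f"bad E& at line {i}"
    return True

# The example proof of p & r, q |- q & r:
proof = [
    (('and', 'p', 'r'), 'Prem', ()),    # 1  p & r    Prem
    ('q',               'Prem', ()),    # 2  q        Prem
    ('r',               'E&',   (1,)),  # 3  r        E&(1)
    (('and', 'q', 'r'), 'I&',   (2, 3)) # 4  q & r    I&(2,3)
]
print(check(proof))  # True
```

An incorrect rule application, such as claiming r by E& from a line that is not a conjunction, makes the checker fail, mirroring how a grader would reject the step.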
We can easily prove that conjunction is commutative. We prove p Ù q ├ q Ù p.
1 pÙq Prem
2 p EÙ(1)
3 q EÙ(1)
4 qÙp IÙ(3,2)

In the next example, we prove that conjunction is associative. We prove p Ù (q Ù r) ├ (p Ù q) Ù r.

1 p Ù (q Ù r) Prem
2 p EÙ(1)
3 qÙr EÙ(1)
4 q EÙ(3)
5 r EÙ(3)
6 pÙq IÙ(2,4)
7 (p Ù q) Ù r IÙ(6,5)

Note that the order of introducing the conjunctions in steps 6 and 7 will determine the place where
the brackets are. The following proof starts with the same lines 1-5, but will yield a different
conclusion.

1 p Ù (q Ù r) Prem
2 p EÙ(1)
3 qÙr EÙ(1)
4 q EÙ(3)
5 r EÙ(3)
6 rÙp IÙ(5,2)
7 q Ù (r Ù p) IÙ(4,6)

One readily sees that by means of an adequate sequence of introductions of conjunctions, we can get
the conjuncts and the brackets wherever we want to have them. In natural deduction, one should take
care, however, to proceed step by step by means of the inference rules, and one should avoid jumping
to conclusions too easily or taking shortcuts. Sometimes the formal derivation in natural deduction is
quite lengthy, while the conclusion is very obvious. Yet the rigorous successive application of the rules
is the only guarantee of the validity of the proof.

Exercises 5.1

Give the proof by means of natural deduction of the following arguments:


1. (p Ù r) Ù q ├ (p Ù q) Ù (r Ù q)
2. p, r Ù q ├ p Ù r
3. p Ù r, r Ù ¬q ├ p Ù ¬q
4. ((p Ù r) Ù (r Ù q)) Ù (s Ù p) ├ r Ù q
5. p Ù r, s Ù (r → q) ├ p Ù (r → q)
6. p Ù q, r Ù¬q ├ q Ù ¬q

5.3 Elimination of implication

The elimination rule for implication E→ is based on the modus ponens rule. The schema is:

1 …
.
m α→β
.
n α
.
o β E→(m,n)

If we have derived on line m that α → β, and in line n that the antecedent of the conditional is actually
the case, by modus ponens we conclude that β must be the case.

Example: p → (q Ù r), s → p, s ├ p Ù r

1 p → (q Ù r) Prem
2 s→p Prem
3 s Prem
4 p E→(2,3)
5 qÙr E→(1,4)
6 r EÙ(5)
7 pÙr IÙ(4,6)

Exercises 5.2

Prove by means of natural deduction:


1. r Ù q, q → s ├ s
2. p Ù q, q → (r Ù p) ├ r Ù q
3. p Ù q, p → (q → (p → r)) ├ r
4. r, r → p, p → q ├ (r Ù q) Ù (p Ù q)
5. p, p → (p → (p → r)) ├ r
6. p Ù q, p → r, q → s ├ r Ù s

5.4 Introduction of implication

For the introduction of an implication we have no immediate intuitive inference rules available. In
many ways, implications or conditionals are strange sentences. They do not express that some factual
situation is the case; rather, they express that whenever the fact expressed by the antecedent is the
case, the consequent must also be true. In other words, they express some fact about the consequent
under the condition that the antecedent is true. The way to introduce an implication is by assuming
that the antecedent is indeed the case, and to prove that the consequent then inevitably must also be
the case.
In natural deduction, the introduction rule for implication I→ becomes:

1 …
.
m α As
.
n β
n+1 α→β I→(m,n)

In line m we assume (As) that the antecedent is the case. In the lines m+1 until n we prove by means
of natural deduction that with the help of this assumption we can indeed derive the conclusion. In
step n+1 we can conclude that α → β is indeed an implication that can be derived by means of natural
deduction.
In the derivation, we have derived step m+1 until n on the basis of the assumption α. We don’t know
though whether this assumption is the case or not. Hence, all the intermediate steps are based on an
uncertain claim α; they are all true if the assumption is indeed the case, but we can’t say anything in
case the assumption were to be false. Hence, the intermediate steps are perfectly acceptable as
intermediate steps used in the derivation of an implication, but not as lines that can be used outside
this procedure. Hence, when introducing the implication, the assumption has to be ‘closed’; we cannot
use the lines m+1 until n in the remainder of the proof. Graphically, this is represented by means of
an indentation. We write the lines m+1 until n further to the right in the proof, and shift back to the
normal position in line n+1. It is a means to indicate that once the assumption is closed in line n+1,
the indented lines can no longer be used.

We can illustrate the rule by means of the example p → r, q → r ├ (p Ù q) → r.

1 p→r Prem
2 q→r Prem
3 pÙq As
4 p EÙ(3)
5 r E→(1,4)
6 (p Ù q) → r I→(3,5)

Note that in the proof, we did not need one of the premises: the premise q → r. This is never a
problem. Proofs can contain redundant lines. This doesn’t have an effect on the validity of the proof.
The crucial step in this proof is the choice of the appropriate assumption in line 3. The choice is very
natural in view of the implication we want to derive. In general, if we want to derive an implication,
we have the following useful heuristic rule:

Heuristic rule If at a certain stage of a proof, you have to prove an implication α → β, start with
assuming α and then try to derive the consequent β.

The rule can be applied repeatedly in some proof. This may involve nested assumptions. Consider the
argument p → q ├ p → (r → q).

1 p→q Prem
2 p As
3 r As
4 q E→(1,2)
5 r→q I→(3,4)
6 p → (r → q) I→(2,5)

In the conclusion, we have a nested implication. Hence in our proof, we will have to apply the heuristic
rule twice. The first application is in line 2, in which the antecedent of p → (r → q) is assumed.

Subsequently we try to prove the consequent r → q, which again is an implication. The second
application is in line 3 in which we assume the antecedent of r → q. Observe that for a new (nested)
assumption, we make a new indentation. The reason is that we have to close every assumption that
we make, before we can reach a genuine conclusion. Consider the following proof:

1 r Prem
2 p As
3 q As
4 p→q I→(2,3)*

In lines 2 and 3 we make new assumptions, and close everything at one stroke in line 4. The
result is gruesomely wrong (* means incorrect). It would mean that we can derive any implication, in
this case p → q out of nothing. We would have that every implication is always true. This, of course,
cannot be the case. The error is easily traced:

1 r Prem
2 p As
3 q As
4 p→q I→(2,3)

The second derivation is correct in every step, but doesn’t lead to any conclusion. Only the second
assumption is closed, but the first assumption is still ‘open’. This means that p → q isn’t a conclusion,
but only an intermediate step that is true only if p is the case. The graphical structure of the proof
makes this clear. Line 4 is still an indented line. Only if we close the assumption, and start writing right
under the letter ‘r’ in line 1 again, do we obtain lines that can be considered a conclusion of a proof in
natural deduction. It is a common error in assignments and exams not to close all the assumptions
and to present intermediate lines as conclusions.
In logic, there are special sentences that can be proved without any assumption. For example, we can
prove ├ p → (q → (p Ù q))

1 p As
2 q As
3 pÙq IÙ(1,2)
4 q → (p Ù q) I→(2,3)
5 p → (q → (p Ù q)) I→(1,4)

In the proof we start with an assumption, and only in line 5 all the assumptions are closed. Sentences
that can be proved without assumptions are tautologies; they are true under every circumstance.
Normally a certain scenario is described by means of the premises of an argument, but tautologies are
true in every possible scenario. This special type of sentence is also discussed in the chapter on truth
tables.
A special case of such a proof is the petitio principii, which can be described by means of the logical law
p → p. If one claims that if p is the case, then p is the case, one has not done anything wrong, and
has merely stated a very obvious fact. The fallacy consists in conflating the claim p → p with the genuine claim p.
However, for the proof of p → p, we will need a new rule. We will need that a certain line in the proof
can be repeated at a certain stage in the proof. If one has derived (under some assumptions) that α is
the case, it remains the case that α (under some assumptions). The obvious restriction is that no lines
from closed assumptions are repeated. The repetition rule has the following proof structure:

1 …
.
m α
.
n α R(m), if m is not part of a closed assumption

We can immediately prove the law p → p:

1 p As
2 p R(1)
3 p→p I→(1,2)

Without the repetition rule we could not obtain line 2. One might suggest that as a matter of
convention we might have defined the rule I→(m,m) such that

1 …
.
m α As
.
m+1 α→α I →(m,m)

This indeed seems acceptable, but opting for the repetition rule instead has the advantage that the
proofs remain more transparent, while the rule is utterly uncontroversial.
More complex proofs can be constructed, e.g. the proof that ((p → p) → p) → p is a tautology.

1 (p → p) → p As
2 p As
3 p R(2)
4 p→p I→(2,3)
5 p E→(1,4)
6 ((p → p) → p) → p I→(1,5)

Exercises 5.3

Prove by means of natural deduction the following arguments


1. q ├ p → (q Ù p)
2. p Ù q├ r → q
3. p → q, q → r ├ p → r
4. p → r, q→ r ├ p → (q → r)
5. p → ¬q, ¬q → (s Ù r), r → t ├ p → (s → ¬q)
6. (p → q) → p, p → (p → q) ├ p
7. (p → q) → p, p → (p → q) ├ q

Exercises 5.4

Prove by means of natural deduction the following tautologies:


1. ├ p → (r → (p Ù r))
2. ├ (r Ù s) → (q → q)
3. ├ (r Ù s) → (q → (q Ù s))

5.5 Disjunction

The introduction rule of disjunction is an obvious one. If one has established that α is the case, then
one knows that also α or β is the case. It is totally irrelevant what β is; the truth of the disjunction is
already secured by the fact that α is the case. The introduction rule of disjunction IÚ has the
following formal form:

1 …
.
m α
.
n αÚβ IÚ(m)

Here the new disjunct β is placed at the right side of the Ú-sign. The alternative with β at the left side is
equally valid:

1 …
.
m α
.
n βÚα IÚ(m)

The rule is intuitively valid, but its potential use is often overlooked when students try to find proofs
by means of natural deduction. The reason is that β can be any statement in propositional logic
whatsoever. If we have as a premise “It is raining”, it is very obvious that one can subsequently claim
“It is raining or it is snowing”. If one gives more farfetched examples, people start to get puzzled.
With the same premise, we can easily and validly prove the sentences “It is raining or 2 + 2 = 4”,
“It is raining or 2 + 2 = 5”, and “It is raining or it is the case that if the Pope dies, World War III will
break out”. In all these cases, the truth of the disjunction is guaranteed by the fact that it is indeed
raining. In formal proofs, it is often overlooked that any expression β that is helpful can be added as a
disjunct by means of the introduction rule for disjunction.

The elimination rule for disjunction EÚ is based on the constructive dilemma α Ú β, α → γ, β → γ ├ γ
(see chapter 3, p. 20). Its formal form is:

1 …
.
m αÚβ
.
n α
.
o γ
o+1 α→γ I →(n,o)
.
q β
.
r γ
r+1 β→γ I →(q,r)
.
t γ EÚ (m, o+1,r+1)

The crucial lines are line m, in which the disjunction is given, and the lines o+1 and r+1. These
are the numbers that are given at line t for the application of the elimination rule of disjunction
EÚ(m, o+1, r+1). The reason why the structure is presented in the above way is that the proof
will almost always have this form. Unless one derives α → γ or β → γ in another way, which will very
seldom be the case, we will have to derive them as an introduction of an implication. The most sensible
strategy for using a disjunction is to use both disjuncts as an assumption (as in lines n and q) and
derive the conclusion in both cases (lines o and r). If both parts have been proven, we then can use
the elimination rule for disjunction. We can formulate the following heuristic rule.

Heuristic rule When one has derived a disjunction α Ú β, one can eliminate the disjunction by
assuming both the left and the right disjunct and in both cases try to derive the conclusion γ.

The rule can be illustrated with the example p Ú q ├ (q → p) → p

1 pÚq Prem
2 p As
3 q→p As
4 p R(2)
5 (q → p) → p I→(3,4)
6 p → ((q → p) → p) I→(2,5)
7 q As
8 q→p As
9 p E→(8,7)
10 (q → p) → p I→(8,9)
11 q → ((q → p) → p) I→(7,10)
12 (q → p) → p EÚ(1,6,11)

In the example we have a disjunction in line 1. We assume both the left and the right disjunct in line
2 and 7. In both cases we try to prove the conclusion, which is an implication and hence we assume
the antecedent in lines 3 and 8, and try to prove the consequent p, which is relatively trivial in both
cases. Following the heuristic rule for disjunctions and implications leads immediately to the correct
proof.

We easily prove the commutativity of disjunction. We prove p Ú q ├ q Ú p.

1 pÚq Prem
2 p As
3 qÚp IÚ(2)
4 p → (q Ú p) I→(2,3)
5 q As
6 qÚp IÚ(5)
7 q → (q Ú p) I→(5,6)
8 qÚp EÚ(1,4,7)

The law is very obvious, but the proof is a bit more laborious. As has been said above, there are no
shortcuts in natural deduction.
We can also prove the associativity of disjunction. We prove here (p Ú q) Ú r ├ p Ú (q Ú r).

1 (p Ú q) Ú r Prem
2 pÚq As
3 p As
4 p Ú (q Ú r) IÚ(3)
5 p → (p Ú (q Ú r)) I→(3,4)
6 q As
7 qÚr IÚ(6)
8 p Ú (q Ú r) IÚ(7)
9 q → (p Ú (q Ú r)) I→(6,8)
10 p Ú (q Ú r) EÚ(2, 5,9)
11 (p Ú q) → (p Ú (q Ú r)) I→(2, 10)
12 r As
13 qÚr IÚ(12)
14 p Ú (q Ú r) IÚ(13)
15 r → (p Ú (q Ú r)) I→(12,14)
16 p Ú (q Ú r) EÚ(1,11,15)

Strict application of the rules leads immediately to the required results. A complication in this case is
that we have nested disjunctions. Hence, in step 2 we have the left disjunct of a disjunction, which is
itself again a disjunction. We have to assume the left-hand side and the right-hand side of this
disjunction in lines 3 and 6. To understand the elimination of the two disjunctions, it may be helpful
to give the schema's line numbers for both. For the elimination of the first disjunction (p Ú q) Ú r,
we have (m=1, n=2, o=10, o+1=11, q=12, r=14, r+1=15, t=16), and for the elimination of the
second disjunction p Ú q we have (m=2, n=3, o=4, o+1=5, q=6, r=8, r+1=9, t=10). Note also that
we have used the introduction rule for disjunction in several lines.

Exercises 5.5

Prove by means of natural deduction:


1. p Ú q, p → q ├ q
2. (p Ù q) Ú r ├ (p Ú r) Ù (q Ú r)
3. p Ú q, p → r, q → s ├ r Ú s
4. p Ú q, p → (r Ù s), q Ù s ├ s

5.6 Introduction of negation

It is quite difficult to imagine how we could introduce negation on the basis of positive claims. If we
have a list of atomic sentences, conjunctions, disjunctions, and implications, there is no
straightforward derivation of negative facts. The way negation ¬ α is introduced is by means of a
reductio ad absurdum. We assume that α is the case and demonstrate that this leads to a logical
contradiction, and hence we must conclude that the assumption was not true after all. This proof
strategy is very common in mathematics, and reveals the basic structure of our logical thought.13

13 As mentioned above, intuitionists would disagree.

An easy example is the proof of Euclid’s theorem. The theorem states that there are infinitely many
prime numbers.

Suppose: There is a prime number that is the largest prime number n.


Consider the number n! + 1 (with n! = n x (n–1) x (n–2) ... 3 x 2 x 1)
Either n!+1 is a prime number or n!+1 is not a prime number.
If n!+1 is a prime number, then there is a prime number larger than n.
If n!+1 is not a prime number, then it must have a prime number as divisor.
n!+1 does not have a prime number ≤ n as divisor, since n! is divisible by all the numbers ≤ n, and a
common divisor of n! and n!+1 would also have to divide their difference 1, which is impossible.
Hence, n!+1 is divisible by a prime number larger than n.
Hence, there is a prime number larger than n.
Hence, by means of a constructive dilemma, we can conclude that there is a prime number larger than
n.
Hence, n is the largest prime number and n is not the largest prime number.
We conclude that the assumption that there is a largest prime number was wrong.

The proof strategy is clear. We begin with the assumption p that n is the largest prime number, and
derive a logical contradiction. Hence we conclude that ¬ p is the case.
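The argument can also be checked numerically for small cases. The following Python sketch is our own illustration (not part of the course notes): for each n it computes the smallest prime factor of n! + 1 and confirms that this factor is indeed larger than n.

```python
from math import factorial

def smallest_prime_factor(m):
    # Trial division; the smallest divisor > 1 of any m > 1 is prime.
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # m itself is prime

# n! + 1 leaves remainder 1 when divided by any number 2..n,
# so its smallest prime factor must exceed n.
for n in range(2, 11):
    p = smallest_prime_factor(factorial(n) + 1)
    assert p > n
    print(n, factorial(n) + 1, p)
```

For instance, 8! + 1 = 40321 = 61 x 661, and both prime factors are indeed larger than 8, exactly as the argument predicts.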
We can give the following form of the introduction rule of negation I¬:

1 …
.
m α As
.
n βÙ¬β
n+1 ¬α I¬(m,n)

Note that we have β and ¬ β as the most general form of a contradiction. This complicates the proof
strategy, because it is not always clear which contradiction will be useful in the proof. Often it will be
the case that α Ù ¬ α will be an appropriate choice, as is actually the case in the above example of
Euclid’s theorem where the contradiction p Ù ¬p is used.

One of the inference rules we can prove by means of the introduction rule for negation is modus tollens
p → q, ¬q / ¬p:

1 p→q Prem
2 ¬q Prem
3 p As
4 q E→(1,3)
5 q Ù ¬q IÙ(4,2)
6 ¬p I¬(3,5)

Note that we use an assumption in the same way as we did for the introduction of an implication. The
restriction that we cannot use lines that were derived on the basis of an assumption further on in a
proof is again applicable. It is easy to see that this would go wrong. Consider the following:

1 p As
2 ¬p As
3 p Ù ¬p IÙ(1,2)
4 ¬p I¬(1,3)*

This would be an extreme form of negativism. We would be able to prove the negation of any atomic
sentence in the language. It is obvious where it goes wrong. One cannot close the two assumptions at
the same time.
An example of a proof is (p Ù q) → (r Ù s), ¬(q → r) ├ ¬p.

1 (p Ù q) → (r Ù s) Prem
2 ¬ (q → r) Prem
3 p As
4 q As
5 pÙq IÙ(3,4)
6 rÙs E→(1,5)
7 r EÙ(6)
8 q→r I→(4,7)
9 (q → r) Ù ¬ (q → r) IÙ(8,2)
10 ¬p I¬(3,9)

In this example, the important step in the proof strategy is line 3. We have a negation as a conclusion,
and the standard way of obtaining a negation ¬ α is by assuming α; in this case this is the sentence p.
Next we need a contradiction, and we soon get our contradiction if we choose β = q → r.
It is sometimes possible to obtain a negation that is already part of a sentence in a proof by applying
the elimination and introduction rules for disjunction, conjunction, and implication, but for most
derivations of a negation the following heuristic rule should be used:

Heuristic rule If you try to derive a negation ¬ α (as a conclusion or as an intermediate result), start
by assuming α and try to find a contradiction.

Exercises 5.6

Prove by means of natural deduction:


1. p → q, r → s, t, (q Ù s) → ¬t ├ ¬(r Ù p)
2. p → ¬r, q → s ├ (p Ú q) → ¬(r Ù ¬s)
3. q → ¬p, ¬q → ¬p ├ ¬p

Exercises 5.7

Prove that the following sentences are tautologies:


1. ¬¬(p Ú ¬p)
2. ¬(p Ù ¬p)

5.7 Elimination of negation

For the elimination rule of negation, we will use the ex falso rule α, ¬ α ├ β. The formal structure of
this rule is

1 …
.
m α
.
n ¬α
.
o β E¬(m,n)

An example in which the rule is applied is ¬ q ├ p → (q → r)

1 ¬q Prem
2 p As
3 q As
4 r E¬(3,1)
5 q→r I→(3,4)
6 p → (q → r) I→(2,5)

In step 4, we immediately obtain r, since we have q and its negation in lines 1 and 3. As was the case
with the introduction of disjunction, the conclusion contains a sentence β that did not occur earlier. In
exercises with the elimination of negation, an apt choice of β will be of great value in the proof.
Unfortunately, there are no good heuristic rules for the choice of β.
As argued in the previous chapter, the ex falso rule is somewhat counterintuitive. The rule is not
problematic though. We will illustrate that the rule is equivalent to the disjunctive syllogism α Ú β,
¬ β ├ α. If we give a schematic form for the application of an inference rule for the disjunctive
syllogism (DS), this would be

1 …
.
m αÚβ
.
n ¬β
.
o α DS(m,n)

We will prove the equivalence by proving that we can derive the disjunctive syllogism by means of the
ex falso rule/elimination rule for negation and vice versa. First we prove that the disjunctive syllogism
is valid.

1 …
.
m αÚβ
.
n ¬β
n+1 α As
n+2 α R(n+1)
n+3 α→α I→(n+1,n+2)
n+4 β As
n+5 α E¬(n+4,n)
n+6 β→α I→(n+4,n+5)
.
o α EÚ(m,n+3,n+6)

We have used the structure of the disjunctive syllogism until line n, and have subsequently proven that
the conclusion of a disjunctive syllogism in line o can be obtained without explicitly invoking the rule
DS.
The inverse proof goes as follows:

1 …
.
m α
.
n ¬α
n+1 βÚα IÚ(m)
.
o β DS(n+1,n)

We have the same proof structure until line n, and in the subsequent lines, we derive β by means of
the rule for the disjunctive syllogism DS.
We can conclude that the two rules are equivalent; we can derive the same conclusions on the basis
of these two rules. We opt for the ex falso rule as the rule for the elimination of negation, since a single
negation is eliminated in line n, whereas the rule for the disjunctive syllogism is a rule for both
disjunction and negation. So far, we have introduction rules that introduce a new logical constant in
the concluding line, and we have elimination rules, in which a logical constant is no longer present in
the concluding line. We can say that the introduction and elimination rules define the use of the four
logical constants in propositional logic. If we accept the philosophical claim that ‘meaning is use’, we
have by means of these rules given the meaning of the logical constants.
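The equivalence argument above is proof-theoretic. As a quick semantic cross-check, one can also verify by truth tables that both rules are valid; the following Python sketch is our own illustration, not part of the course notes.

```python
from itertools import product

# Truth-table check of the two rules shown equivalent above:
# ex falso (a, ~a / b) and the disjunctive syllogism (a v b, ~b / a).
for a, b in product([True, False], repeat=2):
    # ex falso: the premises a and ~a are never jointly true,
    # so the entailment holds vacuously
    if a and not a:
        assert b
    # disjunctive syllogism: whenever a v b and ~b are true, a follows
    if (a or b) and not b:
        assert a
print("no counterexamples found")
```

The ex falso branch never fires, which is exactly why the rule is semantically harmless: there is no truth value distribution in which its premises are all true.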

Exercises 5.8

Prove by means of natural deduction:


1. p → (q → ¬p) ├ p → (q → r)
2. s Ù q, r → ¬q ├ s → (r → p)

5.8 Elimination of double negation

The system of natural deduction is not complete yet. As yet, we don't have the means to prove a
‘strong’ reductio ad absurdum. We have introduced the introduction rule for negation as a reductio ad absurdum,
but this is a ‘weak’ reductio ad absurdum. By assuming α and deriving a contradiction, we can conclude ¬α. In
the strong reductio ad absurdum, ¬α is assumed, and by deriving a contradiction, we conclude α. The
structure of the strong reductio ad absurdum is:

1 ..
.
m ¬α As
.
n βÙ¬β
.
o α

By means of the already presented rules, we could have

1 ..
.
m ¬α As
.
n βÙ¬β
n+1 ¬¬ α I ¬ (m,n)

We don’t have the means yet to go form line n+1 to the conclusion α (try!). Nevertheless, most
logicians accept the strong reductio ad absurdum as a valid inference. Only intuitionists do not accept the
rule; and the rule characterizes the difference between classical propositional logic and intuitionistic
logic. But in order to salvage classical logic, we have to add a new rule. We could introduce the reductio

59
ad absurdum as a new rule, but as the explanation above illustrates, it suffices to add the elimination
of double negation Elim¬¬. The rule is given by the following schema:

1 …
.
m ¬¬ α
.
n α Elim¬¬(m)

Remarkably, we might have dropped the elimination of negation as a separate rule, because we can
derive the conclusion reached by means of E¬, by means of Elim¬¬ instead. Consider the following:

1 …
.
m α
.
n ¬α
n+1 ¬β As
n+2 αÙ¬α IÙ(m,n)
n+3 ¬¬ β I ¬(n+1,n+2)
.
o β Elim¬¬(n+3)

Nevertheless, we will retain E¬ as an elimination rule. The reason is that we have introduction and
elimination rules for the four separate logical constants, and for negation we would not have the rule
for the elimination of a single negation, but a rule for a double negation. Moreover, the separation of
the rules E¬ and Elim¬¬ gives us a sharp demarcation between classical logic and intuitionist logic.
The strong reductio ad absurdum is a forceful proof method, but unfortunately it requires some creativity,
and there are no general rules of thumb for choosing the contradiction one could use. In some sense,
the strong reductio ad absurdum is the last resort of the desperate logician. In case nothing else works,
one may try to assume the negation of the conclusion, and then try to find an appropriate contradiction.
A nice proof by means of this strategy is the proof of one of De Morgan's Laws ¬(p Ù q) ├ ¬p Ú ¬q.

1 ¬(p Ù q) Prem
2 ¬(¬p Ú ¬q) As
3 ¬p As
4 ¬ p Ú ¬q IÚ(3)
5 (¬ p Ú ¬q) Ù ¬(¬p Ú ¬q) IÙ(4,2)
6 ¬¬p I¬(3,5)
7 p Elim¬¬(6)
8 ¬q As
9 ¬p Ú ¬q IÚ(8)
10 (¬p Ú ¬ q) Ù ¬(¬p Ú ¬q) IÙ(9,2)
11 ¬¬q I¬(8,10)
12 q Elim¬¬(11)
13 pÙq IÙ(7,12)
14 (p Ù q) Ù ¬(p Ù q) IÙ(13,1)
15 ¬¬(¬p Ú ¬q) I¬(2,14)
16 ¬p Ú ¬q Elim¬¬(15)

The strategy for this proof is as follows. We have to prove ¬p Ú ¬q, which is not evident, and hence
we proceed by negating the claim in a reductio ad absurdum strategy. In line 14 an appropriate contradiction
is finally derived, and in lines 15 and 16 we immediately obtain the conclusion. For this contradiction
we need p Ù q as an intermediate step. We have to derive both conjuncts separately. No other immediate
strategies are available to obtain either p or q; hence in lines 3 and 8, we prove both by means of
intermediate reductios.

5.9 Double implication

Logical equivalences are a special type of tautology. They have the form of a double implication. In
the previous chapter, we have mentioned a few substitution rules that are valid, and these had the
form of a logical equivalence. In our definition of the language of propositional logic, the double
implication has been introduced as an abbreviation of a longer formula. We will prove a logical
equivalence by writing out the complete well-formed formula, and take this as the conclusion in our
proof. An example is the logical equivalence (p → q) ↔ (¬p Ú q). The long form of the sentence is
((p → q) → (¬p Ú q)) Ù ((¬p Ú q) → (p → q)).

1 p→q As
2 ¬(¬p Ú q) As
3 ¬p As
4 ¬p Ú q IÚ(3)
5 (¬p Ú q) Ù ¬(¬p Ú q) IÙ(4,2)
6 ¬¬p I¬(3,5)
7 p Elim¬¬(6)
8 q E→(1,7)
9 ¬p Ú q IÚ(8)
10 (¬p Ú q) Ù ¬(¬p Ú q) IÙ(9,2)
11 ¬¬(¬p Ú q) I¬(2,10)
12 ¬p Ú q Elim¬¬(11)
13 (p → q) → (¬p Ú q) I→(1,12)
14 ¬p Ú q As
15 ¬p As
16 p As
17 q E¬(16,15)
18 p→q I→(16,17)
19 ¬p → (p → q) I→(15,18)
20 q As
21 p As
22 q R(20)
23 p→q I→(21,22)
24 q → (p → q) I→(20,23)
25 p→q EÚ(14,19,24)
26 (¬p Ú q) → (p → q) I→(14,25)
27 ((p → q) → (¬p Ú q)) Ù ((¬p Ú q) → (p → q)) IÙ(13,26)

Proving a logical equivalence normally yields a long proof, since two implications have to be proven.

5.10 Proof strategy

In the explanation we have given some comments about the strategy in particular proofs and several
heuristic rules. Here we summarize a general strategy for finding a proof for an arbitrary argument in
propositional logic.

1. Start with writing down the premises α1, … αn.
2. Choose the conclusion β as the first intermediate aim γ at this stage of the proof.
3. See if you can obtain the aim γ by applying the inference rules to the premises or newly derived
sentences.
4. If this is not possible:
a. for a negation γ = ¬δ: assume δ and try to deduce a contradiction ε Ù ¬ε; the contradiction ε Ù
¬ε is the new aim γ in the proof;
b. for an implication γ = δ → ε: assume that δ is the case and try to prove ε; ε is the new aim γ in
the proof;
c. for a disjunction γ = δ Ú ε: try to prove either δ or ε; depending on this choice, δ or ε is the new
aim γ;
d. for a conjunction γ = δ Ù ε: try first to prove δ, whereby δ is the new aim γ, and then try to prove
ε, whereby ε is the next aim γ.
5. In case none of the above steps leads to a valid proof, assume that ¬γ is the case, try to derive
a contradiction δ Ù ¬δ, and eliminate the double negation so as to obtain γ.
6. If you have formed a new aim in one of the previous steps, go through steps 3 until 5 again with this
new aim as γ. If you have proven the aim γ in one of the previous steps, go back to the previous aim. If
you have proven all the aims, including the first aim, you have reached the conclusion β and the proof
is finished.
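As an informal illustration of the case analysis in steps 4a-4d (our own sketch, with made-up names, not part of the course notes), the strategy's dispatch on the main connective of the current aim can be written as a small Python function:

```python
# Formulas as nested tuples: ('not', a), ('imp', a, b), ('or', a, b),
# ('and', a, b); atoms are plain strings. All names are made up for this sketch.
def next_tactic(goal):
    # Atomic aim: no connective to work on; derive it directly or fall
    # back on the strong reductio ad absurdum of step 5.
    if isinstance(goal, str):
        return 'derive directly, or assume the negation (strong reductio)'
    return {
        'not': 'assume the unnegated formula and aim for a contradiction',
        'imp': 'assume the antecedent and aim for the consequent',
        'or':  'aim for one of the disjuncts',
        'and': 'aim for each conjunct in turn',
    }[goal[0]]

# The conclusion of p -> q |- p -> (r -> q) is a nested implication,
# so the implication heuristic applies twice:
goal = ('imp', 'p', ('imp', 'r', 'q'))
print(next_tactic(goal))     # assume the antecedent and aim for the consequent
print(next_tactic(goal[2]))  # assume the antecedent and aim for the consequent
```

This mirrors how the heuristic rules were applied in the worked examples: each new aim is inspected, a tactic is chosen, and the process recurses on the subformulas.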

Exercises 5.9

Prove by means of natural deduction:


1. p → ¬q ├ q → ¬p
2. p → (q Ù r) ├ ¬(p Ù ¬q)
3. ¬p → p ├ (p → p) → p
4. p → (q → r) ├ (p → q) → (p → r)
5. p → q, r → s ├ (p Ú r) → (q Ú s)
6. ¬ p Ú ¬q ├ ¬ (p Ù q)
7. (p → q)→r ├ (p Ù q) → r
8. (¬ p Ú r) → ¬q, q ├ p
9. p → q, p → r, ¬q Ú ¬r ├ ¬p
10. ├ ( p → ¬q) → ¬(p Ù q)
11. (p Ù q) → r ├ p → (q → r)
12. (p → ¬q), (¬p → ¬q) ├ ¬q
13. p → (p → q), (p → q) → p ├ p
14. ¬(¬ p Ú ¬q) ├ p Ù q
15. ¬p → p ├ p
16. ├ (p Ú (q Ú r)) → ((p Ú q) Ú r)
17. ¬( ¬p Ú ¬q) ├ p Ù q
18. ¬p → ¬q, q Ú ¬ r ├ r → p
19. p, ¬s → r, s → (¬p Ù q) ├ r
20. p → q, r → ¬s ├ (p Ú r) → ¬(¬q Ù s)
21. p → q, ¬ p → q, ¬ q ├ p Ù ¬q
22. ├ ¬(p → q) → (p Ù ¬ q)
23. ├ p Ú (p → q)
24. v → (t Ù u), p → q, p Ú v, (q Ú r) → s ├ s Ú t
25. p → (q Ú r), ¬q Ù ¬r ├ ¬p
26. p → (q → r), ¬r ├ ¬p Ú ¬q

27. ├ ¬(p Ù q) Ú (p ↔ q)
28. p → (q → r), r → ¬p ├ p → ¬q
29. (r Ú p)→ q, ¬q ├ ¬p
30. ├ ¬((p Ú ¬q) Ù ¬(¬p → ¬q))
31. p ├ (¬p Ú q) ↔ q
32. ├ ¬ ((p Ù q) Ù ¬ (p ↔ q))
33. ├ p Ú ¬p
34. (p → q) → p, p → (p → q) ├ q
35. p → ¬r, q → s ├ (p Ú q) → ¬ (r Ù ¬s)
36. r, p → q, p → ( ¬q Ú ¬s) ├ p → ¬s
