Symbolic Logic
Odysseus Makridis
Palgrave Philosophy Today
Series Editor
Vittorio Bufacchi
Philosophy
University College Cork Philosophy
Cork, Ireland
The Palgrave Philosophy Today series provides concise introductions to all the
major areas of philosophy currently being taught in philosophy departments
around the world. Each book gives a state-of-the-art informed assessment of a
key area of philosophical study. In addition, each title in the series offers a distinct
interpretation from an outstanding scholar who is closely involved with current
work in the field. Books in the series provide students and teachers with not only a
succinct introduction to the topic, with the essential information necessary to
understand it and the literature being discussed, but also a demanding and engaging
entry into the subject.
Symbolic Logic
Odysseus Makridis
Literature, Languages, and Philosophy
Fairleigh Dickinson University
Madison, NJ, USA
© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature
Switzerland AG 2022
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
Cover image © Science Photo Library / Alamy Stock Photo, Image ID: RAGKPW
This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Contents
References 481
Index 485
Chapter 1
What Logic Studies
human species is critically dependent on throwing out conclusions that have been
drawn incorrectly and retaining conclusions that have been proved correctly by
proofs that “work” or are correct. Even though this assessment does not come natu-
rally and is – as it turns out – an inherently technical matter, correct evaluation of
proofs is vital, and rather overlooked in educational settings, because it is the accre-
tion of added truths, drawn as conclusions, that makes the characteristically expo-
nential progress in knowledge possible.
A last preliminary remark is that the standards of what counts as a correct proof
depend on what kind of proof it is – what kind of logic is at play: reasoning comes
in two varieties – we cannot discern any deeper reason as to why it is exactly two
varieties – which are Deductive Logic and Inductive Logic. Deductive arguments
are judged differently – they are, by their nature, different from inductive arguments. We devote no space to the examination of Inductive Logic in this text as the
business of formal or symbolic logic – and the field in which mathematical instru-
ments are available in logic – is Deductive Logic. In either case – whether it be
deductive or inductive – a proof (or, as we will call it, technically, an argument) has
premises from which a conclusion is derived and the claim is presented that, as we
may say colloquially, the conclusion “follows from” the premises or is correctly
proven from the premises or is sufficiently supported by the premises. There is a
common tendency to regard the collection of the premises in a proof as “evidence”
and this is also referred to as “the facts” but meaningful sentences do not have to be
factual or descriptive. A sentence like “all people are equal before the law” is not a
descriptive sentence if it states the principle (not the legal fact); we take it, of course,
to be a meaningful sentence. Accordingly, the word “evidence” should not be used
in the context of focused technical study of proofs.
In addition to the study of proofs, technically called “arguments,” logic also stud-
ies such logical properties as consistency, which is a property of a collection of
meaningful sentences taken all together, and the logical status of a meaningful sen-
tence, which refers to whether a sentence is a necessary truth or necessary falsehood
or logically indeterminate. We will be examining these notions in detail in the pres-
ent text.
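Although the text itself uses no programming, the three logical statuses just named can be illustrated, for the sentential case, by a small computational sketch. This is our own illustrative device, not part of the book's apparatus: a formula is represented as a Python function from a truth-value assignment to a truth value, and its status is determined by enumerating every assignment.

```python
# Illustrative sketch (not from the text): classify a sentential formula as a
# necessary truth, a necessary falsehood, or logically indeterminate by
# checking its truth value under every assignment to its atomic sentences.
from itertools import product

def logical_status(formula, atoms):
    """formula: a function from a valuation dict (atom -> bool) to bool."""
    outcomes = {formula(dict(zip(atoms, vals)))
                for vals in product([True, False], repeat=len(atoms))}
    if outcomes == {True}:
        return "necessary truth"        # true under every assignment
    if outcomes == {False}:
        return "necessary falsehood"    # false under every assignment
    return "logically indeterminate"    # true under some, false under others

print(logical_status(lambda v: v["P"] or not v["P"], ["P"]))   # necessary truth
print(logical_status(lambda v: v["P"] and not v["P"], ["P"]))  # necessary falsehood
print(logical_status(lambda v: v["P"], ["P"]))                 # logically indeterminate
```

The examples are the familiar ones: "P or not-P" is true no matter what, "P and not-P" false no matter what, and a bare "P" is contingent.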
Sometimes the term “logic” is used to refer to a logical system or structure rather
than to the study of proofs or of logical characteristics and related subjects. If we ask
the question about what constitutes a logical system, we can define such a system in
different ways that parallel the definitions we gave above: a logical system (or, a
“logic”, not in the sense of studying logic but in the sense of a “logic” as a structure)
can thus be defined on the basis of its so-called consequence relation (also, logical
consequence relation) or as a collection of all its logical truths or as a system that
generates proven theses out of a small number of given postulates (or axioms) and
with the assistance of implementing specified rules. It is a deep philosophic matter
whether we should define logic, or a logical system, one way or another, but this
will not concern us in the present text. To remove any ambiguity, we point out that
in this text logic refers to the study, not to a system or structure but, at the same time,
we can say that we are engaged in logic as the study of logics-as-systems. The sys-
tems we investigate are limited to the basic logics (in the sense of “system” or
structure): thus, we will study the standard sentential and standard predicate (or
first-order) logic but we do not have the space to undertake further excursions that
would transfer us to such areas as the terrain of modal logics or alternative non-
standard logics, with the exception of forays we will be taking into what is known
as Intuitionistic Logic. Nor are we at liberty to indulge in explorations of fascinating
problems in the philosophic themes and problems associated with the philosophy of
logic, except availing ourselves of opportunities that arise occasionally – including
selected exercises that accompany sections.
Based on the preceding remarks, there is some remaining ambiguity – which
cannot be tolerated in the study of any subject, let alone in one so technically pre-
cise – between: logic understood, on the one hand, as a formal (mathematically
facilitated) study of logical structures or forms and other abstract objects (which
may or may not be mathematical, regardless as to whether such objects are thought
to exist independently of the human mind thinking about them); and, on the other
hand, as a motivated investigation of the rules of reasoning as these rules govern,
and normatively constrain, actual transaction in reasoning (possibly dictated by the
workings of linguistic conventions.) This distinction parallels that between pure and
applied geometry – a parallelism that has been drawn many times and has been
subjected to multiple critiques. Although geometry too was once considered moti-
vated by the imperatives of investigating and managing understanding of the actual
physical world, we should rather think of the subject of geometry as the study of
abstract structures and associated abstract objects while, on the other hand, the
claim that there is a fit between a geometrical system and the physical universe is an
empirical matter.
According to the pluralistic view of logic, there is no one correct logic; this
seems to be at variance with the case of geometry, for which it appears that – whatever the "correct" geometry of the physical universe may be – there has to be one
such applicable geometry; on the other hand, the "correctness" of the applicable
logic may be dependent on the area of application. Nevertheless, a parallel consid-
eration emerges in the case of geometry: for instance, certain tasks are better carried
out by application of the classical Euclidean geometry whereas other tasks require
a non-Euclidean geometry for dispatch. Nor is this a pragmatic issue since deeper
considerations about justifying such fitting applications arise. Our purpose is not to
discourse on the parallels, and arguable disanalogies between geometry and logic,
but to elucidate certain difficult observations about logic, which we need to raise
even at the preliminary level. The moral of the story is that a distinction can be
drawn between Abstract or Pure Logic (understood as the mathematical examina-
tion of the relevant abstract objects) and Applied Logic (which is not “pure” in the
sense defined but can be subjected to considerations or desiderata about the proper
fit between the mathematical or formal system and the intended area of application.)
For our current purposes, a bulk of what we examine is inevitably instigated by
applicability considerations – with recurrent references to linguistic operations and
the normative requirements of reasoning as encoded in linguistic operations. But
when we turn to the exploration of properties of the formal systems (the formal
languages and their specified grammars, the devices that are constructed and
item, a proof that has a completed structure which has to include at least one prem-
ise and one conclusion. It is unusual to examine argument structures with more than
one conclusion – but it should be pointed out that there is no technical difficulty in
doing that. An initial difficulty in the study of deductive logic consists in this: try to
think of an argument as an instance in which a pattern is exemplified (or instanti-
ated) in the way in which enforcement of the rule “three-strikes and you are out” is
always a particular and concrete exemplification of this rule of baseball (with the
rule itself never concretely available to us, and with only its specific enforcement-
applications available to us.) The correctness of a deductive argument (called tech-
nically “validity”) depends completely, sufficiently, and exhaustively on this
instantiated pattern, which logicians are wont to call "argument form." The surprise
is that the content of what is being talked about in the argument plays no role in the
assessment of validity. This is an immediate and early difficulty that we should
be ready to confront.
When we make arguments in language, those are understood to be arguments in
accordance with the definition we have given. Strictly speaking, in Symbolic or
Formal Logic, we are not directly examining everyday arguments delivered by linguistic means. The catch is this: if an argument in language is deductive (with the
same definition of deductive argument applied always), then its logical validity is a
matter of whether it has a valid argument form or
pattern. Our study of logic attends to the study of those patterns. We are fortunate in
modern times to have mathematical instruments available to us – something that
was denied to Aristotle and to subsequent generations of logicians in the western
tradition. It is not our present concern to go over historical details about what revo-
lutionary developments in Mathematics made this possible. From a practical point
of view, the speaker or writer who has made a deductive argument is – whether she
or he knows it or not – playing a game over which the rules of deductive reasoning
have binding force; assessment of the “correctness” or validity of the argument
according to those rules is an objective matter. Whether a given argument is invalid –
because it has an invalid pattern, an invalid argument form – is a matter of normative
significance: an invalid argument is bad, wrong, incorrect, to be rejected – not in a
moral sense of “bad”, of course, but it is still normative in the sense that we are deal-
ing with binding and correcting valuations and orders like “you should not accept.”
It is comforting to know that subjective views or cultural variations are not grounds
for raising objections or imposing conditions on logical assessment: it is better to
think of our activity as similar to the linguistic activity that involves judgments and
assessments of correct-incorrect grammar. Of course, languages vary across the
world, but to show that the logic is also different would require a specific examina-
tion of what forms are valid and it is not something to accept instantly. Even if logic
were to vary across different linguistic groups (which we can put aside), still, within
each relevant community the logic would be compelling, ordering, and not a matter
that is left to subjective evaluation. I can be rightly corrected for wrong grammar
and the normative aspect is that I should, obviously, also enforce correct grammar.
There are objective and binding (“ordering”) rules that constrain and guide and
whose violations constitute corrigible (correctable) offenses. Notice that, if some
other idiom or local variation of the language encloses a different grammar, I still
cannot plead that it is all subjective – it is not, and, in that other localized context
where the grammar is different, I will be corrected and rightly so. What happens
with an invalid argument is somewhat like the example of the correct grammar. But
it is vital to realize – without receiving a proof at this point – that the grammar of a
language does not help us see how the logic works. Even if trained thoroughly in the
grammatical details, we cannot depend on the grammar to determine validity of an
argument. Thus, once again, we reach the ineluctable realization that logic needs to
be studied as a distinct subject.
In some languages, the premises of an argument are called expressly “assump-
tions”. A point to notice and keep in mind is that an argument does not guarantee
that the premises are true. Nor does the logical correctness – validity – of a deduc-
tive argument depend on whether the premises are actually true. This seems surpris-
ing to beginning students. This is also related to the other point we will have to
grasp: in deductive logic, argument validity is not a matter of content or of what is
being talked about. It is a matter of form. Inductive logic – which we do not study
in the present text – does not have this dependence on logical pattern or form; in
Inductive Logic, content evaluation from a logical point of view arises as a relevant
evaluative issue but in Deductive Logic, think of what we do in evaluating argu-
ments as a matter of assessing a relation (the relation or connection between all the
premises taken together and the conclusion) regardless of the content. For instance,
Jill is the daughter of Mary and Jackie is the daughter of Jill: our attention is focused
now on the relation "daughter-of." If we replace the names used in these examples
with other names, always claiming the relation daughter-of, we
can proceed; the variability of the content (what names are used,
whom we are talking about) is not our business insofar as we are strictly interested
in the relation itself. For instance, we might notice that, as a relation, daughter-of is not transitive: the daughter of the daughter of x is not the daughter of x (she
is the grand-daughter of x!) This is important for us in investigating the relation we
denote by “daughter-of.” But content (whose daughter we are talking about) is not
important in the study of logic. The content does not affect the logical characteris-
tics of the relation (like transitivity, in the preceding example.)
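The point about transitivity can be made mechanical. In the following sketch (our own illustration, using the text's hypothetical names Jill, Mary, and Jackie), the relation is modeled as a set of ordered pairs and tested against the definition of transitivity:

```python
# Illustrative sketch: model "daughter-of" as a set of ordered pairs
# (daughter, parent) and check transitivity directly from its definition.
daughter_of = {("Jill", "Mary"), ("Jackie", "Jill")}

def is_transitive(relation):
    """R is transitive iff (x, y) in R and (y, z) in R imply (x, z) in R."""
    return all(
        (x, z) in relation
        for (x, y1) in relation
        for (y2, z) in relation
        if y1 == y2
    )

# Jackie is the daughter of Jill and Jill is the daughter of Mary,
# but Jackie is Mary's grand-daughter, not her daughter:
print(is_transitive(daughter_of))  # False
```

Nothing here depends on whose daughter is in question; the check inspects only the structure of the pairs, which is exactly the sense in which content does not affect the logical characteristics of the relation.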
Let us return to the concept of argument and, bearing the above in mind, let us
think of an argument as a relation between: a) the premises (all of them taken
together conjunctively – connected by “and”, each premise to the next and to the
next all the way to the end); and b) the conclusion. The claim in presenting an argu-
ment is that this relation works. The technical term in assessing the premises-
conclusion relation as logically correct in deductive reasoning is validity. A
deductive argument is valid if and only if it has a valid argument-form (a pattern,
structure, with more to say about this.) An argument form is valid if and only if:
it is logically necessary that, if all the premises are true, then the conclusion is true. (This is the
case of a deductive argument; for an inductive argument, the connection between
premises and conclusions is a matter of probability – and, thus, a matter of relative
strength and not a matter of logically guaranteeing that the conclusion is true if the
premises are true. But in that case too, of inductive arguments, the object of our
inquiry is again a relation between all the premises taken together and the conclu-
sion. This is key!) The deductive argument form – exemplified by a given argu-
ment – is valid if and only if: the truth of the premises guarantees the truth of the
conclusion (notice the relation between premises and conclusion and notice the role
played by what we may call “truth preservation.”) We can say this in other ways,
and we will have opportunities to continue presenting this central concept: in a
valid argument form, it is logically impossible to have all the premises true and
the conclusion false. We need to alert ourselves that, in the definition of validity, we
are using some concepts that are themselves inherently difficult to grasp: we speak
of if-then – if the premises are all true, then the conclusion is logically guaranteed
to be true in a valid argument form. We also speak of "logical necessity" and "logical possibility": it is logically necessary in a valid argument, and only in a valid
argument, that, if the premises are true then the conclusion is true. It
is logically impossible in a valid argument that we have all the premises true
but (and) a false conclusion. Sometimes, it can be put as follows: in a valid argu-
ment, if the premises are true, then the conclusion is necessarily true. This, however,
is not a proper way of putting it because the conclusion of a valid deductive argu-
ment does not have to be a logical truth (a logically necessarily true sentence.)
Sentences that are logically indeterminate or, as we say, logically contingent (they
are possibly true and, in other contexts, possibly false), may well be conclusions of
valid deductive arguments.
A last preliminary point to make is this. Occasionally, a definition that is given of
the concept of deductive argument is: an argument whose premises provide absolute
(logically necessary) support for the conclusion. This, however, prevents drawing a
distinction between valid and invalid deductive arguments. You will notice that this
last definition, given about deductive arguments, is the same as the definition we
gave earlier of valid deductive arguments. If we adopt this as the definition of a
deductive argument as such, then we accept that all deductive arguments are valid
and that invalid arguments are not properly to be considered as arguments at all.
This is an unorthodox view and, having briefly mentioned it, we set it aside.
The concept of argument is related to the concept of implication (and it is an
implication we state when we state an “if-then” sentence, which can be called an
implicative sentence.) The untrained person faces difficulties when it comes to
grasping the meaning of and assessing “if-then” sentences. The notion of inference
studied by Logic – logical inference – is not psychological: we are not interested in
subjective actions or impressions pertaining to drawing inferences from given
assumptions. It is an objective matter as to whether an inference is correct or incor-
rect. Similarly, for assessing whether a collection of sentences – a theory, we may
call it – is consistent or holds together logically, we need to know if it is logically
possible for all the given sentences (and those that can be correctly inferred from the
given sentences) to be true together. Lastly, whether a sentence expresses a logical
truth or a logical falsehood – or neither – is a matter that transcends our daily intu-
itions and requires systematic inquiry to learn. Surveying again the terms we have
already brought up so far: try to appreciate the kind of difficulty we will be facing.
We have mentioned terms that are not familiar from everyday discourse. We have
also intimated that something like the validity or correctness of a proof or com-
pleted inference is actually a relation – between given premises/assumptions and the
conclusion that is claimed to be following from those premises. A concept like
“relation” is also one that is not immediately transparent to our intuitions. Also,
speaking of consistency as the logical possibility that all given sentences are true
together – we need to grasp in this case another concept which is rather difficult and
not immediately accessible to our common sense (the concept of “logical possibil-
ity.”) Lastly, we have also intimated already that logical truth, and logical falsehood,
are concepts that cannot be understood based on our raw abilities to use “true” and
“false” in everyday conversations.
We can think of a language like English as having a logic, or many logics per-
haps, embedded in it; in the same way that there is a structured and regulative gram-
mar for every language, it is also the case that there is a logical grammar. In older
times, it was said that there is something called Reason and this is what Logic ulti-
mately investigates; or, perhaps, there is a certain way in which the mind works and
logic has that subject as its subject matter. For even more ancient thinkers, like the
father of logic, Aristotle, there is an ultimate foundation in the way nature is consti-
tuted, which determines how logic itself works. Aristotle would state logical laws
first by referring to how things are and can possibly be, by nature, but he would then
proceed to provide additional formulations of the logical law. Aristotle understood
something about deductive reasoning, which is one of the causes of difficulty in a
course in Logic. Aristotle’s references to nature can be omitted without harm to his
logical studies. The updated version of what the objects studied by logic are talks
about how language itself works. Any “natural” language has many words and
expressions – which we can call logic-words – whose meanings determine the logic
of the language. Some students of logic think that the deeper foundation of those
meanings lies in the logical rules that allow us to accept a logic-word in a sentence
and also to remove it from a sentence. Whether the buck stops with the meanings of
words or with the rules that govern their usage in language, this whole issue is
clearly an objective matter. There is no room here for declamations to the effect that
it “depends” on what someone thinks subjectively. The parallelism to linguistic
grammar helps to bring this out. In the same way a language has a grammar, it also
has a logical grammar. The two do not cooperate, so to speak: the linguistic gram-
mar does not necessarily help us see clearly what the logical grammar is. If you
reflect on this, you can appreciate, again, how urgent it is to study Logic. But it is a
hard subject; you should be prepared for that.
It would be surprising to proponents of older philosophic schools that language
plays such a role in determining the logic; this is a subject of interest to philosophy
and philosophy of logic and we don’t need to dwell on it longer. What you should
pay attention to is that logic is an objective matter – and you can draw the compari-
son to grammar to see that. There is something normative – something that gives us
orders and regulates how we ought to think – in logic. In the same way that it is
wrong, incorrect, to violate the grammar – this ought to be trivially obvious – it is
also objectively wrong to commit logical errors. Thus, it does not depend on opinion
if the logic of an argument is correct or a theory is free of logical pathologies like
inconsistency. We cannot say that different people have different logics – this is
nonsensical unless the word “logic” is used in some other sense but we are not inter-
ested in that.
Logic is not an easy subject to study. Evolutionary advantages have been con-
ferred on us all over long biological millennia but those advantages do not include
facility to study logic. Apparently, our biological ancestors benefited from a propen-
sity toward fight and flight, for instance, but they did not survive and reproduce
thanks to being able to make logical inferences with facility. Fleeing when perceiv-
ing danger might have been unnecessary in most cases – an inference might have
shown that there is no need to flee – but the flight strategy worked also for those
fateful and crucial situations in which survival was at stake. Thus, we are commonly
prone to anxiety conditions, but no comparable facility in inference was selected for: logic remains a hard subject for the considerable majority
of students.
We will not be studying language in this course; we will be constructing artificial
or formal languages and we will be working with them. Of course, the motivation
may well be to fathom the logic of the language but this is an incidental matter: the
formal systems we construct are what we study and, if such systems "get right"
what the logic of language is, that is an extra bonus. It is like geometry:
Einstein’s physics uses a different geometry, not the classical Euclidean one; this
does not undermine the Euclidean geometry itself but only shows that the geometry
of our universe is not Euclidean after all. Similarly, if our formal system does not
match the logic or logics of our language, that is a limitation on the usefulness or
applicability of our formal system but not an invalidation of this system. As you will
find out, we will be translating English sentences into our formal languages and we
will be working with those translations toward determining logical characteristics.
It is a sound assumption that significant fragments of language have logics that are
captured by the formal systems we study in this course. The benefit, then, consists
in having tools – apparatus – that allows us to figure out if certain logical character-
istics obtain or not. We will see now what these characteristics are – which also
shows us what subjects are studied by Logic. We should make it clear that the appli-
cation of formal techniques is possible only for deductive reasoning and this is our
demarcated territory in this text.
As we embark on our exploration of our subject, let us make clear, in a prelimi-
nary way, how we are using certain terms. We speak of sentences a lot in this text.
By “sentence” we mean, to be precise, the meaning that is expressed by a grammati-
cally articulated sentence of a language like English. Notice that in the preceding …
sentence I wrote, the word “sentence” occurs twice with two different meanings.
Our sense of “sentence” is different from that of the grammatical definition of “sen-
tence.” To simplify things, let us focus on the word “meaning”, of which we should
have some intuitive notion to begin with. The sentences of logic are the meanings.
Much fuss is made, philosophically, about the status of such meanings: what kinds
of things are we talking about? Do such things exist independently of being thought?
We are not interested in such issues, given our present interests. We need to clarify,
though, that “meaning” is understood here to be the kind of thing – whatever thing
it is – that is capable of being true or false. We say that meanings have truth
values – true or false. Nothing else has truth value. Thus, since we are using the
word “sentence” indifferently and to stand for meaning, it is correct to say that what
we are talking about as sentences are the only things that can be true or false. It
would be more precise to distinguish the grammatically articulated sentences from
the meanings. We simplify by ignoring this distinction – but it should be understood
that there is such a distinction. You can think of this in the following way: to express
the same meaning, we can use many different written or spoken, grammatically cor-
rect, sentences. We can even translate a sentence to another language. The original
sentence and the one that translates that original sentence are both expressing the
same meaning: one meaning, two sentences. Accordingly, we can see that sentences
and meanings are not the same thing!
Our sentences are true or false; they are the only things we admit as capable of
being true or false; they have to be either true or false; they cannot fail to be true or
false; and they cannot be both true and false. Strictly, what was just said applies to
the meanings (while the sentences ought to be regarded as being grammatically cor-
rect or incorrect.) Our decision to speak of sentences in a broad way blurs this dis-
tinction. It is unfortunate that we need to go over this ground – creating what comes
across as confusion in the process – but there are certain serious philosophic dis-
putes and issues in the background. For our purposes here, let it be stressed that our
sentences are capable of being true or false and this is the key notion we need to
retain. A few other details should be added. Our sentences are considered to have
packed in them sufficient content – not obvious to the naked eye, so to speak – that
ensures that whatever truth value (true or false) they take, it is not variable! We will
use an example to explicate this.
I am driving to work.
This apparent sentence has the ambiguous pronoun "I" in it, and its verb-ending
"-ing" alludes to a temporal context in which this is supposed to be happening. If
different persons uttered this sentence, it could be true in some cases and false in
others. Depending on the time context, place context, and other possible contextual
elements, this could be a true or a false sentence. But we cannot
tolerate such variability depending on context. We have to have sentences whose
meanings are settled: whether true or false, it doesn’t matter in this case, but we
have to insist on our sentence units not having truth values (true or false) which are
variable depending on how we specify who utters the sentence or contextual cir-
cumstances surrounding such utterance. Thus, our sentences are like the following –
to continue with the same example as above:
I, ---(name)---, drive to work ---at such and such a time and date--- …
We leave open spaces to indicate that every possible contextual or other variable
circumstance is specified precisely. Our sentences are either true or false: a sentence
is not true in some contexts and false in other contexts. We need to undertake all
this complicated-sounding ado for technical reasons. This is how our formal tools,
given how they are constructed, can be used properly.
Every meaningful sentence of a language like English can be used to make an
assertion. What is asserted is the meaning of the sentence, which we may call a
“statement.” An open-ended number of sentences can be used to assert the same meaning
14 1 What Logic Studies
what logic is ought to exclude the motivation for selecting a formal logical
system. If, for instance, a Quantum Logic is adopted, then it is that logic that
should be thought of as being in force and this notion should be held in isolation
from competing claims about different domains that may generate different
selections of formal logical systems. Given this last remark, universality is
weakened and preserved rather in a negative sense: whatever the logic is, it is
absurd to claim that a competing logic also holds.
2. Topic-Neutrality: The validity of arguments, the definitions of the logic-words
or connectives, and assessments of the consistency of theories are entirely
independent of subject matter. For instance, all sentences of the pattern “X and
Y” are true if the statements X and Y are both true, regardless of what sentences
are put in for X and Y and, therefore, regardless of what the contents or specific
subjects of those statements are. This characteristic overlaps with the preceding
one as we characterized it above. We may confine universality to the claim that
logic is the same across languages, so that it is actually presupposed for any
meaningful linguistic system including the case of natural languages. In that
case, topic neutrality can be seen as specifically the claim that the choice of
subject matter can have no impact on, and indeed it presupposes, the universal
logic that is at work in the production of meaning in all cases.
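Topic-neutrality can be illustrated with a small sketch (the code and all its names are ours, not the author's): the truth value of a conjunction is computed from the truth values of its parts alone, so sentences about entirely different subjects receive exactly the same treatment.

```python
# Illustrative sketch (names invented here): the truth function for "and"
# sees only truth values, never subject matter.

def conjunction(x: bool, y: bool) -> bool:
    """Truth function for 'and': true exactly when both inputs are true."""
    return x and y

# Two pairs of sentences with entirely different contents...
weather = {"It is raining": True, "It is cold": True}
arithmetic = {"7 is prime": True, "7 is odd": True}

# ...receive the same value for their conjunction, because only the
# truth values of the components matter:
print(conjunction(*weather.values()))     # True
print(conjunction(*arithmetic.values()))  # True
```

The design point: since `conjunction` never inspects the sentences themselves, the choice of subject matter cannot affect the result, which is precisely what topic-neutrality claims.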
3. Precedence: Another characteristic of logic is that its study takes precedence over
any project of reasoning or linguistic activity, in the sense that it is presupposed
as a check on the correctness of any meaningful linguistic or theoretical action.
The restrictions imposed by logic, so that statements can be accepted as proven
or views accepted as consistent, are prior to, and constrain, any generation of
meaning. We can say that logic is pre-theoretical and also that, on this basis, it
cannot be scientific: logic is not a science but is presupposed by science; no sci-
ence can take precedence over, or be prior to, all sciences and, therefore, logic is not
a science, on account of its characteristic of precedence. This view was advanced
by Ludwig Wittgenstein in his Tractatus Logico-Philosophicus. As Aristotle
observed in ancient times, students of all subjects ought to be first versed in
logic which is, then, one of the foundational disciplines along with the study of
reading and writing. The medieval curriculum of studies observed this founda-
tional pedagogical requirement in its imposition of an obligatory curriculum
called “the trivium” (three subjects, consisting in grammar, rhetoric, and logic.)
4. Normativity: Logic enforces standards of correctness which are binding and
objective. For instance, an invalid argument is “incorrect,” as the intuitive
notion has it, and this means that one should refrain from accepting it, at least
under the assumption that one is rational. More strongly, accepting an invalid
argument as valid, to continue with the same example, would be wrong under
any rational construction, and its continued acceptance would betoken not only
irrationality but also an act that is absurd and cannot be
defended.
5. Non-Self-Referentiality: Because logic is pre-theoretical, constraining any
generation of meaning, a surprising corollary is that logic cannot speak about
itself in any substantive way: logic cannot subject itself to revisions, for
instance, on pain of generating contradictions. This does not mean that we can-
not study logical systems in a metalanguage, so that we can canvass a whole
array of structural characteristics of logical systems and even detect anomalies,
which is precisely what metalogic does. The point, rather, is that the correctness
of the logical rules we use in the metalanguage, as we investigate logical sys-
tems, is presupposed and, because of that, it is not open to revision by means of
its own resources.
6. Truth-Preservativeness: The standard logic has two truth values, true and false.
In a more general way, a logic has semantic values (which can be thought of as
ways of being meaningful), as constructed semantically, and a distinction must
be imposed on those values: the distinction is between so-called designated and
non-designated value-types. For the standard logic, which has two values or is
bivalent, the value we call true is the one that is designated. Because there is
only one non-designated value, the false, this logic is elegantly symmetrical,
and has a defined connective of negation that seems intuitively appealing: thus,
if not true, a meaningful sentence has to be false and vice versa; if not valid, an
argument has to be invalid and vice versa. There are no degrees admitted in the
attributions of values. Characteristic logical properties, like validity, are defined
as preserving the designated value, which is the true in the case of the standard
logic. This means that a valid argument form cannot possibly have any
instances with all premises true and a false conclusion. In other words, the truth
of the premises, stipulated as true, must be necessarily “preserved” by the con-
clusion or, to put it differently, if all the premises are true then the conclusion
must necessarily be true. There are alternative views about what may be pre-
served from premises to conclusion so that the argument form is regarded as
valid but the view of truth-preservation is the standard one. In terms of the rela-
tion of logical consequence: if all the sentences in the set of premises are true
(taken conjunctively, or conjoined with “and”, so that the compound statement
so produced is true), then the conclusion is true and this is a matter of logical
necessity. The broader notion for logical consequence is preservation of desig-
nated value (with designated values being thought of as representing ways of
being true) but for the standard bivalent logic, this amounts exactly to preserva-
tion of the truth value true.
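The definition of validity as truth-preservation lends itself to a mechanical check (a hedged sketch; the helper names here are invented, not the author's): an argument form is valid just in case no assignment of truth values makes all premises true while the conclusion is false.

```python
# Illustrative sketch of truth-preservation (all names are ours): search the
# truth table for a counterexample row -- all premises true, conclusion false.
from itertools import product

def implies(x: bool, y: bool) -> bool:
    """Material conditional: false only when antecedent true, consequent false."""
    return (not x) or y

def is_valid(premises, conclusion, num_vars):
    """True iff no assignment is a counterexample to the argument form."""
    for row in product([True, False], repeat=num_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False  # found a counterexample row
    return True

# Modus ponens: premises X and (if X then Y), conclusion Y -- valid.
print(is_valid([lambda x, y: x, implies], lambda x, y: y, 2))  # True

# Affirming the consequent: premises Y and (if X then Y), conclusion X -- invalid.
print(is_valid([lambda x, y: y, implies], lambda x, y: x, 2))  # False
```

Note how the check never asks whether the premises are actually true; it only asks whether truth, wherever stipulated, would necessarily be preserved by the conclusion.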
7. Formality: It is an inexorable staple of textbook presentations that deductive
logic is a matter of logical form. The sentence “it rains and it does not rain at
such-and-such a place and at-such-and-such a time” is a contradiction because
any sentence of the logical form “X and not-X”, with X as placeholder or vari-
able for any meaningful sentence, is false for any logically possible assignment
of truth values to the ultimate individual components (the atomic sentences) of
the compound sentence. Similarly, argument validity is a matter of logical form,
which is aptly called in that case argument form: arguments are valid or invalid if
and only if they are instances of (they instantiate or betoken) argument forms
that are valid in the sense that they can have no instances with all premises
true and a false conclusion. Presentation of any instance of an argument form
with all premises true and a false conclusion, which is called a counterexample
1.1 General Characteristics of Logic 17
true sentence to false, we would need an even more convoluted and absurd
ontology, allowing for the erasure of facts in addition to the alleged factive sta-
tus of non-facts that underlie falsehoods, to deal with such a matter. Ridding
ourselves of such misleading and perplexing conceits, we may as well settle for
the abstract character of the things on which logical connectives operate: such
things are the truth values, true and false, themselves. Gottlob Frege, the
German mathematician and originator of modern logic in some crucial respects,
was the first to see this; although the maneuver – stipulating truth values as the
referents of meanings – lacks initial intuitive appeal, it effectively preempts
those hoary ancient philosophic perplexities and also fits neatly the widely
accepted characterization of logic as a non-empirical endeavor.
12. Non-Metaphysical: We accept that there is no need for speculation about kinds
of objects to which the discipline of logic is addressed. When semantics is pro-
vided for logical systems, stipulations about the kinds of things that are being
talked about are needed; in that case, the enterprise is not one of discovery, nor
does it aim to make defeasible claims about the world; indeed, what happens is a
check of whether a semantics is available, in the sense that we can provide a
make-believe story that is free of inconsistency. Unlike Aristotle, who considered
the foundation of logic to be metaphysical – grounded ultimately in, and sanctioned
by, the way the totality of things is structured and how it functions – the modern
view is that it is a confusion of categories to pair logic with metaphysical inquiries.
13. Incorrigibility or Lack of Empirical Foundation: Given that deductive logic is
not empirically grounded, it is a remarkable characteristic of logic that it cannot
be corrected: it is nonsensical to anticipate that perhaps what logic prescribes
could, after all, turn out not to be the case.
14. Perspicuity, Ambiguity and Vagueness: When we construct formal systems for
the study of logic, we ensure that the symbolic resources we use, equipped with
a recursive and systematic grammar, can only be used properly in ways that
render all the relevant properties perspicuous (upon proper examination) and
preclude the emergence of ambiguity or vagueness: ambiguity arises when more
than one interpretation is plausibly available to the competent practitioner;
the grammatical arrangements and demarcation of the formal language prevent
this from ever occurring. Ambiguous symbolic expressions have to be consid-
ered as nonsensical – not amenable to eking out meaning from them. Vagueness
arises when the boundaries of inclusion of an object within a category are pen-
umbral: if there is a specifiable degree to which any acceptable object has the
categorial characteristic, then we have vagueness. Concepts may indeed be
vague, although it would be far-fetched to consider every concept as being
vague. Although there are so-called “fuzzy logics” for the analysis of vague
notions, the standard logic is conditioned by the foundational acceptance that
inclusion is not vague. This means that the standard logic, as we will have
opportunities to discover, draws significantly on the classical theory of sets,
which is “sharp” or “crisp” and not vague or fuzzy; this means that for any item,
it can be determined whether it is a member of a given set or not absolutely
(either it is or it is not) and not as a matter of degree on the interval [0, 1].
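The contrast between crisp and fuzzy membership can be sketched as follows (an illustrative toy; the predicate and its thresholds are invented here, not taken from the text): classical membership answers absolutely, in or out, while a fuzzy predicate assigns a degree on the interval [0, 1].

```python
# Illustrative sketch (names and thresholds are ours): crisp vs fuzzy membership.

def crisp_member(item, s: set) -> bool:
    """Classical set membership: absolutely in or out, no degrees."""
    return item in s

def fuzzy_tall(height_cm: float) -> float:
    """A toy fuzzy predicate: degree of 'tallness' on [0, 1]."""
    return min(1.0, max(0.0, (height_cm - 160) / 40))

evens = {0, 2, 4, 6, 8}
print(crisp_member(4, evens))  # True  -- no intermediate degree is possible
print(fuzzy_tall(180))         # 0.5   -- penumbral membership, a matter of degree
```

The standard logic described above is built on the crisp case: every question of membership, like every question of truth, receives a yes-or-no answer.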
precisely the characteristics that are visible on the smaller scale; although we
presuppose “seeing” as an activity, the “seeing” we are constructing is specifi-
cally attuned to bringing out with systematic precision the elements we are
interested in. Differences recorded in the broadly accessible target impact
the specifically addressed area within which our vision is trained by means
of the calibrated instruments. Although metaphors are tricky, and should not be
accepted in place of definitions, our tentative metaphor given above tries to
showcase why we are not wading into circularity when we define our connec-
tives, in spite of appearances to the contrary.
Aristotle, who is considered the originator of the systematic study of Logic, thought
that the nature of things – consisting in the self-subsisting, organized and eternal
totality of what exists – provides the ground or foundation for logic. Aristotle would
define a logical law – for instance, that no meaningful sentence can be both true and
false – in metaphysical terms: “nothing can both have and not have a certain prop-
erty in the same respects.” It sounds as if the philosopher speaks of how things work
in nature – with “nature” taken in a broad sense, so as to include the totality of
things, possibly including objects that are not physically concrete. Aristotle would
also go on to add non-naturalistic versions of the logical laws but his opening pre-
sentation would be like the one given above – which, incidentally, is known as the
logical law of non-contradiction. This naturalistic view seems wrong, though.
A similar error affected the early development of geometry: it was thought,
rather intuitively but inappropriately as it turns out, that the postulates of geometry
are justified by the physical geometry of the universe as it is. Nevertheless, if it so
happened that a physics with an entirely different type of geometry turned
out to be the one we adopt, this would not mean that our initial geometry was defective
in itself: it would only show, as an empirical matter, that the geometry we thought
was the “natural” geometry is not the geometry of the universe after all. This does
not prove that anything in the geometry is wrong: provided that we take the postu-
lates of the geometry to be self-justifying, every statement in the geometry is not
simply true – but, rather, logically necessarily true. This shows that our geometry is
never dependent on empirical, verifiable and falsifiable, considerations. Although
there are marked differences between logic and geometry, the parallel can be used
illustratively to make the point that it is a fundamental error to think that deductive
logic is dependent for its verification on empirical considerations. Certainly, the
purpose of investigating and systematizing the logic of a language may motivate the
study of logic, but the formal systems of logic are independent of linguistic facts; it
may turn out that a given formal logic fails to hit the mark with respect to how
the logic or logics of a given linguistic idiom actually work.
Another analogy we can draw is to a game. The rules of a game must be given so
that the game can be played: the rules are not up for grabs. If we are to ask a referee,
we would not be asking for an opinion as to whether the rules are indeed the rules
of the game; at most, we may ask for a list of the rules, and we may certainly dispute
factual elements in cases of application of the rules. Even though the rules have been
forged historically, we can ignore this background, for our present purposes, and
consider the game as an independent and self-sustaining arrangement. In that sense,
dependence on empirically verifiable developments, when it comes to ascertaining
what the rules of the game are, would make it impossible to actually play the game:
we couldn’t make sure what rules to apply as we would be tracking the develop-
ments that may affect the rules. This, however, discloses an additional problem:
even as we track developments that might fix evolved rules for our game, we would
still have to know what we would be able to count as acceptable rules of the game
that emerge in the developmental process. This means that we cannot actually carry
out this enterprise: we need to have the rules set so that these rules are insulated
from empirical effects. This story illustrates quite appositely how logic is in a way
predetermined, known independently of experience in the sense that it is incorrigi-
ble or cannot be corrected or set right by appealing to empirical factors.
It is also pertinent that logic is prior to actual transactions in the sense that we
have to be able to apply rules of reasoning already if we are to reason about what is
acceptable and what is not. This point seems to apply broadly, including also the
logics that are presented in natural languages. For instance, quite clearly, we cannot
both accept and not accept some putative rule X: but this means that we are already
committed to the logical law of non-contradiction – the one Aristotle was found to
be presenting, albeit initially in a naturalistic formulation: we cannot accept both a
rule X and a rule whose acceptance would logically entail that X is not applicable.
The view of meaning reflected in the preceding comments is called extensional-
ist: we take meaning to be determined by what the word refers to, or what the word
has as its referent or denotatum. Such referents are considered, adequately for the
examination of logical properties, to be the truth values of sentences: whether sen-
tences are true or false. We do not take a meaningful sentence to be receiving its
meaning by reference to facts or states of affairs. If we adopted a view that takes
meanings to be collections of facts, those would have to be the facts relative to
which the sentence is true: we would have to make sure that all other facts are also
included in the comprehensive collection of relevant states of affairs to ensure that
there are no inconsistencies: this is like saying that a sentence would be given mean-
ing by means of all the scenes in a story, in which the sentence is included or asserted
as true. It seems, then, that truth value (true-false) takes conceptual precedence over
any way in which we would define meaning by using references to facts: this is
because consistency, as a logical concept, is defined by means of having the con-
cepts of true and false available to us in the first place.
The meaning or referent of a meaningful sentence is thus taken to be truth value
(true or false.) This may seem like a narrow view: sentences that have different
meanings based on content can be, nevertheless, true or false together in all possible
contexts: we say that such sentences are mutually equivalent (or equipollent in
older, now rather forgotten, terminology). Equivalence may be relative to context –
for instance, it might be relative to actuality: “the earth has one moon” and “the
1.2 Logical Meaning, Logic-Words and Logical Form
earth’s sun is bright” are both true, and as such equivalent, but this is relative to the
actual state of affairs and not because of any logical relations between the contents
of the two sentences. A consistent alternative story is possible in which one of the
two preceding sentences is true while the other is false. Logical relations, like equiv-
alence, should hold across all logically possible cases and not only contingently on
what happens to be the actual case, which is after all only one logically possible
case. But sentences like “a triangle has three angles” and “a rectangle has four
angles” are equivalent in a stronger sense: no meaningful story can be presented in
which either one of them is false. And yet, these sentences are not true by virtue of
logic but only as a matter of how words like “triangle” and “angle” are defined. We
do not expect that such words would be logic-words or that these words would mat-
ter when it comes to the determination of logic. But let us think, instead, of “it is not
the case that it is raining and not raining” and “it is not the case that we have class
and we do not have class.” Observe that these sentences must be true – not as a mat-
ter of psychological impression but as a matter of logic – and must be true, or neces-
sarily true, in every logically possible context. The relevant words, whose meaning
fixes the truth value as true, are “not” and “and.” These sentences are necessarily
logically equivalent – they have the same truth value (true) in every logically pos-
sible context. Once again, the content of the sentences does not matter: one is about
raining and the other about having class. In a deeper sense, of course, they are not
about such things but about the meanings of the logic-words – “and” and “not.” The
rest – the content – simply does not matter.
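The point that such sentences are equivalent solely in virtue of the logic-words can be checked mechanically (a sketch under our own naming assumptions, not the author's apparatus): two sentence forms are logically equivalent iff they agree in truth value under every assignment to their atomic components.

```python
# Illustrative sketch (names are ours): necessary equivalence as agreement
# on every row of the truth table.
from itertools import product

def equivalent(f, g, num_vars):
    """True iff f and g have the same truth value under every assignment."""
    return all(f(*row) == g(*row)
               for row in product([True, False], repeat=num_vars))

# "It is not the case that it is raining and not raining" and
# "it is not the case that we have class and do not have class":
no_rain_contradiction = lambda r, c: not (r and not r)
no_class_contradiction = lambda r, c: not (c and not c)

# True in every logically possible case, hence necessarily equivalent --
# the contents (rain, class) play no role in the computation:
print(equivalent(no_rain_contradiction, no_class_contradiction, 2))  # True
```

Only the meanings of “not” and “and,” encoded in the Python operators, fix the result; the atomic sentences could be about anything whatsoever.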
Abstracting from content – disregarding content and treating only true and false
as referents for meaning – may seem to be narrowing our scope impermissibly, but
this is not the case. We should think of the maneuver that assigns truth values as the
referents of meaningful sentences to be cutting through an infinite array of linguistic
idioms in order to circumscribe only what all those idioms have in common as logi-
cal properties. Based on this, “a triangle has three angles” is logically independent
of “a bachelor is unmarried,” even though both are true in every logi-
cally possible story; what we discern as their logically significant relationship
(that they are necessarily equivalent) is a matter simply of their both being true in
every possible case. As we see again, true and false are the “players” in this game.
These two sentences are necessarily true not because of logic-words but because of
defined meanings of words like “triangle” and “angle,” which are not logic-words.
We see, however, that their equivalence is, again, not a matter of context as they
have to be true in every context. A story about a context – a world or state of affairs –
in which these two sentences are not both true would be nonsensical.
Having adopted the view that logical meaning is a matter of extensionality, which
means that meaning is fixed by reference, we have apparently rejected another plau-
sible way of settling the concept of meaning: according to such an alternative view,
meaning can be considered to consist in sense or connotation as distinguished from
reference which is also called denotation. The notion of sense is about the content
of meaningful sentences rather than the more refined distillation that leaves us with
truth value as the meaning of a sentence. An example, a famous one, highlighting
the distinction between sense and reference is about names: the extensionalist view
also takes names to be defined by their referent. Thus, names are not characterized
in their meanings by any properties of the thing that is named but only serve as
conventional and arbitrary labels or tags for tracking an object: the meaning of the
name is, accordingly, provided fully by the referent (the item that is labeled by the
name.) The names “morning star” and “evening star” happen to refer to the same
celestial object, Venus, but their linguistically generated senses as non-logical words
were at variance with each other, with “morning star” endowed with the sense
“the first star visible in the morning” and “evening star” having the sense “the last
star visible at night.” While the variable senses of the words give rise to, and explain,
such phenomena as having wrong beliefs about stars and how such beliefs are cor-
rected over time, from a strictly logical point of view beliefs and even knowledge of
non-logical subjects are not relevant to the determination of logical properties. Thus,
it turns out that the logical identity of the two names, “morning star” and “evening
star,” is a matter of logical necessity (since they have the same referent which is, of
course, self-identical) even though the fact that they do have this same referent was
discovered empirically: sense belongs to the realm of facts but denotation or refer-
ence has to do with logical properties (like identity understood as co-reference.)
This is a neat framework; there are difficulties and complications lurking but we
bypass them for our present introductory purposes. Having said all this, we should
also point out that ascent to logics that investigate intensional notions – as distin-
guished from extensional notions – would be needed for taking into account context-
sensitive shifts of meaning which seem to be required for transcending extensionality.
These are controversial subjects, which we cannot delve into in detail in this text.
Importantly, we also abstract from context in fixing the logical meaning of a
meaningful sentence: a meaningful sentence has a referent – a truth value, true or
false – which does not vary depending on any dynamic context (such as the passage
of time, the change of place, the accrual of information, or the shifting of indexi-
cal pointing to this or that item.) A clever trick that ensures this is to consider all
elastic contexts as embedded in a conceptual (but not stated) continuation of the
sentence: for instance, “Socrates is happy” is understood as “Socrates is happy at
such-and-such a place and at such-and-such a time, etc.” where “etc.” is meant to
encompass any conceivable dynamic context whose variable influence may have an
impact on the truth value of the sentence. Notice that it does not matter whether the
sentence is true or false but that its truth value, if indicated, is immutable. This strat-
egy – consisting in implicitly embedding notionally in the sentence what is known
as the “ceteris paribus” (“all things being equal”) clause – ensures that we can con-
sistently take as the referent of a meaningful sentence its truth value (true or false,
not both and not neither.) Whatever price we pay for this adjustment is something
we will not bother about for now; on certain occasions, we will be able to comment
on ramifications of this stipulation.
The meaning of a meaningful sentence is, strictly speaking, a separate thing from
the carrier-sentence but it is not important for our purposes to inquire into what kind
of thing this meaning of a meaningful sentence is. Aristotle would not appreciate
this avowed indifference to metaphysical considerations and he would also insist on
including in our logical studies something else we have deliberately excluded – the
concept of the “truthmaker,” which refers to what it is that makes true sentences true
(and, somewhat puzzlingly, what makes false sentences false by failing to be or
occur.) But we are not interested in truthmaking conditions, as we have remarked
rather emphatically, and we will find out that our systematic study can proceed
completely and successfully without ever needing to undertake an examination of
what makes true sentences true and what makes them false. This important observa-
tion goes together with another point we repeat often in this text: deductive logic is
not at all related to empirical (factual, descriptive, etc.) issues about the actual
world: it is all about the logical structure or pattern of our schematic formulas and sym-
bolic arrangements – hence, it has nothing to do with whether statements are
actually true or false. Related to all this is the emphasis, which can be
discerned in definitions of concepts, on logical necessity and logical possibility –
and not on actuality as such. Unfortunately, logical-modal concepts like logical
necessity and logical possibility can be recalcitrant to our untrained intuitions but
we will need to overcome such difficulties cautiously and systematically.
We can agree with Aristotle that only meaningful sentences are properly avail-
able for laying-down-as-true or correctly asserting. Moreover, we set up our basic
logic so that any meaningful sentence (more precisely, the meaning of the sentence)
can have one of two semantic characterizations: true or false. But we also add that
no meaningful sentence can be neither true nor false and no meaningful sentence
can be both true and false. This move is, so to speak, Aristotelian but, unlike
Aristotle, we do not claim that we have sanctioning by the way things are or by the
way nature works. We may even say that this is an arbitrary move – we just lay
foundations for a logic-game; we could have played a different game but the game
we are playing is motivated by an interest in the study of the logic of languages like
those people speak around the world. A simplified but direct way of saying this is:
rather than start with nature, as Aristotle thought we ought to do, we begin with see-
ing logic as conventionally embedded in linguistic constructions. But, notice that
we are not studying the logic of language but an instrument that may or may not
work in applications. Aristotle could see this maneuver by being reminded of the
Euclidean geometry, which he knew, but even then there is something that might be
surprising for him in what follows: Euclidean geometry is its own internally con-
structed, perfectly complete and self-standing system regardless of whether the
geometry of our universe is or is not Euclidean – it turns out that it is not, in
relativistic physics, but this does not undermine Euclidean geometry; it only
establishes that there are limitations to its applicability.
In terms of logical status, a sentence can be characterized as one of the following
(and, of course, we are talking not necessarily of simple sentences only but also of
compound sentences that are made from simple sentences): a logical truth (or tautology),
a logical falsehood (or contradiction) and a logical contingency (or logically inde-
terminate or logically indefinite sentence.) We are repeatedly examining these con-
cepts in this text – with the repetition serving learning objectives. We stipulate that
a simple (single, atomic) sentence that we symbolize in our formal system cannot
be a logical truth or logical falsehood. Of course, a sentence like “a triangle has
three angles” has to count as true necessarily (although not logically necessarily, but
as so-called analytically true), but we would have to tinker with providing a special
lexicon to allow our symbols to stand for different logical statuses; instead, our
stipulation – remember – is that any sentence can be true or false, not neither and not
both, and we codify this and show it at the level of the ultimate atomic components
of our symbolic expressions (formulas) that can be used to translate linguistically
meaningful sentences.
But something else happens too: the sentence “a triangle has three angles” is, as
we call it, analytic and true (with more to say on this in 1.4.) The meanings of the
words in this sentence suffice to establish a fixed determination that the sentence is
true. But these words, which matter and adequately allow determination of truth
value (true or false) are not logic-words; they are non-logical words like “triangle”
and “three” and “angle.” Instead, a sentence like “either it is raining or it is not rain-
ing” is logically necessarily true: the meanings of words that matter in this case to
fix the truth value of this sentence as logically necessarily true are the logic-words
“either-or” and “not.” It would be news to Aristotle that the logical characteristics
we are studying are to be taken as ultimately based on the specified
definitions of special words, the logic-words like “not,” “and,” “either-or,” and
other such words (but not all logic-words can be studied through sentential logic, as
we will see.) In textbooks, the standard way of explaining how we draw such a dis-
tinction between the kind of sentence we called analytic but not logically true and
the logically true sentence is by saying that we are dealing with logical forms in
deductive logic: while the logical form of “either it is raining or it is not raining” is
the form of a logical truth – “either X or not-X” for any sentence plugged in for the
variable X – the sentence “a triangle has three angles” has the logical form X; and
remember that we have postulated (properly, it turns out) that atomic sentences –
symbolized so that they have the logical form “X” – can be possibly true and possi-
bly false (one or the other, not both, but not necessarily true or necessarily false.)
The approach that uses “logical form” to make the case is different from the
approach that traces this all ultimately to the meanings of logic-words: we can have
the first without having the second. We do not enter into philosophic issues or
details, given our present purposes. We continue with this theme in 1.4 but, for now,
we may glance at some seminal points.
• “Either X or not-X” is the logical form of a tautology or logical truth.
  ◦ “X and not-X” is the logical form of a contradiction or logical falsehood.
  ◦ “If X, then X” is the logical form of a logical truth.
  ◦ “Only if X, then X” is the logical form of a logical truth.
  ◦ “X if and only if not-X” is the logical form of a logical falsehood.
  ◦ Etc.
    ▪ The negation of the logical form of a logical truth is the logical form of a
      logical falsehood – and the other way around.
• This means that any compound sentence that has the form “either X or not-X” is
a logical truth. And the same for compound sentences that have the logical forms
of logical truths or falsehoods – they are themselves, respectively, logical truths
or logical falsehoods.
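The claims above can be checked mechanically by brute-force enumeration of truth values. Here is a minimal Python sketch (the function names, and the rendering of logical forms as Boolean functions, are our own illustrative choices, not part of the text’s apparatus):

```python
from itertools import product

def is_tautology(form, num_vars=1):
    """A form is a logical truth if it comes out true under every assignment."""
    return all(form(*vals) for vals in product([True, False], repeat=num_vars))

def is_contradiction(form, num_vars=1):
    """A form is a logical falsehood if it comes out false under every assignment."""
    return not any(form(*vals) for vals in product([True, False], repeat=num_vars))

# "Either X or not-X" is a logical truth; "X and not-X" is a logical falsehood.
print(is_tautology(lambda x: x or not x))            # True
print(is_contradiction(lambda x: x and not x))       # True
# "If X, then X" (truth-functionally: not-X or X) is a logical truth.
print(is_tautology(lambda x: (not x) or x))          # True
# The negation of a tautologous form is a contradictory form.
print(is_contradiction(lambda x: not (x or not x)))  # True
```

Any compound with one of these forms inherits the verdict, whatever sentence is plugged in for X.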
1.2 Logical Meaning, Logic-Words and Logical Form 27
1.2.1 Exercises
1. Plug in any meaningful English sentences for the variables (capital letters from
{X, Y, Z, W}) in the given logical sentence-forms to generate instances of
those forms.
a. If X, then not-X.
b. Unless X, then either Z or not-Z.
c. Neither X, nor Y.
d. If X and Y, then either not-Z or W.
e. It is not the case that X implies Y.
f. Only if X, Y or Z.
g. If it is the case that necessarily X or not-X, then it is the case that necessarily
X or actually Y.
Does it matter what specific sentences we put in the variable places? Why or
why not?
2. In the preceding exercise, circle the logic-words. Notice that the sentential logic
we will be studying can “see” – has symbols and specified ways for managing –
some but not all of these logic-words. Still, we can circle the logic-words in the
preceding exercise. How can we do that?
3. We can mark the place in the represented logical form with some other metalin-
guistic symbol: for instance, we can use different types of line-symbol com-
pounds (say, from {___, ---, ===, |||}). Return to exercise 1 and
systematically use these line-symbols to express the forms. What do we mean
by “systematic”? Why doesn’t it matter what symbols we use to mark the places
of the variables in the logical form?
4. Are the circled words in the following sentences logic-words or not? (Some are
and some are not. Which ones are and which ones are not logic-words?)
a. If a triangle has three angles, then a rectangle has four angles.
b. It is not the case that a rectangle has two angles.
c. It will always be the case that a triangle has three angles.
d. It is not the case that if it rains then it snows.
e. If it is possible that God exists, then it is necessary that God exists.
f. It is possible that aliens exist, but it is also possible that they do not exist.
g. Only if it rains and pours is the game cancelled.
5. The meanings of logic-words are defined by reference to true and false: for
example, the meaning of “not” is: it reverses the truth value (true, false / T, F)
of a sentence – if applied to a true sentence the result is a false sentence and if
it is applied to a false sentence the result is a true sentence. The definition com-
prises the entire schedule (collection, agglomerate, nexus) of the cases/value-
assignments: {T=>F, F=>T}. Can you tell what the meanings are of the following
logic-words? Find the logic-words embedded in forms. They are, as we know
by now, the fixed parts of the forms. For the binary logic-words (the ones that
connect two sentences) the nexus of all possible combinations of truth-value
assignments is: {TT, TF, FT, FF}. For instance, the meaning of “and” can be
represented as follows: {TT=>T, TF=>F, FT=>F, FF=>F}. This should be obvi-
ous: a compound sentence with “and” as the logic-word connecting the indi-
vidual sentences, is true only in the case in which both connected sentences are
true; it is false in every other case. Try to write the meanings of the logic-words
given below. Another way of saying this is: try to give the truth conditions of the
logic-words given below. The meanings of the logic-words are their truth condi-
tions. Can you explain this in some detail?
a. Both X and Y. [Is the meaning different from the meaning of “and”? Notice
that we are talking about the logical meaning, not linguistic uses that serve
various purposes in language: the logical meaning is the truth-conditions, as
we have explained.]
30 1 What Logic Studies
We don’t know how we can give truth conditions in such a case even though
“necessarily” is a logic-word too. We say that a logic-word like “necessarily”
is not truth-functional. You will learn more about this in due time. Now try to
give an analysis for why each of the logic-words in the sentences given below
is not truth-functional.
b. It is logically possible that X. [It might be easier to do by plugging in
instances of false sentences for X.]
c. X is true before Y is true.
d. It is morally obligatory that X.
e. It is known that X.
f. It is believed that Y.
7. Words like “all” and “at least one” are also logic-words but it is difficult to see
how we may talk of truth conditions in explicating their meanings.
But here is an idea. “All” has some meaning-connection with “and” and “at
least one” is related to “either-or”. But there is a catch. We have to specify the
universe of things we talk about and to ensure that this universe of things does
not have an infinite number of things in it. How would we express conjunctions
or disjunctions of an infinite number of sentences?
a. Express the sentence “All persons are students” over a universe or domain
{John, Mary} so as to bring out the connection in logical meaning between
“all” and “and” – so that the truth-conditions for “and” are made to do the
work for expressing the truth conditions for “all.”
b. Now express the sentence “At least one person is a student” over the same
universe, {Mary, John}, so as to bring out the meaning-connection between
“at least one” and “either X or Y or both.”
8. Consider the sentence: “Mary is a student.” Are there any logic-words in this
sentence? Defend your answer.
9. The logical form of a sentence (for instance, “X and Y” being the logical form
of “It is Sunday today and there is a football game today”) is not informative
about the world of experience, empirical reality, facts. Explain what this means.
10. If the sentence X is true, then it has to be true that “X or Y or both X and Y.” This
is because, X being true, indeed at least one of X and Y is true: so, “either X or
Y or both” is true. We have a relationship between “X” and “either X or Y”
which, as we will learn, is a relation of implication: “X” implies “X or Y.” It is
logically impossible for “X” to be true and “either X or Y or both” to be false.
The first logical form is related to the second logical form so that the first
implies the other in the sense we have indicated. Does it matter that in language
we would not see the point of inferring “either X or Y or both” from “X?”
11. Can any word expressing a property or attribute be a logic-word? (Consider
predicating anything of anyone: for instance, “Schlup is a student” or “Tara is a
teacher.” Can such a sentence be logically necessary? Or is it logically
contingent, which means that it is logically possible for it to be true but also
logically possible for it to be false – regardless of what may actually be the case
if your predications are matched to names of actual entities?)
12. Is the phrase “is identical with” a logic-word or is it not? (Interestingly, “is
identical with” is like a predicate, which we considered in our preceding exer-
cise, if we conceive broadly of a predicate so that it can be applied to more than
one object – such predicates may be called relational predicates. But consider
also that the principle by which we state that “everything is identical to itself”,
which we may call the principle of self-identity, has to be a logical principle.)
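Several of the ideas the exercises turn on – the meaning of a binary logic-word as its full schedule of truth conditions (exercise 5), “all” and “at least one” over a finite universe (exercise 7), and “X” implying “X or Y” (exercise 10) – can be illustrated by brute-force enumeration. The following Python sketch is ours, with sentences rendered as Boolean values and functions; the sample fact-assignment for the universe {John, Mary} is hypothetical:

```python
from itertools import product

# Exercise 5: the meaning of a binary logic-word is its full schedule of
# truth conditions over the four cases {TT, TF, FT, FF}.
def truth_conditions(connective):
    return {(a, b): connective(a, b) for a, b in product([True, False], repeat=2)}

print(truth_conditions(lambda a, b: a and b))
# {(True, True): True, (True, False): False, (False, True): False, (False, False): False}

# Exercise 7: over the finite universe {John, Mary}, "all" reduces to "and"
# and "at least one" reduces to inclusive "either-or".
is_student = {"John": True, "Mary": False}          # hypothetical assignment
all_students = is_student["John"] and is_student["Mary"]   # "All persons are students"
some_student = is_student["John"] or is_student["Mary"]    # "At least one person is a student"
print(all_students, some_student)  # False True

# Exercise 10: "X" implies "X or Y" – no assignment makes "X" true
# while "X or Y" is false.
counterexamples = [
    (x, y) for x, y in product([True, False], repeat=2) if x and not (x or y)
]
print(counterexamples)  # [] – the implication holds
```

Note that the quantifier trick works only because the universe is finite: each quantified sentence unpacks into a finite conjunction or disjunction.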
metaphors have been tried (like comparing the sentence to a vehicle and the mean-
ing to what is carried by the vehicle) but such metaphors are regularly found want-
ing. Although the debates arising from such profound philosophic inquiries are
fascinating and intrinsically worth perusing for their depth and challenges they pose
to the human mind, this is not the proper place for engaging in such issues. It is to
be understood from now on that “sentence” serves the purpose in this text of refer-
ring to the meaning of a grammatical sentence, which can only be true or false; we
are not referring to the grammatically formed concatenations of symbols we
write in language or to the succession of phonemes that constitute spoken units.
Whenever “sentence” is also used when discussing grammatical sentences, context
removes ambiguity.
When we study Formal Logic, we construct formal languages of symbols and
impose by formal stipulation on such languages specified grammatical conventions:
the term “sentence” could emerge again, ambiguously, as a candidate for referring
to the properly formed symbolic expressions of our formal language.
Instead, we opt for the term “formula” in that case: a formula is a symbolic expres-
sion, understood to be referring to nothing specifically and to be a constructed
aggregate of symbols. Such a formula, or symbolic expression, may be well-formed
within a specific formal language if it is constructed in accordance with the gram-
matical (also called syntactical) rules of that formal language. Otherwise, the for-
mula is not well formed; we can say that it is ill-formed. An ill-formed formula is
ill-formed relative to a specified formal language and its grammatical rules: the
specified formal language cannot “read” the ill-formed formula given the formula’s
relatively ungrammatical construction. There may or may not be another formal
language for which the given formula is readable because it is, relative to that other
formal language, a well-formed formula. This is not our concern, however, since we
are exclusively focused on operating within the grammatical conventions of our
given formal language. There are significant advantages we gain by using formal
languages. This will all be explained and experienced in due time.
The above remarks adumbrate an early hint about the interesting relationship
between meaning and symbolic notations. Significantly, in deductive logic, we are
speaking of logical meaning, which is not the same as the meaning-content of sen-
tences. This is a surprise, perhaps a stumbling block to grasping what the enterprise
of logical study is all about. As we will learn, the empirically verifiable – descrip-
tive, factual, contingent – content of meaningful sentences is not our business in the
study of deductive logic. An intuitive illustration of what is afoot can be attempted:
take the sentence “Rob is President.” We should say, properly – although we might
omit such pedantic precision if we can trust that it is all understood – that the mean-
ing of this sentence can be true or false. Whether it is true or false is not for logic to
analyze: it is a factual matter, more narrowly a historical issue. Logic scans this
sentence, or, properly, the meaning of this sentence, as a possibly-true-possibly-
false unit of meaning. From the standpoint of logical analysis, what matters is that
this is a meaningful sentence – hence the meaning can be true or false. This is not a
sentence (meaning) that is necessarily true or necessarily false. It is a logically con-
tingent or indeterminate sentence (using “sentence”, as we have arranged, to mean
“meaning.”) A sentence like “it is raining here and now but it is not raining right
here and now” is a sentence that is logically necessarily false! This sentence is com-
pound or complex: it is made of two single or atomic sentences that are connected
by “and” and one of the two conjuncts is operated-on by “not.” These are logic-
words, as we will explain in due detail later on. “Not” placed in front of a true sen-
tence yields a false sentence, and the other way around. Since
it interacts with true-false to alter logical meaning (to turn true to false and false to
true), “not” is a logic-word. “And” is also a logic-word: in this case the available
possibilities are four – these are the combinations of true-false for the two sentences
which are connected with “and.”
Let us continue with our illustrative example. For logical analysis purposes,
“Rob is President” is not about the factual, descriptive, confirmable empirical con-
tent: it is a possibly-true-possibly-false unit: by unit we mean the smallest possible
grammatical and representable unit that can hold or carry logical meaning, which is
the meaning of a single or atomic sentence. A good question is: how do we know
and how do we determine that this is an atomic sentence? Let’s say at this point that
there are no logic-words – like “not” or “and.” Moreover, it is relevant indeed that
we are not at this point equipped to “look” inside the meaning and into its composite
parts. We take the whole sentence as a block, so to speak.
Interestingly, a sentence like “Mara is President” is also a possibly-true-possibly-
false-unit for our purposes of logical analysis. There is a dramatic difference in
content between the two sentences but this does not matter for logic. We are not
contending that it does not matter as such; only that it is not relevant to what logic
is bent on examining. Let us explain this. We can see intuitively, hopefully, that
logic should analyze and correctly determine such matters as: how two sentences
are related in proving one from the other or in having them both consistently
included in a theory or viewpoint. Given one sentence, does the other “follow”, as
we may say in everyday parlance? In our case, the answer is negative. Neither sen-
tence implies the other. That Rob is President does not imply that Mara is President;
nor does it work the other way round. You might actually think that either sentence
implies the negation of the other – if Mara is President, then it cannot be the case
that Rob is President. But you are assuming that there is exactly one President. This
may be true and it is actually the case for familiar political systems that mandate a
singular executive; but from the standpoint of logic, it is, again, possibly true and
possibly false; it is not a matter of logic how the institutional machinery of govern-
ment is set out. You would have to add as an assumption that “there is exactly one
President” and then, indeed, lo and behold, you should be able to prove that either
one of our initially given sentences implies the negation of the other.
When we speak of implication, what we mean, precisely speaking, is this: A
implies B if and only if it is logically impossible that A is true and B is false. We will
see this again later in this text, time and again. Notice how our increasingly more
precise definitions of the concept of implication are built with the concepts of true
and false. This is the key to grasping why our “game” is a game with true and false
(at least, in one way of presenting what logic does – keeping away from the
1.3 Sentences and Meanings 35
It ought to be obvious by now that logic has a certain generality and foundational
character about it: logic compels assessment of any and all theories: any theory has
to be consistent, regardless of what the subject matter is, or, otherwise, it is an
absurd no-theory.
[Figure: Logically possible worlds – worlds in which “Mara is President” is true, worlds in which “Rob is President” is true, and worlds in which “Mara is President and Rob is President” is true.]
Let us return to our running example. If we were to add the assumption that there
is exactly one President, then we have three, not the initial two, sentences. Try to see
that this new viewpoint, comprising all three sentences, is inconsistent! They cannot
all be true together. Of course, we could withdraw the last sentence and we are back
to a consistent set of sentences; or we could withdraw one of the other two sentences
to obtain a consistent set. Try to practice this. But the three sentences together form
an inconsistent collection of sentences.
Let us symbolize the sentence “Rob is President” by “R” and “Mara is President”
by “M.” Now, suppose that we interpret our symbols R and M so that they have dif-
ferent contents. Nothing changes about the comments we made above: both sen-
tences are capable of being true and of being false (but not true and false in the same
case); it follows that they are mutually consistent – there is at least one combination
of them both being true. Now, we saw that the introduction of an additional assump-
tion, “there is exactly one President,” unsettles matters as it generates a new and
inconsistent collection of sentences. Let us symbolize, “there is exactly one
President” by “P.” As we said, however, we cannot turn to logical analysis to be
informed about a regulation as to how many presidents there may be. We can think
of this in the following way: surely, it is logically meaningful to have a stipulation
that exactly one president ought to be allowed but it is also logically meaningful that
two presidents be allowed, and so on. You could have one world in which the rule is
that one president is allowed, or mandated; another world in which two presidents
are allowed or mandated; and so on. These are all logically possible worlds. Our
actual state of affairs or possible world is just one of all the possible worlds. Logic
has generality: it ought to apply to all possible worlds. If we labeled each possible
world by “1” or “2” and so on, reflecting how many presidents are mandated for the
system, those labels would characterize such worlds. Instead, we might as well use
our sentences – “exactly one president is allowed”, “exactly two presidents are
allowed,” and so on. But the logical characteristics ought to be generally applica-
ble – not confined to any specific possible world but applicable for all possible
worlds. It follows that we cannot use logical truths (or logical falsehoods, for that
matter) to label worlds.
Given R and M, if we add P, we cannot detect any inconsistency. We need a more
fine-grained logical structure. Symbolizing the restriction that compels having
exactly one president by a single sentence symbol P does not show us sufficient
detail of logical structure. We could take this added sentence symbol, P, and use it
to label some random logically possible world. We don’t see the inconsistency. Our
task becomes how to express this restriction by showing more structural detail in
our representation of the restriction. Here is what we can do: we can represent the
restriction by “M if and only if not-R” (or “not-M if and only if R.”) Indeed, the
restriction is logically equivalent (it is true/false under the same logical conditions)
with the suggested expressions. What we mean by logical conditions is this: the
whole collection of true-false values, which we get by assigning true and false to the
symbols for all possible combinations. This may seem like a forced trick but let us
assess what has happened: we have discovered that we can have a perspicuous nota-
tional way of expressing logical structure, so that the logical properties can be eval-
uated. At the same time, we have pressed that logical characteristics must be
invariable across any logically possible cases or states of affairs or possible worlds.
The restriction we added is not itself a logical postulate: we may think of it rather as
an extra-logical meaning postulate (i.e., as a postulate that is not about logical
meaning but about “meaning” in the common sense that includes non-logical con-
tent of sentences. Once the restriction is added, however, logic again is inexorably
put to the task of checking for consistency.)
Let us examine, in brief, how the addition of the restriction, expressed so that
sufficiently fine-grained logical structure is shown, results in an inconsistent triad of
sentences. We cannot have any logically possible world in which all three sentences
are true together. Such a world would be notionally understood as a logically impos-
sible or absurd world. We show below the possibilities. (We have been using this
heuristic device of “possible worlds” without, as before, entering into metaphysical
issues as to what kinds of objects we may have in mind to make sense of such
“worlds.”)
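The inconsistency check just described can be carried out by brute force. In the Python sketch below (the representation is our own illustrative choice), we enumerate the four logically possible worlds for the pair (M, R) and test whether M, R, and “M if and only if not-R” can all be true together:

```python
from itertools import product

# The four logically possible worlds for the pair (M, R).
worlds = list(product([True, False], repeat=2))

# Worlds in which M, R, and "M if and only if not-R" are all true together.
consistent_worlds = [
    (m, r) for m, r in worlds if m and r and (m == (not r))
]
print(consistent_worlds)  # [] – the triad is inconsistent

# Withdraw the restriction: the pair {M, R} is consistent again.
print([(m, r) for m, r in worlds if m and r])  # [(True, True)]
```

No world survives the triple test, which is exactly what the inconsistency of the triad amounts to; dropping the restriction restores at least one world in which the remaining sentences are true together.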
referents or denotata for purposes of meaning; such abstract objects are not to be
understood as incurring any ontological debts. Metaphysical considerations may be
set aside as irrelevant to our present purposes of logical study. The reference to
objects plays a formally mandated and circumscribed role – as in a case of counting
on a plausible narrative to enable us to make sense and track our systematic enun-
ciations but without attaching any relevant significance to whether our posited
things actually exist or not. For our semantic purposes in sentential logic, the refer-
ents of the meanings of sentences are truth values – true or false, not both true and
false and not neither true nor false.
There is an alternative philosophic view, which comes with significant recom-
mending credentials, according to which logical meaning is not a matter of truth
conditions but, rather, it is generated by the rules under which logic-words are cor-
rectly asserted in a language. Since we are bent on the project of studying moti-
vated logical or formal languages, we need to know how this alternative view is
presented in the context of formal logical analysis: when we turn to the study of
what we call natural deduction methods for proofs, we will witness how rules are
laid down for manipulating introductions and eliminations of the symbols for the
connectives of the logical languages. According to an influential view, the logical
meaning of a logic-word, or a connective of the formal system, is determined by
the rule that governs the introduction of the symbol for the connective. To return to
the linguistic setting: a competent speaker of a language like English will accept as
correct the assertion of a sentence with the form “X and Y” only if both X and Y are
correctly assertable. Accordingly, the meaning of “and” is established inherently not
by the conditions under which “X and Y” is true and false but by the conditions
under which “X and Y” is rightly assertable: although assertability seems related to
truth, the point is that “and” is introduced in the sentence that is to be correctly
accepted in language under the proper conditions. There are also rules that manage
the elimination of connective symbols – or of logic-words in the linguistic setting.
For instance, X is accepted if it has been asserted that “X and Y”, and so is Y: this
means that the rule for the elimination of “and” is such that each of the con-
joined sentences is accepted in assertion once the sentence “X and Y” has been
laid down. We will not pursue this philosophically deep subject in any detail in the
present context.
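As a rough illustration of this rule-based picture, the introduction and elimination rules for “and” can be mimicked as operations on a stock of asserted sentences. The Python sketch below is entirely our own invention (names, tuple tagging, and the sample sentences included) and stands in for no standard formalism:

```python
# A stock of correctly asserted sentences; compounds are tagged tuples.
asserted = set()

def and_intro(x, y):
    # "X and Y" may be asserted only if X and Y are each already asserted.
    if x in asserted and y in asserted:
        asserted.add(("and", x, y))

def and_elim(conjunction):
    # From an asserted "X and Y", each conjunct may be asserted.
    if conjunction in asserted and conjunction[0] == "and":
        asserted.add(conjunction[1])
        asserted.add(conjunction[2])

asserted.update({"It is Sunday", "There is a game"})
and_intro("It is Sunday", "There is a game")
print(("and", "It is Sunday", "There is a game") in asserted)  # True
and_elim(("and", "It is Sunday", "There is a game"))
print("It is Sunday" in asserted)  # True
```

On this picture the meaning of “and” is exhausted by the conditions under which the two rules license an assertion, with no appeal to truth conditions.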
Let us start with the truth conditions that are stipulated for the atomic or indi-
vidual sentence, which is taken as the absolutely minimal unit in which meaning
can fit or by which meaning can be expressed. The nexus of mathematically possi-
ble value assignments, on the stipulation that we have exactly two truth values, true
and false, is: true or false, not both, and not neither. Thus, there is a restriction on
those value assignments since we do not allow both values to be assigned and we do
not allow that neither value is to be assigned. We have also said that these are exactly
(all and only) the available mathematically possible options, under the restricting
stipulation, but we should keep in mind that the issue about how logic is different
from mathematics is not trivial. The options can be characterized as logical in the
sense that we have applied the restrictions indicated. Pressed on what objects are
referred to or denoted by meanings of declaratory sentences, we stipulate that those
objects are the truth values. Once again, we shrug off any metaphysical queries
about the nature of such abstract objects.
Based on what we have investigated, the meaning of a logic-word like “not” –
called connective, even though it does not connect two sentences in this case – is
characterized completely as a matter of truth conditions: for any formula, its nega-
tion is false when the formula is true and it is true when the formula is false. This
is all, it is sufficient and there is no other possible case. Remarkably, we cannot
find anything in nature or in any narrative we could construct about naturally
occurring or descriptive states of affairs, to which we could point as the referent
of “not.” We might speak of dogs and cats and we could characterize symbolically
those entities by constructing symbols that represent ostensibly the natural appear-
ance of the cats and dogs in the natural world but we cannot effect this for a logic-
word like “not.” Accordingly, our semantics is not, and should not be, like what
we may devise when we deal with the subject matter of a natural science. This,
again, is eminently satisfactory in a sense, because logic ought to have a certain
character of generality and precedence (in the sense that it is presupposed as being
in place before and so that we can engage in any science or any discipline): logic
should not be dependent on naturalized objects and we are content that our account
of how the formal system is constructed and how we can speak of it captures some-
thing about the character of logic. It is also notable that there is something about
our enterprise that seems rightly reducible to operations with symbols. Such sym-
bolic notation affords us perspicuous views and ensures that ambiguity cannot
arise but there is something more to it: the generality of logic – the overall appli-
cability of logic that seems to be self-justifying in a way – can be explained by
taking the logical formalism to be something of a “game” with symbols. We can
certainly motivate different such games on the basis of claims that we “get it
right” when it comes to how the logic of the language works, for instance, but a
game in and of itself is closed off and it should sound wrong to ask for a higher
justification of a game as such. This is fascinating as it shows us that there could
well be many logics that can be constructed, with the question of correctness not
arising; but it can also happen that a given formal logical system can be held up to
external criteria of applicability with respect to a specified linguistic target. As for
the question regarding “where the logic of the language comes from”, this has
been a challenging philosophical issue. One option is to consider this as a brute
fact arising from what the definitions of the logic-words in a given language hap-
pen to be. This would suggest that the logic of language is a matter of arbitrary
convention – a prospect that is anathema to the traditional view that logic is abso-
lutely binding across universal boundaries and not something that can emerge
variably in different contexts.
1.3.2 Exercises
1.4 Arguments
First, we need to define the term “argument” as used in Logic. The term does not
have the common meaning of “disputation” or of the “act or process of engaging in
a debate or dispute.” An argument is a collection of meaningful sentences, one of
which, called conclusion, is presumed to be supported by all the others taken
together (conjunctively, or understood as being joined by “and”.) Strictly speaking,
an argument is a collection of the meanings (called statements or propositions).
Many texts use the term “proposition” instead of “sentence” but, as we have already
hinted, other logicians are rather spooked by the kind of thing a proposition would
be. Another commonly encountered term is “statement.” We can ignore such issues
here, but we need to notice certain things:
A. An argument is like a proof. Think of a proof, completely laid out, with prem-
ises (also called assumptions) and the conclusion. The pretense is that the con-
clusion follows from all the premises. But what if it doesn’t? This is one of the
subjects Logic, and only Logic, studies. How do we evaluate arguments? What
makes an argument “correct?” By “correct” we mean that IF the premises of the
argument are true, then the conclusion ought to be accepted as true. This “ought”
is not moral, it is not pragmatic (as in “you ought to take such and such a route
if you want to get to the city faster.”) It is a logical “ought.” We can think of an
argument, then, as a relation: between the premises and the conclusion. A state-
ment by itself cannot be an argument. A collection of sentences, which may be
thought of as a theory, cannot be an argument: there has to be a conclusion, to
have an argument. The argument in the sense used by logicians comprises the
premises and the conclusion and the assessment of the argument evaluates the
relation between the premises (all the premises, or, we can say, the set of prem-
ises) and the conclusion. In common parlance, we may say, with idiomatic flair:
the conclusion follows from the premises; the premises support the conclusion. Given
the premises as true, the assessment of the correctness of an argument regards
whether the conclusion is true. We can also think of this as a matter of truth
preservation: given, hypothetically, that the premises are true, is the conclusion
true too?
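Truth preservation, so understood, can be tested exhaustively for truth-functional arguments: the argument is correct (valid) just in case no assignment of truth values makes all the premises true while the conclusion is false. A minimal Python sketch of such a check (names ours; “if-then” rendered truth-functionally as “not-X or Y”):

```python
from itertools import product

def is_valid(premises, conclusion, num_vars):
    """Valid iff no assignment makes every premise true and the conclusion false."""
    return not any(
        all(p(*vals) for p in premises) and not conclusion(*vals)
        for vals in product([True, False], repeat=num_vars)
    )

# Premises "X" and "if X then Y" (truth-functionally: not-X or Y); conclusion "Y".
print(is_valid([lambda x, y: x, lambda x, y: (not x) or y],
               lambda x, y: y, num_vars=2))                         # True
# Premise "X or Y"; conclusion "X" – truth is not preserved (X false, Y true).
print(is_valid([lambda x, y: x or y], lambda x, y: x, num_vars=2))  # False
```

The hypothetical character of the premises is built into the test: we never ask whether the premises are actually true, only whether their joint truth would force the truth of the conclusion.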
B. As we are working with only two logical possibilities – true and false – for
sentences (or, better, for the meanings of sentences), we can grasp intuitively
that one of the two values, the true, is the “winning” one. Technically, this is
called designation: the value true is the designated value. The conclusion is the
new sentence – the one that presumably follows or is proven by the given prem-
ises. We must be able to ascertain that the conclusion has this winning semantic
(meaning-related) characteristic of truth – IF the premises are taken as being
true, THEN the conclusion also has this designated value of being true, and the
whole relation (between all the premises and the conclusion) is logically
necessary.
C. The notion of “if-then”, which we are clearly using here, causes grief to begin-
ning students. Caution is needed. Notice, again, that the laying down of the premises
as true (thus, as having the winning semantic characteristic of being true) is taken
as given regardless of whether the premises are “in fact” or actually true; it is
simply laid down that they are all true and then the evaluative issue arises as to
whether the conclusion is also true on the assumption that all the premises are
true. The crucial issue is whether, on the assumption of having all true premises,
the conclusion (the new, inferred, and presumably proven) sentence is also true.
We can say that what we are assessing, then, is truth preservation.
D. We could be playing this game to see if some other winning characteristic
(something other than truth) is preserved. Regardless of that, we need to under-
score, once again, that, if we let the premises have it, as given, we must deter-
mine whether the conclusion also has this winning characteristic. Note that we
take true, and false, to be characteristics only of (meanings of) sentences and of
nothing else. (Of course, the sentences that can be true or false have to be mean-
ingful sentences. Conversely, if a sentence is meaningful, we take this to be a
matter of that sentence’s having the characteristic of being true or false – having
a truth value, as we say.)
E. The standards for the evaluation of the correctness of the argument – if the argu-
ment is “good” and “ought” to be accepted – depend on whether the argument
is deductive or inductive. In the case of inductive arguments, the preservation of
truth by the conclusion – IF the premises are true – is a matter of probability. We
cannot offer any deeper grounds as to why logic – and argument types – come
in two varieties (deductive and inductive.) This is a brute fact. (There is a view
that there is a third type of argument, called Abductive, or Inference to the Best
Explanation, but that will not occupy us here.)
F. Only arguments have conclusions. If a conclusion is mentioned, you can be certain that you are dealing with an argument. (Of course, we leave out other, easily distinguishable senses of the word “conclusion,” as in “the story had a happy conclusion.”) Anything that is said to have a conclusion is an argument for our purposes. And, of course, the other way round: there can be no argument that does not have a conclusion.
G. The conclusion is a new sentence that we don’t have a default right to include in
our body of knowledge. Assuming that we have the premises as true, the conclu-
sion then is new to us: should we accept it? Is it true, based on the truth of all
the premises? This is the key. It is vastly important for the development of
human knowledge that we incorporate conclusions we can correctly draw from
premises we accept as true. We need to know, then, if the conclusion indeed is
supported by the premises. This means, for deductive arguments: Is the conclu-
sion definitely (logically necessarily) true, if the premises are true? You need to
44 1 What Logic Studies
work on these concepts: the average person has difficulties with concepts like
“if-then” and “necessarily”: notice how we have been using these notions to
define what a deductive argument is.
H. Intuitively, we can say that, in an acceptable argument, the premises support the
conclusion or the conclusion follows from the premises. A more precise way of
saying what this means, as we already indicated, has to do with whether the
truth of all the premises is preserved in the conclusion. But, for inductive argu-
ments, it is a matter of degree of probability or relative strength of likelihood
that the conclusion is true given that the premises are true. In the present text we
study only deductive logic.
I. We will not consider cases in which there may be no premises – that does not
seem to make sense in the case of exploring linguistic arguments. (Keep in
mind, however, that in 4.4, when we study natural deduction systems, you will
be confronted with an arrangement that asks us to think of no-premises proofs.
This can be deferred until then but, briefly, a no-premises proof can be elegantly
considered as proving a logical truth in the sense that nothing is needed to prove
a logical truth – a sentence that is true in all logically possible cases. We may
say that a logical truth can be thought of as being provable from the empty col-
lection of premises.)
J. We will be considering only cases in which an argument has only one suggested conclusion. Although it is feasible to handle a concept of “argument” in which more than one conclusion is suggested to follow from the premises, we will not be studying such species. We say that conclusions are suggested, or propounded, to follow from premises because, until we evaluate whether the argument is good or correct, we do not know that! Any argument – to be an argument – comes with a collection of premises and one conclusion; better, we should say a suggested conclusion, since the argument might not be correct or “good” after all!
K. The issue as to whether the argument should be accepted is an objective matter;
it is not a subjective or opinion-based evaluation. It is not a psychological
assessment at all! We have here, rather, something like what happens with the
grammar of a language – in which case we are not at liberty to make subjective
judgments as to what is correct or incorrect. We can even speak here of the logical grammar of a language. When we study logic, applying our investigations to linguistic arguments, we are examining the logical grammar of the language.
We can say this! Based on this, we can see by analogy that the evaluation is not
a matter of opinion. You should also remember that the linguistic grammar is
not guaranteed to help us with evaluating the logical grammar of given argu-
ments. In fact, the linguistic grammar can be misleading when it comes to
assessing the logical grammar. For instance: “John and Mary are students” is a
single sentence grammatically; it has a complex subject but it is grammatically
single. When, however, we delve into the logical grammar of this sentence, we
spot a word that, as we will learn, is a logic-word: “and.” This logic-word joins
two single sentences. Thus, from a logical-grammar point of view, the sentence
is not a single sentence but a compound or complex sentence: “John is a student
1.4 Arguments 45
and Mary is a student.” We are not being fussy in doing this. There is a good
reason. As a compound sentence, this sentence has a logical meaning that
depends on the logical meanings of its parts. The parts are themselves the single
sentences “John is a student” and “Mary is a student.” We have decomposed the
given sentence: even though grammatically single, we have found it to be logically complex and we have identified the parts; no further decomposition is possible. Whether the whole sentence is TRUE or FALSE depends only on the combination of truth values of the single sentences we have produced.
If you think about “and”, you should have no difficulty realizing that an and-
sentence (a complex sentence made of two sentences that are joined by “and”)
is true only when both joined sentences are true; it is false in every other pos-
sible case (and those other possible cases are true-false, false-true, and false-
false.) This is the logical analysis of the sentence we have been examining – but,
remember, from a grammatical point of view, this was just a single sentence.
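The truth conditions for “and” can be checked mechanically by listing every combination of truth values. A minimal sketch in Python (our illustration; the variable names stand for the two joined single sentences):

```python
from itertools import product

# An "and"-sentence is true only when both joined sentences are true;
# it is false in the other three possible cases.
for john, mary in product([True, False], repeat=2):
    whole = john and mary  # "John is a student and Mary is a student"
    print(f"{john!s:5} / {mary!s:5} -> {whole}")
```

The four printed rows are exactly the possible cases enumerated above: true-true yields true; true-false, false-true, and false-false all yield false.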
L. It is nonsense to say that “this argument is not good for you but it is good for
me.” Compare how it sounds to say that “this might be wrong grammar but it is
correct grammar for me.” People are confused sometimes about this: whether I
think that an argument is good – even if I am fully convinced - is not relevant to
whether the argument is indeed correct. I could be wrong. The impressions I
have are a psychological matter but the logic of a language depends ultimately
on the meanings of certain special words of the language – the LOGIC-WORDS
of the language, like “not” and “either-or” and “and” and “all,” and such words.
We can see, again, that this is an objective matter: Meanings in language are
fixed; otherwise, we would not be able to use language for communication.
Meanings may change over time, or from one dialect to another, but do not let this mislead you: they are fixed again immediately as soon as they have changed! Meanings are like denominations of currencies: if your ten-dollar bill is liable to be interpreted as a five-dollar bill by someone else (or as a twenty-dollar bill by yet someone else, and so on), then we do not have a currency that we can use for transactions. If you start making calculations in your mind about how you can pass off a one-dollar bill as a five-dollar bill, so that you can make money out of the transaction, then notice what you are doing: you are actually treating the currency denominations as fixed! (“I could make money if I convinced him or her that this is a five-dollar bill.” But this commits me to accepting it as a one-dollar bill.)
M. You should also take to heart that we cannot depend on intuitions or hunches to evaluate arguments; and, unfortunately, it is not the case that the more we advance in studying various subjects, the better we become at evaluating arguments. Well-educated people do not do well, without preparation, on logic tests, as can be demonstrated time and again. Students preparing for the Law School Admission Test, a logic test, need to attend to the subject regardless of what brilliant achievements they may have had in the course of their studies.
N. Repeat until memorized, and make sure you grasp what this prompts us to do and why it is important: Given that all the premises are true, is the conclusion also necessarily true? This is the test for a good or correct deductive argument: we say then that the argument is valid.
O. In the species of reasoning called deductive, preservation of truth must be abso-
lute or not at all: either it is possible that all the premises are true and the conclu-
sion false (in which case the argument is invalid); or there is no logical possibility
that all the premises are true and the conclusion false (in which case the argu-
ment passes the logical test, as it were, and is considered valid.)
P. In the species of reasoning we call inductive, truth preservation is a matter of
probability: assessment of this type of argument seeks to discern, informally,
the probabilistic degree to which the conclusion might be true if it is given that
all the premises are true. Inductive arguments are not to be classified as valid or
invalid but as relatively strong or weak depending on an imprecise but defensi-
ble estimate as to whether there is a sufficiently high probability that the conclu-
sion is true if it is given that all the premises are true.
Q. It is hard but vital to learn that invalidity is a matter of possibly having all the
premises true and the conclusion false. Here we are speaking of deductive argu-
ments. How can we detect this easily? The term “valid” applies only to charac-
terize “correct” deductive arguments. The word “correct” is rather sloppy,
although it has an initial intuitive appeal. “Valid” is the official term we are
using – for deductive arguments. What does it mean to say that it is not possible
for a valid argument to have all premises true and a false conclusion? To grasp
this we need now to speak of logical form – and we are prepared to do this since
we have examined the concept of logical form already.
R. A deductive argument in language exemplifies a logical form – an argument
form. If it doesn’t, it is not a deductive argument. Inductive arguments do not
have characteristic forms. What makes our given argument valid is that its form
is valid; if the form is invalid, the argument we have is also invalid. The possibility of having all premises true and a false conclusion is understood if we think about the argument form: is it possible for this form to have all premises true and a false conclusion? If so, that is sufficient to make the form invalid; in that case, our argument is also invalid even if it happens to have all premises true and a true conclusion: we say that this is an accident; nothing has been proven correctly, because this argument can possibly have all premises true and the conclusion false. You should commit this to memory – but you need to understand it
first! Notice that there is a difficult key concept in our definition: logical possi-
bility. This is not intuitive and you will need to appreciate that. Concentrate on
learning the concepts presented here, fully accepting – without being irked –
that you will need to dig into this subject further. Take this for a suspense ride.
But if you have not learned the definition of the concepts, the difficulties will
turn cumulative.
S. An instance of an argument form, with all premises true and the conclusion false, is called a counterexample to the given argument form. Thus, we can
define invalidity as the logical possibility that there is a counterexample to the
argument form; we can define validity as the logical impossibility that there can
be a counterexample to the argument form.
Example
1. Tijuana is not in the United States.
2. Either Quebec is not in the United States or Tijuana is not in the United States.
⇒ Quebec is not in the United States.
You might think that this is a good argument. It is not! It is invalid! The sentences
are all true. This sounds good; but we are dealing with an argument: there is a conclusion that is drawn, presumably, on the basis of the premises. What matters here is whether the argument is valid or invalid: that is, CAN the argument form have all true premises and a false conclusion? If it can, our argument – having that form – is invalid! You realize, of course, that this is not easy to figure out. But let us
extract the argument form of our argument; and, then, we will show that there can
be an example of this form with all true premises and false conclusion: that suffices
to condemn our given argument! It has an invalid form. It is invalid – we don’t care
if the premises and the conclusion are all true: we cannot say that the conclusion can
be inferred or drawn validly (correctly) from the premises.
1. not->>>
2. Either not-=== or not->>>
3. ⇒ not-===
How have we extracted the argument form? We have isolated the logic-words.
Assuming that we are dealing with Sentential Logic, the logic-words are “not” and
“either-or.” This is one of the two senses of “either-or” – the one called inclusive:
one or the other of the two connected sentences has to be true but both can be true
as well. What is left, besides the logic-phrases, are individual or simple sentences.
We just need to mark the place they are holding – we need to use a variable. Any
symbols we can agree on can be used as variables. Here is our key. Also, we use the meta-symbol “⇒” to mark the place where the conclusion is inserted.
KEY: >>>: Tijuana is in the United States. / ===: Quebec is in the United States.
In this way we have found the argument form of our given argument. An open-
ended number of arguments (it doesn’t matter if and when and how they are or
might be made) have this argument form. If there is any such argument with all true
premises and false conclusion (a counterexample), that establishes that this argu-
ment form – and, hence, our argument – is invalid. Luckily, you will not have to
rack your brains trying to come up with counterexamples – although that is a
respectable way to pass time and it becomes easier when one keeps practicing. We
will study in this text systematic and correct methods for determining validity and
invalidity of given argument forms.
We plugged the sentences in for the variables of the argument form. In this way, we
get instances (also called tokens, examples, and instantiations) of the sentence-forms.
The overall instance is an instance of the argument form. We see now that our argu-
ment – instantiating the argument form – has all true premises and a false conclusion.
Remember our definition of counterexample. This is a counterexample to the form: it
is an instance of the argument form and it has all its premises true and its conclusion
false. Therefore, the argument form has at least one counterexample. We conclude that
the argument form is invalid.
Let us look again closely: The counterexample has true premises. The first sen-
tence is true. The second sentence is true too: remember that, for this meaning of
“either-or,” it is sufficient if one of the connected sentences is true – which happens to
be the case. But the conclusion is false!
Now we have seen an example of the form, in which all the premises are true and
the conclusion is false. This is a counterexample to our given argument form. Since it
has a counterexample, the argument form is invalid. Our initially given argument is
invalid – it is not good and not to be accepted, regardless of whether its invalidity had
been initially detected or not! Notice, again, that our given argument has all premises
and the conclusion true; but that does not matter. Make sure that you understand why.
The only condemning possible case is one in which some instance of the argument
form would have all its premises true and the conclusion false.
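The search for counterexamples can itself be made systematic for sentential argument forms. The sketch below (our own Python illustration, not one of the text’s official methods) enumerates every assignment of truth values to the variables of the form above – writing p for “>>>” and q for “===” – and collects any assignment on which all premises are true and the conclusion false:

```python
from itertools import product

# Argument form:  1. not-p   2. either not-q or not-p   ⇒ not-q
premises = [lambda p, q: not p,
            lambda p, q: (not q) or (not p)]
conclusion = lambda p, q: not q

counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if all(prem(p, q) for prem in premises) and not conclusion(p, q)
]
# An assignment with all premises true and conclusion false
# witnesses the invalidity of the form.
print("invalid" if counterexamples else "valid", counterexamples)
# → invalid [(False, True)]
```

The assignment p = False, q = True is a counterexample in miniature: it corresponds to any instance whose first component sentence is false and whose second is true, so that all premises come out true and the conclusion false; one such instance suffices to make the form – and every argument of that form – invalid.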
1.5 Consistency
We will revisit this concept in 2.2. We return to these fundamental concepts again
and again as we try to solidify understanding.
Two or more statements are consistent taken together if and only if it is possible for all of them to be true together. Let us give the definition in different ways, highlighting certain central issues. Two or more statements are consistent as a collection if and only if it is logically possible for them to be true together. We will need to elaborate on the concept of logical possibility that figures prominently in this definition. Notably, consistency is a characteristic or property of collections of statements. At a minimum, it is a characteristic of two statements – a pair or doublet of statements. Every statement that is not a contradiction or logical falsehood is consistent with itself: in that case too, we have a pair comprised of the statement and itself. Take as a pair a statement whose truth value is assigned as true and that same statement again: the pair comprising the true statement and itself is consistent in accordance with our definition, since we clearly have a case in which both members of the pair are true. Accordingly, we can say that every statement that is not a logical falsehood is self-consistent.
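For sentential forms, this definition can be checked by brute force: a collection of statements is consistent just in case at least one assignment of truth values makes all of them true. A minimal Python sketch (ours; the lambda-encoded statements are schematic examples):

```python
from itertools import product

def consistent(statements, num_vars):
    """True iff some truth-value assignment makes all statements true."""
    return any(
        all(stmt(*vals) for stmt in statements)
        for vals in product([True, False], repeat=num_vars)
    )

# "p" taken together with "not-p": inconsistent.
print(consistent([lambda p: p, lambda p: not p], 1))             # → False
# "p or q" taken together with "not-p": consistent (p false, q true).
print(consistent([lambda p, q: p or q, lambda p, q: not p], 2))  # → True
```

Note that the check only asks whether some possible case makes everything true together; it does not ask whether that case is actual.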
If we have two or more statements, consistency is a characteristic of the collec-
tion of all of them – as we have emphasized. There are some tricky issues to pay
attention to. It suffices, by the definition of consistency, that it is logically possible
for all the statements to be true together. But the concept of logical possibility may
seem elusive and we need to elaborate on this. Let us start by drawing attention to a
common error regarding logical possibility. When we contemplate a statement that
is “definitely” true – in the sense that it is considered verified beyond doubt, estab-
lished, generally accepted, well-known to be true – then we might think that such a
statement cannot possibly be false – but this does not follow. “The earth has a moon” is actually true – definitely true, verifiably true, etc. – but it is logically possible that this statement is false: logically, it could be that the statement made by the sentence “the earth has a moon” is false. An alternative story we can tell, in which the earth has no moon, is logically possible. Such a story can be
understood – but attention is needed in that we are not referring to psychological
comprehension or even proper knowledge but we only mean that such a narrative
would not commit us to any logical error by its claim that the earth has no moon. It
is important that the meaning of the words “earth” and “moon” do not necessitate
that the statement “the earth has a moon” is true. When we use the words “earth”
and “moon” in our alternative story to state that “the earth has no moon” we are not
stating something like “the earth, which by definition has a moon, has no moon.” (It
is actually rather controversial if “the earth” should be referring “rigidly” to the
specific planet, which does have one moon. But we disregard for our current pur-
poses this rather sophisticated philosophic controversy about how names designate.)
Here is a contrasting example. The statement “a triangle has four angles” cannot
be logically possibly true. It is logically necessarily false. In an alternative story, the
sentence “the triangles had four angles in that alien world” is logically nonsensical:
• No triangle has four angles. Analytic and True: the meanings of the non-logical words (“triangle,” “angles”) sufficiently determine the truth value of this sentence (true in this case.) These words are non-logical. As we state repeatedly in this text, the logic-words are those whose definitions necessarily involve references to their truth conditions: the truth values they assume under assignments of truth values to the individual parts; for instance, “not” is defined as reversing true to false and false to true – the truth conditions for “not” comprise “not-true = false and not-false = true.” On the other hand, a word like “triangle” is not definable in terms of outcomes from truth-value assignments. Thus, “triangle” is a non-logical word. We
can call the statement made by this sentence syntactically analytic, and it is true
too. A logical truth, on the other hand, is necessarily true by virtue of the mean-
ings of the logical words in it. Such a statement can be called semantically or
formally analytic. Similar comments apply in drawing a distinction between
syntactically analytic false and semantically-formally analytic false statements.
An example of a semantically analytic false statement is the one expressed by the
sentence “it is raining and it is not raining” whereas a syntactically analytic false
statement is expressed by the sentence “a triangle has five angles.”
• If a triangle has four angles, then a triangle has four angles. In this case, we have
a logical truth. This is an implicative sentence – an if-then sentence. As such, it
has an antecedent and a consequent – the antecedent is the sentence that follows
“if” and the consequent is the sentence that follows “then.” Both antecedent and
consequent – which are the same in this case – are false, of course. You might at
first find it strange that this is a necessarily true sentence – a logical truth. This
sentence, however, does not state that a triangle has four angles: it states that if
that is the case, then, trivially, it is the case! Focusing on the words that make this
sentence a logical truth, we place them in boxes: in this case, the phrase “if-then”
is the phrase responsible for this. It is a logic-word. If we extract its logical form,
we have “if p, then p.” Any sentence that is an instance of this logical form is a
logical truth. In deductive reasoning, as we have been emphasizing, the determi-
nation of meaning is not based on conveying and assessing empirical information
but on the structure or form of the sentence. A form is like a shape or figure. Try
to think of “if p, then p” as figure-like. Perhaps, we can analogize to a cookie
cutter which imposes a shape; it is not important at all what material goes into the
cookie cutter insofar as we insist on the shape that our product must have. At
what we take to be a deeper level, we have said that the reason why this logical
form is a logical truth has to do with the meanings of the logical words – in this
case, “if-then.”
• Either a triangle has four angles or a rectangle has three angles. In this case, we
have an analytic false statement but not a logical falsehood. The logical form is
“either p or q,” which is not a logical form of a logical truth and it is not a logical
form of a logical falsehood. If both disjuncts (as we call the combined sentences
in an “either-or” statement) are false, the form is false; but if at least one disjunct
is true, then the form is true: thus, this logical form can possibly be true and can
possibly be false (not both together, of course, but for different assignments of truth values to its parts). Now, turning to the meanings of the non-logical words in our given sentence: both disjuncts are indeed false and, so, the whole sentence has to be false. It is the meanings of the non-logical words which determine this sentence to be analytic false.
• If it rains on Monday, then it rains on that Monday: this is a logical truth; but “If it rains on Monday, then it will also rain on Tuesday” is not a logical truth (its logical form is “if p, then q”) and it is not an analytic sentence (it is not analytic true and it is not analytic false). The meanings of the non-logical words in the sentence do not settle whether it is true or false. It is logically possible that it is true and it is logically possible that it is false. Do not pay attention as to whether
it is likely or not for this sentence to be true: deductive-reasoning properties are
like a light that is either on or off: we cannot have degrees or a spectrum. A sen-
tence is determined to be analytic or not analytic: we cannot have degrees of
analyticity. Similarly, a logical form is or is not a logical truth – it cannot be a
logical truth, or fail to be a logical truth, to some degree.
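The all-or-nothing character of these properties can be made vivid by classifying a form over all of its truth-value assignments. A small Python sketch (our illustration, reading “if p then q” as the material conditional not-p-or-q):

```python
from itertools import product

def classify(form, num_vars):
    # Evaluate the form under every assignment of truth values.
    values = [form(*vals) for vals in product([True, False], repeat=num_vars)]
    if all(values):
        return "logical truth"      # true in every possible case
    if not any(values):
        return "logical falsehood"  # false in every possible case
    return "contingency"            # possibly true, possibly false

implies = lambda p, q: (not p) or q  # material "if-then"

print(classify(lambda p: implies(p, p), 1))     # → logical truth
print(classify(lambda p: p and (not p), 1))     # → logical falsehood
print(classify(lambda p, q: implies(p, q), 2))  # → contingency
```

There is no fourth verdict and no matter of degree: every sentential form falls under exactly one of the three labels.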
Both analytic and logical sentences are not informative in relation to how the
world works or events that happen. (We could say that they are informative about
the meanings of the words that make them necessarily true but notice that knowing
those meanings is presupposed and not to be left to extracting from assessing the
sentences.) They are not empirically confirmable or disconfirmable. Knowing how
to play the game of language is sufficient for the competent user to figure out if such
a sentence is true or false. This invites reflection, again, as to how deductive logic
does not deal with the realm of experience. Notice the examples given below. It is
important to realize and remember that analytic and logical sentences are logically necessarily true or false, whichever they are. This means that it is not logically possible to make them false – if they are true – or true – if they are false.
Analytic Sentences - examples
1. A triangle has three angles. == Necessarily true/It cannot be made false.
2. A triangle has four angles. == Necessarily false/It cannot be made true.
3. A dog is an animal. == Necessarily true/It cannot be made false.
4. A table is a piece of furniture. == Necessarily true/It cannot be made false.
5. Brown is not a color. == Necessarily false/It cannot be made true.
6. Not being able to see, he was blind. == Necessarily true/It cannot be made false.
7. Consciousness consists in having mental states. == Necessarily true/It cannot be
made false.
At this point, you realize that analytic sentences cannot be disputed; if a debate or dissension arises around them, that shows confusion; the disagreement is not genuine but rather a pseudo-disagreement. The debaters might not realize this.
They are still confused regarding what they are doing.
To lay down an analytic truth as false, or an analytic falsehood as true, commits logical nonsense or absurdity. This is not a psychological or subjective assessment. The
meanings of the words are objective in a language; if they change over time, dynam-
ically and organically, they still are rendered fixed in their new meanings; this is
how language works: if meanings were up for grabs, we could not use language as
1.6 Logical Truths/Falsehoods and Analytic Sentences 53
always to determine that we are dealing with logical truths or logical falsehoods.
Who would ever imagine that the following is a logical truth? Observe how mind-
bogglingly difficult it is to follow what the meaning of this sentence is.
If, assuming that if we assume that it rains then the game is canceled, it fol-
lows that it rains, then, it definitely rains.
What about this? This is a logical contradiction – thanks to the standard mean-
ings of “if then” and “not” and the inclusive sense of “either or.”
Even though it is the case that either it doesn’t rain or the game is canceled,
still it might rain without the game being canceled.
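Both verdicts can be confirmed by exhausting the possible cases. Writing R for “it rains” and C for “the game is canceled,” and reading “if-then” materially and “either-or” inclusively, a Python sketch (ours):

```python
from itertools import product

implies = lambda a, b: (not a) or b  # material "if-then"

# "If, assuming that if it rains then the game is canceled, it follows
#  that it rains, then it definitely rains":  ((R -> C) -> R) -> R
first = lambda r, c: implies(implies(implies(r, c), r), r)

# "Either it doesn't rain or the game is canceled, yet it rains
#  without the game being canceled":  ((not R) or C) and (R and not C)
second = lambda r, c: ((not r) or c) and (r and (not c))

cases = list(product([True, False], repeat=2))
print(all(first(r, c) for r, c in cases))   # → True: a logical truth
print(any(second(r, c) for r, c in cases))  # → False: a logical falsehood
```

The first form is true under every assignment, so the convoluted sentence is a logical truth; the second is true under none, so it is a logical falsehood.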
It might seem that these are convoluted, rather unnatural, sentences when it
comes to the way people speak; but these are only examples and we can easily run
into logical forms that are those of logical truths and logical falsehoods, without this
being detected. People have special difficulties with double negations, with “if
then,” with “either or” of which there are actually two meanings – to mention only
a few examples; when it comes to logic-words whose behavior we don’t study in
this text, there are severe difficulties for the average user of the language when it
comes to modal concepts like those of necessity and possibility.
Examples of Logical Truths and Logical Falsehoods
1. Even though someone came to class, no one came to class. == Necessarily false/
It cannot be made true.
2. It is raining and it is not raining. == Necessarily false/It cannot be made true.
3. Either it is raining or the game is canceled, but it is neither raining nor is the game canceled. == Necessarily false/It cannot be made true.
4. Given that, if it is raining then it is not raining, then it is not raining. ==
Necessarily true/It cannot be made false.
5. Since Mary is a student, there is at least one student. == Necessarily true/It cannot be made false.
6. Because it is possible for someone to go to Australia, there is someone who can
possibly go to Australia. == It seems that this is not a logically true sentence, unless
you are dealing with a context in which the things you have are fixed or increase in
number without any dropping out. This is a modal-logic issue which will not detain
us here. Note that words like “necessarily” and “possibly” are indeed logic-words
although the study of their logical behavior is beyond our present scope.
7. John and Jack are the same person, but John is a student and Jack is not. == Necessarily false/It cannot be made true. You are not excited by this, but it is an
interesting observation that identity – which in logic is treated as reference-of-
names-to-the-same-entity – is a logical word! Notice, on the other hand, that
other uses of the verb “to be” are not logical words. For instance,
Mary is a student.
is not an analytic or a logical sentence.
Sentences that are not analytic are called synthetic. Specifically in the case of logi-
cal truths and falsehoods, sentences that are neither logical truths nor logical false-
hoods are called logical contingencies, logically indeterminate or logically indefinite
sentences. This is, again, a matter of the meanings of the words in them: based on
those meanings, we cannot determine if the sentence is true or false; recourse to the
realm of events – how the world works – is needed. Examples follow of synthetic
sentences and also of logically contingent or indeterminate sentences. These are the
informative sentences of language. We say that a synthetic sentence or – if it is a mat-
ter of the logic-words – a contingent sentence can possibly be true and can possibly be
false; of course, no sentence can be both true and false in the same context; but “possibly” means: in some alternative case that we can conceive. Think of the central circle as the standpoint from which you make the evaluation as to true-false; the other circles are logical alternatives: it doesn’t matter that they are not the actual state of affairs; all that matters is that they are logically possible. We can think of them as consistent
stories that can be told – exhaustively about all things. Although our actual world and
its narrative are privileged in some ways, that does not matter for the study of logic.
The actual state is only one logically possible state.
Logically necessary sentences must have the same truth values (true or false) in
every logically possible state. Synthetic sentences and contingencies, on the other
hand, can possibly be true and can possibly be false – although not both true and
false in the same state, because that would be nonsensical as we have learned.
1.6.1 Exercises
1. It is not true that the earth has more than one moon. Is it, however, a matter of
logical necessity that this is so? On the basis of this reflection, also answer if the
sentence “the earth has two moons” is a tautology, a contradiction or logically
indefinite (a logical contingency) which can be logically possibly true and logi-
cally possibly false.
2. Are the following sentences expressing statements that are syntactically or
semantically-formally analytic or synthetic?
a. If the triangle has three angles, then the triangle has three angles.
b. If the triangle has three angles, then it does not have four angles.
c. Either the triangle has three angles or the triangle does not have three angles.
d. If all triangles have three angles then it is not the case that some triangles do not
have three angles.
e. If every triangle has three angles and there is a triangle in the room, then there is
something in the room that has three angles.
f. Some triangles have two angles.
g. Any geometrical figure that does not have three angles is not a triangle.
h. Any geometrical figure that has two angles cannot possibly be a triangle.
i. No triangles have more than three angles.
j. It cannot be the case that some triangle both does and does not have three angles.
3. What is wrong with the view that someone’s arguments may be valid for that
person but not for some other person?
4. Why is it not important for assessing arguments whether those arguments are
actually or in fact persuasive in conversation?
5. Can a work of fiction about alien worlds be inconsistent without, nevertheless,
being meaningless? Can such a work be meaningless, as mandated by the
objects it describes, and yet be regarded as meaningful insofar as it is descrip-
tive of an alien and incomprehensible world?
6. If the premises of an argument form are all logical falsehoods (contradictions),
then we cannot have any instance of this argument form in which all the prem-
ises are true and the conclusion is false: therefore, this argument form has to be
accepted as valid. Do you agree? Discuss. Is it different if only one of the prem-
ises is a logical falsehood? What about the case in which the conclusion of the
argument form is a logical truth?
7. Do the observations in the preceding exercise apply in the cases in which the
premises and/or conclusion are not semantically analytic (logical truths or logi-
cal falsehoods) but are, instead, syntactically analytic? Discuss.
8. Shouldn’t “no triangle has four angles” be validly deducible from “all triangles
have three angles”? But here we are dealing with meanings of non-logical
words (like “triangle”) and, hence, with syntactically analytic truths: not with
formal truths. We should not expect this proof to go through in logic. Isn’t this a
problem? If not, why not? Discuss.
9. When we detect inconsistencies in stories, we still do not grant logical permission
to derive any conclusion whatsoever. But, given the definitions of the concepts
we have presented, an argument form with inconsistent premises and any conclusion
whatsoever must be accepted as valid. (Why is this? See the preceding exercise.) It
would be a different kind of logic in which this inferential license is blocked. Is this
a problem for the standard logic? Discuss.
10. We say that logical truths/falsehoods are true/false regardless of context.
Argument forms that are valid are likewise valid regardless of context. What
does this mean? How is it related to the characterization of deductive logic as a
non-empirical enterprise? Can we define “logically necessary” and “logically
possible” by reference to a concept of context (understood as a complete and
consistent description of a logically possible state of affairs)?
Chapter 2
Concepts of Deductive Reasoning
Reasoning comes in two varieties, inductive and deductive. There is a view that
there is a third case, inference to the best explanation, which is dubbed Abductive
Reasoning. We will disregard that here. When it comes to applying the mathematical
tools we have available to study logic, we can only pursue deductive reasoning by
formal means (mathematized, systematic methods). The rest of our text is,
accordingly, devoted to deductive logic.
Logic specializes in evaluating arguments – among other things – and we need to
determine correctly if a given argument is deductive or inductive in order to evaluate
if it is a “good” argument or not: different standards apply in the two cases. Think
of an argument as a proof. Logicians use the term “argument” and this is the word
we will be deploying as the relevant technical term. As a first step, think about what
a proof is. Do not think of the act of writing a proof down. Do not think of whether
the proof is in fact accepted: people could be wrong in thinking that the proof
works or not. Concentrate on thinking of the proof as the entirety of the lines you
would have if you could see this proof from the premises to, and including, the
conclusion. This is a collection of meaningful sentences. We can say that the collection
meanings are so encompassed as to relate all the premises taken together to the
conclusion those premises are supposed to be proving. Let us make an effort to see
this as a relation between the sentences (or meanings) that are given (the premises)
and the conclusion that is supposed to be derived from those premises. (Having
elaborated on this issue, we use “sentence” and “meaning” interchangeably and
trust that there is no ambiguity or confusion.) The crucial question is: does this
proof work? Is it successful in proving what it is supposed to be proving?
In mathematics, proofs are typically given informally, which means that you don’t
get information about what rules of inference license or justify moving from lines to
the next lines and all the way to the conclusion. Not only in mathematics but in
everyday life too, and certainly in the various fields of study and when debates
are carried out and theories or claims are supported, we are constantly subjected to
proofs. Parts of the proof may be missing and presumed to be obvious or to be figured
out by the competent reasoner. The proofs may be inductive or deductive. (In
mathematics, you should know, all proofs are always deductive.) The rational person
should not accept claims, which can be challenged, without proofs.
Well, this could be debated when it comes to certain contexts: for instance,
should you demand sufficient proof that the team you play for will win, if they have
been losing for very long, before you commit your efforts to it? Or is there an
overriding moral obligation to contribute to the team spirit and serve your team even if,
as a reasonable person, you should draw the conclusion that your team is bound to
lose again? In general, however, the right principle to abide by can be agreed to be
this: the rational person should not accept challengeable claims without sufficient
proof. We need to be able to assess proofs, and this is something that you will be
taught only in a class like this!
When we say that a proof, or an argument, is “good” or not, we are not being
precise. There is an intuitive notion of an argument’s goodness: by argument we
mean, always in this text, not an act of disputation or disagreement but a collection
of sentences one of which is presented as a conclusion that “follows” from the other
sentences – the premises. The word “follows” is itself imprecise and needs to be
defined; moreover, different concepts are at work in the inductive and in the deduc-
tive case and these should be disentangled. The intuitive notion can be said to con-
sist also in that the premises “support” the conclusion so that, given the premises,
we can “correctly” have the conclusion – which is a new sentence that we are then
entitled to and which we can “have” and add to our stock of knowledge only if it is
properly supported by the given premises. It is not a subjective matter if the argu-
ment is “good” or not. This is another reason why “good” is not only imprecise but
also potentially misleading because it is not a matter of what seems good or feels
good. A better word is “correct,” which we also used above in this series of attempted
explications. This is still imprecise but we are getting closer. A final observation
due in this paragraph is that, when we say that we are “entitled” to the conclusion,
given the premises, we don’t have in mind a moral matter or something that is a
moral claim or right. Everything has to do, rather, with reasoning: not with perceiving,
feeling, or being morally entitled.
To define this notion of correctness of a deductive argument, we also need the
concepts of logical necessity and logical possibility. They seem to be prior to our
concept of “correctness” of an argument – which we will be calling “validity” in the
case of deductive arguments. A valid argument cannot possibly have all its premises
true and a false conclusion (note the use of “possibly”); or, a valid argument that has
all true premises must have its conclusion true as a matter of logical necessity (here,
the concept of “necessity” is used.) We can draw on these observations for defining
the concept of argument validity but it is then a good idea first to work on the con-
cepts of logical necessity and logical possibility, which are themselves not readily
accessible to intuitions.
Think how you might be able to tell alternative stories: not necessarily confining
yourself to actual descriptions of events but also creating descriptions of worlds
that are not actual or real but are, nevertheless, possible in a logical sense.
There are nuances to this, which we should bring out. A world like that, which we
call a logically possible world, is obtained through its description; this description
should be consistent (and this is a concept that you will study in this text); it is also
supposed to be a full description (having all conceivable sentences as either true or
false, whatever they happen to be in this logically possible world). This seems like
an unattainable enterprise but you don’t need to worry about this in the present
context: we are using this device to help us understand some elusive concepts like
“logically possible.”
Here is the catch: Can you make a sentence true in at least one logically possible
world? If yes, this is a logically possible sentence. If not, it is a logically impossible
sentence. The same can be asked about falsehood. Do you have to make the sentence
true in every logically possible world? If yes, it is logically necessary; if not, it is
not logically necessary.
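The talk of possible worlds can be made concrete in miniature. In the sketch below (our own illustration, not part of the book's apparatus), each "world" is simply an assignment of true/false to the atomic sentences, a sentence is a rule that comes out true or false in each world, and the function names `possible` and `necessary` are our own labels:

```python
from itertools import product

def worlds(atoms):
    # Each assignment of True/False to the atomic sentences plays the
    # role of a (schematic) logically possible world.
    for values in product([True, False], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def possible(sentence, atoms):
    # Logically possible: true in at least one world.
    return any(sentence(w) for w in worlds(atoms))

def necessary(sentence, atoms):
    # Logically necessary: true in every world.
    return all(sentence(w) for w in worlds(atoms))

atoms = ["P"]
excluded_middle = lambda w: w["P"] or not w["P"]   # P or not P
contradiction   = lambda w: w["P"] and not w["P"]  # P and not P
atomic          = lambda w: w["P"]                 # P by itself

print(necessary(excluded_middle, atoms))  # True: a logical truth
print(possible(contradiction, atoms))     # False: logically impossible
print(possible(atomic, atoms) and not necessary(atomic, atoms))  # True: contingent
```

With two or more atomic sentences the same functions simply enumerate four, eight, and so on schematic worlds.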
Now we can explicate the concept of deductive validity by using such concepts
as necessity and possibility in addition to our fundamental concepts of true and false.
Every argument exemplifies an argument form; we can remain noncommittal
about what kind of thing such a form is, although we can safely commit to regarding
argument forms as abstract things.
We will now show examples of how deductive arguments exemplify argument
forms. We start with a given argument in language (it has to be an argument, of
course); then we box its logic-words; those are fixed and cannot be removed; they
characterize the logical form; next, we replace everything else by lines (which serve as
variables) so that the same kind of line is used for the same sentence. We separate
the conclusion from the premises by means of some metalinguistic symbol: we
choose to use the metalinguistic symbol “/..” for this purpose. We are, thus, left with
the argument form. This is certainly not the only way to write out an argument form.
The argument form is something of a pattern or structure. It is this that determines
whether the argument we are given is valid or not. The given argument is called an
instance (or instantiation) of the argument form. We label the argument “A1” and its
argument form “FORM1”. The logic-words in FORM1 are “if-then” and “not.” The
first of these logic-words needs two sentences to connect while the second, “not,”
always works on one sentence.
A1
If it rained, then the game was canceled. The game was not canceled.
Therefore, it did not rain.
⇒
If ---, then ___. Not ___. /.. Not ---.
FORM1.
Since “if-then” is one compact logic-phrase, we may want to spell it out. Once
again, this is not good idiomatic English but it shows us the argument form more
perspicuously.
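As a toy illustration of the replacement game (our own sketch, with named slots A and B standing in for the book's dash and underscore lines), FORM1 can be written as a template and instantiated:

```python
# FORM1 with named slots instead of the dash/underscore lines:
# If A, then B. Not B. /.. Not A.
FORM1 = ["If {A}, then {B}.", "Not {B}.", "/.. Not {A}."]

def instantiate(form, **sentences):
    # Fill the variable slots, using the same sentence for the same
    # slot throughout the form.
    return " ".join(line.format(**sentences) for line in form)

print(instantiate(FORM1, A="it rained", B="the game was canceled"))
# If it rained, then the game was canceled. Not the game was canceled. /.. Not it rained.
```

The result is not idiomatic English, just as the text notes; it only displays the pattern.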
You might wonder why we specify context-elements like space and time (and
the “etc.” clause alludes to specifying all other relevant contextual elements). Our
logic requires that we fix all those elements: we presume them included in the
sentential content. We create, in this way, what are called “eternal sentences.” This is because
our logic works in such a way that we cannot track how truth values (true – false)
change across dynamically shifting contexts. For instance, if our sentence is “the
game is canceled”, then this sentence – or, rather, its meaning – is true when, indeed,
the game is canceled and it is false otherwise. If, however, we are dealing with a
sentence like “the game is canceled at such-and-such a time,” we can see that this sentence
is true forever or false forever. We stipulate that we are dealing with such context-free
sentences. This is a detail for purposes of an introductory text but we mention it for the
sake of completeness. Notice how this also reinforces the point we keep repeating:
that our formal logical systems do not record empirical or factual occurrences.
But something else is also at stake: as mentioned, we do not have the mechanisms we
would need to track how truth values (true – false) change across changing contexts.
This, however, does not interfere with the success of our logical formalism when it
comes to achieving the purposes we set for it. As you will find out, such purposes
include checking correctly if an argument form is valid or invalid. Roughly, this means
correctness or incorrectness of the deductive argument, but we need to define the
terms precisely. Insofar as we can translate arguments from English into our formal
language so that we capture their argument forms, then we can check if those argu-
ments are valid or invalid. This is a significant feat.
Here are other examples, all of them in sentential logic. This means that we only
“see” logic-words for which we will have symbols in sentential logic: these are
symbols for what we call connectives. These are logic-words – their meanings
depend on how they interact with true and false. For instance, “not” is defined as the
part of language that changes true to false and false to true. This is its meaning. It is
a logic-word. The same is the case for “and”, “either-or,” and many other words –
not all of which we can list here. Any expression whose meaning changes when the
truth values (true-false) of its atomic variable parts change is a connective. In
2.1 Argument Validity
predicate logic, we will also have symbols for logic-words like “every” and “some”
and we will also be able to symbolize names and logical predicates.
A2.
Either the treasure is to the left or it is to the right. It is not to the left. Therefore,
it is to the right.
⇒
Either --- or ___. not ---. /.. ___.
FORM2.
We have said that an argument is valid if and only if its argument form is valid.
This is the same as saying all of the following.
If the argument is valid, then it has a valid argument form.
If the argument has a valid form, then it is valid.
If the argument does not have a valid form, then it is not valid (it is invalid.)
If the argument is not valid (it is invalid), then it has an invalid argument form.
It is not possible that the argument has a valid form and is invalid.
It is not possible that the argument is valid and has an invalid argument form.
A3.
Either it rains or it doesn’t rain. If it rains, then the game is canceled. If it doesn’t
rain, then we lose. So, either the game is canceled or we lose.
⇒
Either --- or not ---. If ---, then ___. If not ---, then <<<. So, either
___ or <<<. FORM3.
Notice that we alter the grammar of the sentences – and we do this while we are
capturing the logical grammar or logical form. In the next example, still working
with the same argument, we show more perspicuously how we proceed:
A3
Either it rains or it doesn’t rain. If it rains, then the game is canceled. If it doesn’t
rain, then we lose. So, either the game is canceled or we lose.
⇒
Either (it rains) or not (it rains). If (it rains), then (the game is canceled.)
If not (it rains), then (we lose.) /.. Either (the game is canceled) or (we lose).
Now we can legislate replacements of the bracketed sentences by variables.
It rains: ---.
The game is canceled: ___.
We lose: <<<.
Thus, we get:
Either --- or not ---. If ---, then ___.
If not ---, then <<<. /.. Either ___ or <<<.
What do you think of this variation of our game of how to extract the argu-
ment form?
Either Or(it rains, not(it rains)). If Then(it rains, the game is canceled).
If Then(not(it rains), we lose).
So, Either Or(the game is canceled, we lose).
⇒
Either Or(---, not(---)). If Then(---, ___).
If Then(not(---), <<<).
So, Either Or(___, <<<).
The argument forms have variables replacing content-specific sentences.
Variables are placeholders – they show the formulaic space where symbols of a
certain kind can be inserted. For instance,
For all x, all y and all z: x² + y² = z²,
with x and y standing for the sizes of any right-angle sides and z standing for the
size of the hypotenuse of the same triangle. The letters are variables.
Now, back to our argument forms. The variables hold the place for sentence
symbols. This shows us that deductive argument validity has nothing to do with the
content of what is being discussed. All that matters is the meanings of the logical
parts – and those are the fixed words that do not get replaced by variables in the
argument forms.
Now, we proceed to defining validity.
Argument Validity.
Argument Form: P1, …, Pn ⊢ C (Premises ⊢ Conclusion).
[Definition] Valid Argument Form.
It is not logically possible (it is logically impossible) for any instance of
the form to have all the premises true and the conclusion false.
If the premises are all true in any instance of the form, then as a matter of
logical necessity the conclusion has to be true too.
[Definition] Invalid Argument Form P1, … Pn ⊬ C.
It is logically possible for an instance of the form to have all true premises
and a false conclusion.
It is not logically necessary that every instance of the form has to have a
true conclusion if it has all its premises true.
COUNTEREXAMPLE.
[Definition] An instance of an argument form with all true premises and
false conclusion.
An argument form is invalid if and only if it has at least one
counterexample.
[“If and only if” is an if-then that goes in both directions: accordingly, the
above can be read:
If an argument form is invalid, then it has at least one counterexample; and,
if an argument form has at least one counterexample, then it is invalid.]
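In sentential logic these definitions can be checked mechanically. The sketch below is a brute-force illustration of the definitions, not yet the truth-table method developed later, and all names in it are our own: premises and conclusion are represented as truth-functions of the atomic sentences, and every assignment is searched for a counterexample.

```python
from itertools import product

def implies(p, q):
    # Truth-functional "if p then q": false only when p is true and q is false.
    return (not p) or q

def valid(premises, conclusion, atoms):
    # Valid iff no assignment makes all premises true and the conclusion
    # false, i.e. iff there is no counterexample.
    for values in product([True, False], repeat=len(atoms)):
        w = dict(zip(atoms, values))
        if all(p(w) for p in premises) and not conclusion(w):
            return False  # counterexample found
    return True

# Modus tollens: If P then Q. Not-Q. |- Not-P.
mt = valid([lambda w: implies(w["P"], w["Q"]), lambda w: not w["Q"]],
           lambda w: not w["P"], ["P", "Q"])
# Denying the antecedent: If P then Q. Not-P. |- Not-Q.
da = valid([lambda w: implies(w["P"], w["Q"]), lambda w: not w["P"]],
           lambda w: not w["Q"], ["P", "Q"])
print(mt, da)  # True False
```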
Examples.
(FORM1) If P then Q, not-P ⊢ not-Q.
Instance 1 of FORM1: If you are from the US, then you are from the coun-
try exactly south of Canada. You are not from the US. ⇒ You are not from the
country exactly south of Canada.
In Instance 1 we have all true premises and a true conclusion. But this does
not establish that FORM1 is valid. Remember, it takes only one
COUNTEREXAMPLE (an instance of the form with all true premises and a
false conclusion) to establish invalidity.
Instance 2 of FORM1: If you are from New York, then you are from the
US. You are not from New York. ⇒ You are not from the US.
In Instance 2, the premises can both be true while the conclusion is false: someone
from Texas is not from New York and yet is from the US. Instance 2 is, therefore, a
counterexample, and this form is invalid.
A valid argument form cannot possibly have an instance with all true premises
and a false conclusion. When it comes to arguments in the language, which are
instances of valid argument forms, we might still find that their premises are not all
true. Does this mean that something has gone wrong with our conceptualization of
validity? Not really. But this is one of the matters that puzzle and distract beginners;
to counteract this predicament, we need to rivet our attention to this standard issue
of introductory logic.
If P, then Q; P ⊢ Q.
The point is this: IF the premises are all true, then it is logically impossible for the
conclusion to be false. If it were true that the United States is south of France – and if
it were also true that being south of France entails being north of Germany – then it
would be logically necessary that the United States be north of Germany. Clearly, this
is an alternative world, not our actual world. But, from the point of view of deductive
relations, it is irrelevant what happens in any particular world – including our actual
world which is, after all, one of an open-ended number of logically possible worlds.
This is a good opportunity to contemplate how deductive reasoning is not dependent
on empirical considerations, or on any empirically discoverable details about
what happens in any world. This independence of deductive argument validity from
empirical matters is also in evidence in that you can put any sentences in for the vari-
ables: regardless of the content of those sentences, the argument form yields valid
arguments (if the form is valid) and invalid arguments (if the form is invalid) because
the argument form is determined as valid or invalid solely by the meanings of the
logic-words in it; in the preceding example, that is the phrase “if-then.”
Now, the alternative world in which all the above odd (from our actual stand-
point) geographical configurations happen is itself not our actual world but it is a
logically possible world. Think of such an alternative world. Look again into the
example. In this world, let us focus on the premises and accept them as true. Build
this world on the basis of the given premises. The map of this world has the United
States being south of France and north of Germany. This makes the premises true.
The conditional or implicative sentence (“if the United States is south of France,
then it is north of Germany”) is made true: both antecedent and consequent are true
and this makes a conditional sentence true. The other premise is “the United States
is south of France.” Now, we examine the conclusion: “The United States is north
of Germany.” Look at your alternative-world map. There is NO way to make the
conclusion false. The conclusion has to be true; it is logically necessarily the case
that the conclusion is true given that the premises are true. It is not logically possible
for the conclusion to be false while the premises are all true. Thus, this is a valid
argument. It does not matter that our instance of the argument form has premises
and conclusion describing some alternative world; this is irrelevant to assessing
validity or other deductive-reasoning properties.
We may speculate as to whether we can actually have nonsensical sentences
instantiating the variables in a valid argument form and still declare the resulting
argument to be valid. Let us consider, again, the valid argument form of the preced-
ing example:
If P, then Q; P ⊢ Q.
The pattern is characteristically present if we instantiate P by “gobblegobble
drug blodigoggle” and Q by “drack druck garg”. We have:
If gobblegobble drug blodigoggle, then drack druck garg. Gobblegobble drug
blodigoggle. Therefore, drack druck garg.
Validity is a matter of pattern – argument form. Whether the premises are even
true does not matter, as we have examined and explained. But do the premises and
conclusion have to be meaningful – capable of being true or false? Given our defini-
tions of concepts, it seems so. The semantic definition of validity we have been
working with includes the notions of true and false as truth values; it follows, then,
that we are stipulating that the exemplars of argument forms are meanings (the only
kinds of thing that can be true or false.) It does appear, on the other hand, that the
argument form itself, as a figure-like or shape-like pattern, is discernible and the
game of matching instances with forms can be played regardless as to whether we
have meaningful sentences or not. It would require a different approach to setting up
our definitions of concepts, though, to work with such a game: for one, “true” and
“false” should not be included in the definitions of the relevant concepts. It is an
interesting question if we can proceed along alternative lines (without the concepts
of true and false but with other appropriate concepts) in defining some concepts that
can then be shown to match our semantic concepts (which use true and false in their
definitions.)
Another way of defining validity: An argument form is valid if and only if the
truth of the premises provides absolute support – or guarantees, or necessitates –
the conclusion. The support we are talking about is logical – it is not a matter of
what you might perceive or what opinion you could form subjectively.
Here is yet another way of defining validity – of course, we are defining the same
concept even by using alternative ways: An argument form is valid if and only if the
truth of the premises is necessarily preserved in the conclusion. This sounds some-
what unnatural in ordinary English but think of the truth of the premises (if the
premises are all true) in terms of a metaphor: you have the ball in your hands (this
is “all the premises”). It does not matter if this is factually the case; assume it! Dropping
the ball (going to a false conclusion even when all the premises are true, when you
have the ball in your hands) means that you lose the game: invalidity. Truth
must be preserved for validity. IF the premises are all true, then the conclusion must
be true. There are some difficult concepts here that you need to pay attention to.
Words like “necessarily” and “possibly” and “must” are not easy to process; and
such words do appear in our definitions. The point is that if you kept finding your-
self in a case in which you happen to have the ball (truth of all the premises) then,
no matter how many times this is repeated, you don’t drop the ball (the conclusion
is also true): this is a winner! This is a valid argument. If there is any possibility that
you have the ball (all the premises are true) but you drop it (the conclusion is false),
then this is an invalid argument – a loser. It does not matter how often this happens;
even once is bad enough. Thus, we can say things like “a valid argument guarantees
support for the conclusion” or “a valid argument’s premises provide absolute sup-
port for the conclusion.”
Notice that you cannot tell, based on intuitions or hunches or by using some skill
you have naturally or from previous studies, whether an argument is valid or not or
whether an argument’s form is valid or not. Invalid forms can have misleading
instances that may come across as being valid. We are misled by psychology but
logic and psychology do not go together. It is not something inherent in the form
that makes it capable of having “misleading” instances: we are misled by subjective
and broadly psychological factors. The instances of a valid argument form are all
necessarily valid.
It should not be surprising that truth and falsehood play this role in the definition
of argument validity. The conclusion of an argument is something new. It has to be
a new, previously unknown to us, sentence. If we do not need to derive it, it is not a
conclusion. When knowledge expands, this happens by drawing conclusions too.
The conclusions are not in the stock of knowledge we have had so far. This also
means that any expert draws conclusions not as an expert in her field but by per-
forming a logical operation. The new sentence, which we did not have until we drew
it from premises, is the conclusion: it is unacceptable and incorrect to draw a
conclusion that can be false even if our premises are true. You need to
note that the expression “if-then” in the preceding sentence is also one of those
expressions that give trouble and cause confusions. We are not saying that the prem-
ises have to be true: we are saying that if the premises are true, then the conclusion
has to be true as a matter of logical necessity for the argument to be valid.
To lose the game is to be facing the possibility (mere possibility) that your
conclusion may be false even though all the premises are true. This is invalidity.
Technically, as we have presented it, an instance of the given argument form with
all true premises and a false conclusion is called a counterexample to the given
form. We can then give yet more definitions of our concepts: An argument form is
valid if and only if it has no counterexample. An argument form is invalid if it has
any counterexample. We showed a counterexample to a given argument form
above but, in this text, we will not be working with attempts to conjure up coun-
terexamples to given forms. We will have mechanical procedures in sentential
logic, which we can apply or implement to determine if there is a counterexample
or not. We will be able to specify this counterexample. This is the case for two
methods we will study – the truth table and the tree method. Once our implemen-
tation of the method yields a counterexample, we determine that we have an
invalid argument. If the method yields no counterexample, we determine the given
argument form to be valid.
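The mechanical idea can be sketched as follows (our own brute-force version of a truth-table search, not the official method as it will be developed later): enumerate all assignments of truth values and report the first one, if any, that makes all premises true and the conclusion false.

```python
from itertools import product

def counterexample(premises, conclusion, atoms):
    # Return an assignment with all premises true and conclusion false,
    # or None if the form has no counterexample (and so is valid).
    for values in product([True, False], repeat=len(atoms)):
        w = dict(zip(atoms, values))
        if all(p(w) for p in premises) and not conclusion(w):
            return w
    return None

# Affirming the consequent: If P then Q. Q. |- P.  (An invalid form.)
cx = counterexample(
    [lambda w: (not w["P"]) or w["Q"], lambda w: w["Q"]],
    lambda w: w["P"], ["P", "Q"])
print(cx)  # {'P': False, 'Q': True} -- the counterexample row
```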
Let us see some examples of argument forms, with the logic-words in them boxed.
We see examples of valid and of invalid argument forms. Of course, we cannot
produce an exhaustive list.
If P, then Q. P. ⊢ Q.
If P, then Q. Not-Q. ⊢ Not-P.
If P, then Q. ⊢ If not-Q, then not-P.
Either P or Q. Not-P. ⊢ Q.
Either P or Q. Not-Q. ⊢ Not-P.
Either P or Q. If P, then R. If Q, then R. ⊢ R.
If P, then Q. If Q, then R. ⊢ If P, then R.
P and Q. ⊢ P.
P and Q. ⊢ Q.
P and Q. ⊢ Either P or Q.
If P, then (if Q, then R). ⊢ If P and Q, then R.
If P and Q, then R. ⊢ If P, then (if Q, then R).
It is not that it is not that P. ⊢ P.
If P, then Q. If not-P, then Q. ⊢ Q.
If P, then Q. If P, then not-Q. ⊢ Not-P.
If P, then Q and not-Q. ⊢ Not-P.
Not (P and Q). ⊢ Either not-P or not-Q.
Not (either P or Q). ⊢ Neither P nor Q. (Not-P and not-Q.)
It is important to note that the standard logic of sentences cannot handle certain
linguistic logic-words which require more advanced logics to accommodate a study
of their logical behavior. Such logic-words are called non-truth-functional. We are
ready to reveal a secret about what is happening here. The basic sentential logic is
truth-functional, as we say. This means that its symbols refer to functions – believe
it or not. The values that are inputs to those functions are true and false – and we call
them truth values. The foremost characteristic of a function is that it takes inputs
from a specified set to yield an output that is unique. The emphasis we need to place
here is on “unique.” Consider the following examples of functions – shown in an
informal fashion to cultivate intuitions. (For details on Set Theory and Functions,
see chapter 9.)
• ℊ+(2, 3) = 5 [This is the familiar algebraic function called addition: it has to be
defined as taking inputs from a set, the Domain of the function, and the specified
output for each pair of inputs belongs to a set called the Range of the function;
this is a binary function, which means that it always has to take two inputs. Let
us specify the domain as D(ℊ+) = ℕ, the set of natural numbers; the range is, then,
R(ℊ+) = ℕ, also the set of natural numbers. It is crucial that the output in each case
is a unique member of the range.]
• Examples of mappings, as we call them, which are not functions! This is because
the outputs are not always unique for all specified inputs or input pairs! One might
have more than one son, for instance:
son(x) = ?
daughter(x) = ?
teacher(x) = ?
student(x) = ?
• Here are more examples of functions, from the domain ℕ to the range ℕ. Some
are unary and some are binary or even ternary functions (defined as having,
respectively, one, two, or three inputs). The output is always one and, as we have already
stressed, it has to be unique if the specified mapping is to be functional.
ℊ1(x) = (x + 2)².
ℊ2(x) = 2x + 4x².
ℊ3(x, y) = √(x + y)³.
ℊ4(x, y) = x/y².
ℊ5(x, y, z) = x/(y + (x + y)²).
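The uniqueness requirement can itself be tested mechanically. In the sketch below (our own illustration; the family names are invented for the example), a mapping is represented as a list of input-output pairs and checked for whether any input receives two different outputs:

```python
def is_function(pairs):
    # A mapping is a function iff each input has exactly one output.
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False  # same input, two different outputs
        seen[x] = y
    return True

# son(x): one father may have more than one son -- not a function.
son = [("Priam", "Hector"), ("Priam", "Paris")]
# Addition pairs each pair of inputs with a unique output -- a function.
plus = [((2, 3), 5), ((1, 1), 2), ((2, 3), 5)]

print(is_function(son))   # False
print(is_function(plus))  # True
```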
We can go back to the language to see how we spot the logic-words that can be
interpreted through truth-functional connectives in some idiom of formal logic. The
special logic-words we have been regarding as fixed in the logical forms are defined
basically in terms of true and false. This is their meaning. For instance, the meaning
of “not” is: what turns true to false and false to true. The same for all the others: they
are defined by the conditions under which the whole sentence is true – given the
values of its ultimate simple components. And, conversely, if any phrase has a
meaning that is based only on truth conditions, that phrase is a truth-functional logic-word;
rest assured that we can express this word with our basic symbolic resources once we
develop some sophistication in the study of formal logic.
Non-truth-functional words cannot be defined in terms of true and false alone. Take
"before," for instance. You can find a pair of true sentences where, historically, the first
became true before the second; and you can also find a pair of true sentences where,
this time, the second became true before the first. Try to understand what is happening here.
2.1 Argument Validity 71
It is easy to see what is happening. From the standpoint of deductive reasoning,
meaning is a matter of true and false: ignore the content and note only that, in both
cases, both sentences are true. Thus, we have:
BEFORE(True, True) is True (in the first case) but BEFORE(True, True) is
False (in the second case.) This means that functionality breaks down: while we
have the same specified inputs (<True, True>), we do not have a unique output:
in one case we have True and in the other case we have False. Thus, "before" cannot
be expressed by a truth-functional operator.
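The breakdown can be made vivid in code. A short Python sketch (the scenario data are hypothetical): the standard connectives really are functions of truth values, whereas "before" assigns different outputs to the same input pair <True, True>.

```python
# Truth-functional connectives are literally functions from truth values to
# truth values: the same inputs always yield the same unique output.
def NOT(p):
    return not p

def AND(p, q):
    return p and q

# "before" is not truth-functional.  Two (hypothetical) temporal scenarios
# give the SAME truth-value inputs <True, True> but different outputs:
#   scenario 1: "P before Q" is True,  with P and Q both true;
#   scenario 2: "P before Q" is False, with P and Q both true.
scenarios = [((True, True), True), ((True, True), False)]

outputs_for_true_true = {out for inputs, out in scenarios if inputs == (True, True)}
# Two distinct outputs for one input pair: no function of truth values
# alone can express "before".
assert len(outputs_for_true_true) == 2
```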
Necessarily, if P then Q.
It is generally believed that P.
It is possibly not the case that Q.
Either it is always the case that P, or it is not necessarily the case that Q.
It is morally obligatory that P.
It is known that P.
It is P before Q.
It is P after Q.
It will be true that P in the future.
It has always been true that P.
It is P specifically at location x.
It is physically impossible that light does not bend around massive
objects.
NOTE: because "not" is within the scope of "physically impossible", we
cannot express it truth-functionally at all.
2.1.1 Exercises
1. Extract the argument forms of the given arguments. Take the steps shown above –
box in the logic-words and proceed to the extraction of the form.
a. Either it rains or the game is not canceled. But the game must be canceled.
Therefore, it is not raining.
[Attention: “must” is a logic-word but not one that can be scanned by the basic
sentential logic. This is what we explained above under “non-truth-functional
words.”]
b. Since it rains, the game is canceled but we do not go home. [The word “since”
has the logical, not the temporal sense.]
c. If you need to go there fast, you should take route X, but if you don’t care
about how fast you get there, then it doesn’t matter how you travel there.
d. If the game is cancelled, it must have rained. Therefore, if it did not rain, then
the game was not cancelled.
e. If it rains and the ball floats in a puddle of water, then the game is cancelled.
Therefore, if it rains, then if the ball floats in a puddle of water then the game
is cancelled.
f. If it rains, then the game is cancelled. Therefore, if it rains and the players are
drenched, then the game is cancelled.
2. Produce instances in English of the given argument forms.
a. If X but not-Y, then Z. Therefore, if not-Z, then either not-X or Y.
b. X and not-Y, but if not-Y then Z. Therefore, Z.
c. X and Y. Therefore, Y or Z.
d. Either X or Y, but not both X and Y; and not-X. Therefore, Y.
e. It is not true that if X then not-Y. Therefore, Y.
f. It is not the case that it is not the case that X. Therefore, X or Y.
2.2 Consistency
Suppose that we have sentences in our theory, which are symbolized as follows.
The enclosing boxes show logically possible worlds and each such world is symbol-
ized by “@j” with the subscript being a positive number.
We use "{" and "}" to enclose members of a set and we separate the members of
a set by commas. The set of sentences we have is: {Δ, Θ, Ξ}. This set is consistent
insofar as there is at least one logically possible world in which all the sentences are
true there. We have, let’s say, eight logically possible worlds; in two of them, all
sentences are true there; that is sufficient to determine that the set of sentences is
consistent.
@1: in this world none of the given sentences is true. Notice that this does not
condemn our set to being inconsistent. We need at least one logically possible world
in which all the sentences are true there. Focus on this to understand how “possibly”
works: at-least-one-state suffices to establish logical possibility. Some texts use the
term “satisfiability” for a set of sentences that can possibly be true together.
(Parenthetically, and for the sake of completeness, let us point out that this is not the
same as the concept of “possible worlds” used in modeling modal logics: ours is
merely a pedagogical device.)
[Diagram: boxed logically possible worlds @2 through @8, each showing which of the sentences Δ, Θ, Ξ are true there; in @6 all of Δ, Θ, Ξ are true, and in @7 none is true.]
We can think of these worlds as being presented through their exhaustive descriptions;
notice that we are only laying down conceptual tools to facilitate our understanding
and we do not need to enter into specific details (for instance, it should also be
understood that any sentence whatsoever is either true or false in every possible world
for this way of modeling to be used in certain applications, but this does not concern us.)
Since there is at least one logically consistent – logically possible – narrative we
can present in which all the sentences are true, it follows by definition that the
theory composed of these sentences is logically consistent. The narrative would be
about logically possible world @6: that is the one world in which all the given sentences
are true. It doesn't matter whether the narrative for logically possible world @6,
where all the sentences are true together, describes our actual world; all that matters is
that it depicts some logically possible world. Perhaps, as an example, given the depictions
we have, our actual world is @7. None of the given sentences is true there! But
this is a matter of empirical discovery and not of concern to deductive logic.
Here are instances or interpretations for the three sentences of our example that
show this kind of case – they are not true at all in our actual world but there is a
world, even if an alternative one, where they are all true together. It is not important
that this alternative world is alien or fantastic from the standpoint of how nature
works or how historical events have transpired. All that matters is the logical pos-
sibility that the sentences are all true together.
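In sentential logic this hunt for a world can be mechanized: enumerate all truth assignments and ask whether at least one makes every sentence true. A sketch, with our own hypothetical readings of Δ, Θ, Ξ as formulas in two atomic sentences:

```python
from itertools import product

# Hypothetical readings of the three sentences as functions of atoms p, q.
delta = lambda p, q: p or q
theta = lambda p, q: (not p) or q
xi    = lambda p, q: q

def is_consistent(sentences, num_atoms):
    # Each tuple of truth values plays the role of one logically possible world.
    # Consistency (satisfiability): SOME world makes ALL the sentences true.
    return any(all(s(*world) for s in sentences)
               for world in product([True, False], repeat=num_atoms))

print(is_consistent([delta, theta, xi], 2))  # True: the world p=True, q=True works
print(is_consistent([lambda p, q: p, lambda p, q: not p], 2))  # False: no such world
```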
2.3 Logical Status of a Sentence 75
2.2.1 Exercises
i. If some sentences are all false in some logically possible world, why doesn’t
that mean that the sentences cannot be consistent?
j. If we define logical necessity as the logical impossibility of being false, then
how can we state a definition of logical consistency by using "necessary"
instead of "possible?" <* This is hard: try starting by stating "a set of sentences
is consistent if and only if it is not logically necessary for them to be
all ---." Should you now refer to some or to all possible worlds? The difficulty
in this has to do with the lack of familiarity we have when it comes to dealing
with the concepts of necessity and possibility: notice that we have reduced
these concepts and their relations to the concepts of "all" and "some," with
these quantifiers used over logically possible worlds. Now the problem reappears
in a way, because the conceptual relationship of "all" and "some" is also
challenging. Focus on thinking how: "all" is the same in logical meaning as
"not-some-not" and "some" is the same as "not-all-not." >
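Exercise j turns on the duality hinted at in the note. Over a finite stock of possible worlds, "necessarily" behaves like all and "possibly" like any, so the equivalences all = not-some-not and some = not-all-not can be checked directly (the world data below are hypothetical):

```python
# Truth value of one sentence in each of four hypothetical possible worlds.
worlds = [True, True, False, True]

necessarily = all(worlds)  # true in ALL worlds
possibly = any(worlds)     # true in SOME world

# all = not-some-not, and some = not-all-not:
assert necessarily == (not any(not v for v in worlds))
assert possibly == (not all(not v for v in worlds))
print(necessarily, possibly)  # False True
```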
A sentence has a characteristic logical status because of the logical form which the
sentence shows or, as we can also say, instantiates or exemplifies or betokens. We
should not think that we have to deal with any thorny metaphysical issues about
what kinds of things those forms are. Having studied argument validity and consis-
tency, we noticed already that those characteristics were also a matter of form. It all
seems to come back to logical form in deductive logic. An alternative – not incom-
patible – way of thinking about the deeper structural reasons for the logical charac-
teristics that are deductive is to say that the meanings of the logic-words confer
these characteristics. Let us examine sentences, with a view to discerning their char-
acteristic logical status in each case, under both available approaches – the form-
approach and the logic-word-approach.
The logic-word is “if-then” in all three sentences. The logical form is an impli-
cational/conditional or if-then logical form. Any sentences could be substituted for
the sentences connected by the if-then connective logic-word; essentially, these sen-
tences occupy placeholders – indeed they can be thought of as being entered in the
place-holding open spaces that represent variables. But we cannot ignore, of course,
repeated sentences: we are still dealing with non-fixed, variable place-holding
shapes but those have to be the same variables: this means that, once again, any
sentence whatsoever can go in to fill the place of the variable but for each occur-
rence of the same variable the same sentence – whatever that sentence is – should
be put in. Let us now "see" the logical forms. And notice how we have connected
the logic-word approach with the logical form approach. As we execute the extraction
of the form, we may first showcase the "shape" or form more closely by bypassing
grammatical peculiarities of the natural language, English in this case, in the
following way. This way of putting things may well seem ungrammatical, but we are
actually getting closer to isolating the logical form, which we can subsequently
present as a shape by using different types of lines as variables.
Finally, we arrive at the logical forms written in a certain way (and we detect here
the role played by extraneous issues like conventions, and possibly psychological or
habitually trained tendencies, that may dictate why one convention rather than another is preferred.)
One of the forms (the form of sentence Θ) is true for any possible assignment of
truth values to the individual or atomic component sentences. This type of sentence
has the logical form whose logical status is that of a so-called tautology; other terms
we can use are logical truth or necessary truth. The other two sentences have logical
forms that have a logical status we call indeterminate or indefinite or contingent (or
the logical status of a contingency.) This means that in some logically possible
cases, the sentence is true and in some logically possible cases, the sentence is false.
These are logically possible cases we are talking about. For instance, the sentence
"the earth has exactly one moon" is indeed such a sentence. Its logical form in sentential
logic is that of a single line (if we still stick to the convention of representing
the logical shape of an individual sentence by a line-type.) It is fundamental to the
setup of deductive logic that an individual sentence can be logically possibly true
and can be logically possibly false – although not true and false in the same logically
possible case.
It might seem odd at first that the sentence “the earth has exactly one moon” is a
logically indeterminate (contingent) sentence – it has a logically indeterminate
structure or form. But the error in balking at this comes from mistaking actuality
(factual actuality, the historically given, or anything that is empirically settled) for
logical necessity. The sentence "the earth has exactly one moon" is not logically
necessary. It is certainly logically possible – it is not a logical contradiction, a sen-
tence whose logical form is false in every logically possible case, like the sentence
“the earth has one moon and the earth does not have one moon” for instance. But
this sentence can be false – it is logically possible that it be false: its form dictates
this and here is how we can make sense of this. An alternative state of affairs can be
constructed in which the sentence is false; this is not the actual state of affairs but it
is still a logically possible state of affairs. That it is not actual is not a matter of
deductive logic. Another way of seeing this is as follows: we could give a full
description of this alternative state of affairs – in which it is false that the earth has
exactly one moon – without running into logical nonsense or logical absurdity. But
if we try to give a full description of the presumed state of affairs in which "the earth
has one moon and the earth does not have one moon," we present a logically impossible
or nonsensical state of affairs. The problem with this nonsensical state is not
our psychological limitations or any epistemic (knowledge-related) deficits we may
have: the problem is logical. We can trace it back to the form (the form "______
and not-______" has the status of a logical contradiction.) Or we can also say that
this is because of the meanings of the logic-words "and" and "not."
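The three statuses can be read off a truth table mechanically. A minimal Python classifier (the helper below is our own, not the author's notation): a form is a tautology if its truth-table column is all True, a contradiction if all False, and contingent otherwise.

```python
from itertools import product

def classify(form, num_atoms):
    # Compute the form's truth-table column over all logically possible cases.
    column = [form(*row) for row in product([True, False], repeat=num_atoms)]
    if all(column):
        return "tautology"       # logical / necessary truth
    if not any(column):
        return "contradiction"   # logical falsehood
    return "contingent"          # indeterminate / indefinite

print(classify(lambda p: p or not p, 1))   # tautology
print(classify(lambda p: p and not p, 1))  # contradiction, e.g. "one moon and not one moon"
print(classify(lambda p: p, 1))            # contingent, e.g. "the earth has exactly one moon"
```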
In addition to the logical statuses of tautology (logical truth, necessary truth) and
logical contingency (indeterminate, indefinite status) we also have, of course, the
logical status of a contradiction (logical falsehood, necessary falsehood, logical
impossibility.) Some texts may be using the terms “validity” and “invalidity” respec-
tively for “logical truth” and “logical falsehood” but, of course, caution is needed to
distinguish this terminology clearly from the parallel terms of “valid” and “invalid”
applied to characterize forms of arguments. The concepts of argument validity and
of consistency apply to characterizations of sets or collections of forms of sentences,
but the logical status of a sentence, examined in the present section, applies strictly
to the case of the logical forms of individual sentences. Furthermore, an argument, which has
to be either valid or invalid, is conceptually different from a mere set
of sentence forms, which has to be either consistent or inconsistent; as we ought to
know by now, an argument form is indeed a collection of sentence forms but with
one of those sentence forms (the putative conclusion) distinguished; on the other
hand, a mere collection of sentence forms (which is consistent or inconsistent) has
no conclusion-sentence form in it. It is a common error to confuse applications of
these terms.
Finally, a sentence like "a triangle has three angles" is, as we say, analytically
true (we can say also that it is semantically analytically true and even, although
rather confusedly, necessarily true but not logically necessarily true) but it is not
true by virtue of its logical form; it is necessarily true because of the meanings of
its non-logical words (like "triangle" and "angle".) Thus, the inference from "d is
a triangle” to “d has three angles” is not a logically valid inference: it has to be
accepted, of course, on grounds of what we may call analyticity but it should not
be expected that this is a matter of argument validity because the inference in this
case does not depend on logical form and does not go through because of the
meanings of logical words but thanks to the meanings of the non-logical words in
the sentences. A sentence that is not analytically true or analytically false, is called
synthetic.
2.3.1 Exercises
3. Characterize the following sentences in terms of the meanings they express: are
they tautologies (logically true), contradictions (logically false), logically indefinite
(possibly true, possibly false), analytically true, analytically false, or synthetic?
a. Planet X has two moons.
b. Either planet X has two moons or it has three moons.
c. Either planet X has two moons or it does not have two moons.
d. If planet X has two moons, then it has two moons.
e. Planet X has two moons but it does not have two moons.
f. Planet X has two moons but it does not have three moons.
g. Given that planet X has two moons, the earth does not have as many moons
as planet X does.
h. The celestial body named Morning Star is the same as that named Evening Star.
i. If neither planet has two moons, then both planets cannot have four moons.
Chapter 3
Formal Logic of Sentences, Sentential
Logic (also called Propositional Logic
and Statement Logic)
mathematics although the topic of the relationship between the two falls outside the
scope of our present interests.
One could construct many formal systems that serve as logical languages. There
may be matching motivating considerations. For instance, in a famous case in the
history of logic, the motivating claim that statements about future events should be
assigned a third truth value (understood as neither true nor false but indeterminate)
gave rise to the construction of a motivated formal system for a three-valued logic.
As a formal language, with its symbolic grammar and stipulated operations, such a
language would not be controversial: in fact, such languages with multiple values
have been used to investigate and determine whether axioms (of axiomatically
defined logical systems) are independent (which means that an independent axiom
cannot be proved from the other axioms of the system.) Controversies
can arise only when it comes to interpreting the languages, and also with respect to
applicability and (in the case of non-standard multivalent formal languages) about
the philosophic implications of such constructions regarding the concept of truth as
such. In our present text, we do not venture into alternative or non-standard logics
with more than the two values, true and false.
Without delving into minutiae about such matters, no matter how fascinating
they may be to the student of philosophic logic, we continue now for our purposes
with the construction of our formal language for sentential logic. This is a language
for the standard, also called classical or orthodox, sentential logic. You can easily
find many such formal languages; indeed, any book you may study in logic, which
addresses the standard logic, would have to be deploying a formal language for the
standard sentential logic: all such formal languages (including the one we present
below and even languages not yet constructed for this purpose) should be thought of
as notational variants of each other – as idioms. They are members of the same fam-
ily of idioms which together comprise a formal language. If the rules of grammar
are themselves different, we should then be dealing with different dialects. What is
important is that these languages are not different “logics” (in the sense of “logic”
that refers to formal language); they are like different idioms, all of which belong to
the same language.
Before we enter into the first formal symbolic language we will construct, we lay
out some terminology. A formal language or symbolic language for a formal sys-
tem, like the one we are about to construct under the name ∑, is understood to be
like an idiom in an open-ended array of variations that can be constructed for the
same type of logic: such variations have different symbolic resources and grammars
or syntactical regulations about how the symbols are to be arranged notationally;
possibly, the symbols and grammatical rules can be overlapping, although that need not be the case.
3.1 Formal Languages: Variations, Extensions, and Deviations 83
We now lay out the formal grammar of our language ∑. Our formal language has
symbols in it. Nothing that is not officially included as a symbol is recognized. The
symbols themselves have names. A symbol is supposed to denote something –
to refer to something. Symbols do not refer to real things that are empirically accessible.
The referents are abstract objects, and we will assign names to those
objects. The language ∑ must have a grammar or syntax if it is to be a proper lan-
guage, and we supply the needed symbolic amenities along with a strictly specified
grammar and syntactical rules. You should think of this setup as a game for how to
use and move around and concatenate (put next to each other) all those symbolic
resources you are given in the formal language.
Finally, we will also have symbols made available not for ∑ but in what we call the
Metalanguage of ∑: ℳ(∑). This is a made-up language but it is really English (well,
as much of English as we need) plus some symbols we make available for our conve-
nience. We show those symbols below. It should be understood that the metalinguistic
symbols are not official in the way the symbols of ∑ are and are considered to have a
flexible and convenient grammatical regulation. We can use, as we should say, copies
of the symbols of ∑ in ℳ(∑). It is all a matter of convenience. The formal language
itself and the Metalanguage are different languages, and one of them only is the official
formal language – that is ∑, which logicians call the Object Language (while ℳ(∑) is
the Metalanguage for ∑.) Why do we need ℳ(∑)? We need it to talk about ∑.
The discovery of the use of variables has been one of the most significant break-
throughs in the history of Mathematics, and of Logic. It seems impossible to grasp
the fundamentals of deductive reasoning without also discerning some principle
that can lead naturally to the discovery of variables. The prospect of making prog-
ress in a systematic study of reasoning depends strongly on using variables for sev-
eral reasons, among which we can mention the following: brevity of expression can
be achieved, avoiding clutter and making it possible to develop and study theoretical
claims and proofs; representations can be rendered perspicuously – in a fashion that
is easy to survey and work with; and, finally, use of variables facilitates understand-
ing and respecting certain fundamental and foundational distinctions in deductive
reasoning. For example, if you try to express the following equation without using
variables, you will end up using an unwieldy, long, and obscuring mode of speaking.
(x + y)² = x² + y² + 2xy
With the use of quantifiers over variables – which will become available to us when
we turn to the study of predicate logic – the formulation reads: "for all x and all y from a
certain universe of things – numbers – the square of the sum of x and y is equal to
the sum of the squares of x and y and double the product of x and
y." Spelled out entirely in words, this is a cumbersomely cluttered expression. By using variables, we can actually
just write out the equation as we did above. Notice that variables are like blanks that
can be filled in correctly by symbols that have been established to refer to the appro-
priate kind of thing. Although this seems odd, only because it is unfamiliar, consider
the preceding equation written as:
3.2 Grammar of our Formal Language of Sentential Logic: ∑ 85
∑
Symbol, Name of Symbol, and What the Symbol Refers to
{p, q, r, …, p1, p2, …}: Name: Simple/Atomic/Individual Variables; Referent: Meanings of Sentences
{A, B, C, …, A1, …}: Name: Translation Atomic Variables; Referent: Meanings of Translated English Sentences
Connectives: Symbol, Name of the Symbol, and Name of the Connective
~: Name of the Symbol: Tilde; Name of the Connective: Negation
∙: Name of the Symbol: Dot; Name of the Connective: Conjunction
∨: Name of the Symbol: Wedge; Name of the Connective: Inclusive Disjunction
⊃: Name of the Symbol: Horseshoe; Name of the Connective: Conditional / Implication
≡: Name of the Symbol: Triple Bar; Name of the Connective: Biconditional / Equivalence
(, ): Left and Right Parentheses [also called Auxiliary Symbols]
--- The tilde is the only Monadic or Unary or One-Place connective symbol.
All the other connective symbols are Binary or Dyadic or Two-Place.
Compare operation symbols in basic arithmetic:
−3: the name of the operation symbol is "minus" and it is monadic/unary/one-place.
2 + 3: the name of the operation symbol is "plus" and it is binary/dyadic/two-place.
The same is the case for the operation symbols {×, ÷}.
ℳ(∑).
φ, ψ, …, φ1, … are metalinguistic variables.
We use these variables to talk about well-formed formulas of ∑. We can
bring into our metalanguage symbols from the Object Language ∑, but, in ∑
itself, we do not have a symbol like the one we are looking for. We need these
metavariable symbols to allow us to talk about any well-formed formulas –
not only simple but also compound formulas.
When we refer to symbols from ∑, we place them within special metalin-
guistic symbols that are called “corners:” “⌜” and “⌝” . When we use and talk
about metalinguistic symbols, we don’t need such corners. That is because
those are functioning like names already within our metalanguage.
When we don’t use but, instead, talk about or mention symbols we place
quotation marks around them – but for symbols from the Object Language ∑
we use corners as we have said.
We now present the Grammar of our formal language. Notice that we present
the grammar of ∑ in ℳ(∑).
GRAMMAR of ∑.
Grammatically correct symbolic expressions are called well-formed for-
mulas. The abbreviation we use for “well-formed formula” is wff and, for the
plural, wffs.
Some symbols belong not to the object language ∑ but to the metalanguage of ∑. This is the case for the Greek lower-case letters {φ, ψ, …, φi, …,
ψj, …} – allowed to have subscripts from the positive integers. We need these
special symbols to talk about wffs; such wffs may or may not be atomic for-
mulas – the atomic formulas are the simple or atomic letters we have made
available in our formal grammar. Certainly, once we specify our grammar, we
will be making it possible for compound symbolic expressions (not atomic)
to be well-formed – to be wffs. Accordingly, when we want to indicate that
we are speaking of any wff – not necessarily an atomic letter – we need some
metalinguistic symbol to say that! For this purpose, we use the lower-case
Greek letters specified above.
1. We do not need to use parentheses to enclose the entire wff. Such parentheses
are called external or outer parentheses. In other words, we are
stipulating that we may dispense with – omit, not use – outer parentheses.
We can do this – we can call it a "liberalization" of our grammar – because we will still be able to
figure out what the boundaries of the whole wff are. If we ran into uncertainty
about this, we would have to use the parentheses.
2. Another simplification has to do with formulas in which the connective sym-
bols {∨, ∙, ≡} are involved; it applies only to these connective symbols and
not to {⊃}. This simplification is also related to the use of parentheses – on
which we will have more to say later. We may omit parentheses as follows:
• ⌜p ∨ (q ∨ r) ⌝ and ⌜(p ∨ q) ∨ r⌝ can be written as ⌜p ∨ q ∨ r⌝
• ⌜p ∙ (q ∙ r) ⌝ and ⌜(p ∙ q) ∙ r⌝ can be written as ⌜p ∙ q ∙ r⌝
• ⌜p ≡ (q ≡ r) ⌝ and ⌜(p ≡ q) ≡ r⌝ can be written as ⌜p ≡ q ≡ r⌝
In other words, we may omit parentheses when we have iterations of these
three connective symbols. An explanation of this is due. The two expressions
we see, in each case, with the parentheses left in place are mutually equivalent:
they have the same logical meaning as each other. "Logical meaning"
is a technical term, you should remember. It means that, for the SAME true/
false values assigned to the atomic letters in the two wffs, the whole wffs obtain the
same true/false value. Since logical meaning just is the truth value obtained for each
assignment of truth values to the parts, the two wffs have the same logical meaning.
Therefore, we can move the parentheses any way we want – we can go
from the one symbolic expression to the other symbolic expression (we are
talking about the expressions with the parentheses still left in them.) But, if
the movement of parentheses has no impact on logical meaning, we don't
need to use parentheses at all! As you will soon find out, parentheses are used
only for the purpose of distinguishing which one of different meanings is
expressed. Doing this prevents ambiguity. You can see in the grammatical
instructions that the purpose of parentheses is said to be to prevent ambiguity.
But here we have no ambiguity, since the meanings are the same no matter how
the parentheses are placed.
We say that the connective symbols {∙, ∨, ≡} have the property of associa-
tivity. What this means is more or less what has been explained about what
happens when the parentheses are shifted as indicated above. You might find
this complicated but, to facilitate understanding, consider the following
examples from simple arithmetic.
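Both the arithmetic analogy and the truth-table fact behind associativity are easy to check exhaustively; here is a sketch (the example formulas are our own):

```python
from itertools import product

# Arithmetic analogy: addition is associative, so parentheses are idle there too.
assert (2 + 3) + 4 == 2 + (3 + 4)

# Wedge: shifting parentheses never changes the truth value.
left  = lambda p, q, r: (p or q) or r   # (p ∨ q) ∨ r
right = lambda p, q, r: p or (q or r)   # p ∨ (q ∨ r)
assert all(left(*row) == right(*row)
           for row in product([True, False], repeat=3))

# The horseshoe, by contrast, is NOT associative -- which is why it is
# excluded from the parenthesis-dropping convention.
imp = lambda a, b: (not a) or b         # truth table of the horseshoe
print(any(imp(imp(p, q), r) != imp(p, imp(q, r))
          for p, q, r in product([True, False], repeat=3)))  # True
```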
• We do not write “(A)” or “~ (A)”; this is clearly not allowed by our grammar.
The prohibition is consistent with the regulation we have laid down: we only use
parentheses for the purpose of preventing and removing ambiguity and no ambi-
guity arises in the case of individual letter symbols. Of course, we could have
legislated the use of parentheses in a more permissive fashion, and it is possible
to find texts in which parentheses are placed around atomic variable letters.
We may reflect briefly on the reasons we do not need parentheses around indi-
vidual (atomic) variable letters. ⌜A,⌝ for example, is unambiguous and, hence,
⌜(A)⌝ is not needed. (The use of the so-called Quine corners, “⌜” and “⌝”, is to
enclose symbolic expressions from the object language ∑, which are mentioned
rather than used when appearing in our metalanguage.) Any atomic letter, like ⌜A⌝
for instance, is not bound to itself by means of any connective symbol, hence we do
not need parentheses: this is an interesting concept we need to grasp now – the con-
cept of “binding” – in order to have a deeper understanding of how the connection
between removing ambiguity (disambiguating) and using parentheses works. Let
us show – in our metalanguage – the binding of connective symbols to non-connective
expressions by using "↪" and "↩". We place what is bound by the
connective symbol within boxes. In our grammar, as we have stipulated, the unary
connective symbol is written in prefix notation and the binary connective symbols
are placed in between the connected symbolic expressions (or, in infix notation.)
Notice the consequences for how the binding diagrams below look.
• (p ∨ q) ⊃ (s ∙ ~ t)
○○ binding(⊃): (p ∨ q) ↩⊃↪ (s ∙ ~ t)
○○ binding(∨): ( p ↩∨↪ q ) ⊃ (s ∙ ~ t)
○○ binding(∙): (p ∨ q) ⊃ (s ↩∙↪ ~ t )
○○ binding(~): (p ∨ q) ⊃ (s ∙ ~↪ t )
Now we consider a different symbolic expression from the one of the preced-
ing example: this well-formed formula we consider next has different placements
of parentheses; the binding attachments of connectives to sub-expressions of the
formula are different; this formula does not have the same logical meaning –
when interpreted within a system with truth values – as the preceding one. Once
again, the diagram presented below appeals to intuitions. We have four connec-
tive symbols in this formula – as we had in the one of the preceding example –
and we analyze, diagrammatically, the binding forces of the connective symbols,
one by one. The binding for each connective consists in the symbols that are
controlled by the symbol. It is important to notice that we do not attach any hier-
archical ranking to the binding strengths. All binary connective symbols are
treated as equal in terms of binding and it is only the symbols’ placements in the
formula that determine the bindings (or, as we will call them soon, the scopes.)
We will try to understand why parentheses are needed because of this equality of
binding forces. (Notice also that the fact that our grammar legislates that the tilde
is placed in front, in prefix notational position, has consequences too, as we will
explain.)
90 3 Formal Logic of Sentences, Sentential Logic (also called Sentential Logic…
• p ∨ (q ⊃ (s ∙ ~ t)): p ∨ (q ⊃ (s ∙ ~ t))
○○ binding(∨): p ↩∨↪ (q ⊃ (s ∙ ~ t))
○○ binding(⊃): p ∨ ( q ↩⊃↪ (s ∙ ~ t) )
○○ binding(∙): p ∨ (q ⊃ (s ↩∙↪ ~ t ))
○○ binding(~): p ∨ (q ⊃ (s ∙ ~↪ t ))
Without appropriately placed parentheses, the following symbolic expression is
not a well-formed formula. It could be one or the other of the formulas of the pre-
ceding two examples. Notice how binding uncertainties are related to – and illus-
trate the – ambiguity. This uncertainty is not a psychological matter, of course, but
it is a structural defect in the writing of the ambiguous symbolic expression relative
to the grammar we have stipulated. (To be sure, there are grammars, with conventions different from the ones we have stipulated, in which this following expression can be read and is not ambiguous.) But for our grammar this symbolic expression is ambiguous; it cannot possibly be "scanned." Compare the different
possibilities for binding – scope – that we obtain: we cannot reasonably decide
which one of the two possibilities applies – hence, we have ambiguity. Not every
connective symbol suffers from ambiguity, of course, as you can see. But some
symbols (two in this case) present this defect – ambiguity of scope, which is a con-
sequence of the fact that we have not made arrangements for the binding strengths of the symbols to differ – all symbols have the same "right" to binding, so to
speak. We indicate the ambiguous connective symbol (in this metalinguistic presen-
tation) by a “?” superscript and we show the available options, between which we
do not know how to decide based on our grammatical conventions. There can be two
or more options of “reading” the connective’s binding to engender ambiguity (in our
example we have two options, which is the minimum number of options to have
ambiguity.)
• p ∨ q ⊃ (s ∙ ~ t)
○○ p ∨ q ⊃ (s ∙ ~↪ t) ==not ambiguous, there is only one option
○○ p ∨ q ⊃ (s ↩∙↪ ~ t) ==not ambiguous, there is only one option
○○ p ∨ q ⊃? (s ∙ ~ t) ==ambiguous: two options (shown below)
• p ∨ q ↩⊃↪ (s ∙ ~ t) ==the horseshoe binds only ⌜q⌝ on its left
• p ∨ q ↩⊃↪ (s ∙ ~ t) ==the horseshoe binds all of ⌜p ∨ q⌝ on its left
○○ p ∨? q ⊃ (s ∙ ~ t) ==ambiguous: two options (shown below)
• p ↩∨↪ q ⊃ (s ∙ ~ t) ==the wedge binds only ⌜q⌝ on its right
• p ↩∨↪ q ⊃ (s ∙ ~ t) ==the wedge binds all of ⌜q ⊃ (s ∙ ~ t)⌝ on its right
Notice that, depending on what is bound by a connective, other connectives may have to "adjust" accordingly what they, in turn, bind. In our example, the dot does not suffer from ambiguity thanks to the placement of parentheses.
Without the extant parentheses symbols, we would have more disambiguating
options – more possible well-formed formulas as options. It is also remarkable that
the tilde cannot be ambiguous. There is a reason for this. Notice that the equality of
binding forces we talked about applies to the binary connective symbols. When it
comes to the tilde, on the other hand, it is understood to be bound rigidly to the well-
formed formula that immediately follows it. Since an atomic variable letter by itself
is a well-formed formula, this applies also in the case of the tilde placed in front of
3.2 Grammar of our Formal Language of Sentential Logic: ∑ 91
an atomic letter. To indicate rigid attachment (meaning that the binding is only, or exactly, to what follows to the right), we use the metalinguistic symbol "⤠".
• ~⤠ p ∙ q
○○ NEVER to be read: ~↪ p ∙ q
○○ Hence, NOT to be read: ~ (p ∙ q)
○○ Therefore, no ambiguity between:
• ~ p ∙ q and
• ~ (p ∙ q)
• Accordingly, we don’t need: ~ (p) ∙ q
• To indicate that the tilde binds more than the following atomic letter, we
MUST use parentheses:
• ~ (p ∙ q)
There are grammars in which differential strengths of binding attachment are stipulated for the various binary connective symbols; this makes it possible to spare parentheses – to use fewer parentheses – without incurring ambiguity. There is also a
so-called Polish notation, which places binary connective symbols in a prefix nota-
tion – not in infix placements, as we have done – and which has no need for paren-
theses at all. We give brief examples of what may happen when such grammars are
available but you may skip the text within the following box.
Suppose that we stipulate that ⌜∨⌝ and ⌜∙⌝ bind more strongly than ⌜⊃⌝
and ⌜≡⌝. The convention for the binding by the tilde remains as we have
arranged it. Stronger binding means attachment to what follows rather than to
outlying symbols. Notice how we may save parentheses, without risking
ambiguity, in some cases (although it is not possible to avoid parentheses
altogether.) Remember that we show rigid binding or stronger binding by "⤠", and by "⤟" to the left, while we now indicate looser binding by "↩" and "↪". Using these adjusted symbols, notice that the tilde is rigidly attached
(only to the right, since it is a unary symbol.)
== (p ⤟∨⤠ q) ↩⊃↪ (s ⤟∙⤠ ~⤠ t)
We may omit parentheses without ambiguity:
== p ⤟∨⤠ q ↩⊃↪ s ⤟∙⤠ ~⤠ t ==hence, we can write:
== p∨q⊃s∙~t
But, the following needs parentheses:
== p ∨ (q ⊃ (s ∙ ~ t))
Without parentheses, the expression is automatically the well-formed formula
of the preceding example. Notice how the hierarchically differential binding strengths can still be represented accurately – but the wedge, now, attaches rigidly to the entire within-parentheses expression to its right (since this is the intention.)
== p ⤟∨⤠ (q ↩⊃↪ (s ⤟∙⤠ ~⤠ t))
Here are other examples from this grammar, which is different from ours and which we may dub "∑⤠". Next to each well-formed formula of ∑⤠ we show the grammatically correct formula of our language ∑ that represents the same sentential form.
∑⤠ ∑
p≡~t∨q p ≡ (~ t ∨ q)
~p⊃~q∙~r ~ p ⊃ (~ q ∙ ~ r)
~t∨~q⊃s∨~q (~ t ∨ ~ q) ⊃ (s ∨ ~ q)
∑⤠ ambiguous    ∑⤠-unambiguous       ∑
p≡t⊃q           p ≡ (t ⊃ q)          p ≡ (t ⊃ q)
                (p ≡ t) ⊃ q          (p ≡ t) ⊃ q
p⊃q⊃~r          p ⊃ (q ⊃ ~ r)        p ⊃ (q ⊃ ~ r)
                (p ⊃ q) ⊃ ~ r        (p ⊃ q) ⊃ ~ r
t⊃q⊃s∨~q        t ⊃ (q ⊃ s ∨ ~ q)    t ⊃ (q ⊃ (s ∨ ~ q))
                (t ⊃ q) ⊃ s ∨ ~ q    (t ⊃ q) ⊃ (s ∨ ~ q)
The numbers of left and right parentheses must be equal; if not, the given symbolic expression must be declared to be not well-formed, not grammatically correct, ill-formed: such an expression does not "scan;" we cannot read it; it is meaningless in ∑.
We refer to connectives in the following section. Strictly speaking, the term con-
nectives should be used when we provide a semantics for our formal system; this is
done when the definitions are given over the set of truth values {T, F}. A more
appropriate term for uninterpreted connectives is “operators:” the interpretations of
the operators, given by defining those operators over the truth values {T, F}, as we
do in 4.3, furnish us the connectives. We disregard this and refer to connective sym-
bols throughout: we do this to avoid a distracting transition from the term “opera-
tors” to the term “connectives”.
Let us start by discussing a concept called the scope of the connective symbol. This
concept plays a significant role in many of the operations we will need to attend to.
Consider the following symbolic expression, which is a well-formed formula (wff).
φ1 = p ⊃ q
This is well-formed; it is a wff (well-formed formula) of ∑. The only connective
symbol is what we have called the horseshoe. This is a binary connective symbol; it
must have two input symbols – and it does. In this case the input symbols are atomic
sentential variables. Our grammar has stipulated that atomic sentential variables are well-formed. The connective symbol, which is the horseshoe in this case, is in the infix position (it is between the two joined atomic variables.) This is all well-formed. We
say that the two input symbols are within the scope of the connective symbol. We
can write this as follows. I am using the set-brackets, again, even though we have
not studied Set-Theoretical concepts yet. (But you can find Set Theory covered in
chapter 11.) It is useful to know that a set – in this case a collection of symbols as
members – is defined as a collection of items such that for any given item we can
tell if this item is or is not in the set. We say that a set has members or elements. We
don’t repeat a member symbol more than once. We can define the empty set, a set
with no members at all, which is usually symbolized as “∅”. We use “{” and “}” to
enclose the symbols that name the members of the set.
Scope(⊃) = {p, q}
If we were to query about the scope of ⌜∨⌝ in the given formula, we would have
to say that this is the empty set. (Can you tell why there is only one empty set?)
Scope(∨) = ∅
Now, we are given a formula with more than one connective symbol. We will be
discussing the use of parentheses in the next section; for now, you are just treated to
scanning wffs (well-formed formulas) with parentheses.
φ2 = ~ (p ⊃ q)
Consider the scopes of the two connective symbols in the given wff.
Scope(~) = {p, ⊃, q}; Scope(⊃) = {p, q}.
We consider within the scope of a connective symbol all the symbols that are
regulated by the symbol – not including parentheses. The symbol itself is not within
its own scope. This is important. In the second given wff, the scope of ⌜~⌝ is larger
than the scope of ⌜⊃⌝. This means, straightforwardly, that there are more symbols
within the scope of ⌜~⌝ than within the scope of ⌜⊃⌝. It is also noteworthy that ⌜⊃⌝
falls within the scope of ⌜~⌝. In the first case, of course, we only have one connec-
tive and the issue of relative scope does not arise.
Consider examples from language:
The formula φ3 has a different meaning from the formula φ4. Although they both
have two occurrences of the horseshoe, the scopes are different; in the second for-
mula, the first occurrence has a larger scope than the second occurrence. It was the
other way round in the first formula.
We will provide more examples in 4.1.1. For now, let us tackle a wff with many connective symbols. This might seem formidable but we don't have to alter anything we have said so far to deal with a case like this. For the sake of convenience,
we are superimposing metalinguistic symbols – superscripts to mark the occur-
rences of connectives – on the formula. Note first that the given wff has five connec-
tive symbols. We mark occurrences of the same symbol from left to right. We then
track carefully what lies within the scope of each connective symbol.
φ5 = ~ (p ⊃ (~ q ⊃ ~ t))
Imposition of Superscripts to Mark Multiple Occurrences.
φ5 = ~1 (p ⊃1 (~2 q ⊃2 ~3 t))
Scope(~1) = {p, ⊃1, ~2, q, ⊃2, ~3, t}
Scope(~2) = {q}
Scope(~3) = {t}
Scope(⊃1) = {p, ~2, q, ⊃2, ~3, t}
Scope(⊃2) = {~2, q, ~3, t}
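These scope computations can be mechanized. The following is a minimal Python sketch (my own illustration, not the book's apparatus): a recursive-descent parser for fully parenthesized wffs – the grammar assumed is wff := atom | "~" wff | "(" wff op wff ")", with single-letter atoms – followed by a routine that lists the scope of each connective occurrence.

```python
# A minimal sketch (an illustration, not the book's own apparatus): parse a
# fully parenthesized wff, then list the scope of each connective occurrence.

def tokenize(s):
    """Split a formula string into single-character tokens, ignoring spaces."""
    return [c for c in s if not c.isspace()]

def parse(tokens, i=0):
    """Return (node, next_index). A node is ('atom', letter),
    ('~', child), or (op, left, right) for a binary connective op."""
    t = tokens[i]
    if t == '~':
        child, j = parse(tokens, i + 1)
        return ('~', child), j
    if t == '(':
        left, j = parse(tokens, i + 1)
        op = tokens[j]                      # the infix binary connective
        right, k = parse(tokens, j + 1)
        assert tokens[k] == ')'
        return (op, left, right), k + 1
    return ('atom', t), i + 1               # an atomic variable letter

def symbols(node):
    """All connective and atomic symbols of a subtree, parentheses excluded."""
    if node[0] == 'atom':
        return [node[1]]
    if node[0] == '~':
        return ['~'] + symbols(node[1])
    return symbols(node[1]) + [node[0]] + symbols(node[2])

def scopes(node, out=None):
    """Collect (connective, scope) pairs; a connective's scope is the list
    of symbols it governs, the connective itself excluded."""
    if out is None:
        out = []
    if node[0] == '~':
        out.append(('~', symbols(node[1])))
        scopes(node[1], out)
    elif node[0] != 'atom':
        out.append((node[0], symbols(node[1]) + symbols(node[2])))
        scopes(node[1], out)
        scopes(node[2], out)
    return out

tree, _ = parse(tokenize('~ (p ⊃ (~ q ⊃ ~ t))'))
for connective, scope in scopes(tree):
    print(connective, scope)
```

Running this on φ5 reproduces the scope listings above, with repeated symbols shown as separate occurrences rather than collapsed into a set.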
We need also to understand why and how parentheses are used in this formal gram-
mar. The instructions we are given in our grammar (also called syntactical instruc-
tions) only tell us to use parentheses to prevent ambiguity – and the instruction is to
use parentheses only for this purpose (which we may also call “disambiguation.”)
We will start by discussing this matter.
p∨q∙r
The above symbolic expression has symbols that are available in the grammar of ∑. It only has such symbols. The connective symbols in this expression are {∨, ∙}.
But this symbolic string is not well-formed. It lacks parentheses and – what is
equally important – without parentheses, it is ambiguous. What we mean by ambi-
guity is this: We have ambiguity when the competent reader cannot figure out which
one meaning is meant out of a plurality (two or more) of possible meanings. In this
case, there are two possibilities and, without parentheses, we can’t tell which one of
the two is presented!
Possibility 1: p ∨ (q ∙ r)
Possibility 2: (p ∨ q) ∙ r
These two symbolic strings do not express the same meaning. If they did, there
would be no problem. The first wff captures the logical form of an inclusive disjunc-
tion – an either-or-and/or-both-meaning. The second, however, captures the logical
form of a conjunction – an and-meaning. These are different logical meanings: they do not always have the same truth values (true/false) when the same truth values are assigned to their atomic parts. For the and-meaning to be true, both the joined sentences must be true; but the either-or-and/or-both-meaning can be true even if only one of its joined sentences is true.
The first logical form has the wedge as the connective symbol with the widest or
largest scope; but, in the second form, the dot has the largest scope. The first is an
inclusive disjunction whereas the second is a conjunction.
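That the two disambiguations really do differ in logical meaning can be checked by brute force; the sketch below (Python, my own illustration) evaluates both readings under all eight truth-value assignments and collects those on which they disagree.

```python
# A quick sketch: comparing the two disambiguations of p ∨ q ∙ r under
# every assignment of truth values to the atomic sentences.
from itertools import product

def possibility_1(p, q, r):
    return p or (q and r)        # p ∨ (q ∙ r)

def possibility_2(p, q, r):
    return (p or q) and r        # (p ∨ q) ∙ r

disagreements = [(p, q, r)
                 for p, q, r in product([True, False], repeat=3)
                 if possibility_1(p, q, r) != possibility_2(p, q, r)]
print(disagreements)   # → [(True, True, False), (True, False, False)]
```

The two readings disagree on exactly the assignments where ⌜p⌝ is true and ⌜r⌝ is false: the disjunctive reading comes out true, the conjunctive reading false.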
• ⌜φ ∨ ψ ∨ χ⌝ is well-formed.
• ⌜φ ∙ ψ ∙ χ⌝ is well-formed.
• ⌜φ ≡ ψ ≡ χ⌝ is well-formed.
• ⌜φ ⊃ ψ ⊃ χ⌝ is not well-formed: the connective of material implication does not
have the property of association: the two possible placements of parentheses
yield formulas that do not have the same logical meaning.
• ⌜φ ⊃ (ψ ⊃ χ)⌝ does not have the same logical meaning as ⌜(φ ⊃ ψ) ⊃ χ⌝
One more observation we need to make is this: obviously, we could always ques-
tion if the tilde applies to only the first succeeding symbol or to more than one suc-
ceeding symbol, if there are more such symbols. Does this mean that we should
regard formulas with tildes as needing disambiguations? Or, perhaps, should we
stipulate, as it has been done in some texts, that we use parentheses to enclose the
symbols in the scope of the tilde even if the scope of the tilde has only one symbol
in its scope? This latter option would compel us to regard as well-formed the formula ⌜~ (p)⌝ instead of ⌜~ p⌝. Instead, we have in our grammar that no parenthesis is
required in the case in which only one symbol is in the scope of the tilde; this also
compels us to take at face value that the scope of the tilde has been correctly indi-
cated as applying to the one symbol that succeeds it: we do not raise questions about
that and, instead, assume that no such error has been committed.
• p⊃q≡t
• Disambiguations: (p ⊃ q) ≡ t \\ p ⊃ (q ≡ t)
• p∨~q∙r
• Disambiguations: p ∨ (~ q ∙ r) \\ (p ∨ ~ q) ∙ r
• ~~p⊃p≡~q
• Disambiguations: ~ ~ p ⊃ (p ≡ ~ q) \\ (~ ~ p ⊃ p) ≡ ~ q
• ~ (p ≡ q ∨ s ⊃ t)
• Disambiguations: ~ ((p ≡ q) ∨ (s ⊃ t)) \\ ~ (((p ≡ q) ∨ s) ⊃ t)
• ~ (p ≡ (q ∨ (s ⊃ t))) \\ ~ (p ≡ ((q ∨ s) ⊃ t)) \\ ~ ((p ≡ (q ∨ s)) ⊃ t)
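Such disambiguation lists can be generated mechanically. Below is a Python sketch (a hypothetical helper, not from the text; it handles only flat strings of atoms and binary connectives, not tildes): every choice of main connective is tried in turn, and both sides are disambiguated recursively.

```python
# A sketch: enumerate all disambiguations of a flat atom/connective string by
# choosing each binary connective as the main (largest-scope) one in turn.

def disambiguations(tokens):
    """tokens alternates atoms and binary connectives,
    e.g. ['p', '≡', 'q', '∨', 's', '⊃', 't'] for p ≡ q ∨ s ⊃ t."""
    if len(tokens) == 1:                 # a lone atom: nothing to disambiguate
        return [tokens[0]]
    results = []
    for i in range(1, len(tokens), 2):   # each binary connective position
        for left in disambiguations(tokens[:i]):
            for right in disambiguations(tokens[i + 1:]):
                results.append('(' + left + ' ' + tokens[i] + ' ' + right + ')')
    return results

for wff in disambiguations(['p', '≡', 'q', '∨', 's', '⊃', 't']):
    print(wff)
```

For ⌜~ (p ≡ q ∨ s ⊃ t)⌝, each printed parse is then negated as a whole; three binary connectives yield five parses (the Catalan number C₃ = 5).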
Another way in which absence of parentheses can result in an ill-formed (not
well-formed) formula is when the number of left and the number of right parenthe-
ses do not match. For a formula to be well-formed, the number of left parentheses
must be equal to the number of right parentheses in the formula. We must check to
make sure that this is the case. If it is not the case, the formula is not well-formed.
The following formulas are not well-formed formulas of ∑ because the numbers of
left and right parentheses do not match.
• ~ p ∙ ~ (q ≡ (~ r ⊃ s)))
• p ∨ (q ⊃ ~ (s ∨ t)
• ~ (~ p ≡ (p ⊃ (q ⊃ ~ s))
• ~ (r ⊃ (p ∨ q)) ∙ ~ (s ∨ ~ t))
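Checking this necessary condition takes a single left-to-right scan; the Python sketch below (my own illustration) also rejects strings in which a right parenthesis appears before its matching left one – a slightly stronger test than merely comparing counts.

```python
# A sketch: verifying that left and right parentheses match up in one scan.

def parentheses_balanced(formula: str) -> bool:
    depth = 0
    for ch in formula:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:            # a ')' with no open '(' before it
                return False
    return depth == 0                # every '(' was eventually closed

for s in ['~ p ∙ ~ (q ≡ (~ r ⊃ s)))',
          'p ∨ (q ⊃ ~ (s ∨ t)',
          '~ (r ⊃ (p ∨ q)) ∙ ~ (s ∨ ~ t))']:
    print(parentheses_balanced(s))   # → False for each ill-formed example
```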
No other symbols besides the ones that have been stipulated by GRAMMAR(∑)
are recognized in ∑. We can present, compendiously, this grammar by the follow-
ing conventional means:
GRAMMAR(∑): p, p1, …, q, …/A, A1, …, B, …/~/∙/∨/⊃/≡/(/).
3.2.3 Well-Formedness
So far we have presented our symbols and specified the correct grammar for our
formal language. Here we make some observations about the symbolic expressions
of our formal language.
Parentheses have been presented as so-called syncategorematic symbols: what
this means is that parentheses, as auxiliary symbols, cannot stand by themselves;
they are not given syntactical standing for themselves but they must be used in con-
catenations with other symbols in strict accordance with the grammatical conven-
tions that are to be specified. The same will be the case with the connective symbols.
A symbolic expression or formula φ (with the symbol being metalinguistic,
referring to a not necessarily atomic formula) is well-formed relative to our formal
grammar if and only if it is constructed in accordance with the stipulated grammati-
cal conventions of our formal language.
It is possible to conduct mathematically rigorous investigations and to prove
claims about formal properties of well-formed formulas of our formal language. We
will not dwell on this topic extensively but it is important that we sketch briefly how
such proofs are carried out. As an example, we will prove that any well-formed
formula (wff) of ∑ must have the same number of left and right parentheses.
The approach is based on what is called Mathematical Induction; specifically,
our proof is by mathematical induction on the length or construction of the well-
formed formula of ∑. The underpinning claim for the validity of this method can be
understood as follows: if we are dealing with a systematically constructed series of
formal entities (which are built in a systematic fashion according to a specified
rule), then we can proceed to establish that all these entities possess certain proper-
ties, even if the series is infinite, in the following way. First we need to show that the
basic or initiating element or entity (the base-case) possesses the property F about
which we claim that all the elements possess. Next, we take what is called the
Induction Step. We make an assumption: that up to the nth element, all the elements
possess the property F. On the basis of this assumption, we have to show that the
element following the nth element, the (n+1)th element, also possesses the property
F. The logical rule of inference that is at work in this argument is called Modus
Ponens: if x is true and also “if x then y” is true, then y must be true. The proof step
we are required to present – from n to n+1 – along with the initial proof, which we
are also required to present, that the first element possesses the property F, carries
us along. We may present this all as follows:
• α has property ℱ
• for any n, so that ℱ(n), we prove that it follows that ℱ(n+1)
to the numbers of parentheses of the given formulas; since those numbers are
equal, by the induction assumption, it follows that the resulting well-formed for-
mula also has equal numbers of left and right parentheses.
• It follows that:
• (7) #)(φ * ψ) = #((φ * ψ) – the number of right parentheses of ⌜(φ * ψ)⌝ equals the number of left parentheses of ⌜(φ * ψ)⌝.
Next we catch a glimpse of a notation, called “Polish,” which does not need
parentheses at all without risking ambiguous expressions on account of this.
The Polish formal grammar is not popular anymore and is not to be found in
any but older texts of logic, but continues to enjoy prominence in computer
languages. This formal notation not only affords us a parsimonious use of
parentheses but, in addition, it also allows for great transparency or perspicu-
ity as to the scopes of the various connective symbols. This notation uses only
prefix notation for all connective symbols including both unary and binary.
Let us roll out the stipulations for symbolization and show examples of imple-
mentation within a symbolic idiom we may label “∑POLISH”. We can legislate
any connective symbols we wish but let us preserve the flavor of the original
Polish notation, in which capital letters are used for the various connectives as
shown below. We show diagrammatically how the automatic binding works in
this grammar and we also show the corresponding well-formed formulas
expressing the same form in ∑.
∑POLISH ∑
Np ~p
Apq p∨q
Kpq p∙q
Cpq p⊃q
Epq p≡q
Examples:
NApq      ~ (p ∨ q)
CpCst     p ⊃ (s ⊃ t)
NAqKst    ~ (q ∨ (s ∙ t))
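The mechanical character of this notation is easy to exhibit in code. The following Python sketch (an illustration of my own; the outermost parentheses of binary compounds are retained in the output) reads a ∑POLISH string left to right and produces the corresponding formula of ∑.

```python
# A sketch: translating prefix (Polish) formulas into infix formulas of ∑.

UNARY = {'N': '~'}
BINARY = {'A': '∨', 'K': '∙', 'C': '⊃', 'E': '≡'}

def polish_to_infix(s, i=0):
    """Return (formula, next_index) for the prefix expression starting at i."""
    c = s[i]
    if c in UNARY:
        sub, j = polish_to_infix(s, i + 1)
        return UNARY[c] + ' ' + sub, j
    if c in BINARY:
        left, j = polish_to_infix(s, i + 1)
        right, k = polish_to_infix(s, j)
        return '(' + left + ' ' + BINARY[c] + ' ' + right + ')', k
    return c, i + 1                   # an atomic variable letter

for w in ['NApq', 'CpCst', 'NAqKst']:
    print(w, '→', polish_to_infix(w)[0])
```

Notice that no parenthesis handling is needed on the input side: the prefix placement of the connective symbols determines the scopes uniquely.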
3.2.5 Exercises
1. We have seen how to present the rules of grammatical formation for our formal
language ∑ in a way that is called recursive or inductive. We state, in a sym-
bolically enhanced metalanguage, the conditions for possession of the property
of well-formedness by the ancestral symbolic items: these are the atomic vari-
ables which are declared well-formed, as we have seen. Subsequently, we spec-
ify how this property, of well-formedness, is inherited by means of manipulations
of symbols. In this mode of definition, we avoid circularity too. It is notable that
we can do this for other formal characteristics in the following way. We specify
the output, which has to be unique or functional, for the base case which is the
atomic variable; subsequently, we define the specified (always unique or func-
tional) outputs for the cases of all the connective symbols. For example, regard-
ing the number of parentheses (and incorporating the liberalized grammatical
condition that permits omission of parentheses when no ambiguity arises), we
have for the function 𝑓 assigning integers to the symbol-inputs as shown below.
This way of defining the function is also called recursive.
𝑓(p) = 0.
𝑓(~ φ) = 𝑓(φ).
𝑓(φ * ψ) = 𝑓(φ) + 𝑓(ψ) + 2 / * ∈ {∙, ∨, ⊃, ≡}.
Now give recursive definitions of the functions for the following formal
attributes:
a. number of atomic variables in the well-formed formula
b. number of connective symbols in the well-formed formula
c. number of binary connective symbols in a well-formed formula
d. maximal connective depth of a well-formed formula, where maximal con-
nective depth is defined as the number of connective symbols within the
scope of the connective symbol that has the greatest number of such connec-
tive symbols nested within its scope (e.g., marking occurrences of the connective symbols from left to right in ~1 p ∨ (p ∙ ~2 ~3 q): max((~1), (∨), (∙), (~2), (~3)) = max(0, 4, 2, 1, 0) = 4)
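As one possible solution style for items (b) and (d) of Exercise 1 (a sketch in Python; the nested-tuple representation of formulas is my own device, not the text's), the recursive definitions can be written directly:

```python
# A sketch for Exercise 1 (b) and (d): recursive definitions over formulas
# represented as nested tuples, e.g. ('∨', ('~', 'p'), ...).

def num_connectives(phi):
    """(b) the number of connective symbols in the formula."""
    if isinstance(phi, str):                     # base case: atomic variable
        return 0
    if phi[0] == '~':
        return 1 + num_connectives(phi[1])
    _, left, right = phi                         # binary compound
    return 1 + num_connectives(left) + num_connectives(right)

def max_depth(phi):
    """(d) maximal connective depth: the greatest number of connective
    symbols lying within the scope of any one connective symbol."""
    if isinstance(phi, str):
        return 0
    if phi[0] == '~':
        return max(max_depth(phi[1]), num_connectives(phi[1]))
    _, left, right = phi
    return max(max_depth(left), max_depth(right),
               num_connectives(left) + num_connectives(right))

# ~ p ∨ (p ∙ ~ ~ q): the wedge has four connective symbols within its scope
w = ('∨', ('~', 'p'), ('∙', 'p', ('~', ('~', 'q'))))
print(num_connectives(w), max_depth(w))
```

Each definition has the required shape: a unique output for the base case, then a unique output for the tilde case and for the binary-compound case.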
2. We have constructed a formal grammar for our language ∑ and we have used a
symbolically reinforced metalanguage ℳ(∑) using its resources to speak about
∑. Which of the following expressions are in ∑ and which expressions are in
ℳ(∑)? Can an expression be a member of neither ∑ nor of ℳ(∑)?
a. ⌜ p ∨ p ~⌝ is not a member of the set of well-formed formulas of ∑.
b. ⌜ p⌝ implies ⌜ p ∨ q⌝.
c. ~ p ≡ ~ (~ p ≡ ~ ~ p)
d. ⌜ p ≡ q⌝ and ⌜(p ⊃ q) ∙ (q ⊃ p)⌝ are mutual logical equivalents.
e. ℾ!#$%#
f. ~ ~ ~ (p ⊃ (p ⊃ ~ ~ p))
g. p ≡ ~ (q ⊃ (~ q ∙ ~ s
and triple bar denote associative connectives (as we have explained) and no need arises to disambiguate if only such connective symbols are involved.
a. p∨q∙r
b. (p ∨ q) ∙ r
c. p ∨ (q ∙ r)
d. p≡~q∨r
e. p⊃q⊃r
f. ~p∨q⊃r
g. t≡r≡s
h. p∙~s⊃t
i. q∨r∨s
j. ~ (p ∙ q) ∨ r ≡ s
k. p≡p∙~q⊃r
l. p∨s⊃s∨t
7. Provide the translations into ∑ of the following symbolic expressions which
are well-formed in ∑POLISH.
a. NNCpq
b. CpCqp
c. CKpCpqq
d. NCKpCpqNp
e. CNApqKNpNq
f. CpCpCpCpp
g. EApKqrKApqApr
h. CCpCpqKpq
i. EKCpqCqpKEpqEqp
8. The following symbolic expressions are not well-formed in ∑ because they are
ambiguous. Disambiguate – which means that you should provide all the differ-
ent well-formed, unambiguous, formulas which can be produced from each one
of the given expressions.
a. p⊃s∨t
b. ~p⊃s∨t
c. p⊃q⊃p
d. ~ p ∨ ~ (q ⊃ s ≡ t)
e. ~p⊃q⊃~q
f. p∨q∨s∙t⊃~p
g. (p ⊃ q ∨ s) ⊃ (q ⊃ p ⊃ s)
9. Why are the following symbolic expressions unambiguous well-formed formu-
las, and so not in need of disambiguation?
a. ~ p ⊃ q
b. p ∨ ~ q ∨ ~ s
c. q ∙ s ∙ r ∙ t
3.3 Parsing Trees of Well-Formed Formulas of ∑: ℑ(∑) 109
d. (~ p ∙ q) ≡ ~ (~ q ∙ r) ≡ ~ (q ⊃ t)
e. ~ ~ p ⊃ ~ ~ p
f. ~ q ∨ ~ ~ q ∨ ~ ~ ~ q
10. Indicate the scopes of all the connective symbols and determine which is the
major connective symbol of each of the following well-formed formulas.
a. ~~~p
b. (p ≡ ~ q) ≡ ~ (~ q ⊃ ~ ~ p)
c. p ∨ ((q ∨ r) ∨ s)
d. ~~~p≡~p
e. ~ p ≡ ~ (p ⊃ (q ⊃ p))
f. (p ∨ (q ∙ r)) ≡ ((p ∨ q) ∙ (p ∨ r))
g. p ∨ ((q ∙ r) ≡ ((p ∨ q) ∙ (p ∨ r)))
h. p ⊃ (p ⊃ (p ⊃ (p ⊃ p)))
i. (((p ⊃ p) ⊃ p) ⊃ p) ⊃ p
j. ~ (p ⊃ ~ (p ⊃ ~ (p ⊃ ~ p)))
k. ~ ((p ⊃ q) ⊃ ~ (p ⊃ ~ q))
Now that we have mastered the grammar of ∑, we can use a diagrammatic method,
called the Parsing Tree of a well-formed formula (ℑ(∑)), to carve a wff of ∑ starting
with the largest-scope symbol and proceeding until we have obtained all the atomic
letters in the formula. Although we don’t discuss metalogical characteristics of for-
mal systems in this text, the parsing tree can assist us in consolidating our grasp and
facility with manipulating the formal grammar; it will also help us obtain the sub-
formulas of a given wff (well-formed formula); and, finally, it prepares us for deal-
ing with other diagrammatic arrangements – like the semantic trees we will be
studying in 4.7.
A tree is actually a systematically constructed abstract mathematical object that
can be studied rigorously; but we are not interested in such advanced explorations.
We have to adjust, however, to speaking precisely about certain items on the tree
and we need to use specific terms to do this. The tree has a root; that is where the
given formula is placed. We say that the root is labeled by the formula. Unlike natu-
ral trees, this mathematical diagram has its root at the top. The branches issue from
the root; from each branch, more branches can be issued; when we trace a branch
from the top and its attached branches all the way to the bottom, we are tracing what
we call a path. We need to know when the tree has reached the bottom or end – when
it has terminated. The terminus is at the bottom – while the root is, as we indicated,
at the top. Termination happens only when we don’t have any formulas left besides
atomic letters. Thus, we can say that the terminal nodes are exactly those labeled by
atomic variable letters.
We also need to know how we proceed, given the root of the tree, to take actions
so as to originate the branches. We need rules for this. In the case of the parsing tree,
these rules are simply connective-symbol rules: for each connective symbol we find
in the given formula, we remove it and, underneath, we branch and label the subse-
quent nodes with the subformulas that remain after removal of the connective-
symbol. We call the part of the tree labeled by this remaining formula a node. We
start removing connective symbols beginning with the largest-scope connective
symbol and proceeding with removing narrower-scope connective symbols. Thus,
what we learned in the preceding section regarding scopes is presupposed and put to work in this section.
As we have seen, a tree has nodes, in addition to root and branches and their
constituted paths. The nodes at the very bottom, all marked by atomic letters, are
called terminal nodes. We can consider the rules that instruct us how to proceed to
be like recipes for how to draw a parsing tree: they are, themselves, shape-like or
schematic recipe-trees giving us instructions about how to proceed.
We begin with the rule for the only monadic connective symbol we have in our
formal language – the rule for the negation symbol: the ~-Rule. As we have done
before, we use φ to represent any wff (not necessarily atomic.) Note how we name
the rule we are applying (“~−R”) to the right of the downward arrow.
~φ
⇓~R
φ
The tilde rule is a monadic-connective rule: the tilde is, obviously, a monadic-
connective symbol. Removing the tilde, the rule leaves what is left from the for-
mula, which is φ. What is left is put on the node that is branched underneath. We can
now guess, correctly, that the other rules – all of them for the remaining binary connective symbols – have two branches issuing, one to the left and the other to the right: this is because removal of the binary connective symbol leaves the two formula-parts that are connected by the binary connective symbol. We can show the
rules – the recipes, also called schemata – for the binary connective symbols.
We have completed the parsing tree ℑ(~ (~ p ≡ ~ (~ q ⊃ (s ∨ t)))) for the given
formula. We have reached the terminal nodes: only atomic letters remain at the ter-
minal nodes (marked underneath them, each by “⊙”); it is not possible, of course,
to apply rules on the atomic letters since the rules are for connective symbols. Let
us review the above parsing tree. The top box is the root and it contains – it is
labeled by – the given wff (well-formed formula). The largest-scope connective
symbol is the first occurrence of the tilde. We apply the ~R and we branch vertically
down to the second box which is a node. The largest-scope connective symbol in
this node is the binary connective triple bar – the first occurrence of this symbol. We
apply the rule for this connective; because it is a binary connective symbol, we have
now two branches splitting with one branch to the left and one branch to the right.
We continue by applying connective rules in the same way until we reach the termi-
nal nodes.
Let’s see now how we trace the paths in the above parsing tree. This is a helpful
exercise in anticipation of other tree types, which we will be using in subsequent sections. To show the paths – assisting our grasp of the concept by means of visual recognition – we characterize each path by showing the nodes in the path. The root
belongs to every path. This should make sense, as you think about it and survey the
diagram of the parsing tree. To show the paths, we reproduce the parsing tree and
place superscripts to the nodes: counting from left to right, we use superscripts from
the positive integers. You should expect that paths may share branches – in other
words, a branch may belong to more than one path; and, as we have already pointed
out, the root belongs to every path. The terminal nodes, on the other hand, are not
shared: each terminal node belongs to one and only one path. To show that the root
belongs to all the paths, we place on it all the superscripts. Overall, there are 4 distinct paths in this parsing tree. Similarly, we place more than one superscript, as needed, to show all the paths to which a node belongs. Check the symbol "⊙": this indicates
that exactly above it the path has terminated; the branch you find there is terminal and
the node (marked by some atomic letter) is terminal. Starting with noting the terminal
nodes, check what superscript number each such node is given: then, follow upwards
to trace the path by including every node that has this superscript. Let us practice this.
Here are the paths of this parsing tree. We are placing the formulas belonging to
each path within set-brackets.
Path1: {~ (~ p ≡ ~ (~ q ⊃ (s ∨ t))), ~ p ≡ ~ (~ q ⊃ (s ∨ t)), ~ p, p}.
Path2: {~ (~ p ≡ ~ (~ q ⊃ (s ∨ t))), ~ p ≡ ~ (~ q ⊃ (s ∨ t)), ~ (~ q ⊃ (s ∨ t)), ~ q ⊃ (s ∨ t), ~ q, q}.
Path3: {~ (~ p ≡ ~ (~ q ⊃ (s ∨ t))), ~ p ≡ ~ (~ q ⊃ (s ∨ t)), ~ (~ q ⊃ (s ∨ t)), ~ q ⊃ (s ∨ t), s ∨ t, s}.
Path4: {~ (~ p ≡ ~ (~ q ⊃ (s ∨ t))), ~ p ≡ ~ (~ q ⊃ (s ∨ t)), ~ (~ q ⊃ (s ∨ t)), ~ q ⊃ (s ∨ t), s ∨ t, t}.
Finally, we will show how we can extract all the subformulas of the given well-
formed formula from the parsing tree. Simply, we take all the formulas that mark the
nodes – all the nodes – and they all constitute subformulas of the given formula.
This includes the formula in the root: in other words, the formula itself is considered
to be one of its subformulas. If more than one node is labeled by the same formula, we don't repeat it; we include it once in the set of subformulas.
Using the set notation, we have:
SUBFORMULAS(~ (~ p ≡ ~ (~ q ⊃ (s ∨ t)))) =
= {~ (~ p ≡ ~ (~ q ⊃ (s ∨ t))), ~ p ≡ ~ (~ q ⊃ (s ∨ t)), ~ p, ~ (~ q ⊃ (s ∨ t)), ~ q ⊃ (s ∨ t), p, ~ q, s ∨ t, q, s, t}.
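The subformula extraction can likewise be automated. In the Python sketch below (my own illustration; binary compounds are printed with their outermost parentheses retained), each parsing-tree node contributes its label, and a repeated label is included only once.

```python
# A sketch: collecting the subformulas of a wff — the labels of all the nodes
# of its parsing (decomposition) tree, each included once.

def unparse(phi):
    """Render a nested-tuple formula as a string of ∑."""
    if isinstance(phi, str):
        return phi
    if phi[0] == '~':
        return '~ ' + unparse(phi[1])
    op, left, right = phi
    return '(' + unparse(left) + ' ' + op + ' ' + unparse(right) + ')'

def subformulas(phi, acc=None):
    """List the subformulas, outermost first, without repetitions."""
    if acc is None:
        acc = []
    label = unparse(phi)
    if label not in acc:             # a repeated label is included only once
        acc.append(label)
    if isinstance(phi, tuple):
        for child in phi[1:]:        # one child for '~', two for binary ops
            subformulas(child, acc)
    return acc

# ~ (~ p ≡ ~ (~ q ⊃ (s ∨ t)))
w = ('~', ('≡', ('~', 'p'), ('~', ('⊃', ('~', 'q'), ('∨', 's', 't')))))
for sub in subformulas(w):
    print(sub)
```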
We close by remarking that every well-formed formula of ∑ has a unique parsing
tree. We could prove this by using a metalogical proof method, called Mathematical
Induction on the construction of the well-formed formula. We can also call the pars-
ing trees by the name of “decomposition trees.” We can then make the assertion that
every well-formed formula of ∑ has a unique decomposition tree. If any two presumed well-formed formulas of ∑ have the same parsing or decomposition tree, it turns out, after all, that we are dealing not with two but with one well-formed formula.
An interesting procedure we can report on, related to the parsing tree system we
have devised, ℑ(∑), allows us to extract the Polish notation for the formula by read-
ing the formula’s parsing tree in the appropriate fashion. We begin from top left: we
write the first connective symbol we find moving downwards; similarly with the
next connective symbol moving downwards on the left branch, if any; we disregard
compound formulas but we do write atomic variable symbols; we move to the next
branch to the right and we operate in the same fashion from top to bottom; we move
to the next branch to the right and we proceed in the same fashion; when we reach
nodes where branches are split to left and right, we read, again, from left and from
top to bottom;… and so on until we come to the terminus at the rightmost bottom
point of the rightmost branch of the parsing tree. Let us apply this procedure to the
example provided above: writing informally first we produce an expression and
then we can translate into ∑POLISH as we presented this formal idiom in the preced-
ing section. The formula we extract can be read straightforwardly based on the
procedure we have specified, formidable though it seems to be. The effort we are
expending in negotiating translations into the Polish notation may seem an idle
pastime, but there are many classics of logic which use this notation; moreover, one
meets this notation in many fields in which symbolic languages are constructed.
Although at first hard to master, the Polish notation presents decided advantages in
showing the structure of the formula (in terms of its major connective and scopes of
connectives) perspicuously once one becomes attuned to this formal idiom; but the
most impressive advantage consists in that no parentheses are needed for this nota-
tion – and this can economize significantly in symbolic resources when it comes to
implementations of formal languages. On the other hand, it may be claimed that this
notation is not intuitively appealing, but it can be questioned if this is not a result of
lacking familiarity with this formalism.
~ (~ p ≡ ~ (~ q ≡ (s ∨ t))) ↠ ~≡~p~≡~q∨st ↠ NENpNENqAst
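The reading procedure just described amounts to a preorder traversal of the parsing tree that writes each node’s main connective (or atomic letter) as it is first met. A minimal sketch in Python — the tuple encoding of trees and the function name are our own conventions, not the text’s:

```python
# Map each connective of our formal language to its Polish-notation letter.
POLISH = {"~": "N", "∙": "K", "∨": "A", "⊃": "C", "≡": "E"}

def to_polish(node):
    # Atomic letters are plain strings and are written as they stand.
    if isinstance(node, str):
        return node
    # A compound formula is a tuple: main connective first, then its subtrees;
    # writing the connective before its children is exactly the top-down,
    # left-to-right reading described above.
    connective, *children = node
    return POLISH[connective] + "".join(to_polish(c) for c in children)

# Example: ~ (p ⊃ (q ∨ r))
tree = ("~", ("⊃", "p", ("∨", "q", "r")))
print(to_polish(tree))  # NCpAqr
```

Note how the output needs no parentheses: the arity of each connective letter fixes the grouping.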
3.3.2 Exercises
specified by our connectives. And what about the values or numbers with
which we are to carry out the computations? In sentential logic, we have
exactly two values: true and false. It might seem strange that we are comput-
ing with anything other than numbers; suppose, then, that we could be using
1 and 0. The point is that the definitions of our connectives will allow us to
determine uniquely what output we get for any specified inputs. If you think
that we are playing an unusual algebraic game deep down – that’s fine! Why
do you think that we cannot construct alternative algebras? We can! As an
example, roughly speaking: suppose we compute as follows: True and True is
True; True and False is False; and so on. Note that the connectives are the
operators and the values we are computing with are true and false. Using all
this information, can you explain why we need rules for connective symbols
in order to construct a parsing tree? Why do we “carve” our well-formed
formulas around connective symbols? Why do we start with the largest-scope
connective symbol?
2. Construct the parsing tree for each of the well-formed formulas given below.
After you have finished constructing the parsing tree present all the subformulas
of the given formula.
a. p ⊃ (q ⊃ p)
b. (p ⊃ (q ⊃ r)) ⊃ ((p ⊃ q) ⊃ (p ⊃ r))
c. (p ⊃ (p ⊃ p)) ⊃ p
d. ~ (~ p ∙ ~ ~ p)
e. p ⊃ (q ≡ (p ∙ q))
f. ~ ~ ~ p ∙ ~ (p ⊃ ~ q)
g. (p ∨ ~ q) ≡ ~ (~ p ∙ q)
h. (p ⊃ ~ q) ⊃ (q ⊃ ~ p)
i. ((p ⊃ q) ∨ (q ⊃ p)) ⊃ ~ (p ∙ ~ p)
3. Extract the Polish notation formulas for the formulas of the preceding exercise
by reading the parsing tree of the formula in the appropriate fashion.
Chapter 4
Sentential Logic Languages ∑
Our formal language ∑ has symbols for several connectives. The symbols denote or
refer to functions which we call truth functions: this means that their inputs and out-
puts are from the set of truth values, {T, F}. The connectives of ∑ are to be defined,
as we say, over {T, F}. Truth values are semantic elements; they provide a semantic
base – considering that logical meaning is a matter of truth value in deductive logic.
When we come to the study of the semantic system of truth tables, or to any use of
truth tables, we will see how meaning-narratives are to be constructed; this is also
called modelling – and, as we said above, this is characteristic of the semantic
approach. Indeed, we will be using the truth tables to define our connectives. This is
a staple of Logic textbooks. We will also present another, algebraic-looking, mode of
definition of the connectives. The alternative definitions betray an underlying alge-
braic infrastructure of the standard sentential logic. We can say, more precisely, that
a semantic formal language of sentential logic interprets systematically a special
algebra that has two numbers only and operations defined over those two numbers:
this is the famed Boolean algebra, which has found other important interpretations in
Set Theory, computer programming languages, and electronic circuit design, to
mention a few celebrated examples.
The connectives of ∑ are defined over the two truth values, true and false, respec-
tively symbolized by “T” and “F.” The rules for our formal language are:
• There are exactly two truth values, true and false, T and F. This means that there
can be neither more nor fewer truth values.
• T and F are so related that T is not F and F is not T. This sounds trivial but the
point is that we do not allow degrees of being T or F and we do not allow
definitions that relate sentential variables to T and F (in which case we could
relate sentential variables to both or to neither T and F.) Sentential logic has an
on-off switch character about it: it is either T or F; it cannot be both and it cannot
be neither.
• Of the two truth values, T and F, T is the “winning” value (more broadly called
“designated”) and F is the “losing” (anti-designated) value. Alternative logics can
be constructed (which would also be truth-functional), whose base of truth values
contains more than the classical two truth values; one or more of those truth val-
ues ought to be designated, with the rest being anti-designated. Such alternatives,
not pursued in this text, can be motivated philosophically by various relevant con-
siderations; there is controversy surrounding such constructions and what they
stand for exactly. We may think of those alternative logics as admitting possibly
more than one way of being true, and possibly more than one way of being false,
while the classical or standard logic restricts the ways of being true and the ways
of being false to the minimal case. The standard logic of two values could emerge
as a special case of the alternatives but, something that is easy to miss, the logical
meanings (of the values themselves and of the defined connectives) are altered by
default when an alternative, many-valued, logic is constructed.
• Every sentential variable has to be assigned one or the other of T and F.
• No sentential variable can fail to take one or the other of the two values.
• No sentential variable can be assigned both T and F.
• We are ultimately interested in systematic assignments of T and F to all of the
individual variables (or atoms) of well-formed formulas. The truth table will
assist us in grasping this fundamental concept, on which we will be expatiating
further. We are interested in the totality of all possible truth-value assignments
(valuations or also called interpretations) to the atomic variables (and, by com-
putation, the resulting truth values for the entire formula for each case of valua-
tion of the atomic variables). The definitions of the connectives are also stipulated
for such totalities. As a ready approximation, and convenient example, we may
consider this: the logical meaning, or definition, of the negation connective is
understood to consist not just in its operation that turns true to false but rather to
consist in the totality of cases: turning true to false and turning false to true. In
the language of functions, the underlying truth function of the negation connec-
tive is not defined as a partial function (which would be the case in defining the
function only for one of the two available value-inputs.)
It is a foundational principle that the connectives receive their meanings from
what we call their truth conditions. This is not a familiar concept and we need to
concentrate in order to understand it. Do not seek parallels in familiar experiences
as this is an abstract subject and it might even have certain unintuitive aspects to it.
If we try to take our clues from language, we can venture a certain gloss that might
be helpful. A language like English has certain logic-words in it, like “not” and
“and” and, in fact, two different meanings of “either-or.” Such words are defined not
by means of their reference to some object (like “John” is defined by the person it
names or “bird” is defined so that its symbol could even be a hieroglyphic symbolic
4.1 Definitions of the Connectives by an Equational Method 117
representation that looks like a bird – conventionally laid down, even though it is
still a symbol.) When it comes to a word like “not,” which is a logic-word, what
object does it refer to? Could we fix a symbol for it that looks, and is agreed conven-
tionally to represent, some object? It seems that we cannot do that. Well, the defini-
tion of the meaning of “not” fixes that what is to be symbolized is the operation that
turns true to false and false to true. We just intimated that the definition of “not” is
given by its truth conditions: there are two possible cases for “not” – only two,
because “not” operates always on one sentence which it negates. One case is T and
the other is F. For binary logic-words, their truth conditions require that we take all
the possible cases (assignments of truth values to the atomic components or inputs)
and present the fixed result (what truth value, T or F, is yielded for each case.)
Bearing the above in mind, we can proceed to the definitions of the connectives. We
approach this task first by means of our alternative method, reserving the truth table
definitions for later.
definition(~) /~ T = F/~ F = T/.
For binary connective symbols, we need specification of all the logically possible
(and mathematically available) combinations of truth values (true and false) for the
two components. There is a mathematical formula that allows us to determine,
always and with precision and accuracy, how many such combinations are logically
possible: if the number of the inputs is n, then the number of logically possible
combinations of truth-value assignments to the inputs is 2ⁿ. Thus,
• n = 1 (unary, monadic connective symbols): number of logically possible truth-
value assignments: 2ⁿ = 2¹ = 2.
• n = 2 (binary, dyadic connective symbols): number of possible truth-value assign-
ments to the atomic variables: 2ⁿ = 2² = 4. We will need a systematic method for
determining how we can produce all and only the possible combinations so that
we do not miss any and do not include unavailable combinations, while we also
ensure that we do not redundantly repeat any combination more than once. We
will see how this can be done.
• n = 3 (ternary, triadic connective symbols): number of possible truth-value
assignments to the atomic variables: 2ⁿ = 2³ = 8. We don’t have any such connec-
tives in our formal language ∑.
• General case: n (n-ary or n-place connective symbols): number of possible truth-
value assignments to the atomic variables: 2ⁿ.
Let us take the case in which we seek to determine all the logically possible
combinations of T and F as inputs for binary connective symbols. We can draw a
matrix as shown below.
2ⁿ = 2² = 4   T         F
T             <T, T>    <T, F>
F             <F, T>    <F, F>
This chart shows us all the mathematically available combinations. The input-
combinations are represented within special brackets, “<” and “>”. There is a reason
for this: the order in which the symbols are written matters! These are ordered pairs:
for instance, the ordered pair <T, F> has the value T as left member and the value F
as right member; this is not the same as the ordered pair <F, T> for which F is the
left member and T is the right member. We need to write all of the ordered pairs;
each ordered pair expresses a possible combination of truth values; we must include
all the possible ordered pairs or combinations. There is more than one recipe for
doing this correctly, but we shall stipulate starting from the northwestern corner
of the inputs and moving east, then south-southwest, and east again, tracing the
rows of the above chart in order. Accordingly, we have:
<T, T>, <T, F>, <F, T>, <F, F>.
This is our currently established order for presenting all the mathematically pos-
sible combinations of values assigned to two variables. When we use truth tables to
give definitions of our connective symbols, we will present the recipe in another way.
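Assuming we encode the truth values as the strings "T" and "F", the stipulated order of combinations is exactly what an ordered Cartesian product yields; a small sketch:

```python
from itertools import product

# All ordered n-tuples of truth values, listed with T before F so that the
# left member varies slowest -- the order stipulated above. There are 2**n
# such tuples for n inputs.
def assignments(n):
    return list(product(("T", "F"), repeat=n))

print(assignments(2))        # [('T','T'), ('T','F'), ('F','T'), ('F','F')]
print(len(assignments(3)))   # 8, i.e. 2**3
```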
We give now the definitions of the connective symbols. We are doing this in the
metalanguage of our formal language, ℳ(∑). The symbol “=” we use is metalin-
guistic and is used in the way that is familiar from algebraic equations. This should
not be confused with the use of the same symbol – accidentally, as it were – when
we study predicate logic. In the formal language of predicate logic, this symbol
stands for identity (of the objects that are referred to) and not for equality.
DEFINITIONS
• ~ T = F /~ F = T
• T ∙ T = T / T ∙ F = F / F ∙ T = F / F ∙ F = F
• T ∨ T = T/ T ∨ F = T/ F ∨ T = T / F ∨ F = F
• T ⊃ T = T/ T ⊃ F = F/ F ⊃ T = T/ F ⊃ F = T
• T ≡ T = T/ T ≡ F = F/ F ≡ T = F / F ≡ F = T
• The horseshoe, symbolizing implication, is false only in the case in which the
first input (the antecedent) is true and the second input (the consequent) is false;
it is true in every other case; it is true in every
case in which the antecedent is false regardless of the truth value of the consequent;
it is true in every case in which the consequent is true regardless of the truth value
of the antecedent.
• The triple bar, symbolizing equivalence or the biconditional, is true in the cases
in which both inputs have the same truth value (regardless if this is T or F); it is
false on every other case; it is false in every case in which the inputs have differ-
ent truth values.
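The equational definitions can be transcribed directly as lookup tables keyed by input truth values; a sketch (the names NOT, AND, and so on are our own, not the text’s):

```python
# Each table records the output of the connective for every possible input,
# matching the equational definitions given above.
NOT = {"T": "F", "F": "T"}
AND = {("T", "T"): "T", ("T", "F"): "F", ("F", "T"): "F", ("F", "F"): "F"}
OR  = {("T", "T"): "T", ("T", "F"): "T", ("F", "T"): "T", ("F", "F"): "F"}
IMP = {("T", "T"): "T", ("T", "F"): "F", ("F", "T"): "T", ("F", "F"): "T"}
IFF = {("T", "T"): "T", ("T", "F"): "F", ("F", "T"): "F", ("F", "F"): "T"}

print(NOT["T"], AND[("T", "F")], IMP[("F", "F")])  # F F T
```

Because every input pair is listed, each table is a total (not partial) truth function, as the text requires.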
Accordingly, we could provide the definitions as follows, using the metalinguis-
tic symbol “| |” to indicate the truth value of any well-formed formula of ∑ (symbol-
izing such formulas by φ and ψ):
• |~φ | = T when |φ| = F; |~φ| = F otherwise.
• |φ ∙ ψ| = T only when |φ| = |ψ| = T; |φ ∙ ψ| = F otherwise.
• |φ ∨ ψ| = T when either |φ| = T or |ψ| = T; |φ ∨ ψ| = F otherwise.
• |φ ⊃ ψ| = F only when |φ| = T and |ψ| = F; |φ ⊃ ψ| = T otherwise;
• Equivalently, |φ⊃ ψ | = T when either | ~ φ| = T or |ψ| = T; it is false otherwise.
• |φ ≡ ψ| = T only when |φ| = |ψ|; |φ ≡ ψ| = F otherwise or when |φ| ≠ |ψ|.
Another way that is available, and standard, for defining our connectives is the truth-
table method. We implement this approach through a formal idiom, ∑⊞, which we
set up formally in 4.3. This is the place, however, for presenting truth tables for the
first time since we need this device to define our connectives. The truth table is a
popular device that is used in formal logic; it works for sentential logic only (with an
exception for application under certain adjustments that restrict models in predicate
logic.) We will present the truth table device in gradual constructive steps. We should
bear in mind that we will move to the subject of computations with truth values in
4.2. We will then find out that the truth table for one or more well-formed formulas
of our formal language also requires computations across each one of its rows.
A truth table has rows and columns. We have a precise way for determining the
number of rows of our truth table, depending on the number of types of atomic
variables (atomic letters, atoms) in the well-formed formulas for which the truth
table is constructed. Notice that we are speaking of types, not occurrences, of atomic
variables. We should speak, more precisely, of occurrences of tokens of the types.
We start by illustrating this important conceptual distinction.
We may first use linguistic examples. In linguistic grammar, this is interesting
only for purposes of correct spelling – but also for theoretical grammatical analyses.
In the formal grammar we have for a language like our ∑, there are specific rea-
sons – like the one given presently – as to why this matters.
• “Essential:” two occurrences of the type ‘e’ and two occurrences of the type ‘s’;
one occurrence of the types ‘n,’ ‘t,’ ‘i,’ ‘a,’ and ‘l.’ Thus, there are 9 letters but 7
types of letters, with two of them occurring twice and the rest occurring once.
• “Quality:” one occurrence of each of the types. There are 7 letters and 7 types of
letters in the word.
• “Aardvark:” three occurrences of ‘a,’ two occurrences of ‘r,’ and one occurrence
of each of ‘d,’ ‘v,’ and ‘k.’ Thus, the number of letters in the word is 8 but the
number of types is 5.
Now we turn to our formal language ∑.
• ~ p ⊃ ~ ~ p There are two occurrences of ⌜p⌝ but only one type with two
occurrences. There are altogether 6 symbols: four are connective symbols with
three occurrences of ⌜~⌝ and one occurrence of ⌜⊃⌝. (We apply our conceptual
distinction between types and occurrences of tokens also for the other symbols
in our formal grammar.) When it comes to atomic variables (atomic letters), we
have two atomic letters but only one type of atomic letters.
• ~ (p ∨ ~ q) There are two types of atomic letter, with one occurrence for each.
The total number of atomic letters is 2; the total number of types of atomic letters
is also 2.
• ~ ~ ~ p There is one occurrence of one type of atomic letter. We have one
atomic letter in the formula; also, one type of atomic letter.
• (p ∨ ~ (p ≡ t)) ≡ ~ (t ∙ (~ q ∨ (r ≡ ~ t))): In this well-formed formula (wff), we
have 7 atomic letters; we have 4 types (⌜p⌝, ⌜q⌝, ⌜r⌝, ⌜t⌝); we have two occur-
rences of tokens of ⌜p⌝ and three occurrences of tokens of ⌜t⌝.
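The type/token counts above can be checked mechanically; a sketch assuming formulas are plain strings whose atomic letters are single lowercase characters:

```python
from collections import Counter

# Count occurrences (tokens) of each atomic-letter type in a formula string.
# Connective symbols, parentheses, and spaces are uncased characters, so
# islower() picks out exactly the atomic letters.
def atom_counts(formula):
    return Counter(ch for ch in formula if ch.islower())

counts = atom_counts("(p ∨ ~ (p ≡ t)) ≡ ~ (t ∙ (~ q ∨ (r ≡ ~ t)))")
print(sum(counts.values()), len(counts))  # 7 tokens, 4 types
print(counts["t"])                        # 3 occurrences of t
```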
Now that the distinction is clear, let us lay down precisely the recipe for how to
calculate the number of rows in a truth table we construct. Our truth table may be
for one or for more well-formed formulas. We take stock of all the well-formed
formulas we are given, for which we are about to construct a truth table. We count
the types of atomic letters we have. If the number of types for all the formulas is n,
then the number of rows of our truth table has to be 2ⁿ. This is the general case,
which we can now apply through specific examples.
• ~ (p ⊃ ~ (q ⊃ (~ r ⊃ s))): we have 4 atomic letters and 4 letter-types (one occur-
rence of a token of each); the truth table for this wff (well-formed formula) must
have 2⁴ = 2 × 2 × 2 × 2 = 16 rows.
• {(p ∨ (q ∙ r)), (p ∨ q) ∙ (p ∨ r)}: for the two formulas in the set we have 7 atomic
letters but 3 types: the truth table for these wffs must have 2³ = 8 rows.
• {p ⊃ (q ⊃ ~ p), ~ p ∨ ~ q, p ≡ (q ∨ ~ (p ⊃ ~ ~ p))}: for all the given formulas, we
have exactly two types of letters, ⌜p⌝ and ⌜q⌝. The complete truth table for these
formulas must have 2² = 4 rows.
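The recipe — pool the atomic-letter types across all given formulas, then take 2ⁿ — can be sketched as follows (again assuming formulas are plain strings whose atoms are single lowercase letters):

```python
# Number of truth-table rows for a set of formulas: 2**n, where n is the
# number of distinct atomic-letter types occurring anywhere in the set.
def rows(formulas):
    types = {ch for f in formulas for ch in f if ch.islower()}
    return 2 ** len(types)

print(rows(["~ (p ⊃ ~ (q ⊃ (~ r ⊃ s)))"]))           # 16
print(rows(["(p ∨ (q ∙ r))", "(p ∨ q) ∙ (p ∨ r)"]))  # 8
print(rows(["p ⊃ (q ⊃ ~ p)", "~ p ∨ ~ q"]))          # 4
```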
The number of columns is easy to determine. Each atomic letter type has a dedi-
cated column; each formula also has a dedicated column. We start from the left with
the atomic letter columns. When we will be constructing truth tables to examine
argument forms, all the premises and the one conclusion formula are counted as
separate.
The top of the table – not counted itself as a row even though it runs parallel to
the rows – has the atomic letter types and the given formula or formulas. If we have
one formula, for instance ⌜p ⊃ (q ≡ p)⌝, we begin from the top as follows. In the
truth table below, we show clearly the rows and the columns but in future cases the
drawing may not be as clear because we trust that, once we have become used to
working with truth tables, we can always figure out the contours of the rows and
columns. We have added here an extra row at the bottom, as a metalinguistic gadget,
for the purpose of explicitly pointing out and identifying each column. This last, extra row
is not, officially speaking, a row of the truth table itself. The top row is also to be
thought of as not a proper row. This top row has the symbols of the atomic variables
or letters and the formula for which the truth table is constructed. We see that each
column is labeled by some symbol. For instance, column 1 is labeled by ⌜p⌝ while
column 6 is labeled by the connective symbol ⌜≡⌝. We indicate that we are not
showing all columns – that there are missing columns – by “⋮”. The particular truth
table we construct is considered as an instance of the characteristic truth table for
the given formula, which is ⌜p ⊃ (q ≡ p)⌝. Because we have n = 2 for the number of
the types of atomic variables in the formula, we can calculate that the number of all
mathematically possible combinations of truth-value assignments to the atomic
variables is 4. This is 2². We are able to generalize this observation. If n is the num-
ber of atomic variable types, then the number of possible assignments of truth val-
ues, which is the same as the number of rows of the formula’s truth table, is 2ⁿ.
Rows   p         q         (p ⊃ (q ≡ p))
1      Atomic    Atomic    Main connective
⋮      letter    letter    symbol of the given wff
⋮
n
       1st       2nd       3rd, 4th, 5th, 6th, 7th
       column    column    columns
Two Atoms.
p q
1 T T
2 T F
3 F T
4 F F
Three Atoms.
p q r
1 T T T
2 T T F
3 T F T
4 T F F
5 F T T
6 F T F
7 F F T
8 F F F
Once again, we see that the number of all possible assignments of truth values to
the variables is 2ⁿ, where n = 3 in this case; therefore, the number of rows too is 2³
= 8. The truth table shows all mathematically available combinations of assign-
ments of truth values to the atomic variables or letters. There are no more possible
assignments of truth values to the atomic variables besides the ones shown on the
properly constructed truth table.
Now we are ready to use the truth table construction device for defining our con-
nective symbols. We will continue with a discussion of this definition, in which we
make references to the truth table itself in order to analyze the definition we have
provided. We start with the tilde.
p ~p
1 T FT
2 F TF
Since the tilde is a monadic connective symbol, we need a one-atom truth table.
We have two rows as determined by the formula 2¹ = 2. We write the vertical itera-
tion as <T, F> for the rows <1, 2>. We impose additional graphic design on our truth
table; we can do this for the sake of facilitating the presentation and absorption of
lessons. Notice that we have boxed and highlighted the definition of the tilde. These
truth values are given to the tilde – and, hence, to the whole symbolic expression ⌜~
p⌝ since the tilde is the main connective symbol of this expression. We need to make
certain remarks.
The truth table is like a map of all mathematically possible assignments of truth
values that can be given to the atomic components of the formula. In this case, there
is only one atomic letter – only one component. The only two logically possible cases
are that this atomic component is true or it is false. Row 1 and Row 2 represent these
two distinct possibilities. There can be no row mixing 1 and 2. Rows 1 and 2 are like
two separate cases or states. The atomic component has to be in one or the other
state; it cannot possibly be in both states and it cannot possibly fail to be in any one
of the states. We can even use the truth value of each state to label or mark or
describe the state: thus, state 1 (row 1) is characterized by the value true for our
atomic letter (recalling the symbol we use for value, |p| = T); state 2 is characterized
by |p|= F. We notice that meaning – which characterizes the logically possible
states – is a matter of truth value (true and false) and nothing else. This is an impor-
tant lesson to always keep in mind. In state (row) 1, the value of ⌜~ p⌝ is false
because ⌜p⌝ is true. We know this by definition of the tilde symbol. In state 2 (row
2), the value of ⌜~ p⌝ is true because ⌜p⌝ is false. Nothing has been left out. There
is no other logically possible state. These are all the logically possible cases or
states. Notice how the definition we have just given, using the truth table method,
harmonizes with our earlier definition.
• Row 1: |p| = T ⇒ |~ p| = F.
• Row 2: |p| = F ⇒ |~ p| = T.
Except for the tilde, all the other connective symbols we have in our formal lan-
guage ∑ are for binary connectives. Accordingly, we will need a two-atoms truth
table to define each of them. We compact all these truth tables – as if we were given
a number of well-formed formulas for which we constructed one truth table.
We have exactly four rows. Make sure that you understand this part by now. Each
row represents one of the logically possible states or cases. There are no other logi-
cally possible cases. Each case is determined by an assignment of truth values
(true – false) to the atomic components or atomic letters: we have two types of let-
ters in the case of a binary connective and, therefore, we have four rows in the truth
table: this means that there are exactly four logically possible cases or assignments
of truth values to the atomic letters. For each case, the truth table fixes the definition
of the connective symbol; in this way, the connective symbol is defined altogether:
the definition of the connective is given as a matter of truth conditions: what truth
value the connective symbol takes when the truth values of its atomic letters are
specified. Once again, let us see how the definitions provided by the truth table
above harmonize with the definitions given by our earlier method.
• Row 1: |p| = T and |q| = T ⇒ | p ∙ q| = T.
• Row 2: |p| = T and |q| = F ⇒ | p ∙ q| = F.
• Row 3: |p| = F and |q| = T ⇒ | p ∙ q| = F.
• Row 4: |p| = F and |q| = F ⇒ | p ∙ q| = F.
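These row correspondences can be generated mechanically; a sketch producing the four rows for the dot in the standard order (the tuple encoding is our own):

```python
from itertools import product

# The four rows of the defining truth table for the dot (conjunction),
# in the standard <T,T>, <T,F>, <F,T>, <F,F> order; |p . q| is T only
# in the <T, T> row.
table_rows = [(p, q, "T" if (p, q) == ("T", "T") else "F")
              for p, q in product(("T", "F"), repeat=2)]
for row in table_rows:
    print(*row)
```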
The metalinguistic symbol, called turnstile and written as “⊢”, symbolizes logi-
cal consequence. Intuitively, the formulas to the left are the premises and the for-
mula to the right is the conclusion of an argument that is valid if and only if the
relation of logical consequence obtains as noted. The symbol of converse turnstile
is also available: “⊣” is used to indicate that logical consequence obtains from right
to left. We realize that if logical consequence obtains in both directions, then the
formulas on the two sides are related by the relation of logical equivalence: this is
the characteristic relation of identity of logical meaning.
φ ⊣⊢ ψ.
In this case φ and ψ are logically equivalent, which means:
1. φ and ψ imply each other
2. φ and ψ are logical consequences of each other
3. φ and ψ have the same truth values for all interpretations (value assignments of
T and F, true and false, to their atomic variable letters)
4. It is logically impossible for φ to be true/false while ψ is false/true
5. the formula constructed by joining φ and ψ with the logical equivalence symbol
is a tautology, which can be conceptualized as a zero-premises conclusion (a
conclusion that can be derived from the empty set of premises, or from any
premise-set whatsoever)
6. the converse is also the case: if the sentence formula that joins φ and ψ with the
equivalence symbol is a tautology, then φ and ψ are logically equivalent:
φ ⊣⊢ ψ iff ⊢ φ ≡ ψ.
Also, based on the observations we have made:
φ ⊣⊢ ψ iff ⊢ (φ ⊃ ψ) ∙ (ψ ⊃ φ).
φ ⊣⊢ ψ iff ⊢ (φ ∙ ψ) ∨ (~ φ ∙ ~ ψ)
7. φ and ψ can be inter-substituted for each other (not necessarily for all occur-
rences) in some formula χ without changing the truth value of χ
If φ ⊣⊢ ψ, then ⊢ χ(… φ …) iff ⊢ χ(… ψ …).
If χ is a logical truth or tautology (a zero-premises conclusion), then it remains
a tautology if occurrences of φ are replaced by occurrences (tokens) of ψ, not
necessarily systematically throughout, and vice versa (if occurrences of ψ are
replaced by occurrences (tokens) of φ).
For the case of one-direction logical consequence, from φ to ψ, the correspond-
ing logical truth or tautology (the zero-premise formula) is the implication formula
with φ as antecedent and ψ as consequent. We can write this, which is known as the
Deduction Theorem, as follows:
• φ ⊢ ψ iff ⊢ φ ⊃ ψ
Returning to our initial presentation of logical consequence, with multiple prem-
ises and possibly (although not usually) multiple conclusions, we have:
• φ₁, …, φₙ ⊢ ψ₁, …, ψₖ iff ⊢ (φ₁ ∙ … ∙ φₙ) ⊃ (ψ₁ ∨ … ∨ ψₖ) iff ⊢ ~ ((φ₁ ∙ … ∙ φₙ) ∙
~ (ψ₁ ∨ … ∨ ψₖ)) iff ⊢ ~ ((φ₁ ∙ … ∙ φₙ) ∙ ~ ψ₁ ∙ … ∙ ~ ψₖ).
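The connection between logical consequence and the corresponding tautology can be checked by brute force for concrete formulas. In this sketch formulas are encoded (our own convention) as functions from a valuation to "T"/"F", with φ as p ⊃ q and ψ as ~p ∨ q:

```python
from itertools import product

# phi |- psi: every valuation making phi true also makes psi true.
def consequence(phi, psi, atoms):
    return all(
        psi(dict(zip(atoms, vals))) == "T"
        for vals in product(("T", "F"), repeat=len(atoms))
        if phi(dict(zip(atoms, vals))) == "T"
    )

# |- chi: chi is true under every valuation (a tautology).
def tautology(chi, atoms):
    return all(
        chi(dict(zip(atoms, vals))) == "T"
        for vals in product(("T", "F"), repeat=len(atoms))
    )

phi = lambda v: "F" if (v["p"], v["q"]) == ("T", "F") else "T"  # p ⊃ q
psi = lambda v: "T" if v["p"] == "F" or v["q"] == "T" else "F"  # ~p ∨ q
imp = lambda v: "F" if (phi(v), psi(v)) == ("T", "F") else "T"  # phi ⊃ psi

# Consequence obtains in both directions, and the implication is a tautology,
# as the Deduction Theorem predicts.
print(consequence(phi, psi, ["p", "q"]), tautology(imp, ["p", "q"]))
```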
cannot be tautologies either. Nevertheless, since this is a logic, after all, it must have
a definable and characterizing logical consequence relation! This means that we
have cases in which φ ⊢ ψ obtains and yet ⊢ φ ⊃ ψ does not obtain. It is clear, then,
that the Deduction Theorem fails in this case as a principle and we would be wrong
to try to formalize this logic by positing a Deduction Axiom.
Although alternative logics, whose semantic implementations are controversial,
lie beyond our scope, our brief excursion above disillusions us about any notions
that the Deduction Theorem necessarily obtains in any conceivable logical system.
Of course, this is so if one allows for a pluralistic view of logic. If one were to fol-
low in the footsteps of Aristotle, and of many others including many modern logicians,
one might disavow such profligate proliferation of logics and insist that there is one
correct logic. One could even attempt to make the case that failure of the Deduction
Theorem is itself a criterion for disqualifying a formalism as logic. Note that a for-
mulation of such a logical system cannot itself be subjected to criticism; it is always
legitimate as a mathematical structure and it can be studied extensively in the area
of Metalogic; what is controversial is whether a plausible or acceptable semantic
construction that matches such a formal system ought to be admitted.
1. To prove the Deduction Theorem, we need to proceed from each side of the iff
(if and only if) to the other (from right to left and from left to right, in any order),
since this is a claim about logical equivalence and, as we know, logical equiva-
lence is logical implication that proceeds in both directions. Specifically, we
want to prove:
φ ⊢ ψ iff ⊢ φ ⊃ ψ.
Accordingly, our proof must have two parts, or subproofs:
If φ ⊢ ψ, then ⊢ φ ⊃ ψ; and
if ⊢ φ ⊃ ψ, then φ ⊢ ψ.
2. We use, in this informal format of semantical-analytical proof construction, the
method, which we will also study formally in our natural deduction systems
later, known as Proof by Contradiction (and also identified historically by the
Latin name Reductio ad Absurdum, which we may abbreviate as “RAA.”) This
is a powerful proof method, abundantly present in mathematical proofs – to the
extent that more than half of the mathematics we have would need to be given up
if, for some reason or other, this method were rejected (which is the case for a
school of thought known as Mathematical Intuitionism.) In application of the
RAA method, we stipulate as a provisional assumption the negation of the pur-
ported conclusion we are supposed to derive; then we seek to prove validly that
a contradiction (logical absurdity) follows; if we prove a contradiction from the
premises and from the provisionally added assumed or posited premise which is
the negated conclusion, then we can lay down the conclusion we were supposed
to prove; at this point, upon obtaining success, the provisional assumption (the
negated conclusion we posited) is, as we say, discharged. Using an assumption
that has not been authorized as self-validating is an assumed burden: our
proof cannot terminate without getting rid of this burden, and the proof of the
absurdity shows, we claim, that this assumption cannot be accepted; hence, it is
discharged.
4.1 Definitions of the Connectives by an Equational Method 129
1. Assume that
⊢ φ ⊃ ψ.
We will prove that
φ⊢ψ
2. We posit as an assumption (for reductio, as we can say), that it is not the case that
φ⊢ψ
3. From 2: since it is not the case that logical consequence from φ to ψ obtains,
there must exist at least one interpretation or valuation for which φ is true and ψ
is false.
4. Since there is an interpretation for which φ is true and ψ is false (as we have
from 3), it follows that the implicative formula with φ as antecedent and ψ as
consequent is not a tautology; this is because the definition of the connective of
logical implication constrains us to accept as tautologous only implications for
which there is no interpretation that assigns true to φ and false to ψ. Hence, it is
not the case that ⊢ φ ⊃ ψ
5. But from 1 we have that
⊢φ⊃ψ
6. Hence, from 4 and 5, we have logical absurdity.
7. We have, therefore, by reductio proven that
φ ⊢ ψ.
• Having proven that the implication holds in both directions, we have shown that:
• φ ⊢ ψ iff ⊢ φ ⊃ ψ.
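The equivalence just proven can also be checked mechanically for particular formulas. The following brute-force sketch (in Python, which of course lies outside both our object language and our metalanguage; the sample formulas φ = p ∙ q and ψ = p ∨ q are arbitrary illustrative choices) confirms that φ ⊢ ψ holds exactly when the corresponding implication is true on every valuation.

```python
from itertools import product

# Sample formulas over the atoms p, q (arbitrary illustrative choices):
# phi is p . q and psi is p v q.
phi = lambda p, q: p and q
psi = lambda p, q: p or q

rows = list(product([True, False], repeat=2))

# phi |- psi: every valuation making phi true also makes psi true.
consequence = all(psi(p, q) for p, q in rows if phi(p, q))

# |- phi > psi: the conditional is true on every valuation (a tautology).
tautology = all((not phi(p, q)) or psi(p, q) for p, q in rows)

# The Deduction Theorem predicts that the two checks always agree.
assert consequence == tautology
```

Swapping any other pair of truth functions in for phi and psi leaves the final assertion intact; only the two individual checks change, and they change together.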
4.1.3 Exercises
output true for every input and the connective ⊥ returns the fixed output value
false for every input.
   p    ~ p    id p    ⊤ p    ⊥ p
1  T     F      T       T      F
2  F     T      F       T      F

(The columns give, for each truth value of the atomic variable input, the outputs of negation, identity, and the two fixed-value connectives.)
Now continue for the case of the definable binary connectives. There are sixteen of
them. Use the symbols from {∙, ∨, ⊃, ≡, ≢, 1, 2, ~1, ~2, ⊂, ⊅, ⊄, ⊤, ⊥, ↓, |},
taking into consideration that the familiar symbols in this set have already been
assigned to the connectives in our constructed formal logic, and rely on the fol-
lowing notes.
a. ≢: this symbol connotes disequivalence (also called exclusive disjunction):
b. ⊂: this symbol connotes converse implication:
c. ⊅: this symbol connotes negated implication:
d. ⊄: this symbol connotes negated converse implication:
e. |: this symbol connotes the Sheffer connective (also called NAND and alternate
negation) and is called the Sheffer Stroke:
f. ↓: this symbol is called the Peirce Arrow or the Quine Arrow or Quine Dagger
and connotes the connective usually called Peirce connective, NOR or joint
negation:
g. 1: the connective connoted by this symbol returns the same truth value as the
truth value of the first input (which is p in our table). Names for this connective
are rarely encountered. It is a pseudo-binary function: can you tell why, given the infor-
mation that a pseudo-n-ary connective is one whose truth table definition has
exactly the same outputs, for every input, as a connective of lower arity?
132 4 Sentential Logic Languages ∑
h. 2: this connective symbol connotes the connective that returns the truth value of
the second input (which is q for the format we have been using).
i. ~1: this connective symbol connotes the connective that returns as output the
negation of the first input.
j. ~2: this connective symbol returns as output the negation of the second input.
k. ⊤: this connective symbol connotes the connective that yields fixedly the truth
value true for all input combinations.
l. ⊥: this connective symbol connotes the connective that yields the fixed value of
false for all possible combinations of input values.
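Since a binary connective is fixed by its column of outputs over the four input rows, the count of sixteen can be verified by enumeration. A short Python sketch (an illustration outside the formal language; the variable names are ours):

```python
from itertools import product

rows = [(True, True), (True, False), (False, True), (False, False)]

# A binary connective is determined by its column of four outputs,
# one per input row, so there are 2**4 = 16 of them.
connectives = list(product([True, False], repeat=4))
assert len(connectives) == 16

# A few familiar columns, computed from their known definitions:
conj = tuple(p and q for p, q in rows)        # the dot
disj = tuple(p or q for p, q in rows)         # the wedge
nand = tuple(not (p and q) for p, q in rows)  # Sheffer Stroke
nor  = tuple(not (p or q) for p, q in rows)   # Peirce Arrow
for col in (conj, disj, nand, nor):
    assert col in connectives
```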
4. Continuing from the preceding exercise, try to name all the symbols. Respect the
names we have already provided for the symbols we have defined formally in
our symbolic language. Why is it important to have names for the symbols? How
are the symbols different from the connectives themselves?
a. Next, for each of the defined connectives, construct sentences of English which
are compounded by having two simple sentences joined by means of the connec-
tive. Take as your first sentence “it is raining” and as your second sentence “the
game is cancelled.” For instance, for implication: “if it is raining, then the game
is cancelled.”
b. How would you characterize the compounds with the fixed-value connectives as
their main connective?
c. Next, discuss if we can defend the choices we have made from the 16 available
connectives for presumably capturing the logical behavior of inclusive “either
or,” and “and.”
d. What about implication – the connective we presume to be matching the logical
behavior of “if-then” in language? Did we have other choices? Why should we
reject those other choices? Are there any problems you detect with the claim that
our implication connective indeed captures the logical behavior of the “if-then”
of language?
e. Which connective matches the linguistic exclusive “either or” (meaning that one
but exactly one of two options must be chosen)?
5. Our officially constructed formal language has connective symbols from the set
{~, ∙, ∨, ⊃, ≡}. It is good news that this set of connectives is what we call func-
tionally complete: we can define any mathematically definable connective (not
only unary and binary but of any arity) by using appropriate combinations of
iterations of the connective symbols we have. We can also examine this by means
of the truth table definitions of connectives. For instance, the triple bar can be
defined by using formulas that have only symbols from {∙, ⊃}. We show this
below. We have not yet studied how to use truth tables for computing truth val-
ues; as an exercise in figuring out how this is done, mark how the connectives
have been defined and follow the computations starting from lesser-scope con-
nectives and ending up with the value of the major-scope connective which is
also the value of the whole compound expression. Then notice if we have the
same values for every row for two formulas: that means that they are logically
equivalent; they have the same logical meaning and they are indeed, for this
reason, interdefinable. All this should allow you to observe how the formula with
the triple bar – which has been our truth table definition of this connective sym-
bol – is indeed interchangeable with the formula that uses only the conjunction
and implication symbols. This shows interdefinability.
   p  q   (p ⊃ q)  ∙  (q ⊃ p)   p ≡ q
1  T  T      T     T     T         T
2  T  F      F     F     T         F
3  F  T      T     F     F         F
4  F  F      T     T     T         T
You might now realize that we actually have more connective symbols than we
need to have in order to be able to define all the mathematically definable connec-
tives of two-valued sentential language of any arity. After all, we just established
that we do not need the triple bar symbol, although our formulas would become
more cumbersome and perhaps less intuitive to write out. This means that we could
have fewer connectives (though not just any subset will do) and still be able to
define all the mathematically definable connectives of the two-valued sentential
language.
a. Use the truth table to show that the following sets are themselves functionally
complete: {~, ∨}, {~, ∙}, {~, ⊃}. Take into account the following equivalences:
(p ∨ q) ≡ ~ (~ p ∙ ~ q)
(p ∙ q) ≡ ~ (~ p ∨ ~ q)
(p ⊃ q) ≡ (~ p ∨ q) ≡ ~ (p ∙ ~ q)
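Each of these equivalences can be confirmed exhaustively over the four valuations, for instance with the following Python sketch (illustrative only):

```python
from itertools import product

# Run through all four valuations of p, q and check each equivalence.
for p, q in product([True, False], repeat=2):
    assert (p or q) == (not (not p and not q))   # wedge from ~ and the dot
    assert (p and q) == (not (not p or not q))   # dot from ~ and the wedge
    # Two ways of defining the horseshoe agree:
    assert ((not p) or q) == (not (p and not q))
```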
b. For any given set ℾ, if we can use only its symbols to define all the connective
symbols in {~, ∙, ∨}, we can declare that ℾ is functionally complete. This is
because the set {~, ∙, ∨} is itself functionally complete. As a clue: consider how
we use the truth table to define connectives: consider that, across each row, the
atomic variables can take truth values true or false (and, thus, we can have, for
instance, ⌜~ p⌝ and ⌜q⌝ if ⌜p⌝ is true and ⌜q⌝ is false on the row.) Next consider
that the connective can also be defined as the disjunction of the conjunctions of
the atomic variables of all rows in which the connective is defined as true. (For
instance: ⌜p ⊃ q⌝ can be defined as the disjunction of the conjunctions: ⌜p ∙ q⌝,
⌜~ p ∙ q⌝, ⌜~ p ∙ ~ q⌝.) Prove that if any set of connectives (with one or more mem-
bers) can be used to define the connectives in {~, ∙, ∨}, then this set of connec-
tives is functionally complete.
c. From the 16 definable binary connectives we examined in the preceding exercise, we
have the thrilling result that each of two binary connectives is just by itself suffi-
cient for defining every definable connective of the two-valued logic. The one-
member sets are {|} and {↓}, the Sheffer Stroke and the Peirce Arrow. Show
how each one of these connectives can define all the members of the set {~, ∙,
∨} – thus (per preceding section of this exercise), each of these connectives is
functionally complete. Continue to define the other binary connectives by means
of the Sheffer Stroke and Peirce Arrow. Do the same for the definable unary
connectives.
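Readers who want to check candidate definitions mechanically can use a sketch like the following (Python, illustrative only). It verifies the standard definitions of the members of {~, ∙, ∨} in terms of the Sheffer Stroke alone and in terms of the Peirce Arrow alone; deriving these definitions for oneself is the point of the exercise.

```python
from itertools import product

nand = lambda a, b: not (a and b)  # Sheffer Stroke |
nor  = lambda a, b: not (a or b)   # Peirce Arrow

for p, q in product([True, False], repeat=2):
    # Standard definitions via the Sheffer Stroke alone:
    assert (not p)   == nand(p, p)
    assert (p and q) == nand(nand(p, q), nand(p, q))
    assert (p or q)  == nand(nand(p, p), nand(q, q))
    # Standard definitions via the Peirce Arrow alone:
    assert (not p)   == nor(p, p)
    assert (p or q)  == nor(nor(p, q), nor(p, q))
    assert (p and q) == nor(nor(p, p), nor(q, q))
```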
6. There are two different meanings of “either or” or disjunction: inclusive and
exclusive. Two formulas are inclusive disjuncts of each other if and only if they
cannot be false together but they can be true together. Check the definition of the
inclusive disjunction symbol we have provided to verify that it agrees with the
way we presented it above in this exercise. Show that this is the case. Two for-
mulas are exclusive disjuncts of each other when exactly one of them is true
(which is equivalent to saying that one is true and only one is true, etc.…) We do
not have a symbol for the exclusive disjunction in our formal system. Let us use
the symbol “≢”.
a. Construct the truth table for ≢.
b. Construct the truth table for the negation of the equivalence symbol: ~ ≡. What
observation can you make in comparing it with the truth table for ≢? Comment
on this.
c. Prove by means of semantic analysis that the following is a logical truth of the
two-valued logic: ⌜(p ≡ q) ∨ (q ≡ r) ∨ (p ≡ r)⌝. Comment on this fact. Does the
fact that the logic has exactly two truth values play a role? What role does the
number of different atomic variable letters play?
d. In natural languages like English, and most languages, there is an ambiguous
expression, “either or,” for both inclusive and exclusive disjunction. How can the
competent user of the natural language discern between the two? Why can’t we
allow for a similarly ambiguous symbol in our formal language? Do you have an
opinion as to which of the two uses of “either or” is more common in English?
e. The linguistic phrase “exactly one of Shack, Slack, and Stack came to the party”
appears to require translating by means of the exclusive disjunction symbol (for
symbols with subscripts translating the atomic sentences):
S1 ≢ S2 ≢ S3.
Are we allowed to dispense with parentheses here without incurring ambigu-
ity? To answer this, construct the truth table to determine if exclusive disjunction
is associative, which means that:
((S1 ≢ S2) ≢ S3) ≡ (S1 ≢ (S2 ≢ S3))
If this is so, the placement of parentheses does not affect the logical meaning
(true or false as the value of the compound expression), and we can dispense
with parentheses without incurring ambiguity.
But, now comes a surprise. It is wrong to translate by using the exclusive
disjunction symbol: why is that? Check the truth table again. Show why the
translation fails to capture the meaning “exactly one of … came to the party.”
Next, construct the truth table defining a ternary (triadic, three-place) connec-
tive symbol that does the job. Let us use the symbol “!1” for it, and
notice how we conveniently shift to implementing a prefix notation (placing the
connective symbol in the front; in this case we also place the atomic vari-
ables within parentheses).
p q p≢q
T T ?
T F ?
F T ?
F F ?
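Both the associativity question and the surprise about “exactly one” yield to brute force. A Python sketch (the name exactly_one is our own label for the ternary connective the exercise asks for):

```python
from itertools import product

xor = lambda a, b: a != b  # exclusive disjunction (disequivalence)

# Associativity: parenthesization does not affect the value.
for s1, s2, s3 in product([True, False], repeat=3):
    assert xor(xor(s1, s2), s3) == xor(s1, xor(s2, s3))

# But the chained formula does NOT mean "exactly one": when all three
# disjuncts are true, the chain comes out true as well.
assert xor(xor(True, True), True) is True

# A ternary connective that captures the intended meaning:
def exactly_one(s1, s2, s3):
    return [s1, s2, s3].count(True) == 1

assert exactly_one(True, True, True) is False
assert exactly_one(True, False, False) is True
```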
7. Our semantic connectives (defined over the set of truth values {T, F} and so
called semantic) interpret, as we say, functions of the Boolean algebra. A func-
tion corresponds to an operation by which a number of inputs (depending on the arity
of the function) are entered and the given definition of the function is applied to
yield a unique specified output. For instance, in the case of the standard arith-
metical operation of addition, +(5, 7) = 12. The function is not the operation
itself, you should note. It is the abstract ordered n-tuple, a relation: for instance,
|+| = {<<x, y>, z> / z = x+y}. Now that we can count on some rudimentary
understanding of how to perform operations, let us consider the definition of a
reverse operation. Given that x+y = z, a reverse operation of addition can be
defined such that: x + y = z if and only if z − y = x. Yes, this operation
is definable and is the familiar arithmetical subtraction. We will now consider if
it is possible to define reverse operations for the operations that are interpreted
by our logical connectives.
Consider the case of inclusive disjunction. The reverse operation, if defin-
able, must correspond to some definable truth function (interpreted by a logical
connective). Let us assume that it is, writing “∗” for the purported reverse opera-
tion. We require that:
(p ∨ q) ≡ r if and only if (r ∗ q) ≡ p
a. Take p and q to be both true, T. Plug in the values and continue. You are using the
known definitions of the symbols for inclusive disjunction and equivalence to
compute outputs. Because the metalinguistic clauses are connected by “if and
only if (iff)”, it is understood that you have to end up with the same truth values
on both sides of the “iff.” Let us make both sides T. They have to be both true if
either one is. For p and q, we assign T to both. Continue with the computations:
you seek basically to determine the value of r. Show that a logical absurdity
results, which proves that there is no definable reverse operation for the operation
corresponding to the inclusive disjunction symbol.
(p ∨ q) ≡ r if and only if (r ∗ q) ≡ p.
b. Do the same for the operations corresponding to the other connective symbols of
our system, except for the equivalence symbol.
c. Follow similar steps to show that reverse operations are definable for the opera-
tions which we have interpreted through the logical connectives of equivalence
and (from previous exercise) exclusive disjunction (or disequivalence, negated
equivalence.)
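The question whether a reverse operation exists reduces to a question about injectivity: a reverse must recover p from the pair consisting of the output and q, so it exists just in case no two values of p ever yield the same such pair. A brute-force Python sketch (has_reverse is our own illustrative name):

```python
from itertools import product

def has_reverse(op):
    """A reverse operation must recover p from (op(p, q), q); it exists
    just in case the map (p, q) -> (op(p, q), q) never collides in p."""
    seen = {}
    for p, q in product([True, False], repeat=2):
        key = (op(p, q), q)
        if key in seen and seen[key] != p:
            return False
        seen[key] = p
    return True

assert has_reverse(lambda p, q: p or q)  is False  # no reverse for the wedge
assert has_reverse(lambda p, q: p and q) is False  # none for the dot
assert has_reverse(lambda p, q: p == q)  is True   # equivalence is reversible
assert has_reverse(lambda p, q: p != q)  is True   # so is exclusive disjunction
```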
8. In our metalanguage for ∑, prove the following by means of semantical analysis.
a. φ ⊃ ψ, φ ⊢ ψ
b. φ ⊃ ψ, ~ ψ ⊢ ~ φ
c. φ ⊃ ψ, ψ ⊃ χ ⊢ φ ⊃ χ
d. φ ⊢ φ ∨ ψ
e. ~ φ ⊢ φ ⊃ ψ
f. ψ ⊢ φ ⊃ ψ
g. φ ≡ ψ ⊣⊢ ~ (φ ≡ ~ ψ)
h. (φ ⊃ ψ) ⊃ φ ⊢ φ
i. φ ⊃ ψ ⊣⊢ ~ φ ∨ ψ
j. ~ (φ ⊃ ψ) ⊣⊢ φ ∙ ~ ψ
9. If we can find truth values for the atomic variable letters, for which the premises
of the logical consequence relation are all true and the conclusion is false, this
suffices to show that the logical consequence instance under consideration fails.
We have basically provided a model or interpretation (the assignments of truth
values to the individual variable letters) for which the considered instance of
logical consequence fails. Why is this sufficient? Next, provide interpretations to
show that the given instances of logical consequence all fail in the standard sen-
tential logic.
a. φ ∨ ψ ⊢ φ ∙ ψ
b. φ ⊃ ψ, ~ φ ⊢ ~ ψ
c. (φ ⊃ ψ) ⊃ ψ ⊢ ψ
d. φ ⊃ ψ ⊢ ψ ⊃ φ
e. ~ φ ∨ ψ ⊢ ψ ⊃ φ
f. χ ⊃ (φ ⊃ ψ) ⊢ ψ ⊃ (φ ⊃ χ)
g. φ ⊃ ψ ⊢ ~ (φ ⊃ ~ ψ)
h. φ ≡ ψ ⊢ ~ (~ φ ∙ ~ ψ)
i. ~ (φ ≡ ψ) ⊢ ~ (ψ ∙ φ)
j. χ ∨ (φ ∙ ψ) ⊢ χ
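Such countermodel hunting can be automated by enumerating all valuations and keeping those that make every premise true and the conclusion false. A Python sketch, applied to instance (a) above (the function name is our own):

```python
from itertools import product

def countermodels(premises, conclusion, n_atoms):
    """Valuations making every premise true and the conclusion false."""
    return [v for v in product([True, False], repeat=n_atoms)
            if all(pr(*v) for pr in premises) and not conclusion(*v)]

# Instance (a): phi v psi |- phi . psi fails, e.g. when p is T and q is F.
found = countermodels([lambda p, q: p or q], lambda p, q: p and q, 2)
assert (True, False) in found
```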
If the truth values of the atomic components of any well-formed formula are given, then the truth value
of that formula can be determined by computational means. This is in evidence
already, given the way we have defined the connectives. If we think of the connec-
tives as giving directions about how to carry out computational operations, then we
can inspect how the computation can be carried out for specified truth values of the
components. We need a guarantee that the computational operations always come to
an end, or terminate: we have such a guarantee although we are not proving that
here. It is important that we realize how those operations are to be carried out in a
specific order: we must end with calculating the truth value of the connective sym-
bol that has the largest scope. (We covered scopes and identification of the major
connective in 4.1.a to be prepared for this development.) The largest-scope connec-
tive symbol receives a truth value – for a specified assignment of truth values to the
atomic letters – and this truth value is the truth value of the whole formula.
We can see how this works, familiarly, in basic arithmetic; or, to speak
more precisely, with the definitions of the operators of that arithmetic, having as
the operators of our example the familiar addition and multiplication, defined as
usual over the natural numbers:
x=1
y=3
z=5
We can compute the value of the entire expression – which is written out in
proper formal grammar – for the assigned values of the atomic variables. We use
vertical bars, metalinguistically in this context, to indicate value. The values of the
atomic variables are from the natural numbers and, note, the value of the expression
is also a natural number.
|(x + y) ⨯ (y + z)|x=1, y=3, z=5 = |(1 + 3) ⨯ (3 + 5)| = |4 ⨯ 8| = 32
First we computed the value of the addition symbols, which have smaller scopes
than the major operator which is, in this case, the multiplication symbol. It is not
important whether we start with the addition symbol that is leftmost, or the other
way round. We could have specified this as part of a systematic process but it is not
important for us to do this because the outcome would not be affected. But we must
definitely compute smaller-scope operators first and proceed in increasing order of
scope-size and we must always terminate the computation with determining the
value of the major operator of the formula. In the arithmetical example, the multi-
plication sign is the symbol of the major operator (the largest-scope) operator of the
given expression. Now we show an example for our formal language of sentential
logic in order to begin to cultivate some familiarity with the process. We use one of
the methods we will be presenting. Preferences may vary; this might not turn out to
be your preferred method of computing truth values of formulas. All the methods
that are made available are guaranteed to reach the same – correct – result for every
case in which the same truth values are assigned to the atomic variables.
|(p ≡ ~ q) ⊃ ~ (~ p ≡ q)|p = T, q = F = |(T ≡ ~ F) ⊃ ~ (~ T ≡ F)| =?
To proceed, we must know the definitions of the connectives. We must begin
with the smallest-scope connective symbols – the tildes in this case – and proceed
to the computation of the truth value of the two occurrences of the triple bar (it is
not important in what order, from left to right or from right to left); and we must end
with the computation of the truth value of the major connective symbol of the for-
mula, which is the one and only occurrence of the horseshoe. Superscript num-
bers affixed to the connective symbols can show the order of computation, starting with 1. We can affix the numeral 1 to
both tildes, indicating in this way that the order in which we carry out these two
computations is immaterial as it does not affect the outcome.
When the truth values (true – false) of the atomic (individual or simple) letters of a
well-formed formula of ∑ are known, then the truth value of the entire well-formed
formula can be determined. We compute with truth values, true and false, which
sounds odd but this is nothing to be concerned about. This is not the algebra you are
familiar with. There is an underlying algebra – a special algebra with only two num-
bers, historically considered as members of the set {1, 0} – and the operations of
this algebra are defined over the set of its numbers {1, 0}. This sounds like what we
say when we assert that the connectives we have in ∑ are defined over the values
{T, F}. The notion of computationality applies here for reasons we will examine
briefly. To begin with, we need to overcome the initial sense of oddity attaching to
using words like “computation.” Let us see an example of a computation with truth
values, based on the definitions of the connectives we have provided. We are using
the algebraic method we set up in 4.3.a but, as we will soon find out, the truth table
can also be used for writing out computations. Moreover, as we will see, we can also
use diagrams – somewhat like the parsing trees of 4.2 – which we can call Semantic
Computation Trees to carry out computations. As we know, the truth value of a for-
mula is the truth value of its major connective symbol.
Given the following formula and specified truth values for its component atoms,
determine the formula’s truth value.
• φ = (p ≡ ~ (q ⊃ (~ q ∙ ~ p)))
• |p| = T/|q| = F/
1. We plug in the truth values for the atomic letters.
2. φ = (T ≡ ~ (F ⊃ (~ F ∙ ~ T)))
3. Now we perform the computations. We need to begin with smallest-scope
connective symbols and move to larger-scope and ultimately to the major-
scope symbol whose truth value establishes the truth value of the formula
itself. Let us mark the connective symbols, to distinguish occurrences from
left to right, and, next, let us determine scopes. The scope of each symbol is
the collection or set of the symbols it controls (not counting parentheses,
which are simply auxiliary symbols.) Some connective symbols occur only
once; it is superfluous to mark them with superscripts when we seek to deter-
mine occurrences. We also mark occurrences of the atomic letters.
4.2 Computation of Truth Values of Well-Formed Formulas of ∑ 139
4. p¹ ≡ ~¹ (q¹ ⊃ (~² q² ∙ ~³ p²))
5. Scope(≡) = {p¹, ~¹, q¹, ⊃, ~², q², ∙, ~³, p²}/
Scope(~¹) = {q¹, ⊃, ~², q², ∙, ~³, p²}/
Scope(⊃) = {q¹, ~², q², ∙, ~³, p²}/
Scope(~²) = {q²}/
Scope(∙) = {~², q², ~³, p²}/
Scope(~³) = {p²}/
6. Clearly, the triple bar has the largest scope. We start from smallest-scope con-
nectives and, progressively moving to larger-scope connective symbols, we
terminate with computation of the truth value of the triple bar. The two con-
nective symbols with only one letter in their scope are the second and third
occurrences of the tilde: it does not matter which one we compute first. We
start with those and move to the computation for the next connective symbol
which, in our case, is the dot: its scope has four symbols in it; next is the
horseshoe with six symbols in its scope; then the first occurrence of the tilde,
with seven symbols in its scope; and we end with the triple bar, which has
nine symbols in its scope.
7. T ≡ ~ (F ⊃ (~ F ∙ ~ T)) =
T ≡ ~ (F ⊃ (~ F ∙ F)) =
T ≡ ~ (F ⊃ (T ∙ F)) =
T ≡ ~ (F ⊃ F) =
T ≡ ~ T =
T ≡ F =
F
We have computed the truth value of the given wff (well-formed formula) for the
specified truth values of the atomic letters.
We can write our result synoptically as follows. Remember that we are using the
metalinguistic symbol “||” for truth value of a formula or atomic letter.
• If |p| = T and |q| = F, then |(p ≡ ~ (q ⊃ (~ q ∙ ~ p)))| = F.
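The smallest-scope-first procedure can be mirrored by a short recursive evaluator. In the following Python sketch, formulas are encoded as nested tuples (an encoding of our own choosing for illustration, not wffs of ∑ itself), and the example reproduces the computation just carried out:

```python
def value(f, v):
    """Evaluate an encoded formula f under the valuation v, computing
    smaller scopes before the connective that contains them."""
    if isinstance(f, str):       # an atomic letter
        return v[f]
    if f[0] == '~':              # tilde (unary)
        return not value(f[1], v)
    op, left, right = f
    l, r = value(left, v), value(right, v)   # smaller scopes first
    if op == '.':  return l and r            # dot
    if op == 'v':  return l or r             # wedge
    if op == '>':  return (not l) or r       # horseshoe
    if op == '=':  return l == r             # triple bar

# (p = ~ (q > (~ q . ~ p))) with |p| = T and |q| = F:
phi = ('=', 'p', ('~', ('>', 'q', ('.', ('~', 'q'), ('~', 'p')))))
assert value(phi, {'p': True, 'q': False}) is False
```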
This is an outstanding, and characterizing, feature of the standard sentential
logic: it is characterized by compositionality of logical meaning (which is truth
value): if the truth values of the atomic component letters are known, then the truth
value of the formula can always be determined with strict precision. This is the
result of the fact that we have only functional connectives: our connectives are
defined in such a way that the output they are set to give is always unique for any
specified combination of inputs. Another factor that allows computationality is the
fact – which can be proven – that we can never have an infinite number of symbols
in any well-formed formula of a language like ∑.
We retain the same example as in the preceding section. We will now learn how to
carry out computations of truth values of well-formed formulas by using a diagram-
matic approach. It is to be understood that these computations – as in this and in the
preceding section – are carried out in our metalanguage.
We saw in a preceding section, 4.2, how we can construct a parsing tree. That
type of tree is dedicated to decomposing the syntactic or grammatical structure of a
well-formed formula of our formal language Σ. The computational diagram we will
be constructing now can also be thought of as a tree; we may as well call this a
semantic tree since we are applying it to computing truth values (and, as we see time
and again, truth values are logical meanings in the standard sentential logic we are
studying.)
The rules for the semantic computation trees are shown schematically by the fol-
lowing shapes. The rules are named by the symbols of the connectives written next
to the arrow (for the rules applying to the tilde) or between the arrows (for the rules
applying to the binary connective symbols). The rules are themselves schemata –
figures – of semantic computation trees. The case of the conditional/implication
deserves special attention: F is computed only when the antecedent (left) is T and
the consequent (right) is F.
To make computations, at the top of our semantic computation tree we place the
given well-formed formula. The space that is labeled by this formula we call the root
of the semantic computation tree. Notice that, unlike the case of natural trees, the
root is actually at the top. We continue with plugging in the values and then we carry
out computations beginning from smaller-scope connectives and continuing until
ultimately we end up with the truth value of the largest-scope connective symbol (the
main connective symbol), which is the same as the truth value of the given formula.
We indicate what rule we have applied by using the name of the rule next to the
arrow (for unary connectives) and between the arrows (for binary connectives.)
It may be possible to compute the truth value of a compound (or complex) formula
of ∑ even when we have only partial or incomplete information about the truth
values of the component atomic variables of the formula. We have no guaran-
tee that this is feasible; but sometimes we can carry the computations through to
the end, because only one truth value is possible in every possible case, as in the following example
(for which we use the computation method of 4.4.a.) The given values are symbol-
ized, as before, in our metalanguage, by the vertical bars on each side. Our given
information is incomplete; there is at least one atomic variable for which we are not
given an assigned, specified truth value.
|p| = T, |q| = F, |r| = ?
|q ⊃ r| =?
Plugging in the provided information, we have:
|F ⊃ _| =?
We are able to finish this computation, in spite of the absence of information
about the truth value of ⌜r⌝. If we examine carefully our definition of ⌜⊃⌝, we will
see that, in every case in which the antecedent receives the truth value F, the impli-
cational formula receives the truth value T. (We have to make sure that we remem-
ber the terms: antecedent is the formula to the left of the horseshoe and consequent
is the formula to the right of the horseshoe. The whole formula, with the horseshoe
as its main connective symbol, is called implicational.) Let us see the cases in which
the antecedent receives F as truth value. We highlight those, and only those cases.
• T ⊃ T = T/ T ⊃ F = F/ F ⊃ T = T/ F ⊃ F = T
Since in both cases in which the antecedent is F the output is T, in this case
the truth value of the implicational formula does not depend on the truth value of the
consequent. Even if we don’t know the consequent value, it is necessarily the case
that the implicational formula receives T as its truth value. Taking advantage of our
algebraic notation, we can write this as follows, in our metalanguage:
|F ⊃ φ| = T
We write φ to indicate that the consequent does not have to be an atomic vari-
able – it can be compound.
Sustained study of the definition of the horseshoe will reveal that we have another
case in which the output is determined regardless of complete information: in every
possible case in which the consequent has the truth value T, the whole formula
receives the truth value T regardless of the truth value of the antecedent.
• T ⊃ T = T / T ⊃ F = F/ F ⊃ T = T/ F ⊃ F = T
|φ ⊃ T| = T.
We wonder now what examination of the definitions of other connectives may
show, to allow us to complete computations even with incomplete information about
the assigned truth values of the atomic variables. It is straightforward to inspect the
definitions (by any method we have used) to attest to the following results, which
we show below. We also include the results we have already discussed.
• |φ ⊃ T| = T
• |F ⊃ φ| = T
• |T ∨ φ| = |φ ∨ T| = T
• |F ∙ φ| = |φ ∙ F| = F
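These determinate cases can be modeled with a three-valued sketch in which the unknown assignment is represented by None (a device of this illustration only, not notation of ∑):

```python
def imp(a, b):
    """Horseshoe under partial information; None stands for 'not given'."""
    if a is False or b is True:
        return True          # output fixed regardless of the missing value
    if a is None or b is None:
        return None          # the computation cannot be finished
    return (not a) or b

def disj(a, b):
    if a is True or b is True:
        return True
    return None if None in (a, b) else False

def conj(a, b):
    if a is False or b is False:
        return False
    return None if None in (a, b) else True

assert imp(False, None) is True    # |F > phi| = T
assert imp(None, True)  is True    # |phi > T| = T
assert imp(True, None)  is None    # |T > phi| cannot be computed
assert disj(True, None) is True    # |T v phi| = T
assert conj(None, False) is False  # |phi . F| = F
```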
We can also show cases in which we are unable to complete the computation
when we lack information about the assigned truth values of certain atomic vari-
ables. We can, again, examine the definitions of our connectives to ascertain that
this is the case. For instance, if one conjunct (either one of the formulas joined
by the dot) is true, then we cannot determine what the truth
value of the whole formula is: there is one possible case in which the whole formula
is T and there is another possible case in which the whole formula is F. Let us see
what those cases are.
|T ∙ φ| =? ⇒ if |φ| = T, then |T ∙ φ| = T ∙ T = T.
|T ∙ φ| =? ⇒ if |φ| = F, then |T ∙ φ| = T ∙ F = F.
The same for:
|φ ∙ T| =? ⇒ if |φ| = T, then |φ ∙ T| = T ∙ T = T.
|φ ∙ T| =? ⇒ if |φ| = F, then |φ ∙ T| = F ∙ T = F.
Let us now inspect a list of such cases, in which incomplete information about
the assigned truth values makes it impossible to finish the computations.
• |φ ⊃ F| =?
• |T ⊃ φ| =?
• |F ∨ φ| = |φ ∨ F| =?
• |φ ∙ T| = |T ∙ φ| =?
We can actually provide equations – which are not going to be helpful with the
computations, as we have indicated – that relate compound formulas under incom-
plete information with the truth values of variables. Here is an example, for the case
examined above, in which we know that the truth value of one conjunct is true but we
do not know the truth value of the other conjunct. In that case, it is the truth value of
the unknown conjunct that determines the output: if the unknown conjunct receives T,
then the whole formula receives T; if the unknown conjunct receives F, then the whole
formula receives F: thus, the truth value of the whole formula is equal to the truth
value of the unknown conjunct. Clearly, this means, again, that we cannot compute.
Let us see certain equations in which we have an established relation between the truth
value of the whole formula and the truth values of atomic components.
• |T ⊃ φ| = |φ|
• |φ ⊃ F| = |~ φ|
• |φ ∨ F| = |F ∨ φ| = |φ|
• |φ ∙ T| = |T ∙ φ| = |φ|
• |φ ⊃ ~ φ| = |~ φ|
• |φ ∨ (φ ∙ ψ)| = |φ ∙ (φ ∨ ψ)| = |φ|
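Each of these relating equations can be checked by running through both possible values of the unknown formula, as in this Python sketch (illustrative only):

```python
imp = lambda a, b: (not a) or b   # the horseshoe

for phi in (True, False):
    assert imp(True, phi) == phi              # |T > phi| = |phi|
    assert imp(phi, False) == (not phi)       # |phi > F| = |~phi|
    assert (phi or False) == phi              # |phi v F| = |phi|
    assert (phi and True) == phi              # |phi . T| = |phi|
    assert imp(phi, not phi) == (not phi)     # |phi > ~phi| = |~phi|
    for psi in (True, False):
        # The absorption equations:
        assert (phi or (phi and psi)) == phi
        assert (phi and (phi or psi)) == phi
```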
Finally, we need to take into account the case in which, even under incomplete
information about the truth value of the component atoms, we may be able to estab-
lish that the given formula is a tautology – in which case it receives the truth value
T regardless of any, and for all, truth values of its atomic components – or a contra-
diction – in which case the formula has to receive the value F regardless of the truth
values of its atomic components. Clearly, this problem is related to the challenge of
determining the logical status of a well-formed formula (if it is a tautology, a con-
tradiction, or a contingency, which we learned how to do by applying the truth table
method in 4.5.) We may still provide some characteristic equations of tautologies
and contradictions, although it is not possible to provide an exhaustive list.
We must also keep in mind that the negation of a tautology is always a contradic-
tion, and the negation of a contradiction is a tautology.
• |φ ⊃ φ| = T
• |φ ∨ ~ φ| = |~ φ ∨ φ| = T
• |φ ≡ φ| = T
• |φ ≡ ~ φ| = |~ φ ≡ φ| = F
• |φ ∙ ~ φ| = |~ φ ∙ φ| = F
• |φ ⊃ (ψ ⊃ φ)| = T
• |(φ ⊃ ψ) ≡ ~ (φ ∙ ~ ψ)| = T
• |~ (φ ∨ ψ) ≡ (~ φ ∙ ~ ψ)| = T
• |~ (φ ∙ ψ) ≡ (~ φ ∨ ~ ψ)| = T
• |(φ ⊃ ψ) ≡ (~ ψ ⊃ ~ φ)| = T
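Each of the tautology and contradiction equations above can be checked by brute force over all truth value assignments. A small Python sketch (the helper names `IMP` and `is_tautology` are our own, not from the text) verifies three of the listed tautologies:

```python
from itertools import product

def IMP(a, b):
    # material conditional (the horseshoe)
    return (not a) or b

def is_tautology(f, n):
    # f is an n-place Boolean function; check every one of the 2**n rows
    return all(f(*row) for row in product([True, False], repeat=n))

# |phi > (psi > phi)| = T
weakening = is_tautology(lambda p, q: IMP(p, IMP(q, p)), 2)
# |~(phi v psi) == (~phi . ~psi)| = T  (a De Morgan law)
de_morgan = is_tautology(
    lambda p, q: (not (p or q)) == ((not p) and (not q)), 2)
# |(phi > psi) == (~psi > ~phi)| = T  (contraposition)
contraposition = is_tautology(
    lambda p, q: IMP(p, q) == IMP(not q, not p), 2)
```

All three checks come out True, agreeing with the list: these formulas receive T for every assignment of truth values to their atomic components.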
144 4 Sentential Logic Languages ∑
truth value of the formula for one logically possible case. For a truth table, each
logically possible case is represented or modeled by a horizontal row (or, simply,
row, since by definition the rows are the horizontal lines.) The columns of the truth
table, on the other hand, are labeled, as we say, by the atomic variables and by the
formula or formulas for which the truth table is constructed.
• We are given valuations (truth-value assignments) for the atomic components
(ultimate components which are the atomic variables or letters) of a given well-
formed formula φ. But we are not given value assignments for all the component
atomic letters. We are asked to determine, if possible, the truth value of the given
formula.
• |p| = |q| = |r| = T
• |s| = F
• |t| =?
• φ = ~ ((p ∨ ~ (q ≡ ~ s)) ⊃ (~ t ∙ r))
• We “plug in” the given truth values and place a question-mark sign under
the unknown atomic variable. We are conducting this activity in our meta-
language, ℳ(∑). We advance in stages, labeling them by using numbers
from {1, 2, …} in the column under the “φ=” column. The labeling will
allow us to make precise references to the stages of this process when we
provide commentary.
• 5: we compute the value of the (only occurrence of the) wedge since we know
by now both values of the formulas connected by the wedge.
• here, we have reached the limit of what we can possibly compute on the basis
of the given information: accordingly, we add two rows, for the two logically
possible truth values of the unknown atomic letter; the information of all the
other values for the symbols remains the same, and, so, we repeat it across
each of the rows we have now created.
• 6a and 6b show the two rows we need to generate by assigning the two logi-
cally possible truth values to the unknown variable.
• Now, we have the following partial (elliptic) truth table to construct. Across
each row, which is considered as logically independent from the other row,
we carry on with computations in the same fashion we have been doing so
far. We place within boxes the new valuations we determine across each
row after we have assigned the possible values to the variable whose value
we are not given. When we finish the truth table, we reflect on the results.
We see that our given formula may possibly take the value T and may also
take the value F (in the other row.) Thus, there is no unique or determinate
truth value that the given formula has in the absence of specification about
the value of the unknown variable. We carry on with the computations, as
before. This time, each stage has operations carried out on two rows – the
two rows we have constructed, one for each logically possible assignable
value of the unknown letter.
table); therefore, there is no unique or determinate truth value for the formula and
this is sufficient to characterize the formula as indeterminate under the constraint of
the given information (which, as we have indicated, is elliptic or incomplete.)
Obviously, in the case of logical truths and logical falsehoods the formula must be
considered computable even under incomplete information about the values of the
components or under no information at all: this is because, by definition, a tautology
has the value T for every possible assignment of truth values to the atomic compo-
nents and a contradiction takes F for every possible assignment of truth values to the
atomic components. This holds even if we do not recognize that we are dealing with
a formula that expresses the logical form of a logical truth or logical falsehood.
These properties are certainly independent of our subjective reactions or any psy-
chological dispositions.
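The worked example above can be replayed programmatically. In this Python sketch (the helper names are illustrative assumptions), we evaluate the example formula ~((p ∨ ~(q ≡ ~s)) ⊃ (~t ∙ r)) once for each logically possible value of the unknown letter t, with the given values |p| = |q| = |r| = T and |s| = F:

```python
def IMP(a, b):
    # material conditional (the horseshoe)
    return (not a) or b

def phi(p, q, r, s, t):
    # ~((p v ~(q == ~s)) > (~t . r))
    return not IMP(p or not (q == (not s)), (not t) and r)

p = q = r = True
s = False
# one row per logically possible value of the unknown letter t
results = {t: phi(p, q, r, s, t) for t in (True, False)}
```

The row with t = T yields T and the row with t = F yields F: just as the text concludes, the formula has no unique truth value under the given elliptic information, so it is indeterminate under that information.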
4.2.5 Exercises
1. Given the truth values for the atomic variables, we can compute the truth values
of the compound well-formed formulas that are given. This remarkable charac-
teristic of the standard sentential logic is called Computationality of Meaning.
Implement all the methods we have presented (quasi-algebraic, computation
trees, truth tables) to determine the truth values of the compound well-formed
formulas for the values (metalinguistically symbolized by “||”) of the atomic
variable letters.
|p| = T, |q| = T, |r| = F, |s| = F, |t| = F.
a. |~ ~ ((p ≡ q) ≡ ~ (s ≡ t))| =?
b. |~ (p ∙ ~ q) ∨ (~ r ∨ ~ s ∨ t)| =?
c. |~ ~ (p ⊃ (q ⊃ (r ⊃ (s ⊃ t))))| =?
d. |((p ∙ t) ∨ (q ∙ s)) ≡ ~ (p ⊃ (s ⊃ t))| =?
e. |(p ⊃ ~ t) ≡ (s ⊃ ~ q)| =?
f. |((p ⊃ (q ∙ t)) ⊃ ~ ~ ~ r) ≡ s| =?
g. |(q ∨ (r ∙ s)) ⊃ ((q ∨ r) ⊃ (q ∙ s))| =?
h. |(p ≡ (q ≡ t)) ≡ ~ ((r ≡ s) ≡ ~ t)| =?
i. |(~ (r ∙ s) ∨ ~ (s ∙ t)) ⊃ ((s ∨ t) ∨ ~ (p ∙ q))| =?
j. |(p ∙ q ∙ ~ r ∙ ~ s) ≡ ~ (p ∨ ~ q ∨ s ∨ ~ t)| =?
2. We may be able to determine the truth value of a compound well-formed for-
mula when we know the truth values of only some of the atomic variable letters.
In the cases in which this is possible, determine the truth values for the given
symbolic expressions.
|p| = F, |q| = F, |r| = T, |s| = T; |x| =?, |y| =?
a. |~ (x ∙ y) ⊃ (~ (r ∙ s) ⊃ (p ∨ q))| =?
b. |(x ∨ s) ≡ ~ (y ∙ q)| =?
c. |(x ≡ y) ∙ (p ∙ (q ⊃ r))| =?
d. |(x ⊃ p) ∨ (s ⊃ y)| =?
Our formal language ∑, whose grammar we laid out in 3.1, is now adapted for
implementation of a truth table system. This is one familiar and unavoidable subject
that you always meet in introductory Logic texts. The type of formal system known
as Truth Table – possibly named so by Ludwig Wittgenstein, a twentieth-century
philosophic prodigy – is what we call a decision procedure: this means that this type
of system, implemented through some formal language like our ∑⊞, affords us a
mechanical procedure that is provably guaranteed to terminate in a manageable
number of steps and allow us to reach a result regarding whether an argument form
is valid or invalid, whether a set of formulas is consistent or inconsistent, and
whether a given formula has a logical property like being a tautology or a contradic-
tion or a contingency. This type of decision procedure – it is not the only one – is
available for the basic sentential logic we are studying at this point: it is no longer
available for the full predicate logic, which we will study
in chapter 5. Of course, it is assumed that the ideal user of the truth table method is
competent and knows how to implement the system: as always, an individual’s pos-
sible incompetence in using the formal system is not relevant to assessing the sys-
tem’s decisional efficiency.
4.3 A System of Truth Tables for ∑: ∑⊞ 149
We have already discussed the truth table method because we deployed it as one
of the methods available for defining our formal system’s connectives. We can con-
struct the truth table for one given formula φ, which we designate as ∑⊞(φ). Every
well-formed formula has a unique truth table; any well-formed formulas φ and ψ
that have the same truth tables are equivalent: we will have to specify what we take
as “same truth tables.” We can construct the truth table – also unique – for a set of
well-formed formulas, {φ1, φ2, …, φn}: ∑⊞({φ1, φ2, …, φn}). If we are given an
argument form, we can construct the unique truth table for that argument form:
∑⊞({φ1, φ2, …, φn}/..ψ). See how we place the premises of the argument form
within the set brackets and we use the metalinguistic symbol “/..” to separate the
proffered conclusion of the argument form.
Let us take the case of the truth table for one well-formed formula φ first. It is
crucial that we identify the number of types of atomic letters (also called individual
or single variables) in the formula. For instance:
• ⌜(p ∨ ~ p) ≡ ~ p⌝ has only one type of atom, ⌜p⌝, and three occurrences of this
type; we would mark the occurrences as first, second, and third, from left
to right.
• ⌜~ (p ∨ ~ (s ∙ t))⌝ has three different types of atomic letter, each one occurring
exactly once.
• ⌜t ⊃ (r ⊃ ~ (q ⊃ (s ∨ ~ (p ∙ ~ t))))⌝ has five types of atomic letter, with ⌜t⌝ occur-
ring twice while every one of {p, q, r, s} occurs exactly once.
We call the degree of a well-formed formula (wff) the number of types of letters
occurring in it. Thus, in our preceding examples, the first wff is of degree 1; the
second is of degree 3; the third is of degree 5. If we have more than one formula,
the truth table we construct is determined by the degree of the wff with the larg-
est degree.
If the degree of the formula – or of the largest-degree formula – is n, our truth
table must have 2ⁿ rows. A truth table has rows and columns. The columns are one
for each of the atomic letter types and one for each of the well-formed formulas.
The number of rows, as we indicated, is 2ⁿ, where n is the number of types of atomic
letters. Thus, revisiting our preceding examples: the truth table for the first formula
has 2 columns (one for its letter type, ⌜p⌝, and one for the formula itself); the truth
table for this formula has 2¹ = 2 rows since the number of letter types n = 1. We show
this truth table below. We are not filling out the truth table entries for the formula
itself; we only show the columns (each one marked at the top) and the rows (marked
underneath the atomic letter type of the formula.) The symbols we use to mark, and
indicate, rows and columns, are metalinguistic and are added for visual and learning
facilitation.
Columns ⇀ 1 2
Rows p (p ∨ ~ p) ≡ ~ p
⇃
1 T
2 F
We could be writing the rows – or labeling them, if you will – with F for the first
row and T for the second row. Nothing depends on this but we must establish a
consistent procedure to follow systematically and we opt for legislating that, for one
letter type, we write from top to bottom as <T, F>. Note that it is not arbitrary that
we use “<” and “>” as symbols to enclose the truth values: this means that the order
matters; this pair is an ordered pair, as we call it. We specify that the first symbol to
the left is the top truth value, for row 1, and the second symbol of truth value, for
row 2, is the second one we write. To complete the truth table, we have to carry out
and complete the computations across each row: we use the truth values, for the
atomic letters, in that row. Each row is like its own independent case: this is a logi-
cally possible assignment of truth values to the atomic letters. Mark that each inde-
pendent row (not affected by assignments in any other row) is a unique assignment
of truth values to the atomic letters. Moreover, the truth table shows all the logically
possible assignments of truth value combinations to the atomic letters. Every row is
such an assignment. Thus, the rows show all, and only, the logically possible assign-
ments of truth values to the atomic letters of the formula. When we fill out the truth
table by carrying out and completing the computations across each row, we end up
showing, for the complete truth table, all the possible truth values the formula can
take. This will show us automatically if the formula is a logical truth or tautology:
that is the case, and only the case, when the formula is true for every row. We can
take this to mean that the formula takes the truth value true for every logically pos-
sible case: hence, it is a logical truth. If the formula takes the truth value false in
every row, then it is false for every logically possible case (every logically possible
assignment of truth values to the atomic components), then it is a logical falsehood
or a contradiction. And the last remaining case is when the formula is true in some
rows and false in other rows; such a formula is called contingent – a contingency –
and it can also be called logically indefinite or logically indeterminate.
Thus, the truth table is made possible because of the computational character of
the basic sentential logic: given the truth values of the atomic letters (the ultimate
atomic components), the truth value of the whole formula can be computed and
determined uniquely; this is done across each row (for each logically possible case
of assignments of truth values to the atomic components.) The truth table is like a
map of all the logically possible cases. All such logically possible cases are shown;
only the logically possible cases are shown. Thus, each row shows a logically pos-
sible case – which is also called in the literature a valuation or an interpretation (and,
of course, we can call it a truth value assignment.) Let us fill in to complete the truth
table of the preceding example. We box in the truth values of the formula (which, as
we know from our study of computations, are the truth values of the main connec-
tive symbols.) We subdivide the columns and number all of them, so we can specify
the order in which we carry out the operations. We can say that we generate sub-
columns in this way. We do this for illustrative purposes.
1 2 3 4 5 6 7 8
p (p ∨ ~ p) ≡ ~ p
1 T T T F T F F T
2 F F T T F T T F
1. Column 1: this is the column for the atomic letter type of our formula. Underneath,
for the two rows and from top to bottom, we write the truth values: <T, F>,
which, as we have explained, means T for the top row and F for the next row
from top to bottom.
2. Column 2: Now we are in our formula; we repeat the values for ⌜p⌝ for each row:
we can say that we plug in, across each row, the truth value of ⌜p⌝ for that row.
We will do this also in columns 5 and 8.
3. Column 3: this is the wedge column. But we have first to make the computations
for the tilde in column 4. The tilde in column 4 has smaller scope than the wedge
in column 3.
4. Column 4: this is the column for the first occurrence of the tilde. Its scope has
only in it the second occurrence of ⌜p⌝ in our formula. We do the computations.
Next, we go to column 3 to make the computations for the wedge.
5. Column 5: we plug in the values for ⌜p⌝ across each row.
6. Column 6: This is the triple bar column. This is the major or largest-scope connec-
tive symbol of the formula. This is where we end in our computations. The truth
values underneath this symbol are the truth values of the formula for each row.
7. Column 7: this is the column for the second occurrence of the tilde in our formula. It has in
its scope only the third occurrence of ⌜p⌝ in the formula. We compute this before
we can compute the triple bar column. It does not matter if we compute this
before we compute column 4 or the other way round.
The order in which we carry out plugging truth values and carrying out computa-
tions is as follows: we begin with the rightmost column of the columns for the atomic
variable letter: from top to bottom we alternate Ts and Fs; the number has to be even
of course since, as we have intimated, the number of rows of a truth table is a power
of 2 and specifically it is determined as 2ⁿ where n is the number of distinct types of
atomic letters in the given formula. We then move to the next column to the left and
from top to bottom we alternate 2 Ts and 2 Fs; once again, since the number of rows
is a power of 2, the number of rows this being even, we have an even number of com-
binations to enter. For the next column to the left, from top to bottom, we alternate 4
Ts and 4 Fs; and so on. This is not the only possible recipe (see if you can determine
other plausible orders) but, at any rate, it is
the one we implement in this text. This is the general mode and it is, as such, appli-
cable regardless of the number of letter types in the formula. We examined the case of
a formula with one letter type above. Next we examine a case in which we have two
types of atomic letters; accordingly, the number of rows has to be 2² = 4, since the
number of types of atomic letters is 2. As we have specified, from top to bottom, for
the right-most letter we write: <T, F, T, F>; then we move to the next atomic letter to
the left and, from top to bottom, we write: <T, T, F, F>. We could have implemented a
different recipe but this is the one we will apply consistently; it is guaranteed to give
us all the possible combinations of assignments of truth values, without repetitions. If
we read the truth value assignment across each row, now we read from left to right
and, thus, we have for the first row <T, T>, for the second row <T, F>, for the third row
<F, T>, and for the fourth row <F, F>. Study the truth table below.
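The row-writing recipe just described can be reproduced mechanically. As a Python sketch (an illustration of the recipe, not part of the text), `itertools.product` over the pair ('T', 'F') happens to emit rows in exactly the book's order: the rightmost letter alternates fastest (T, F), the next letter to the left alternates in pairs (T, T, F, F), and so on.

```python
from itertools import product

# All 2**2 = 4 rows for two atomic letters, in the book's order:
# row 1 <T, T>, row 2 <T, F>, row 3 <F, T>, row 4 <F, F>
rows = list(product(('T', 'F'), repeat=2))
```

Changing `repeat=2` to `repeat=n` generates the 2ⁿ rows for n atomic letters, again without repetitions and covering every combination.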
on any other rows, that would mean that the contextual variation of valuations along
different contexts would have an effect on defining the value of a formula: in this
way we could model such notions as “necessarily” and “possibly” but such so-
called modal notions fall outside the scope of the basic sentential logic.
It is notable, however, that such notions as “logically possible” and “logically
necessary” and “logically impossible” and so on play a role in our definitions of
such core notions as argument validity; we say that an argument form is invalid if it
is logically possible to have an instantiation with all premises true and a false con-
clusion. This concept of logical possibility is left rather obscure, which should not
be tolerated in so strictly formal and precise an enterprise. It so happens that, for the
specific purposes of the basic and standard sentential logic, the rows of the truth
table represent exactly the logical possibilities we are talking about. A row repre-
sents a logical possibility. Thus, a set of sentential formulas is consistent, or jointly
satisfiable, as we will see, if and only if there is at least
one row across which all the formulas are true in their constructible truth table. The
official definition is that the formulas are consistent if and only if they can possibly
be true together – and, as we can see, this is expressible strictly and accurately in
terms of the rows of the truth table. To examine another example, we can say that a
formula has a logical form that is a tautology or necessary or logical truth if and
only if its truth table has (is labeled, we might say, by) true in every one of the for-
mula’s rows: again, the rows represent all the logical possibilities and the totality of
those logical possibilities characterizes logical necessity (or, we can think of it also as
being the case that it is logically impossible for the formula to be false since there is
no row on which it receives the truth value false.)
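The definition of consistency in terms of rows can be checked directly. In this Python sketch (the function name `consistent` and the lambda examples are our own assumptions), a set of formulas is consistent if and only if some row of the joint truth table makes every formula true:

```python
from itertools import product

def consistent(formulas, n):
    # consistent iff at least one of the 2**n rows makes every formula true
    return any(all(f(*row) for f in formulas)
               for row in product([True, False], repeat=n))

# {p v q, ~p} is consistent: the row p = F, q = T makes both true
pair_ok = consistent([lambda p, q: p or q, lambda p, q: not p], 2)
# {p, ~p} is inconsistent: no row makes both true
pair_bad = consistent([lambda p: p, lambda p: not p], 1)
```

The first check comes out True and the second False, matching the row-based definition in the text.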
whether the conclusion is also true. This suggests that rows in which we have
false premises, or even some false premises, are not interesting to us when we
check for validity. We will take advantage of this when we present shortcuts for
the decision process.
We show an example below. The symbol “/..” is used to set off the conclusion
of the given argument form. Commas may be used to separate the premises, if
the argument form has more than one premise.
∑⊞((p ⊃ q) ⊃ p /.. p).
1 2 3 4 5
p q (p ⊃ q) ⊃ p /.. p <PREMISE, CONCLUSION>
1 T T TTTTT T <T, T>
2 T F TFFTT T <T, T>
3 F T FTTFF F <F, F>
4 F F FTFFF F <F, F>
There is no counterexample-row. The truth values of the premise and the conclu-
sion on the same row are shown informally by means of an auxiliary column that
has been added (column 5). There is no row across which all the premises are true
(in this case, the one premise is true) and the conclusion is false. Therefore, the
argument is valid.
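The validity check just performed can be sketched as a short Python routine (the names `is_valid` and `IMP` are illustrative assumptions, not the text's notation): an argument form is valid if and only if no row makes all premises true and the conclusion false.

```python
from itertools import product

def IMP(a, b):
    # material conditional (the horseshoe)
    return (not a) or b

def is_valid(premises, conclusion, n):
    for row in product([True, False], repeat=n):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False   # counterexample row found
    return True

# the example argument form: (p > q) > p /.. p
result = is_valid([lambda p, q: IMP(IMP(p, q), p)], lambda p, q: p, 2)
```

The check returns True: as the table shows, the only rows with a false conclusion (p = F) also have a false premise, so no counterexample row exists.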
2. Another application of truth tables is for determining the status of a formula: if
it is a tautology (all rows true), a contradiction (all rows false), or a contingency
(in some rows the formula checks true and in some rows it checks false.) Other
words for tautology are: validity, necessary truth, syntactically or formally ana-
lytic true sentence, logical truth, trivial truth. Such truths are not informative;
they reflect how we have defined the key logical terms: “and”, “not”, “either/or”,
“if-then”, “if and only if”. A formula is a contradiction if it checks false in every
row. Other words for this: invalidity, logical or necessary falsehood. Making
statements that have the logical form of a contradiction amounts to speaking non-
sense or committing logical absurdity whether one realizes this or not!
We construct the truth table for the given formula, ∑⊞(φ). We examine the
truth values the formula takes in all the rows: these represent all the logical pos-
sibilities or logically possible cases. Our given formula is a tautology if and only
if it has the truth value true, T, on every row. Given that the truth table method is
to be trusted (as it is indeed), we establish that this is a necessarily true sentence:
notice how we are able to map – so to speak – all logical possibilities by means
of the truth table: the rows represent all the logically possible cases, and there are
no other logically possible cases. Accordingly, we can capture the notions of
logical possibility and logical necessity, which can be elusive to the untrained
mind: each row represents a logically possible case; if the formula takes the truth
value true in all rows, then it is necessarily true. If the formula takes the truth
value false, F, in every row of its truth table, then it is logically necessarily false:
it is called a contradiction; other terms in the literature include: logical falsehood.
formulas are true (receive T as the truth value.) Of course, this definition is a
special case (for n = 2) of the examination of the consistency of a set of any
number n of formulas, which we presented above. A pair of formulas is inconsis-
tent if and only if it is not consistent – in other words, if there is no row of
∑⊞(φ, ψ) across which both formulas receive the truth value T.
b) A formula φ implies formula ψ in one direction (one-directional implica-
tion or one-directional conditional), if and only if the truth table ∑⊞(φ, ψ) has
no row in which φ checks as T and ψ checks as F.
It is possible, of course, that φ implies ψ but ψ does not imply φ. The implica-
tion ψ ⊃ φ is called the converse of the implication φ ⊃ ψ. Therefore, conversion
(claiming that the converse of a tautological implication is itself necessarily true)
is an invalid inference to make in general. In the special case, which we will
examine below, in which φ and ψ are logically equivalent, both implication and
converse are necessarily true.
Assume that φ and ψ are related so that φ implies ψ. There is a definable
binary connective symbol of our logic, #, for which the formula φ # ψ is a tautol-
ogy. Given the definitions of the connective symbols, we can show that this con-
nective symbol is no other than the horseshoe (material implication or conditional
symbol.) Since φ implies ψ, from the definition we have given we know that
there is no row in which φ is true and ψ is false. We need to cross that row out.
We inspect the truth table to see that, given the crossing out, indeed, the implica-
tional formula is a tautology.
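This relation between implication and the horseshoe can be verified by inspecting rows programmatically. A Python sketch (the helper `implies` and the example formulas are our own assumptions): φ implies ψ exactly when no row of the joint truth table has φ true and ψ false, which is exactly when φ ⊃ ψ checks true on every row.

```python
from itertools import product

def IMP(a, b):
    # material conditional (the horseshoe)
    return (not a) or b

def implies(f, g, n):
    # f implies g iff no row has f true and g false,
    # i.e. iff f > g is true on every row
    return all(IMP(f(*row), g(*row))
               for row in product([True, False], repeat=n))

f = lambda p, q: p and q   # phi
g = lambda p, q: p         # psi
one_way = implies(f, g, 2)    # phi implies psi
converse = implies(g, f, 2)   # the converse psi > phi fails here
```

Here `one_way` is True while `converse` is False, illustrating the point about conversion: the converse of a tautological implication need not itself be necessarily true.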
φ ψ φ⊃ψ
T T T
T F F
F T T
F F T
φ ψ φ≡ψ
T T T
T F F
F T F
F F T
φ ψ φ≢ψ
T T F
T F T
F T T
F F F
e) Two formulas φ and ψ are mutual contraries (related by the relation of contra-
riety) if and only if their truth table ∑⊞(φ, ψ) shows no row in which they are true
together but there is at least one row in which they are false together. The formulas
can be false together but they cannot be true together.
Assume that φ and ψ are mutual contraries. There is a definable binary connec-
tive symbol of our logic, #, for which the formula φ # ψ is a tautology. We do not
have this connective symbol in our formal language: we add it as “|”. The formula
φ | ψ is logically equivalent with the formula ~ (φ ∙ ψ) (negated conjunction). After
crossing out rows, based on the definition of contrariety, the formula φ | ψ is shown
to be a tautology.
φ ψ φ | ψ
T T F
T F T
F T T
F F T
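A concrete pair of contraries makes the claim testable. In this Python sketch (the example formulas p ∙ q and p ∙ ~q are our own choice, not from the text), the two formulas are never true together, are false together on some row, and accordingly their "|" combination, ~(φ ∙ ψ), checks true on every row:

```python
from itertools import product

# phi = p . q and psi = p . ~q are mutual contraries:
# they clash on q, and both are false whenever p is false
phi = lambda p, q: p and q
psi = lambda p, q: p and not q

rows = list(product([True, False], repeat=2))
never_true_together = not any(phi(*r) and psi(*r) for r in rows)
false_together_somewhere = any(not phi(*r) and not psi(*r) for r in rows)
# hence phi | psi, i.e. ~(phi . psi), is true on every row: a tautology
nand_tautology = all(not (phi(*r) and psi(*r)) for r in rows)
```

All three checks come out True, matching the definition of contrariety and the tautologyhood of φ | ψ for contraries.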
f) Two formulas φ and ψ are called mutual subcontraries (or related by the rela-
tion of subcontrariety) if and only if the truth table ∑⊞(φ, ψ) has no rows across
which they are both false; the formulas can be true together but they cannot be false
together.
Assume that φ and ψ are related so that φ and ψ are mutual subcontraries. There
is a definable binary connective symbol of our logic, #, for which the formula φ # ψ
is a tautology. Given the definitions of the connective symbols, we can show that
this connective symbol is no other than the wedge or vel (the inclusive disjunction
symbol.) We inspect the truth table to see that, given the crossing-out, indeed, the
implicational formula is a tautology.
φ ψ φ∨ψ
T T T
T F T
F T T
F F F
For the short truth table method: we start by making all the premises true and the
conclusion false. In this way we are trying to construct a counterexample (see
above.) We then proceed with computations. As soon as we have values for indi-
vidual letters we plug them in. If we succeed, we have a counterexample: this means
that the logical form is invalid. If we fail, the argument form is valid: we fail if we
run into logical impossibility or absurdity, which means that we find ourselves
assigning both T and F to the same symbol!
Example: Check the validity of the argument form [((p ⊃ q) ∙ (∼ p ⊃ q)) /.. q],
by using the partial and short truth table methods. First, we apply the full truth table
method to determine validity of the given argument form.
p q (p ⊃ q) ∙ (∼p ⊃ q) /.. q
T T T T T T FT T T T
T F T F F F FT T F F
F T F T T T TF T T T
F F F T F F TF F F F
There is no row with T for the premise and F for the conclusion. It follows that
there is no counterexample. The argument form is not invalid; it is valid.
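The full table's verdict can be confirmed by searching the rows directly. A Python sketch (names are illustrative assumptions) collects every row with a true premise and a false conclusion:

```python
from itertools import product

def IMP(a, b):
    # material conditional (the horseshoe)
    return (not a) or b

# ((p > q) . (~p > q)) /.. q
premise = lambda p, q: IMP(p, q) and IMP(not p, q)
conclusion = lambda p, q: q

counterexamples = [row for row in product([True, False], repeat=2)
                   if premise(*row) and not conclusion(*row)]
```

The list comes back empty: no row has the premise true and the conclusion false, so the argument form is valid.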
Since invalidity is established only when we have a counterexample-row (for
which all premises are true and the conclusion false), a shortcut strategy called
Partial Truth Table is available: we begin with the column of the conclusion; we
then restrict our attention only to those rows for which the conclusion is false; no
other row can possibly yield a counterexample (which, as we know, is any row that
has all premises as true and the conclusion as false): and, if we do not have a coun-
terexample, we must determine the argument form to be valid.
p q (p ⊃ q) ∙ (∼p ⊃ q) /.. q
T T T
T F T F F F FT T F F
F T T
F F F T F F TF F F F
Additionally, we can make the partial truth table approach even more abbrevi-
ated and still be able to determine the correct answer as to validity. In the preceding
example, we could have stopped on the second row as soon as we find a false premise:
since a counterexample is a row with all premises true and a false conclusion, it fol-
lows that this row cannot yield a counterexample since it has even one false premise.
In general, the partial truth table approach, as an informal strategic method for
pursuing a shortcut, recommends: determine the truth values from top to bottom for
the column of the conclusion of the argument form; determine the truth values of the
premises only for the rows for which the conclusion is false; you do not need to
continue with any one of these rows if you determine that some premise is
false; survey the final result upon completion based on these recommendations:
the argument form is invalid if and only if there is any row with all the premises true
and the conclusion false.
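The partial truth table recommendations can be sketched as a routine (a hypothetical illustration, not the text's own procedure): skip every row whose conclusion is true, and within a remaining row, stop as soon as a premise checks false.

```python
from itertools import product

def IMP(a, b):
    # material conditional (the horseshoe)
    return (not a) or b

def valid_partial(premises, conclusion, n):
    for row in product([True, False], repeat=n):
        if conclusion(*row):
            continue                        # cannot yield a counterexample
        # all() short-circuits: it stops at the first false premise
        if all(p(*row) for p in premises):
            return False                    # counterexample row: invalid
    return True

# the worked example: ((p > q) . (~p > q)) /.. q
verdict = valid_partial([lambda p, q: IMP(p, q) and IMP(not p, q)],
                        lambda p, q: q, 2)
```

The verdict is True (valid), reached while examining only the two rows on which the conclusion q is false.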
There are some obvious strategies for a partial truth table approach in the cases
of determining consistency of a set of formulas or the logical status of a given for-
mula or relations between formulas by application of the truth table method. For the
case of consistency: as soon as a row is determined to have all the formulas true, we
may discontinue the decision procedure: it is sufficient to characterize the given set
of formulas as consistent since, by definition, a set of formulas is consistent if and
only if there is at least one row across which all the formulas receive the truth
value true.
We can discontinue the process as soon as we have a row on which the formula
is true and another row on which the formula is false: we may then correctly char-
acterize the formula as contingent (logically indeterminate or logically indefinite.)
In the case of checking for a tautology or a contradiction, having one row on which
the formula is true and another row on which the formula is false also establishes
that the formula is not a tautology and it is not a contradiction; it is, instead, a con-
tingency. These characterizations of logical status are jointly exhaustive (there are
no other characterizations besides these) and pairwise mutually exclusive (if the
formula has any one of these types of logical status, then it cannot have any of the
other types of logical status.) If we are continuing to determine that the formula is
true/false on consecutive rows from top to bottom, as we are trying to determine if
the formula is a tautology/contradiction, then we have to continue unless a row is
evaluated to have the truth value of the formula as false/true.
The method of the short truth table, also called the quick computation method, draws
on the principle that informs the type of proof that is known by the Latin name
reductio ad absurdum. Assuming a claim, we prove that this claim is false by deriv-
ing from this claim, and from other unreservedly accepted true premises, a contra-
diction. Asserting any sentence together with this sentence’s negation constitutes a
contradiction. We reason about our steps in implementing the short truth table in our
metalanguage. The point is, in this so-called negative approach, to show that our
given argument form is valid by ending up deriving a contradiction upon making the
initial assumption that our given argument form is invalid. Since there are exactly
two logical possibilities – an argument form is either valid or invalid; it can-
not be neither and it cannot be both – showing that the argument form is not invalid
means that it is valid. We have the advantage of knowing, from the definition of
invalidity, that our given argument form is invalid if and only if it has an instance in
which all its premises are true and its conclusion is false: in terms of the truth table
approach, this means that there is at least one row of the argument form’s truth table,
on which all the premises take T and the conclusion takes F. This row is what we
presume to reproduce in our opening move – our gambit. We make all premises true
and the conclusion false. We then proceed to make computations as we have learned
to do. What is relevant in this case is especially the prospect of making computa-
tions of truth values when we have incomplete information about the truth values of
the atomic variables. We will examine details soon. If the endeavor succeeds, we
essentially end up constructing a row in which all the premises are true and the
conclusion is false – and this is a counterexample that is sufficient to establish inva-
lidity of our given argument form. If, however, we end up deriving a contradiction,
this proves definitively that no counterexample can be produced: after all, we made
assignments. We might indeed have other options. For instance, some conjunc-
tion might have received a T as a result of the opening assignment: this means
that both conjuncts must be assigned Ts, since this is the only way for conjunc-
tion to take the truth value T. Perhaps the conjuncts are themselves atomic sen-
tential letters, which means that we are ready to proceed to another step.
• As soon as truth values of atomic sentential letters (atoms) become available, we
plug them in, as it were, into all the places where the same letters appear in the
argument form. Essentially, what we are attempting to do in this procedure is
reproduce one single row of the complete truth table by means of which we can
check validity of the given argument form. Across this row, the atomic sentential
letters have fixed truth value assignments. We pretend, of course, that this one
row is a counterexample (not necessarily the only one; even a single available CEX suffices to establish that the given argument form is invalid). The paramount issue is whether we will be successful in this endeavor or not.
• There are exactly two possible outcomes which are mutually exclusive (precisely
one or the other is guaranteed to eventuate upon completion of the procedure
which is itself guaranteed to terminate within time that is sufficient for comput-
ability purposes.)
One possibility is that we are able to complete the full truth value assignment all
the way down to the atomic letters. In this case, all the symbols in the argument
form (except for the auxiliary symbols, the parentheses, of course) have received a
unique truth value. In this fashion, we have successfully reproduced a row of the
truth table for the given argument form, across which all the premises are true and
the conclusion is false. This means that we have established that at least one coun-
terexample to the argument form is available and, hence, the argument form is
invalid.
The other possibility is that some symbol will receive not a unique truth value
but – the only other possible alternative – both truth values T and F. As soon as this
happens we may terminate the procedure and determine that no counterexample to
the given argument form is available. In bivalent standard logic, logical consequence
may or may not obtain and there is no other alternative. It is like dealing with a light
switch that can only be on or off without any other combinations or gradations avail-
able. Hence, failure to establish even one counterexample – rather, proof that there is no counterexample – entails that the argument form is not invalid and, therefore,
it is valid.
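The reasoning of the last two paragraphs can be sketched in Python (an illustration of ours, using nothing beyond the definitions of validity and invalidity): search every row of the truth table for a counterexample; an empty result shows the form is not invalid and, therefore, valid.

```python
from itertools import product

def counterexamples(premises, conclusion, atoms):
    """Rows (dicts of truth values) on which every premise is true
    and the conclusion is false."""
    found = []
    for values in product([True, False], repeat=len(atoms)):
        row = dict(zip(atoms, values))
        if all(p(row) for p in premises) and not conclusion(row):
            found.append(row)
    return found

impl = lambda a, b: (not a) or b   # the horseshoe as a truth function

# p > q, p |= q: no counterexample turns up, so the form is valid.
print(counterexamples([lambda r: impl(r['p'], r['q']), lambda r: r['p']],
                      lambda r: r['q'], ['p', 'q']))   # []

# p > q |=? p: counterexample rows do turn up, so the form is invalid.
print(counterexamples([lambda r: impl(r['p'], r['q'])],
                      lambda r: r['p'], ['p', 'q']))
```

The short truth table method aims at the same verdict without visiting every row; the exhaustive search above is only the conceptual baseline.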
We show below how cases are determined based on initial truth value assign-
ments to the various sentential connectives. We should always keep in mind the
shortcuts we presented when studying computations with truth values.
• F ⊃ φ and φ ⊃ T receive T regardless of the value of φ.
• T ∨ φ and φ ∨ T always receive T regardless of the value of φ.
• F · φ and φ · F always receive F regardless of the value of φ.
• φ ⊃ φ and φ ≡ φ receive T regardless of the truth value of φ.
• ~ (φ ≡ φ) and ~ φ ≡ φ and φ ≡ ~ φ receive F regardless of the value of φ.
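A minimal sketch in Python (our own encoding, not the book's) of how these shortcuts permit computation with incomplete information: `None` stands for a truth value that is not yet known, and each connective returns a settled value whenever the shortcuts above determine one.

```python
# None stands for a truth value that is not yet known.

def impl(a, b):
    """Material implication under possibly incomplete information."""
    if a is False or b is True:        # F > phi and phi > T receive T
        return True
    if a is True and b is False:
        return False
    return None                        # not enough information yet

def disj(a, b):
    """Inclusive disjunction under possibly incomplete information."""
    if a is True or b is True:         # T v phi and phi v T receive T
        return True
    if a is False and b is False:
        return False
    return None

def conj(a, b):
    """Conjunction under possibly incomplete information."""
    if a is False or b is False:       # F . phi and phi . F receive F
        return False
    if a is True and b is True:
        return True
    return None

# Even with q wholly unknown, some values are already settled:
print(impl(False, None))   # True
print(conj(None, False))   # False
print(disj(None, None))    # None: we must wait for more assignments
```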
We plug in the truth value for ⌜p⌝ across the whole line.
2. (p ⊃ q) ⊃ p ⊨ p.
F ? T F F
The value of the atomic variable, which we have discovered, has to be the same across: remember that we are pretending to be constructing some putative row that is a counterexample to the given argument form:
this is one row, across, and, as such, it has the same value assignment for each of the
atomic variables. Once we have discovered the truth value of an atomic variable, our
labor is facilitated significantly as we can put this value for all occurrences of this
atomic variable. Indeed, the contradiction we may reach, if the argument form is valid and no counterexample can be constructed, would take the form of assigning both truth values, T and F, to the same variable or connective symbol – which is logically absurd given the rules of the “game” in our standard logic, where value assignments and computed values must be exactly one of T or F and cannot be both.
Since ⌜p⌝ obtains the truth value F across our row, the first implication symbol
receives T. This follows from the definition we have given for the truth function we
have called material implication and we noted it in our computational shortcuts.
Now, we notice something about the second occurrence of the implication symbol
(which is the major operator symbol in our premise). The antecedent of this implication has received T and the consequent has received the truth value F. This compels
the third step:
Suppose that we want to prove, informally, that the negation of a tautology is a con-
tradiction and the negation of a contradiction is a tautology. We can do this by mak-
ing appropriate and relevant references to our familiar method of the truth table. We
will show this and some other examples and will include more such theses that can
be argued for by making references to the truth table as exercises. We use the set-theoretic symbol “∈” to symbolize membership (being a member of a set) and we work
informally within our metalanguage as we argue in support of the various metalogi-
cal theses we examine. Given that the truth table is a faithful method – it provably
reaches the correct results – our theses carry weight within the broader analysis of
the standard sentential logic.
• φ ∈ TAUTOLOGIES(∑⊞) if and only if ~ φ ∈ CONTRADICTIONS(∑⊞).
• Because this is an “if and only if” thesis, we need to prove both directions: impli-
cation from left to right and implication from right to left.
• a) Assume that φ ∈ TAUT(∑⊞). This means that in every row of the truth table
∑⊞ (φ), the formula receives the truth value true, T. Now, we construct the truth
table of the negation of the given formula, ∑⊞(~ φ). Since the negation of T is
F, false, all the rows of the new truth table will obtain F as the formula’s truth
value. Hence, the formula, ~ φ, takes the truth value false in every row: therefore,
it is a contradiction: ~ φ ∈ CONTR(∑⊞). This shows that the negation of any
tautology is a contradiction.
• b) Assume that ~ φ ∈ CONTR(∑⊞). This means that the truth table of the for-
mula, ∑⊞(~ φ), determines the formula to be receiving the truth value F, false,
for every row. The truth values, F in every row, are for the main connective sym-
bol, which is the negation symbol. Given the definition of negation by the truth
table, it must be the case that the formula φ, which is negated in every row, has
the truth value T, true. (The negation symbol turns T to F and F to T, by defini-
tion.) Therefore, φ has the truth value T in every row. Were we to construct the
truth table for φ, the formula would have to take T as its truth value for every row.
Therefore, φ is a tautology: φ ∈ TAUT(∑⊞). We have shown that the negation
of a contradiction is a tautology.
• φ ⊃ ψ ∈ TAUTOLOGIES(∑⊞) if φ is a contradiction.
• It is given that φ, the antecedent of the implicational or conditional formula φ ⊃
ψ, is a logical contradiction. This means that in the truth table for the implica-
tional formula, ∑⊞(φ ⊃ ψ), the truth value for φ is false in every row. Given the
definition of the implication symbol (horseshoe), the implication must take the
truth value true, T, in every row: because, by the definition of the horseshoe, we
have F only when the antecedent is T and the consequent is F – and there is no
such row in this case. Therefore, the implication is a tautology. This shows that
an implication with a contradiction as antecedent is a tautology. This is sometimes called a vacuous implication – an implication that is tautological on account of its antecedent being incapable of being true.
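These metalogical theses can be checked mechanically with an exhaustive truth table; here is a hedged sketch in Python (all function names are ours, not the book's):

```python
from itertools import product

def rows(n):
    """All truth value assignments to n atomic variables."""
    return product([True, False], repeat=n)

def is_tautology(f, n):
    return all(f(*r) for r in rows(n))

def is_contradiction(f, n):
    return not any(f(*r) for r in rows(n))

impl = lambda a, b: (not a) or b      # the horseshoe as a truth function

# phi = p v ~p is a tautology and its negation is a contradiction:
phi = lambda p: p or (not p)
print(is_tautology(phi, 1))                              # True
print(is_contradiction(lambda p: not phi(p), 1))         # True

# An implication with a contradiction as antecedent is a tautology:
contra = lambda p: p and (not p)
print(is_tautology(lambda p, q: impl(contra(p), q), 2))  # True
```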
The term “range” was coined, in the sense we will use it, by Rudolf Carnap; it is
usually omitted from textbooks these days but it is useful for certain theoretical
purposes.
We have not studied set theory at this point (see chapter 10) but for now we need only to fix some basic metalinguistic notation for our purposes. A set is an
abstract entity – we don’t worry about its metaphysical status – that is a collection
of objects of any kind. Obviously, we need symbols to refer to those objects and it
is those symbols we place within set-brackets, “{” and “}”, to indicate inclusion as
members of a set. It is understood, however, that the members of the set are the enti-
ties and not the symbols. The order in which the member-symbols are written is
irrelevant and repeating a symbol more than once does not add anything and so this
is prohibited.
Looking at a preceding example, consider the following set which we call the
range of the well-formed formula for which we constructed the truth table. We sym-
bolize the “range” by “𝓇” and we enclose the formula within parentheses as shown
in the example below. The information below announces that the formula, whose range we are determining, receives the truth value T, true, for exactly the truth values of <p, q> as shown. Checking the truth table for the formula, we establish that the formula is true in exactly two rows: those in which ⌜q⌝ is true. In each ordered pair, enclosed within “< >”, the leftmost variable is as in the truth table we
166 4 Sentential Logic Languages ∑
are using; and so on for the variables to the right. Since we have two atomic vari-
ables in our formula, the values are for <p, q>.
𝓇((p ⊃ q) ∙ (∼ p ⊃ q)) = {<T, T>, <F, T>}
The members of the range are ordered pairs. We use “<” and “>” for enclosure of
the members of an ordered pair. Unlike in the case of sets, the order in which the
members of ordered pairs are written matters. It is easy to discern that we write the ordered pairs with the value of the left-most atomic variable first and the value of the variable to the right as the second member of the ordered pair. The definition of the range of a
formula is: The range of the formula includes all and only the assignments of truth
values to the atomic variables of the formula, for which, assignments, the formula
is true.
Depending on the number of distinct types of atomic variable (letter) in the for-
mula, we can determine the range as a set of ordered n-tuples: restricting our interest
to the case of formulas with two distinct atomic variable letters, we speak of ordered
duplets or ordered pairs. The range, as a set, is a subset of what is called the Cartesian
product of the truth value set by itself. The Cartesian product of two sets A and B is
the set whose members are all the ordered pairs such that the first member of each
pair is a member of A and the second member is a member of B. To generate the
Cartesian product of a set by itself, we form the set of all the constructible ordered
pairs that have first and second member from the given set. Consider how this is to
be done for the Cartesian product of the set of truth values.
𝓥2 = {T, F}
𝓥2 x 𝓥2    T        F
T          <T, T>   <T, F>
F          <F, T>   <F, F>
The range of a formula 𝛗 with two distinct atomic variable types is a subset of
the Cartesian product of the values-set by itself. A set X is a subset of a set Y if and
only if all members of X are members of Y. (Y may or may not have additional
members.) The relation of subsethood is symbolized by “⊆”. We show what all this
means below.
• 𝓥2 = {T, F}
• 𝓥2 x 𝓥2 = {T, F} x {T, F} = {<T, T>, <T, F>, <F, T>, <F, F>}
• 𝓇(𝛗) ⊆ 𝓥2 x 𝓥2
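A small sketch of the range construction (our own encoding: Python tuples for ordered pairs, Python sets for ranges; the sample formula ⌜p ∙ q⌝ is our own choice):

```python
from itertools import product

V2 = [True, False]                    # the truth value set
cartesian = set(product(V2, V2))      # V2 x V2: all four ordered pairs

def rng(f):
    """All <p, q> assignments for which the formula f is true."""
    return {(p, q) for p, q in product(V2, V2) if f(p, q)}

impl = lambda a, b: (not a) or b      # the horseshoe as a truth function

r = rng(lambda p, q: p and q)         # range of the formula p . q
print(r)                              # {(True, True)}
print(r <= cartesian)                 # True: every range is a subset of V2 x V2
```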
Let us now see something remarkable. First, we assess the range of the well-
formed formula made of the atomic variable ⌜q⌝ which is the conclusion of the
given argument form.
𝓇(q) = {<T, T>, <F, T>}
We know that the argument form is valid. In our metalanguage we can use the
turnstile symbol, “⊨”, to show validity of argument forms in the following sense:
what is to the left of the turnstile – the premises – are taken conjunctively (although
shown to be separated by commas); the conclusion is to the right of the turnstile; the
turnstile shown as such (not canceled as in “⊭”) indicates that the premises taken
conjunctively imply the conclusion. Thus, as we say, the relation of logical conse-
quence obtains. We can think of a logic as the collection of all its cases of logical
consequence. Thus, continuing with our example, we have:
(p ⊃ q) ∙ (∼ p ⊃ q) ⊨ q
Now we examine the range-sets of the two formulas and we see that they are
related in a certain way: the first (the premise-range) is a subset of the second (the
conclusion-range). A set X is a subset of a set Y if and only if all members of X are
members of Y. (Y may or may not have additional members.) The relation of subset-
hood is symbolized by “⊆”. Thus, we have:
𝓇((p ⊃ q) ∙ (∼ p ⊃ q)) = {<T, T>, <F, T>} ⊆ 𝓇(q) = {<T, T>, <F, T>}
This observation reflects a general result that – as we can prove – applies in the
case of logical consequence of sentential logic: A premise implies a conclusion if
and only if the range-set of the premise is a subset of the range-set of the conclusion.
In the case of more than one premise, we need to form a new set, which is called
the intersection of the range-sets of the premises. The intersection or overlap of two
sets X and Y, symbolized by “⋂”, is the set that has as members exactly the mem-
bers which X and Y both share or have in common. Logical consequence obtains if
and only if the intersection of the range-sets of the premises is a subset of the range-
set of the conclusion. We now see an example of assessing valid logical consequence in a case with more than one premise. It is straightforward to refer to the truth tables of the premises and conclusion to ascertain that the range-sets are as presented below. (One premise is ⌜p ⊃ q⌝, which is true, as we know, for every value assignment except <T, F>; the other premise is ⌜p⌝ and the conclusion is ⌜q⌝.)
• p ⊃ q, p ⊨ q
• 𝓇(p ⊃ q) = {<T, T>, <F, T>, <F, F>}
• 𝓇(p) = {<T, T>, <T, F>}
• 𝓇(q) = {<T, T>, <F, T>}
• 𝓇(p ⊃ q) ⋂ 𝓇(p) = {<T, T>}
• 𝓇(p ⊃ q) ⋂ 𝓇(p) = {<T, T>} ⊆ 𝓇(q) = {<T, T>, <F, T>}
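The intersection criterion just illustrated can be sketched in Python (our own encoding; the examples are the modus ponens form above and a fallacious converse of ours):

```python
from itertools import product

V2 = [True, False]
impl = lambda a, b: (not a) or b      # the horseshoe as a truth function

def rng(f):
    """All <p, q> assignments for which the formula f is true."""
    return {(p, q) for p, q in product(V2, V2) if f(p, q)}

def entails(premises, conclusion):
    """Consequence obtains iff the intersection of the premises'
    ranges is a subset of the conclusion's range."""
    common = set(product(V2, V2))
    for f in premises:
        common &= rng(f)              # intersect the premise ranges
    return common <= rng(conclusion)  # subsethood = consequence

# p > q, p |= q: consequence obtains.
print(entails([lambda p, q: impl(p, q), lambda p, q: p],
              lambda p, q: q))        # True
# p > q, q |=? p: fails, since <F, T> survives the intersection
# but is not in the range of p.
print(entails([lambda p, q: impl(p, q), lambda p, q: q],
              lambda p, q: p))        # False
```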
The concept of the range of a well-formed formula can be further put to good use
in making observations about deductive reasoning and about the characterization or
logical classification of statements. It so happens that the more members there are in the range of a formula 𝛗, the less informative is the statement 𝜮 interpreted by that formula. Let us consider the case in which the range of a formula 𝛗 is the
same as the Cartesian product 𝓥2 x 𝓥2. The statement that is interpreted by such a
formula is a tautology since it receives the truth value True for all possible assign-
ments of truth values to its atomic components. No information is conveyed by this
statement. To understand this, let us consider each row of the familiar truth table to
be modeling a logically possible state of affairs. The truth table shows all logically
possible states of affairs for the basic statement logic we are examining. A tautology
is true in every row – in every logically possible state of affairs. Now let us consider
that we can use a well-formed formula of our formal language to label a state of
affairs if this formula makes a statement uniquely true at that state of affairs. A tau-
tology cannot label any state of affairs. The same is the case, of course, with a logi-
cal contradiction which is a logical falsehood – as the tautology is a logical truth.
Such statements cannot convey information – cannot characterize – any empirically
distinguishable case or state of affairs.
When it comes to logically contingent or indeterminate statements, their ranges (more precisely, the ranges of their corresponding formulas) can actually be ranked in terms of comparative content. But, as we will now see, this is not informative content but logical content. The more members there are in the range of a formula 𝛗, the less
content can be packed into its matching statement 𝜮. Notice that our deductive-
reasoning concept of range is still not about how to actually engage in conveying
informational content: we merely record relative or comparative receptivity of logi-
cally contingent statements with respect to conveying information. The logical char-
acterization is simply of being a logically contingent statement rather than how such
a statement can be deployed in language to express informative content. The deduc-
tive notion that matters is that of logical consequence which, as we saw, is related to
relative range: premises whose overlapping range is a subset of some formula 𝛗
have 𝛗 as logical consequence if they are taken conjunctively. The relation that we examine in this way, calling it logical consequence, admits even the trivial case in which less logical content has as valid consequence more logical content, insofar as there is no chance, in such a case, of any logical content in the premises having been left out in the conclusion. But what is crucial here is that we are speaking not of
empirically verifiable content but of logical content. By logical content we mean
truth conditions – the systematic assignments of truth values to the atomic compo-
nents of a formula, for which that formula takes the value True (in the semantic
interpretation.) Thus, the range shows the truth conditions of a formula (more pre-
cisely, of the statement that interprets that formula.) Validity can, then, be under-
stood as a matter of proceeding to more inclusive range – including the trivial case
in which the ascent is to equally informative content. The only case that is excluded –
and is taken to signal invalidity of logical consequence – is the case in which we go
from more to less logically informative content.
There are some corollaries we should note: The disregard for empirical content,
which is deeply characteristic of deductive logic, entails that we cannot impose a
requirement – in this formalist layout – that premises and conclusion must share
relevant content between them. (The challenge for formal logic in this respect is to
find some systematic device that might respect a formal requirement of shared con-
tent without resorting to restrictions that affect non-logical meaning. The results of
such efforts, falling under the heading of Relevantist Logic, fall outside our cur-
rent scope.)
Here is an example of a valid logical consequence in which a new atomic vari-
able appears in the conclusion. In semantic terms, this means that a statement has as
valid logical consequence the inclusive disjunction of this statement and any state-
ment whatsoever. A term that is used for such an apparent anomaly is paralogism.
We find this term especially used in connection with the definition of the
p   q   p   p ∨ q
T   T   T   T
T   F   T   T
F   T   F   T
F   F   F   F
𝓇(p) = {<T, T>, <T, F>} ⊆ 𝓇(p ∨ q) = {<T, T>, <T, F>, <F, T>}
4.3.6 Exercises
1. Determine if the following claims are true or false and refer to the truth table to
provide analytical and detailed justifications for your answers:
a. Two contradictions can be mutually consistent.
b. A tautology implies a contradiction.
c. An inconsistent set of premises implies validly any conclusion whatsoever.
d. Two contradictions are mutually equivalent.
e. A tautology and a contingency are mutually consistent.
f. A contradiction is equivalent with itself.
g. Two logical contingencies cannot be mutually equivalent.
h. A contingency can possibly imply another contingency.
i. The inclusive disjunction of a contradiction and a tautology is a logical
contingency.
j. An argument form with all contingent premises and a contingent conclusion
cannot be invalid.
k. An inclusive disjunction of two contingencies cannot be a tautology.
2. Refer to the truth table to justify your answer to the following claims (which may
be true or false.)
a. If a logical contingency implies another logical contingency, then the two
contingencies have to be logically equivalent.
b. (p ⊃ q) ⊃ p, ~ p ⊨ ∑⊞ p ⊃ (~ q ⊃ ~ p)
c. (p ⊃ q) ⊃ (q ⊃ p), ~ p ⊨ ∑⊞ ~ q
d. ~ (p ∨ q), q ⊃ p ⊨ ∑⊞ q
e. ~ (p ∙ q), ~ p ≡ q ⊨ ∑⊞ p ∨ ~ q
f. p ≡ (p ≡ ~ p) ⊨ ∑⊞ (p ⊃ q) ⊃ q
g. p ⊃ (q ⊃ r), p ∨ q ⊨ ∑⊞ (p ∙ q) ⊃ r
h. p ∨ (q ∙ r), ~ p, r ⊨ ∑⊞ p ⊃ (q ⊃ r)
i. ~ q ⊃ (~ q ⊃ ~ (q ⊃ p)) ⊨ ∑⊞ ~ p ∙ q
j. p ⊃ (q ⊃ ~ p), ~ (((p ⊃ q) ⊃ p) ⊃ p) ⊨ ∑⊞ r
k. (p ⊃ q) ⊃ p, ~ q ∨ ~ p ⊨ ∑⊞ p ∨ r
6. Use the truth table system we have constructed to determine whether the given
well-formed formulas are tautologies, contradictions or contingencies.
a. ⊨∑⊞ (p ⊃ (q ⊃ r)) ⊃ ((p ⊃ q) ⊃ (p ⊃ r))
b. ⊨∑⊞ (~ p ⊃ ~ q) ⊃ (q ⊃ p)
c. ⊨∑⊞ (p ⊃ q) ⊃ ~ (p ⊃ ~ q)
d. ⊨ ∑⊞ ~ ((p ⊃ q) ⊃ ((q ⊃ r) ⊃ (p ⊃ r)))
e. ⊨ ∑⊞ (p ⊃ q) ⊃ ((p ∙ s) ⊃ q)
f. ⊨ ∑⊞ (p ⊃ q) ⊃ ((p ∨ s) ⊃ q)
g. ⊨ ∑⊞ ~ (p ⊃ (q ≡ ~ p)) ∨ (~ p ∨ q)
h. ⊨ ∑⊞ (((~ p ⊃ p) ∙ (q ⊃ p)) ⊃ ~ q) ⊃ ~ (q ⊃ q)
i. ⊨ ∑⊞ (p ≡ (q ≡ r)) ≡ ((p ≡ q) ≡ r)
j. ⊨ ∑⊞ (p ≡ ~ q) ≡ (p ≡ q)
k. ⊨ ∑⊞ (p ⊃ ~ (q ∨ r)) ⊃ ((p ∙ q) ⊃ ~ r)
7. Use the truth table method to determine if the given sets of sentences are consis-
tent or inconsistent.
a. ⊨∑⊞ {~ p, ~ p ⊃ p, q ≡ p}
b. ⊨∑⊞ {~ (p ⊃ q), p ⊃ q, ~ q ⊃ p}
c. ⊨∑⊞ {p ⊃ q, p ⊃ (q ∨ r), ~ p}
d. ⊨∑⊞ {(p ⊃ q) ⊃ p, ~ p ⊃ p}
e. ⊨∑⊞ {~ (p ≡ q), (p ∨ q) ∙ ~ (p ∙ q)}
f. ⊨∑⊞ {s ⊃ t, ~ s ⊃ t, t ⊃ ~ (s ∙ q), ~ q}
g. ⊨∑⊞ {~ p ≡ ~ q, ~ (p ⊃ q), ~ (q ⊃ p)}
h. ⊨∑⊞ {p ∨ (~ q ⊃ p), p ⊃ q, q}
i. ⊨∑⊞ {~ ~ p, q ⊃ (p ∨ r), p ⊃ ~ q, q}
j. ⊨∑⊞ {~ (p ∨ (q ⊃ ~ q)), ~ p ≡ q}
k. ⊨∑⊞ {~ (p ≡ ~ (q ≡ r)), ~ (~ (p ≡ q) ≡ ~ r)}
8. Use the truth table system to determine what logical relations obtain between the
formulas that are given as ordered pairs. Possible relations: the first implies the
second, the second implies the first, they imply each other (they are mutually
equivalent), neither implies the other, they are consistent or inconsistent with
each other, they are mutual contraries, mutual subcontraries, or exclusive dis-
juncts of each other. It is certainly possible that more than one relation obtains.
b. ((p ⊃ q) ⊃ q) ⊃ p ⊨? p.
1. premise T; conclusion F.
2. |p| = F
3. |(p ⊃ q) ⊃ q| = F (a true implication with a false consequent must have a false antecedent)
4. |p ⊃ q| = T and |q| = F
5. all values are assigned consistently: <F, F> for <p, q> is a counterexample; the argument form is invalid.
11. Use the short truth table (quick computation) method to do the exercises in 8.
12. •
a. Why do we render the conclusion false for purposes of processing a short
truth table approach to the determination of validity of argument forms?
b. Does this mean that arguments with false conclusions must be invalid?
c. Why do we count on failure of the short truth table process to declare the
argument form valid – and what do we mean technically by “failure” in
this case?
d. How can we adapt the short truth table method to check for consistency of a
given set of sentences?
e. How can we adapt the method to determine the logical status of a formula (if
it is a tautology, a contradiction or a contingency?) Can we check if a formula
is a contingency by only one application of the method?
13. •
a. What is the range of a logical contradiction?
4.4 A System of Natural Deduction (Proof Method) for ∑: ∑⊞ 173
The type of proof system we examine in this section is not semantic: it does not
work with assigning truth values (true and false) and with building models (as seen
in the rows of the truth table system, which can be thought of as representing logi-
cally possible states or cases.) The meanings of the critical components of the formal language, the connectives, are not construed semantically by assigning truth values (true or false) as their referents. We have no interpretations (semantic interpre-
tations or valuations) to assign to the atomic and molecular components of senten-
tial formulas. There is no narrative, for instance, about logically possible states of
affairs represented by truthtabular rows, and no stipulation of abstract objects that
can sustain accompanying narratives. A prominent school of thought considers the
approach we are examining in this chapter as the proper way for constructing logi-
cal languages. Tasks of analyzing and investigating logic are also cast in a new light
as a result of opting for this approach with deep metalogical lessons accruing – but
this lies beyond the scope of our current inquiry. The approach that we will examine
next is called proof-theoretical. It enforces construction of proofs which are under-
stood as abstract collections of formally arranged (properly shaped) derived (proven)
lines effectuating the final derivation of the conclusion from the given premise-
lines. There are different varieties of natural deduction or proof-theoretic systems; we will examine two types. The term “natural deduction” is used rather promiscuously across
the board of such systems although the original coinage of the term applies to a
specific type of proof-theoretic systems.
Since the proof-theoretic approach has to accommodate the fundamental desid-
eratum of defining the logical connectives, the question immediately arises as to
how this is to be done. The semantic approach, as we have seen, uses valuations or
assignments of truth values (true and false for the standard bivalent logic) and
defines the meanings of the connectives in terms of truth conditions. The by now
familiar truth table is one such semantic instrument for definition of the connectives.
The proof-theoretic approach has a different view of how the meanings of the con-
nectives are to be defined, and this view has profound philosophical implications.
The proof-theoretical approach does not depend on truth valuations for its pro-
nouncements on such fundamental subjects as what constitutes a correct argument
(accordingly, the term “valid” is semantic and not proof-theoretical), under what
conditions a sentence is trivially assertable (called “tautology” or “necessary truth”
on the semantic approach), or how a set of sentences is jointly assertable (“jointly satisfiable” is the semantic term but “consistent” can be used more broadly). Assertability is itself rather a semantic concept, if taken to mean that only true statements are rightly assertable. The proof-theoretic view sees the semantically valid
argument forms as proofs in which the conclusion line is correctly derivable from
the premise lines; it views the tautologies of the semantic approach as lines that are
self-justifying or trivially correct and can be laid down or added to a proof as lines
and also as derivable without any premises whatsoever; and it views sets of consis-
tent sentences as lines that can be all laid down together without absurdity being
derivable from these lines.
What we need to do next is to explain how correctness is understood on the
proof-theoretical approach. Moving from lines to a new derived line correctly is the
mechanism for proof-theoretical analysis. It so happens that this question, of cor-
rectness, is intrinsically entwined with the other remaining topic we need to present:
how does the proof-theoretical approach define the connectives – given that it does
not define them in terms of truth conditions or valuation assignments (of true and
false) to the atomic components of the formulas?
The proof-theoretic definitions of the connectives are given by means of the cor-
rect rules that can be used in proof-constructions to regulate two derivational
activities:
1. how formulas with a connective (as main connective symbol) are to be derived
justifiably on the basis of given lines: these are known as the introduction rules
for the connectives; and
2. how formulas with a connective as main connective symbol can be used (possi-
bly with other lines) to justify deriving a new line: these are known as the elimination rules for the connectives.
There are different views about this. One view, embraced by the originator of this
school of thought, Gerhard Gentzen, is that only the introduction rules are the ones
that fix the meanings of the connectives, while the elimination rules only draw out
(and should be only drawing out, normally) what is already packed, in terms of
meaning, in the given line with the connective as its main connective symbol.
Another view is that the elimination rules are the ones that determine the connec-
tive’s meaning and there is a view that both the introduction and elimination rules
set the meanings for the connectives.
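As a toy illustration (entirely our own encoding, not a system defined in this book), one can picture introduction and elimination rules as functions that produce a new line from given lines:

```python
# Formulas are encoded as nested tuples, e.g. ('and', 'p', 'q').

def and_intro(a, b):
    """Introduction rule: from lines a and b, derive their conjunction."""
    return ('and', a, b)

def and_elim_left(line):
    """Elimination rule: from a conjunction line, derive its left conjunct."""
    assert line[0] == 'and', "the rule applies only to conjunctions"
    return line[1]

# A tiny derivation: lay down p and q as premises, derive p . q by the
# introduction rule, then recover p by the elimination rule.
line1, line2 = 'p', 'q'
line3 = and_intro(line1, line2)   # ('and', 'p', 'q')
line4 = and_elim_left(line3)      # 'p'
print(line3, line4)
```

Note that nothing here mentions truth values: each new line is justified purely by the shape of earlier lines and the rule applied, which is the point of the proof-theoretic approach.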
In the proof theoretic approach, we construct proofs, line by line, starting with
given premises and deriving new lines with the proposed conclusion as the last or
terminal derived line; each new added line needs to be justified by appealing to one
of the rules we have in our inferential system. We can have only correct rules (whose
validity can be verified by applying our familiar truth table method). The term
“validity” is semantic, of course, while “correct” is broader. We detect a relationship
between the proof-theoretic and the semantic approaches, which we will have the
opportunity to bring up again: there is a perfect match, harmony, between the
semantic and the proof-theoretic approaches: every valid argument form (checked
by a proven reliable semantic mechanism like the truth table) has to correspond to a
proof that successfully derives the conclusion from the premises of this argument
form; and vice versa, every correctly constructible proof or derivation of conclusion
from premises must match a decision of validity for the truth table check of the cor-
responding argument form.
In applying derivation rules we generate instances of the given rules: this is like
applying a “shape” or a recipe that instructs us how to proceed when we have given
material that fits the “shape” or rule we are given. In the proof-theoretic approach –
in contrast to the truth-tabular method – we apply rules to derive lines and, as
already noted emphatically, we do not speak of assignments of truth values to vari-
ables. Because of this line-by-line progression, this proof system has something
“natural” about it. Everyday arguments, and academically constructed proofs, pro-
ceed in this fashion, issuing from given assumptions that are accepted and aiming at
deriving a putative solution. In everyday linguistic usage, but also in academic and
scholarly contexts, such deductive proofs are not constructed systematically; check-
ing if the proofs are valid (or, we might say, correctly constructible) is not done,
with possibly dire consequences since proofs that don’t work might be accepted on
the basis of psychological or subjective convictions that “the proof is good.”
In our systematic construction of proofs or derivations, we begin with given
assumptions or premises and derive each line by applying our system’s given deri-
vation rules on lines that are already available in the sequence of proof lines (written
vertically, starting from the top): each line must be justified as based on some previ-
ous line or lines and by means of some correct derivation-rule that has been used.
We proceed in this way to the derivation of the proposed conclusion. When we find
proofs in other contexts, we don’t see justifications of such derivations given; in
fact, in everyday linguistic contexts, premises might be missing, left unarticulated and supposed to be implied. Even in Mathematics, the proofs are informal in the sense
that we are not shown the justifications (the derivation rules that are used and the
lines on which the rules have been applied). Such proofs are informal but ours will
have justifications written next to each line.
Unlike the truth table and the tree systems, this proof method is not mechanical:
if we cannot construct a proof, then we will not have a way of knowing that there is
no possible way of constructing such a proof: it could be that our abilities have
failed us even though a proof is constructible in principle. With the truth table and
tree methods, subjecting invalid argument forms to the methods yields counterex-
amples – as you can ascertain by referring to the presentations of those methods.
This is not the case with Natural Deduction. We cannot construct counterexamples
to suggested argument forms that are not correct. We avoid the word “validity”
here because “valid” is a characterization of argument forms in the semantic
methods (those methods that we studied in formal languages with true-false truth
values in them). The Natural Deduction method, on the other hand, is
176 4 Sentential Logic Languages ∑
proof-theoretic; it is not semantic and this means that in this method we just manip-
ulate symbols according to rules; we cannot build models in a Natural
Deduction method.
Here is what we mean by model: think of the truth table and its rows; we have
remarked repeatedly that each row represents a logical possibility for the mean-
ings – which are defined in terms of true and false for the individual letters. Each
such logical possibility can be thought of as a logically possible world or state of
affairs. In this way, we are modeling. In a Natural Deduction proof, on the other
hand, we use derivation rules like recipes: by using such rules we derive new lines
from given and already derived lines and, in so doing, we follow the rules in the
same way we would be following a recipe step by step. We have no story to tell – as
in the truth table where the rows can be thought of as logically possible worlds.
As already indicated, in the case of the standard sentential logic, the proof-
theoretic and the semantic approaches harmonize perfectly. Let us think of the
semantic side (where we have the truth tables in the case of the sentential logic) as
capturing what should be provable. The proof-theoretic side presents what is proce-
durally (proof-theoretically) derivable. The harmony between the two sides can,
then, be accounted for in a manner that sounds quite natural: what should be prov-
able in the kind of logic we have is derivable by means of available formal proof-
procedures. Considering, as by default, the truth table method to be yielding correct
results (about what should be provable), we can ascertain that any constructible
proof corresponds to an argument form that is determined valid by the truthtabular
test; and, conversely, every valid argument form (as checked by the truth table) cor-
responds to a constructible proof in the proof-theoretic method. It might be objected
that we have shown preference to the truth table approach, making it the ultimate
arbiter of correctness, but such a choice can be defended (in ways that lie beyond
our scope) without privileging one approach (semantic or proof-theoretic) over the
other. It is not to be taken for granted that this works for all logics. It does work in
the case of the standard sentential logic.
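The semantic side of this harmony can be mechanized. The following is a minimal sketch of a truth-table validity checker – the tuple encoding of formulas and the names `atoms`, `value`, and `valid` are our own illustration, not the book’s notation:

```python
from itertools import product

def atoms(f):
    """Collect the atomic letters occurring in a formula; formulas are
    strings (atoms) or tuples ('~', f) / (op, f, g)."""
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(part) for part in f[1:]))

def value(f, row):
    """Evaluate a formula under one truth-table row (a dict atom -> bool)."""
    if isinstance(f, str):
        return row[f]
    if f[0] == '~':
        return not value(f[1], row)
    a, b = value(f[1], row), value(f[2], row)
    return {'∙': a and b, '∨': a or b, '⊃': (not a) or b, '≡': a == b}[f[0]]

def valid(premises, conclusion):
    """Valid iff no row makes all premises true and the conclusion false."""
    letters = sorted(set().union(*(atoms(f) for f in premises + [conclusion])))
    return not any(
        all(value(p, row) for p in premises) and not value(conclusion, row)
        for row in (dict(zip(letters, vals))
                    for vals in product([True, False], repeat=len(letters))))

# p ∨ (q ∙ r), ~p therefore q: valid, so a proof should be constructible
print(valid([('∨', 'p', ('∙', 'q', 'r')), ('~', 'p')], 'q'))  # True
```

Enumerating the rows is exactly the mechanical procedure the truth-table method prescribes; the harmony claim is that whatever this check marks valid has a constructible proof, and conversely.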
To give a few more details: the proof-theoretic side can be thought of as compel-
ling us to undertake syntactical procedures: manipulation of symbols according to
grammatical rules and by applying certain figure-like schemata (recipes) which are
the system’s derivation rules. In this approach, the underlying definitions of our
connectives in the system are set by the rules that are given for managing how to
introduce and remove any such connective symbol. Some logicians think that this
gets right what logic is about. A semantic system, on the other hand, allows us to
present narratives: as we can do with the truth table whose rows can be thought of
as representing logically possible cases for assignments of truth values (true and
false, which are, again, semantic meanings that are used in defining the connectives
themselves.) The narratives we can construct semantically are themselves abstract –
they don’t correspond to empirical matters but to abstractions. Some interesting
terminology can be given in this context: the semantic models interpret the formal
proof-theoretic systems; the semantic models provide interpretations or valuations
or value assignments or cases or assignments of meanings according to a signature;
what are interpreted in this way are the items of the syntactically constructed sys-
tems. When we have complete harmony between the two sides – as the case is with
the standard sentential logic – that means that we always find an interpretation of
4.4 A System of Natural Deduction (Proof Method) for ∑: ∑∎ 177
any formula of the proof system (we can use, for instance, the truth table to check
that what is provable is also valid on the semantic side); and what checks as a valid
argument form in the truth table system can always be proven in the proof-theoretic
system. When we move from the proof-theoretic system to the semantic system suc-
cessfully, we say that the system is sound: only what should be provable is derivable
(or, if we can derive it then we can model it.) When we have success in moving from
the semantic to the proof system, we say that the logic is complete: what should be
provable (what can be modeled) is indeed constructible by means of formal deriva-
tions in the proof system. Logicians are most interested in proving such metalogical
results but our current purposes do not permit any details about this subject.
Our proof system, which we call ∑∎, has rule-schemata. These are like recipes
telling us how to proceed, when we have certain lines, in order to produce a new
line or lines. The rules are for the different connectives. (We should strictly be using
different terms but, for convenience, we stick to our own terminology.) We use the same
grammar as we have used before. We will have to present the rule-recipes in a meta-
language; so, different symbols will be used for those rules-schemata. We equip our
system with more rules than we need; this is to make the proofs easier. Some of the
rules we include could be given up without losing any proofs we are able to produce;
but, as mentioned, we pack our system to the point of redundancy in order to make
the proofs easier. We will proceed connective by connective, laying down the
appropriate rules. Eventually, we will graduate to rules that allow something remarkable to
happen – we will explain what that is and we call it replacement. As we proceed, we
will give examples of proofs that can be constructed only on the basis of the rule or
rules we have so far.
Every proof will be composed of lines. To the left we write the number of each
line (with numerals labeling the lines), starting with 1 and continuing sequentially.
A line can have on it a premise, a line derived from premises and/or from other
lines, and finally – when the proof terminates – the conclusion. To the right of each
line we write the justification: this comprises the name of the rule that has been
applied and the numbers of the lines (one or more lines) on which the rule has been
applied. In some systems of natural deduction, allowance is made for entering a
self-justifying line (some theorem or already proven conclusion of a proof) but our
constructed systems do not include this amenity. It is a neat feature that premises
themselves are considered self-justifying. The justification for a premise is written
as “Premise” or “P.” We will also have so-called Assumed Premises or, written in
justification, as “AP.” The catch is that an AP is like a debt we have: the proof cannot
end without paying off or discharging the AP. The rules we will learn will also
specify how APs can be discharged.
Here is an example of what a natural deduction derivation looks like:
Prove: p ∨ (q ∙ r), ~ p ⊢ q.
1. p ∨ (q ∙ r) P
2. ~ p P
⋮
n. q rule(line #, …, line #)
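The numeral / formula / justification layout just shown can also be mirrored in code. A hypothetical sketch (the `Line` record and the tuple encoding of formulas are our own, not the book’s notation):

```python
from collections import namedtuple

# Each proof line carries its numeral, its formula, and its justification.
Line = namedtuple('Line', 'num formula justification')

derivation = [
    Line(1, ('∨', 'p', ('∙', 'q', 'r')), 'P'),  # given premise
    Line(2, ('~', 'p'), 'P'),                   # given premise
    # ... derived lines would follow, each citing a rule and line numbers
]
print([ln.justification for ln in derivation])  # ['P', 'P']
```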
4.4.1 Grammar of ∑∎
The grammar for the natural deduction system is borrowed from ∑. Compendiously,
the grammar can be presented as follows, understood as the collection of formal
rules that exhaustively and exclusively determine what is acceptable as a well-
formed formula of the system (wff). From basic set theory (which we cover
appropriately in a separate chapter) we borrow the symbol “∊” for membership in a
set (here, the set of well-formed formulas of ∑∎, WFF(∑∎)). The metalinguistic
symbol “□” is used to represent any wff of the system, not necessarily an atomic
variable letter but possibly a compound formula.
• pᵢ, …, qⱼ, … ∊ WFF(∑∎)
• --this is the set of atomic (individual, single) variable letters or atoms, which is
a so-called denumerable or countable set, meaning that it can be put in one-to-one
correspondence with the set of the natural numbers; the subscripts are allowed
but not required, up to denumerable infinity.
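The “exhaustively and exclusively” clause of the grammar can be sketched as a recursive check. This is our own illustration (the tuple encoding and the name `is_wff` are assumptions, not the book’s): an atom, a tilde applied to a wff, or a binary connective joining two wffs is a wff, and nothing else is.

```python
BINARY = {'∙', '∨', '⊃', '≡'}   # dot, wedge, horseshoe, triple bar

def is_wff(f):
    """Exhaustively and exclusively: an atomic letter (optionally
    subscripted, e.g. 'p12') is a wff; ('~', f) is a wff if f is; (op, f, g)
    is a wff if op is binary and f, g are wffs; nothing else is."""
    if isinstance(f, str):                      # atomic variable letter
        return bool(f) and f[0].islower() and (f[1:] == '' or f[1:].isdigit())
    if isinstance(f, tuple) and len(f) == 2 and f[0] == '~':
        return is_wff(f[1])
    if isinstance(f, tuple) and len(f) == 3 and f[0] in BINARY:
        return is_wff(f[1]) and is_wff(f[2])
    return False

print(is_wff(('⊃', ('~', 'p'), ('∨', 'q', 'r1'))))  # True
print(is_wff(('∨', 'p')))                           # False: wedge needs two wffs
```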
The notational amenities for a formal system for natural deduction like the one
we will be studying also include informal, metalinguistic expressions from a liberally
extracted, open-ended fragment of English enhanced with tokens of the connective
symbols we are using in the formal system. The justification lines are
regarded as metalinguistic. Numerals are used to label each line of the derivation (or
constructed proof) and justification lines are extended to the right of the formal
derivation-line: the justification line must provide the name of the derivation rule
that has been used and the numerals of the lines (or numeral of the line) on which
the rule was applied to sanction derivation of the current line. Given premises are
indicated as such (perhaps written as “premise” or “P” or with no indication even)
and are regarded as self-justifying. Assumed premises or assumptions (also indi-
cated informally in the justification line) are not self-justifying and the act of posit-
ing such a premise incurs a debt that has to be discharged before the proof can be
considered to have terminated permissibly. Because these expressions are presented
metalinguistically and, in that sense, informally, the apparent sloppiness of not
legislating strict notational restrictions is innocuous.
The two rules for the dot are intuitively straightforward to justify on the basis of
expectations about the logical behavior of the connective that matches the logic-
word “and” of language. The first one allows us to put together two sentences that
have been both asserted, given to us or posited, as true, by conjoining them; given
the definition of “and”, the justification is easy to grasp: if two statements are true,
then the statement constructed by joining them with “and” in between has to be true
too. This rule is called conjunction. Thus, this is a conjunction or dot-rule – its spe-
cific name being rather unfortunate because this is not, of course, the only rule we
have for the conjunction. For instance, you are told: “It is raining today.” You are
also told that “the game is canceled today.” This allows you correctly to derive the
new sentence, “It is raining today and the game is canceled today.”
Regardless of the matching with the logical behavior of the linguistic “and,” this
is one of our rules. The derivation or proof (or Natural Deduction) system we are
constructing stands on its own. Speaking theoretically, the proper way to think of
such a system is as follows: there are recipe-like rules – which can be presented in
a figure-like or shape-like fashion and are called schemata (in the plural, with the
singular word being “schema.”) A rule schema is like a recipe that gives instructions
about how to proceed; the instructions are to be followed strictly, without any devia-
tion whatsoever. What may seem intuitively obvious is irrelevant and no licenses are
to be permitted or taken in the application of a rule; the application has to be such that
the instance you generate in applying the rule can be matched to the shape the rule
schema dictates. Trying to think of the rules-schemata as shapes is a beneficial first
step in orienting yourself to the enterprise of constructing proofs in Natural
Deduction (or in what is also called a proof-theoretic or derivation system.)
The dot will also have another rule, called Simplification, which can be justified by
appealing to the logical behavior of the logic-word “and” of language. We should
make it clear, however, that such justificatory appeals do not affect the legitimacy of
the derivation system which stands on its own. Some other remarks are in order at this
point. The proof-theoretic or Natural Deduction system is an alternative approach to
doing logic besides the truth-tabular approach which is called Semantic. The truth
tables defined the connectives of our standard logic in terms of inputs-outputs with
inputs being combinations of truth values from {T, F} and with the output being a
truth value also from {T, F}. The proof-theoretic approach we are following in this
chapter, on the other hand, understands the connectives to be definable by means of
the rules schemata we will be presenting. The two approaches to logic – the proof-
theoretic and the semantic – harmonize completely in the case of the standard senten-
tial logic. Every valid argument form, so checked by the truth-table method, is
derivable in the proof-theoretic system as a proof that has as premises the premises of
the argument form and has as conclusion the conclusion of the argument form.
(Strictly speaking, we have mappings of the formulas across the two systems and we
can speak of interpretations of the sentential formulas by means of proof-theoretic
system formulas and the other way around.) In the opposite direction, for every deriv-
able proof in the proof-theoretic system, we can construct an argument form with the
k. □
⋮
l. △
⋮
n. □ ∙ △ Conj(k, l)
This recipe shows that, given the sentences (possibly compound sentences) at
lines labeled k and l, we may – not necessarily right away – apply the rule for ·
called Conjunction by generating at some subsequent line, n, the formula that joins
the formulas at k and at l by the dot. We are allowed to introduce the dot, as it were.
The justification line is written to the right, as “Conj(k, l)”, and it instructs us to
write metalinguistically that the rule named “conjunction” has been applied to the
lines labeled by k and l to generate the line labeled by n, in which we have the gener-
ated conjunctive formula.
The order in which the lines are written in the justification does not matter. The
justification lines are presumed to be written out in our metalanguage which is freed
from schematic conventions. The top-to-bottom order in which the shapes are written
does matter, however. Compare the following presentation of the Conjunction schema.
k. △
⋮
l. □
⋮
n. △ ∙ □ Conj(k, l) or Conj(l, k)
As we see, the order in which the conjuncts in the generated formula are written
does not matter.
There are other ways in which the schema can be presented. We show some such
presentation options.
□, △ /∴ □ ∙ △ Conj
□
△ Conj
□ ∙ △
Let us see now examples of substitutions of well-formed formulas for the sche-
matic variables (box and triangle.) The substitutions are of wffs into the schematic
rule – the recipe. These are instances of application of the rule Conjunction.
1. ~ (p ∨ q) ≡ ~ q
2. (p ⊃ ~ q) ⊃ ~ ~ p
3. (~ (p ∨ q) ≡ ~ q) ∙ ((p ⊃ ~ q) ⊃ ~ ~ p) Conj(1, 2)
CORRECT.
1. ~ p
2. (s ≡ ~ t) ∨ ~ p
3. ~ p ∙ ((s ≡ ~ t) ∨ ~ p) Conj(1, 2)
CORRECT.
1. p ∙ q
2. q
3. p ∙ q Conj(1, 2)
WRONG!
3. (p ∙ q) ∙ q Conj(1, 2)
RIGHT!
1. ~ ~ p
2. ~ p ∙ ~ q
3. ~ ~ p ∙ (~ p ∙ ~ q) Conj(1, 2)
RIGHT!
4. ~ ~ p ∙ ~ p ∙ ~ q Conj(1, 2)
WRONG!
1. p
2. p ∙ p Conj(1, 1)
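Checking whether one application of Conjunction is correct reduces to a shape comparison, as the RIGHT/WRONG examples above suggest. A minimal sketch (the tuple encoding and the name `conj_ok` are our own, not the book’s):

```python
def conj_ok(line_k, line_l, line_n):
    """Line n is a correct Conj(k, l) exactly when it is the dot-formula
    whose two conjuncts are the whole formulas at lines k and l
    (in either order)."""
    return line_n in (('∙', line_k, line_l), ('∙', line_l, line_k))

pq = ('∙', 'p', 'q')
print(conj_ok(pq, 'q', pq))                # False: the WRONG! case above
print(conj_ok(pq, 'q', ('∙', pq, 'q')))    # True: the RIGHT! case
print(conj_ok('p', 'p', ('∙', 'p', 'p')))  # True: Conj(1, 1) is allowed
```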
Simplification.
k. □ ∙ △
⋮
l. □ --S(k)
⋮
n. △ S--(k)
□ ∙ △ /∴ □ --S
□ ∙ △ /∴ △ S--
□ ∙ △ --S
□
□ ∙ △ S--
△
The rules for the horseshoe – hence, for material implication – are as given by the
following schemata. Abbreviated names for the rules are MP for Modus Ponens,
MT for Modus Tollens and HS for Hypothetical Syllogism.
The material conditional (or material implication) connective of the standard
sentential logic is unsatisfactory if our purpose is to define a connective that models
the logical behavior of the linguistic phrase “if-then.” This selection is necessitated
by the setup of resources that become available when we construct a two-valued
logical system that has the features of the standard sentential logic – such features
as compositionality, for instance, which means that the logical meanings, under-
stood as truth values, of the compound formula can be determined precisely and
uniquely when the truth values of all the constituent atomic variables are given. In a
logical system like this, there are exactly sixteen mathematically definable binary
connectives. Our choice of a suitable implication connective would have to be con-
fined to this group of sixteen available connectives since “if-then” connects two
molecular components (the antecedent and the consequent, as we call them.) Once
connectives have been selected from the group of sixteen binaries (for instance, the
inevitable choice of the binary connectives for conjunction, the two types of dis-
junction and equivalence), we have the remaining binary connectives available.
Many more are eliminated as choices (for instance, the connective with constant or
fixed output the value true, and the connective with fixed output false.) Moreover,
Modus Ponens.
k. □ ⊃ △
⋮
l. □
⋮
n. △ MP(k, l)
□ ⊃ △, □ /∴ △ MP
□ ⊃ △
□ MP
△
Modus Tollens.
k. □ ⊃ △
⋮
l. ~ △
⋮
n. ~ □ MT(k, l)
□ ⊃ △, ~ △ /∴ ~ □ MT
□ ⊃ △
~ △ MT
~ □
Hypothetical Syllogism.
k. □ ⊃ △
⋮
l. △ ⊃ ○
⋮
n. □ ⊃ ○ HS(k, l)
□ ⊃ △, △ ⊃ ○ /∴ □ ⊃ ○ HS
□ ⊃ △
△ ⊃ ○ HS
□ ⊃ ○
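The three horseshoe schemata can each be sketched as a partial function that returns the derived formula when the cited lines fit the shape, and nothing otherwise. The encoding and function names are our own illustration, not the book’s notation:

```python
def mp(cond, minor):
    """Modus Ponens: from □ ⊃ △ and □, derive △."""
    if isinstance(cond, tuple) and cond[0] == '⊃' and cond[1] == minor:
        return cond[2]

def mt(cond, neg):
    """Modus Tollens: from □ ⊃ △ and ~ △, derive ~ □."""
    if isinstance(cond, tuple) and cond[0] == '⊃' and neg == ('~', cond[2]):
        return ('~', cond[1])

def hs(c1, c2):
    """Hypothetical Syllogism: from □ ⊃ △ and △ ⊃ ○, derive □ ⊃ ○."""
    if (isinstance(c1, tuple) and c1[0] == '⊃' and
            isinstance(c2, tuple) and c2[0] == '⊃' and c1[2] == c2[1]):
        return ('⊃', c1[1], c2[2])

print(mp(('⊃', 'p', 'q'), 'p'))              # q
print(mt(('⊃', 'p', 'q'), ('~', 'q')))       # ('~', 'p')
print(hs(('⊃', 'p', 'q'), ('⊃', 'q', 'r')))  # ('⊃', 'p', 'r')
```

Returning `None` on a mismatch reflects the strictness requirement above: if the lines do not match the schema’s shape exactly, no derivation is sanctioned.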
this rule compels us to posit an assumed premise φ; by using this and other lines
we have available (initially as given premises), we proceed to derive ψ. At that
point we may discharge the assumed premise φ (which means that we have paid
our debt for the assumption, to speak metaphorically) but, to do this, we are con-
strained to make one specific move: we introduce φ ⊃ ψ as the next line; this
discharges the assumed premise and, indeed, the proof cannot be terminated
without effecting such a discharge since the assumed premise φ was not given to
us but we only posited it on our own. We can appeal to linguistic practice to jus-
tify this rule: if under the assumption that φ we can prove validly that ψ, with the
proof process completed and with every line of the proof justified properly, so
that the proof retains truth across all the lines, then we have proven that “if φ,
then ψ” is a logical truth or tautology (and, as such, can be asserted.) This works
out neatly in the case of the standard sentential logic we have been studying: we
can say, using a common term in the bibliography, that the standard sentential
logic is characterized by the Deduction Theorem (using “⇒” as a metalinguistic
symbol for “a valid proof process proves”):
• Deduction Theorem: If φ ⇒ ψ, then ⇒ φ ⊃ ψ (or φ ⊃ ψ is a tautology)
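The theorem can be spot-checked semantically for a concrete instance. This is a brute-force check of our own (not a proof of the theorem): with φ = p ∙ q and ψ = p, the entailment φ ⇒ ψ holds exactly when φ ⊃ ψ is a tautology.

```python
from itertools import product

rows = [dict(zip(['p', 'q'], vals))
        for vals in product([True, False], repeat=2)]

# φ ⇒ ψ: every row making p ∙ q true also makes p true
entails = all(r['p'] for r in rows if r['p'] and r['q'])
# φ ⊃ ψ a tautology: (p ∙ q) ⊃ p is true in every row
taut = all((not (r['p'] and r['q'])) or r['p'] for r in rows)
print(entails, taut)  # True True
```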
At a deeper level, a contentious point has been raised: that this schematic rule
shows us what the meaning of “if-then” is. According to this view, the meaning of
“if-then” is established by the way in which implication can properly be introduced –
and is in practice warranted and accepted in speech – as the result of positing a
premise and then “collecting” it. Contrast this with the fact that, in an earlier
section, we defined the implication connective by using the truth table: that is
the semantic way of defining the meaning of an implication connective whereas
what we have done in this section is the proof-theoretic approach to the
definition.
It should be kept in mind that not all logical systems or formal languages have
the Deduction Theorem as characteristic or valid in them but our standard sentential
logic has it. Dealing with Conditional Proof (CP), we have introduced a new notion:
positing an assumed premise that needs to be discharged. We defer further examina-
tion of this procedural resource to 4.7.d because we will need it in the context of
discussing a rule for the wedge as well. Positing and discharging assumptions will
also appear in 4.7.e when the rules for the tilde are presented.
The three rules for the wedge are Disjunctive Syllogism, abbreviated as DS (with
two sub-schemata, --DS and DS--), Addition (Add, with two sub-schemata, --Add
and Add--) and Constructive Dilemma (CD). The abundance of rules, including rule
sub-schemata, has the purpose of making proofs easier to construct.
Disjunctive Syllogism.
k. □ ∨ △
⋮
l. ~ □
⋮
n. △ DS--(k, l)
⋮
q. ~ △.
⋮
u. □ --DS(k, q)
□ ∨ △, ~ □ /∴ △ DS--
□ ∨ △, ~ △ /∴ □ --DS
□ ∨ △
~ □ DS--
△
□ ∨ △
~ △ --DS
□
Addition.
k. □
⋮
l. △ ∨ □ --Add(k)
⋮
n. □ ∨ △ Add--(k)
□ /∴ △ ∨ □ --Add
□ /∴ □ ∨ △ Add--
□ --Add
△ ∨ □
□ Add--
□ ∨ △
Constructive Dilemma.
k. □ ∨ △
⋮
l. □ ⊃ ○
⋮
m. △ ⊃ ◊
⋮
n. ○ ∨ ◊ CD(k, l, m)
□ ∨ △, □ ⊃ ○, △ ⊃ ◊ /∴ ○ ∨ ◊ CD
□ ∨ △
□ ⊃ ○
△ ⊃ ◊ CD
○ ∨ ◊
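As with the horseshoe rules, the wedge schemata can be sketched as partial functions on our tuple encoding (again, the names and encoding are our own illustration):

```python
def ds(disj, neg):
    """Disjunctive Syllogism: from □ ∨ △ plus the negation of one
    disjunct, derive the other disjunct (covers both sub-schemata)."""
    if isinstance(disj, tuple) and disj[0] == '∨':
        if neg == ('~', disj[1]):
            return disj[2]
        if neg == ('~', disj[2]):
            return disj[1]

def add(f, g):
    """Addition: from □ alone, derive □ ∨ △ for any wff △ whatsoever."""
    return ('∨', f, g)

def cd(disj, c1, c2):
    """Constructive Dilemma: from □ ∨ △, □ ⊃ ○, △ ⊃ ◊, derive ○ ∨ ◊."""
    if (isinstance(disj, tuple) and disj[0] == '∨' and
            isinstance(c1, tuple) and c1[0] == '⊃' and c1[1] == disj[1] and
            isinstance(c2, tuple) and c2[0] == '⊃' and c2[1] == disj[2]):
        return ('∨', c1[2], c2[2])

print(ds(('∨', 'p', 'q'), ('~', 'p')))                        # q
print(add('p', ('~', 'q')))                                   # ('∨', 'p', ('~', 'q'))
print(cd(('∨', 'p', 'q'), ('⊃', 'p', 'r'), ('⊃', 'q', 's')))  # ('∨', 'r', 's')
```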
Now we present two proof methods that proceed in a different fashion from what we
have presented so far. The schema for the first derivation method is called CD+ and
is different in its operational details from CD. The CD+ method, unlike CD, requires
that premises be posited or incurred, as we say. A posited premise can also be called
an assumption – if we agree that the word “assumption” is not also to be used to refer
to a premise that is given. The difference between given premises and posited prem-
ises/assumptions is clear: given premises do not need to be justified, they are con-
sidered as self-justifying, they may be used in the course of the proof and the proof
can terminate without any further moves that are related to the given premises.
Incurred or posited premises, on the other hand, are not given and, so, they do not
provide an automatic justification that backs them up. Posited assumptions are
available to use but, because no sanction or justification has been given for them, the
proof cannot terminate without first executing certain required actions regarding
those posited premises; such actions have, then, to be included in the schema as
well. For the proof to terminate, when we have incurred or posited premises, we
have first to discharge those premises. The schema, accordingly, shows how we can
discharge the posited premise or premises so that we can terminate the proof (or
subproof – the segment of the proof that is carried out based on the rule). Imagination is
not required; the schematic shapes show us how the discharge is accomplished. Let
us, first, consider and briefly discuss the CD+ schema. The posited premises receive
a superscript “minus” next to their characterizing number and, once the discharge of
such a premise has been attained, we repeat the premise and number with a super-
script of “zero”. These lines are metalinguistic and are written from top-down as
justification lines. The premises that are given do not need superscripted minuses
because they are self-justifying; nor do they need discharge (replacement of the
minus superscript by the zero superscript.)
k. □ ∨ △ k
⋮
l. □ l⁻
⋮
m. △ m⁻
⋮
n. □ ⊃ ◊
⋮
o. △ ⊃ ○
⋮
x. ◊ ∨ ○ CD+(k, l⁰, m⁰, n, o)
preceding section to define connectives.) We might now seek the schema that can be
understood as the elimination schema for the conditional connective: if we think
about this, Modus Ponens, which we introduced earlier as a rule schema for the
conditional, fits that role: given the conditional “if φ, then ψ” and also given that φ,
we have license to conclude that ψ. Thus, the conditional, which we find in one of
the premises, is eliminated.
The Constructive Dilemma schema may be thought of, similarly, as elimination of
the inclusive disjunction connective, while Addition is the schema for the introduction
of that connective. We could have constructed a minimalist
system – with no rules that can be derived from other rules – and so that the rules
are introduction and elimination rules: such a system would be more difficult to use
for constructing proofs but it clearly has theoretical significance and it would also
show us how we may construct our standard sentential logic as distinguished from
other logics (some of which can prove less than our standard logic can prove): alter-
native logics would differ in the rules they have from the standard logic. (For such
a system see 4.7.4.a).
Next, we present the schema for the conditional proof (CP) rule. This rule also
works with positing an assumed premise, which means that we incur a debt, if we
may use such a metaphor; the application of the rule CP discharges the debt – dis-
charges the assumed or posited premise – and this makes it possible that the proof
can be terminated (not necessarily right away.)
k. □ k⁻
⋮
l. △ (… k⁻ …)
m. □ ⊃ △ CP(k⁰ – m−1)
We may posit assumptions at any point in a proof but the termination of the proof
depends strictly on discharging those posited assumptions. Note how the justification
line that discharges the incurred debt, by reverting the superscript to zero, is
written as follows: the rule CP is applied on the lines from the line of the posited
premise to the line immediately preceding the line after which the discharge is pos-
sible. We stipulate that the discharges happen in a reverse order from that in which
they were introduced. We show this schematically as follows:
k. □ k⁻
⋮
l. △ l⁻
⋮
m. ○
⋮
n. ◊
o. △ ⊃ ○ CP(l⁰ – n)
⋮
x. □ ⊃ ◊ CP(k⁰ – x−1)
Another rule that is usually unstated but applied nevertheless is that we may
repeat any line a second time – and a third time and so on. We can call this rule
Repetition (Rep) or Reiteration (Reit). Application of this rule is usually not needed,
but let us consider a proof like the first one shown below, which depends on the
Repetition rule and also utilizes conditional proof (and, interestingly, does not
utilize any given, but only posited, premises.)
1. p 1⁻
2. q 2⁻
3. p Rep(1)
4. q ⊃ p CP(2⁰–3)
5. p ⊃ (q ⊃ p) CP(1⁰–4)
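Cross-checking on the semantic side (a brute-force check of our own, not part of the book’s derivation): the thesis just derived, p ⊃ (q ⊃ p), should come out true in every truth-table row, and it does.

```python
from itertools import product

thesis_is_tautology = all(
    (not p) or ((not q) or p)   # p ⊃ (q ⊃ p), with ⊃ as material implication
    for p, q in product([True, False], repeat=2))
print(thesis_is_tautology)  # True
```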
We may now think of CD+ and CP together and discern that there is a way of
combining the two in appropriate fashion, as shown in the following derivation
schema that can be considered as an optional alternative to the CD+ schema we
presented earlier. Accordingly, this alternative schema requires a different labeling
and, to observe this, we call it “CD+cp.” It is intellectually rewarding to try to justify
this alternative schema which will require appealing both to the original CD+
and to CP.
k. □ ∨ △ k
⋮
l. □ l⁻
⋮
m. △ m⁻
⋮
n. ◊ (…, l⁻, …)
⋮
o. ○ (…, m⁻, …)
⋮
x. ◊ ∨ ○ CD+cp(k, l⁰, m⁰, n, o)
We may show the dependence of CD+cp on both CD+ and CP by means of the
following analytical breakdown that shows deployment of both of those rule sche-
mata. A difference we discern when we spell out the analytical presentation of the
schema below is that the latter method discharges the posited premises by using,
essentially, the CP rule.
k. □ ∨ △ k
⋮
l. □ l⁻
⋮
m. △ m⁻
⋮
n. ◊ (…, l⁻, …)
⋮
o. ○ (…, m⁻, …)
⋮
w. □ ⊃ ◊ CP(l⁰ – w−1)
⋮
u. △ ⊃ ○ CP(m⁰ – u−1)
⋮
x. ◊ ∨ ○ CD(k, w, u)
Applicability of the rule Rep comes to light in the following derivation of a thesis
of the standard sentential logic. In this case we derive a thesis (which matches a
necessary truth or tautology formula if we move over to the truth table system and
“translate” our formula there, so we can check by applying the truth table method.)
The thesis can be called self-implication. We need to depend on a posited premise
and this premise needs to be discharged appropriately by application of the CP rule,
as we show below. Without a rule for reiteration of a line-formula, like Rep, we
would have to make a different stipulation: we would have to allow the CP rule to
discharge in a fashion we may call “degenerate” (no moral connotations, of course),
so that given any formula we can then treat it as a zero-lines CP-derivation. A rule
like Rep is not trivial from a broader metalogical point of view because there are
alternative, non-standard logics whose derivation systems can be obtained by
tweaking a system like the one we have here so that the Rep rule is banned explicitly.
1. p 1⁻
2. p Rep(1)
3. p ⊃ p CP(1⁰–2)
Rules for ⌜~⌝: Indirect Proof Method (IP) and Double Negation (DN)
The negation rules include yet another schema that utilizes posited premises that
need to be discharged before the derivation process can terminate. The discharging
method is called Indirect Proof (IP) – also called Proof by Contradiction. We posit
a premise φ and derive a contradiction of the general form ψ ∙ ~ ψ with φ as the only
premise that is not given but posited: we have license, then, to derive ~ φ while
discharging φ. This is a tremendously powerful proof method; forfeiting it or
disallowing its use would result in being left with less than half of our mathematical
results. There are philosophic objections to this proof method but we will not discuss
those here. The rule may also be applied on a posited negated premise, ~ φ. In that
case, derivation of a contradiction licenses derivation of ~ ~ φ and discharge of the
posited assumption ~ φ. Notably, we do not yet have a rule that permits deriving φ from
~ ~ φ. Such a rule is needed and it will be part of our system. We can think of this
rule as our rule for elimination of the tilde while the indirect proof schema can be
thought of as our rule schema for introduction of the tilde. The use of a schema for
removal of the double negation (DN) as our negation-elimination schema breaks a
certain subtle symmetry between introduction and elimination rules for the negation
symbol – something that does not happen with the introduction and elimination
rules for the conditional and for the disjunction connective. Alternatively, we could
label this as the double-negation elimination rule. That would not change anything
regarding the metatheoretical significance of adding this rule. A logic that does not
have this rule, although it has all the other rules we have presented, would be a dif-
ferent logic from the standard system we are studying. This, of course, applies gen-
erally for omitting rules that are indispensable for characterization of the logic but
the ado about this specific mention is that there is a popular alternative logic, called
Intuitionist or Intuitionistic, which lacks the double negation elimination rule. To
ascend from that logic to our standard logic we can add the rule for eliminating the
double tilde symbol; there are other ways, additions of other “classical” rules, that
can also obtain the same result. It is interesting that, up to the point of discussing the
negation rules, the natural deduction setup we have constructed would not allow us
to differentiate between the classical and the intuitionistic logic, but, as soon as we
arrive at the negation rules, we have the opportunity of stopping short of allowing
for the double negation elimination rule and in this way we have drawn the line
around what is a system for intuitionistic logic. We will not dwell on this matter
further here but we mention in passing that, remarkably, if we withhold the rule for
removal of the double negation – while keeping all the other introduction and elimi-
nation rules – we have a different logic, not the standard sentential logic but an
alternative and non-standard logic called Intuitionistic Logic.
It is also crucial to underscore that the rule for the elimination of double negation
has a feature that we have not seen yet in any rule: it works in both directions, so to
speak. From ~ ~ φ we may derive φ but also, conversely, from φ we can derive
~ ~ φ. Conversion – deriving ψ ⊃ φ from φ ⊃ ψ – is illegitimate or invalid; but in
cases in which two formulas φ and ψ are logically equivalent, φ and ψ are mutually
inter-derivable. We learned about equivalence as a relation between sentential formulas
and, upon examining that relation, we noted that two mutually equivalent formulas
imply each other. Rule schemata like Double Negation are Equivalence Schemata
(also called Equivalential Schemata and, for a reason that will become clear soon,
Replacement Schemata.) Thus, we are building now a bridge to the next family of
derivation rules schemata, which are replacement schemata: such schemata not only
work in both derivational directions, as we explained, but also allow us to replace
any part of a formula in a proof line by its equivalent formula.
k. □  k⎺
⋮
l. ◊ ∙ ~ ◊
m. ~ □  IP(k⎺–l)
=======================
k. ~ □  k⎺
⋮
l. ◊ ∙ ~ ◊
m. ~ ~ □  IP(k⎺–l)
=======================
k. ~ ~ □
⋮
l. □  DN(k)
Here is a remarkable proof that derives, from no premises, the formula that expresses the fundamental law of excluded middle (the law that, for any formula φ, ⌜φ ∨ ~ φ⌝ is logically true). Note its dependence on IP, Add, and also
4.4 A System of Natural Deduction (Proof Method) for ∑: ∑∎ 197
on DN. A logical system that lacks DN cannot derive this conclusion, but this may actually be desirable insofar as the philosophic motivation for such an alternative system may be the rejection of excluded middle.
(For example, suppose that you regard not-φ as meaning "there is a proof that assuming φ implies a contradiction" – for some appropriate sense of "implies." This is a different meaning of "not" from the standard one that is modeled in our truth-table definition of the negation connective. In such a case, not-not-φ should not license an inference to φ: it might be the case that we have a proof that not-φ leads to a contradiction, but that does not mean that we have a proof of φ itself. Absence of the double negation rule also blocks the inference to φ-or-not-φ; let us see how this too is desirable on the basis of a motivation that regards φ as a matter of proof-that-φ and not-φ as a matter of proof that a contradiction can be derived from φ: φ-or-not-φ means, on this approach, that either a proof of φ or a proof of not-φ (in the sense we specified above) is always available – or should be considered as in-principle available – for any φ. This, however, may not be the case! Suddenly, it seems sensible that we would want excluded middle – taking φ-or-not-φ to be a logical truth – to fail. This is so if we take truth and falsehood to be rightly dependent on available proof-constructions in the sense we explained above.)
1. ~ (p ∨ ~ p)  1⎺
2. p  2⎺
3. p ∨ ~ p  Add(2)
4. ~ (p ∨ ~ p) ∙ (p ∨ ~ p)  Conj(1, 3)
5. ~ p  IP(2⎺–4)
6. p ∨ ~ p  Add(5)
7. ~ (p ∨ ~ p) ∙ (p ∨ ~ p)  Conj(1, 6)
8. ~ ~ (p ∨ ~ p)  IP(1⎺–7)
9. p ∨ ~ p  DN(8)
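The thesis just derived can be cross-checked in the semantic system: ⌜p ∨ ~ p⌝ should come out true on every row of its truth table. Here is a brute-force check, sketched in Python purely as an outside illustration (the helper name `is_tautology` is ours, not part of any of the book's formal systems):

```python
from itertools import product

def is_tautology(formula, num_vars):
    """True iff the formula is true under every assignment of truth values."""
    return all(formula(*row) for row in product([True, False], repeat=num_vars))

# Excluded middle: p v ~p holds on both rows of the one-variable table.
print(is_tautology(lambda p: p or (not p), 1))  # True
```

A system lacking DN still agrees with this truth-table verdict for the classical connectives; what it loses is the proof-theoretic route to the thesis.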
By means of examples, we will see, next, how we can derive propounded conclu-
sions from given premises by applying our derivation rules as needed. It should be
pointed out that the requested conclusion may be derivable from the given premises
in more than one way – by means of application of different combinations of rules
and/or different sequential orders for the application of rules. We do not insist on
placing a premium on shorter proof sequences. It should also be the case that there
are no redundant premises – no given premises that, in the end, are not needed for
the construction of the proof – but textbook examples and exercises may actually
violate such a restriction. Another issue to mention, again, is that our rules are not necessarily independent: it might be possible to derive an instance of an application of one of our rules by using the other given rules in our system. This means that we do not really need such a rule; we can dispense with any non-independent or derivable rule. Nevertheless, proofs are easier to construct when we have an abundance – indeed, a redundancy – of rules.
Surely, the following ought to be derivable – or, a proof sequence ought to be
constructible for the following trivial derivation. Any sentence ought to entail itself,
trivially. Our system should allow us to effectuate this derivation. We do not want to
make the case metalinguistically; we rather expect to be able to derive the indicated
conclusion from the premise, which happens to be the same as the conclusion. The
rule we have called Reiteration or Repetition is needed here. Since we have this rule, we may apply it to the assumption line, and the second line that is immediately yielded furnishes the requested conclusion. The justification lines are presumed written in our metalanguage; we can enhance them with symbols, and we may use "⊣" (converse turnstile) as our appointed symbol to indicate successful termination of the proof.
• p⊢p
1. p  P /∴ p
2. p Reit (1) ⊣
The apparent difficulty with the following proof sequence consists in the need to discern the relevance of two rules that are elusive – not easy for the beginner to think of applying: the Import-Export rule and, at some step in the proof construction, the Hypothetical Syllogism (HS) rule. Commencing with this proof,
with no specific strategy available to assist us, we may think of what rule we can
possibly implement to initiate an opening move (generation of a new line that can
be justified from the given lines and by appealing to one of the rules.) Because there
seems to be no other available move, we might actually be helped in this case: since
we have conjunctions as antecedents, the rule that we may find applicable as we
scan the available rules is the Import-Export rule: we can apply this rule twice,
which we do before we search for some other opportunity for subsequent applica-
tion of one of our rules. This seems to be tantamount to searching in the dark but this
is not an accurate characterization of what we do. The example does, however,
underscore that practice is indispensable when working with this proof method: pat-
terns become more readily discernable, mere experimentation with applications
recedes, after we have practiced derivations assiduously. Interestingly, in this exam-
ple, a strategy we will learn subsequently – about how to reverse-engineer the
proof – does not appear to be working.
• (p ∙ w) ⊃ t, (t ∙ q) ⊃ r, p ⊢ (w ∙ q) ⊃ r
1. (p ∙ w) ⊃ t P
2. (t ∙ q) ⊃ r P
3. p  P /∴ (w ∙ q) ⊃ r
4. p ⊃ (w ⊃ t) Import-Export (1)
5. t ⊃ (q ⊃ r) Import-Export(2)
6. w ⊃ t MP(3, 4)
7. w ⊃ (q ⊃ r) HS(5, 6)
8. (w ∙ q) ⊃ r Import-Export(7) ⊣
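Each rule application above is semantically valid, so the whole argument form should check out in the truth-table system as well. The following sketch (Python, offered only as an external check; the function name `valid` is our own) searches all thirty-two rows for a counterexample:

```python
from itertools import product

def valid(premises, conclusion, num_vars):
    """Truth-table validity: no row makes every premise true and the conclusion false."""
    for row in product([True, False], repeat=num_vars):
        if all(prem(*row) for prem in premises) and not conclusion(*row):
            return False  # found a counterexample row
    return True

imp = lambda a, b: (not a) or b  # material implication

# Variables in the order p, w, t, q, r.
premises = [
    lambda p, w, t, q, r: imp(p and w, t),
    lambda p, w, t, q, r: imp(t and q, r),
    lambda p, w, t, q, r: p,
]
conclusion = lambda p, w, t, q, r: imp(w and q, r)
print(valid(premises, conclusion, 5))  # True
```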
The following derivation seems to request something odd. An analysis shows that the single provided premise translates, into our proof-theoretical system, a tautology of sentential logic (as we can ascertain or verify by using the truth table method). But from a tautology we should be able validly to derive only another tautology. Indeed, the requested conclusion is also – translated into a semantic or truth-tabular system – tautologous. We examine an example like this to illustrate that our method is schematic, concerned not with semantic evaluations but only with the application of rules; we are also interested in the power of our proof-method: this method is guaranteed to be missing no rule that would be needed to derive whatever validly follows.
2. ~ (t ∨ ~ t)  2⎺
3. ~ t ∙ ~ ~ t  DeM(2)
4. ~ ~ (t ∨ ~ t)  IP(2⎺–3)
5. t ∨ ~ t  DN(4) ⊣
We might wonder if we could possibly liberalize our system's restrictions by making available self-justifying lines (automatically justified, insertable at any point in the proof sequence): these would be formulas that translate tautologies of sentential logic (which we can check for their logical status by using the truth table method). From a theoretical point of view, this is like a "cheat," since our system is not supposed to smuggle in extraneous results. If something like this were permitted, it would be a nod to mere convenience and a dilution of our project. A better case can be made for allowing insertion, as self-justifying lines, of formulas that have already been proven by means of the derivation procedure. This could shorten proofs, and perhaps make them easier in other ways; obviously, one would have to keep track of previous endeavors.
certain specified derivational means. We do not enter into such philosophical debates
in this context. The standard view is that theses are provable forever, so to speak,
regardless as to whether they are in fact proven or not. Our formal system is present
in its entirety (we do not need to torture ourselves speculating about the kind of
entity such a system may be or what kinds of things it deals with); on this account, we
eke out the proofs as we compile the lines from an eternally subsisting systematic
aggregate. We may still defend a practice that would allow us to add schematic
instances of theses as readily available lines to be used in proofs; the justification
would not be based on an evolution of our system, and the fact of our discovery of
such theses would have to be regarded as irrelevant to their correctness.
NEGATED MATERIAL IMPLICATION
• ~ (□ ⊃ ◊) ⊢ □ ∙ ~ ◊
1. ~ (p ⊃ q) 1
2. ~ (~ p ∨ q) CE(1)
3. ~ ~ p ∙ ~ q DeM(~/∨)(2)
4. p ∙ ~ q DN(3)
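Because this is an equivalence (a replacement schema, not merely a one-way rule), ⌜~ (p ⊃ q)⌝ and ⌜p ∙ ~ q⌝ must agree on every row of the truth table. A quick external check in Python (illustrative only; the helper `imp` is our own shorthand for material implication):

```python
from itertools import product

imp = lambda a, b: (not a) or b  # material implication

# Negated material implication: ~(p > q) matches p . ~q row by row.
agree = all((not imp(p, q)) == (p and not q)
            for p, q in product([True, False], repeat=2))
print(agree)  # True
```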
IMPLYING SELF-NEGATION
• □⊃~□⊢~□
1. p ⊃ ~ p 1
2. ~ p ∨ ~ p CE(1)
3. ~ p Idemp(∨)(2)
STRENGTHENING OF THE ANTECEDENT
• □ ⊃ ◊ ⊢ (□ ∙ △) ⊃ ◊
1. p ⊃ q 1
2. ~ p ∨ q CE(1)
3. ~ r ∨ (~ p ∨ q) Add(2)
4. (~ r ∨ ~ p) ∨ q Assoc(∨)(3)
5. ~ (r ∙ p) ∨ q DeM(~/∙)(4)
6. ~ (p ∙ r) ∨ q Comm(∙)(5)
7. (p ∙ r) ⊃ q CE(6)
1. (p ∙ r) ⊃ q conclusion
2. ~ (p ∙ r) ∨ q CE(1)
3. (~ p ∨ ~ r) ∨ q DeM(~/∙)(2)
4. (~ r ∨ ~ p) ∨ q Comm(∨)(3)
At this point we wish to derive, working backwards toward the given premise, something close to ⌜p ⊃ q⌝ – which is exactly the given premise. Knowing that we can use CE again, we continue:
5. ~ r ∨ (~ p ∨ q) Assoc(∨)(4)
6. ~ r ∨ (p ⊃ q) CE(5)
We ought to think at this point that we can derive line 6 from the given premise
by addition: there is no other way, as we have indicated, for introducing a new letter
symbol. We marvel that the derivation sequence we have engineered, starting from
the conclusion and working backwards, is like the proof we constructed above but
with the lines turned upside down. The symmetry breaks at line 6 of the reverse
(conclusion-premise) derivation sequence. Of course, addition is not a replacement
or equivalential rule; it is correct to apply it only in one direction. This accounts for
the break of symmetry we have come upon. Up to this point, in our conclusion-
premise sequence, we have been applying replacement rules which are reversible. A
move we can insert at this point, which may seem complicated but possesses some
degree of elegance, is to insert the new letter that the conclusion of the requested
proof sequence contains. Because of the irreversibility of this move, we bracket the
line of the insertion of the missing letter (missing from the given premise); we also
bracket any subsequent line that depends on any other bracketed line. Thus, we may
continue. We have completed our reverse derivation once we have produced the line
with the given premise.
7. [r] [insertion of novel letter]
8. [~ ~ r] [DN(7)]
9. [p ⊃ q] [DS(6, 8)]
Now we have the proof we produced earlier if we turn this reverse derivation
upside down and remove all bracketed lines (except for the premise line.)
WEAKENING OF THE CONSEQUENT
• □ ⊃ ◊ ⊢ □ ⊃ (◊ ∨ △)
1. p ⊃ q 1
2. ~ p ∨ q CE(1)
3. (~ p ∨ q) ∨ r Add(2)
4. ~ p ∨ (q ∨ r) Assoc(∨)(3)
5. p ⊃ (q ∨ r) CE(4)
EXPLOSION
• □∙~□⊢◊
1. p ∙ ~ p  1
2. ~ q  2⎺
3. ~ ~ q  IP(2–3, 2⎺)
4. q  DN(3)
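Explosion is semantically valid in a vacuous way: since no row of the truth table makes ⌜p ∙ ~ p⌝ true, no row can make the premise true and the conclusion false. A sketch of the counterexample search (Python, illustration only):

```python
from itertools import product

# Search for a counterexample row: premise p . ~p true, conclusion q false.
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if (p and not p) and not q]
print(counterexamples)  # [] -- the premise is true on no row at all
```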
EXPANSION
• ◊⊢□∨~□
1. p  1
2. ~ (q ∨ ~ q)  2⎺
3. ~ q ∙ ~ ~ q  DeM(~/∨)(2)
4. ~ q ∙ q  DN(3)
5. q ∙ ~ q  Comm(∙)(4)
6. ~ ~ (q ∨ ~ q)  IP(2–5, 2⎺)
7. q ∨ ~ q  DN(6)
The implementation of the commutative rule on line 4 is needed because the shape of the indirect proof rule, as we have legislated it in our system, has the negated formula to the right and not to the left; we could have provided a liberalized schema for this rule to forestall the need for such a maneuver, of course, but we need to apply the rules exactly as they are given – and this is a good opportunity to clarify this matter in action.
SIMPLIFICATION OF ANTECEDENT
• (□ ∙ ◊) ⊃ △ ⊢ (□ ⊃ △) ∨ (◊ ⊃ △)
1. (p ∙ q) ⊃ r 1
2. ~ (p ∙ q) ∨ r CE(1)
3. (~ p ∨ ~ q) ∨ r DeM(~/∙)(2)
4. (~ p ∨ ~ q) ∨ (r ∨ r) Idemp(∨)(3)
5. ((~ p ∨ ~ q) ∨ r) ∨ r Assoc(∨)(4)
6. (~ p ∨ (~ q ∨ r)) ∨ r Assoc(∨)(5)
7. (~ p ∨ (r ∨ ~ q)) ∨ r Comm(∨)(6)
8. ((~ p ∨ r) ∨ ~ q) ∨ r Assoc(∨)(7)
9. ((p ⊃ r) ∨ ~ q) ∨ r CE(8)
10. (p ⊃ r) ∨ (~ q ∨ r) Assoc(∨)(9)
11. (p ⊃ r) ∨ (q ⊃ r) CE(10)
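Despite the tedium of the rule applications, the result itself is easy to certify semantically. A brute-force check over the eight rows (Python, external to ∑∎ and purely illustrative):

```python
from itertools import product

imp = lambda a, b: (not a) or b  # material implication

# (p . q) > r entails (p > r) v (q > r): no row is a counterexample.
ok = all(imp(p, r) or imp(q, r)
         for p, q, r in product([True, False], repeat=3)
         if imp(p and q, r))
print(ok)  # True
```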
The proliferation of tedious moves in this proof cannot escape attention, but the example illustrates how the shapes of the rules, as they have been established in the system in use, dictate how to move; no permission exists to disregard this or to depend on extraneous intuitions or licenses. Although we have liberalized the conventions about the use of parentheses around disjunction and conjunction symbols, we may still use parentheses: this is not prohibited; the placement of parentheses is reintroduced conveniently to guide and highlight the shifts of letters and their relative locations as the proof unfolds line by line through applying the relevant rules of derivation.
Exercises
truth table system in order to check: the truth table system is semantic, because
it works with true-false truth values, whereas our Natural Deduction system is a
proof-system. We can check validity with the truth table system.)
d. Why is it that failure to construct a proof for a given argument form does not
establish that the given argument form is not correct?
2. Prove that the following instances of logical consequence are valid in the
system ∑∎.
a. SIMPLIFICATION OF ANTECEDENT
(□ ∨ ◊) ⊃ △ ⊢ (□ ⊃ △) ∙ (◊ ⊃ △)
b. SIMPLIFICATION OF CONSEQUENT
□ ⊃ (◊ ∙ △) ⊢ (□ ⊃ ◊) ∙ (□ ⊃ △).
□ ⊃ (◊ ∨ △) ⊢ (□ ⊃ ◊) ∨ (□ ⊃ △)
c. NESTED IMPLICATIONS
(□ ⊃ ◊) ⊃ △ ⊢ (~ □ ⊃ △) ∙ (◊ ⊃ △)
(□ ⊃ ◊) ⊃ □ ⊢ ~ □ ⊃ ◊
d. COMBINING IMPLICATIONS
(□ ⊃ ◊) ∙ (△ ⊃ ○) ⊢ (□ ∙ △) ⊃ (◊ ∙ ○)
e. SELF-DISTRIBUTION OF IMPLICATION
□ ⊃ (◊ ⊃ △) ⊢ (□ ⊃ ◊) ⊃ (□ ⊃ △)
f. SWITCHING ANTECEDENTS
□ ⊃ (◊ ⊃ △) ⊢ ◊ ⊃ (□ ⊃ △)
3. Construct proofs in ∑∎, deriving the conclusion from the given premises.
a. (p ⊃ q) ⊃ p ⊢∑∎ p
b. p ⊃ q, ~ p ⊃ q ⊢∑∎ q ∨ s
c. p ⊃ q, p ⊃ ~ q ⊢∑∎ p ⊃ s
d. p ≡ q, q ≡ r ⊢∑∎ p ≡ r
e. ((p ⊃ r) ⊃ (q ⊃ r)) ⊃ s, q ⊃ p ⊢∑∎ s
f. p ⊃ r, q ⊃ r, p ∨ q ⊢∑∎ ~ r ⊃ q
g. p ≡ q, ~ p, ~ r ⊃ q, s ⊃ r ⊢∑∎ s ⊃ r
h. (p ∙ q) ⊃ s, t ∨ w, t ⊃ ~ s, s ⊃ ~ w, q ⊢∑∎ ~ p
i. (p ∙ q) ⊃ s, q ⊢∑∎ p ⊃ s
j. ~ (p ⊃ (q ⊃ ~ p)), p ⊃ (q ⊃ t), ~ t ∨ s ⊢∑∎ s
k. r ≡ (t ⊃ w), ~ (t ∨ r), r ⊃ (s ∨ t), ~ s ⊢∑∎ t
4. A logical thesis – which is semantically conceptualized as a logical truth or what
we called also a tautology – can be defined as a formula that is derivable from
the empty set of premises. Can we adapt our system ∑∎ to determine if a given
formula has the status of being a thesis? Why or why not? Can we use the proof
4.5 Other Natural Deduction Systems 207
We legislate that the same grammar is to be used as with the system of natural
deduction we have already studied. The grammar of the formal system regulates
how symbolic expressions are formed so that they are characterized as well-formed
or correct, with no other concatenations of symbols accepted as grammatically cor-
rect or well-formed. We number each line of the proof by using numerals from the positive integers, and we write all the way to the right of each line the justification for the derivation of the line, which can also be thought of as an account of what bestows a "right" to obtain the written line by derivational means in our system. All
these expressions are spelled out in our enhanced metalanguage for the constructed
system, in the present case the metalanguage being (∑||), which is an open-ended
and conveniently adapted fragment of English reinforced with tokens of the connec-
tive symbols we use in the official grammar.
For the system ∑||, we add one more symbol, the so-called absurdity symbol: ⏊. We should not say that this symbol is properly to be regulated, since it is considered an indication of logical absurdity or nonsense. We could think of it as an exclamation symbol that marks a derivation line to indicate that something has gone wrong from a logical point of view. Yet we do mark lines this way because, as we will see, the rules for the connective symbols enter into interplays with the appearance and removal of this symbol. We could, by convention, even present rules for this symbol – the symbol of absurdity – but this can also be frowned upon on the basis that this symbol shows us that logical nonsense has accrued into
our proof lines. Corresponding to a semantic system of sentential logic, like the one we used for truth tables, we can think of this symbol, ⏊, as denoting (we can speak of denoting or referring in semantic terms) a sentence whose truth value is fixedly false: the false sentence. Of course, any sentence whose truth value is constantly false (a logical contradiction) is logically equivalent (has the same logical meaning) with our "false sentence," which is denoted by the absurdity symbol.
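Semantically, then, ⏊ behaves as a sentence constant whose value is False under every valuation, and any contradiction is equivalent to it. A minimal sketch of this reading (Python, illustrative; the name `FALSUM` is ours, not the book's):

```python
from itertools import product

FALSUM = lambda *row: False            # the constantly false sentence
contradiction = lambda p: p and not p  # any contradiction will do

# The contradiction agrees with FALSUM on every valuation.
ok = all(contradiction(p) == FALSUM(p) for p in (True, False))
print(ok)  # True
```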
The Fitch-style proof system utilizes vertical lines, placed next to the numeral
labeling each line and the formal symbolic expression on the proof-line. New verti-
cal lines can be drawn to the right to mark the spot where a posited assumed premise
or assumption is entered; this generates a subproof which is territorially marked,
and pictorially indicated, by the vertical lines to the left. We have already borne wit-
ness to the significant role played by assumed premises and the machinery needed
for discharging any such assumed premises before the proof can terminate: the
Fitch-style proof shows visibly when such an assumed premise is laid down by the
vertical line to the left; and this vertical line cannot be the main or initial vertical line
(which is mandatory for any proof) that has been drawn next to the numeral that
labels the opening line. Thus, we have a main body of proof, always, which is ter-
ritorially bound by the leftmost vertical line and is continuous and uninterrupted to
the end of the proof (the terminating or final line with the target conclusion for-
mula.) Vertical lines drawn appropriately to the right mark the assumed premises
that are entered and they bound the subproofs that are generated accordingly. Since
we can start a subproof within a subproof (or, we can lay down an assumed premise
before another assumed premise has been discharged), this means that we can
always move to the right and draw a new vertical line that indicates a new assumed
premise being laid down and a new subproof initiated accordingly. There are strict
regulations, or constraints, on how the assumed premises are discharged, and in what order
they are to be discharged; also, there are appropriate regulations managing how
lines within subproofs are to be made available to other subproof areas and to the
main proof area. Since the visual line-markers are entrusted with the role of indicat-
ing assumptions and subproofs, the Fitch-notation dispenses with justification lines
that indicate that a premise or assumed premise has been entered. The premise or
premises are flanked to the left by the leftmost line while assumed premises, and
their generated subproofs, are flanked by inner vertical lines. The layout shows perspicuously how subproofs are nested within the main body of the proof and how one subproof is nested within another. Discontinuation of the vertical line for a subproof is exacted when the assumed premise or premises are to be discharged; for this purpose, the next line is written in the body of proof to the left (which may be a return to the main proof area or to a prior subproof whose assumption has not yet been discharged). The Fitch-style notation does require justification lines spelling out
the name of the rule that has been applied and the numerals labeling the lines on
which the rule has been applied. As an example, here is what a proof looks like.
We construct our system ∑|| in an austere way: we supply only rules that cannot
be dispensed with if the system is to be able to construct proofs for all, and only, the
argument forms that are determined to be valid in the semantic approach we fol-
lowed when we applied the truth table decision procedure. Because the truth table
method is known to have this characteristic of checking as valid all and only the
argument forms that should be accepted as “correct” in the standard sentential logic,
we can repair to the familiar truthtabular approach to use as our guide. It is to be
understood that, in so doing, we are translating back and forth because the natural
deduction or proof-theoretical approach is not a semantic approach – it does not
deal with truth values and it does not build corresponding narratives to give referen-
tial meanings to its items. Of course, the tediousness and ultimately cumbersome
excess of indicating systematically how we move back and forth between semanti-
cal and proof-theoretic system militates against showing this operation in detail. We
can use some metalinguistic notation to make plain the correspondence between the
truth table system and a natural deduction proof system like the one we are building
now. By “⊩” we symbolize, metalinguistically, the relation of logical consequence
for the truth table system: what is to the left of the semantic turnstile symbol is
understood to be a set of formulas which are the premises of the argument form to
be examined by means of our truth table procedure. To the right of the turnstile, is
the solitary conclusion formula (since our systems do not deal with multiple-conclusion argument forms or proofs, although this is actually feasible). The proof-
theoretical logical consequence, likewise, has a set of the given premises to the left
and the conclusion that is proven by means of a constructed proof to the right of the
turnstile which is in this case symbolized by “⊢”. Now, we can write out in our
metalanguage certain expressions that track the relationship we have been referring
to, and we can elaborate on what is presented. The set theoretic notation uses left
and right set-brackets for inclusion of the members of the set (of premises.) We keep
the symbols of formulas the same as a matter of liberal convenience but recall how
we spoke above of translating back and forth to go from the truth table system ∑⊞
to the proof-theoretic system ∑||.
• {𝓐1, ..., 𝓐n} ⊩ 𝓒 if and only if (iff) {𝓐1, …, 𝓐n} ⊢ 𝓒
• <The semantic and proof-theoretic systems should match, which means: there is
a way to construct a proof in the ⊢−system from the premises {𝓐1, .., 𝓐n} and
with conclusion 𝓒, if and only if there is a successful check by the truthtabular
method in the ⊩−system that the argument form with premises {𝓐1, .., 𝓐n} and
conclusion 𝓒 is valid.
• In the Fitch-style system, a constructed proof for a thesis – as we call what corresponds to a semantic tautology – will have a main proof area with no given premises whatsoever; there will have to be at least one or possibly more vertical lines to the right generating subproofs; when all the assumed premises have been discharged appropriately and, so, the subproofs have all been completed, the thesis is written in the main body of the proof. This should be the only line written in the main proof area. In this fashion, we have constructed a no-premises proof for the thesis.>
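The semantic side of the biconditional relating ⊩ and ⊢ is mechanically computable by enumerating valuations. A sketch (Python, again only an outside illustration; `entails` is our own name for the relation written "⊩" in the text):

```python
from itertools import product

def entails(premises, conclusion, num_vars):
    """Semantic consequence: every valuation satisfying all premises
    satisfies the conclusion."""
    return all(conclusion(*row)
               for row in product([True, False], repeat=num_vars)
               if all(prem(*row) for prem in premises))

imp = lambda a, b: (not a) or b  # material implication

# {p, p > q} semantically entails q: the truth-table counterpart of modus ponens.
print(entails([lambda p, q: p, lambda p, q: imp(p, q)],
              lambda p, q: q, 2))  # True
```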
The Fitch-style notational system may look like the example below. Vertical lines are used, placed next to the numeral that labels every line of the proof and the proof lines themselves; additionally, it is mandated that vertical lines are drawn to the left
of every subproof that is generated by an assumed premise. Other methods can be
conjured up to mark the assumptions, and to indicate the discharge of an assumed
premise, but the Fitch style notational method surrenders this function to a visual
presentation that pushes vertical lines to the right for each subproof. Because of the
reliance on the visual representation, there is no need to enter characterizations of
premises and assumed premises, or to mark that discharge of assumed premises has
taken place, on the justification line. At any rate, the justification lines, like the
numerals, are not officially elements of the formal notation and are considered as
being transacted in the metalinguistic idiom we allow ourselves to have. This also
means that any adjustments to such conventions would not constitute a violation of
formal grammar.
The vertical lines are presumed continuous from top to bottom even as our nota-
tional convention shows them as segments of lines drawn to the left of each proof
line. We may use superscripts in front of subproof lines because some rules require
that two subproofs be generated before we can complete and discharge. We will be
using given assumptions (or given premises) as well as assumed premises or posited
assumptions which we lay down: the Fitch-style representation indicates this visu-
ally; although visual, this is not a psychologically flexible notational facility but a
strictly enforced regulation that is dictated by our formal requirements. We notice
that there are vertical strokes from top to bottom, extending from the first to the very
last line of the proof (or, better we should call it, derivation.) The left-most column
of such vertical strokes or lines encompasses or marks the territory of the main deri-
vation. Since this is the left-most array of vertical strokes, we may as well say that
the entire derivation is encompassed in this way. But the additional license we are
given is to generate subderivations or subproofs; these are generated by laying out
an assumed or posited premise; the subderivation that is generated in this way is
also marked, territorially, by its own vertical strokes, from the top to the bottom of
the subderivation: these vertical strokes are to the right (which is inevitable since all
the way to the left we have the main proof body marking lines that are essentially
placed next to the numerals by which we label the lines of the whole proof.) As we
will see, there are strict rules as to how a subderivation is considered to be com-
pleted, so that we can return to the body of proof outside (and, thus, to the left) of
the completed subderivation. It is also remarkable – and absolutely
1. | 𝓐1
2. | ∶
3. | 𝓐n
4. | ¹| 𝓑1
5. | ¹| ∶
6. | ²| 𝓑k
7. | ²| ∶
8. | ∶
9. | | 𝓑l
10. | | ∶
11. | | | 𝓑i
12. | | | ∶
13. | | 𝓑j
14. | 𝓒m
15. | ∶
16. | 𝓒
The given premises are to the right of the leftmost vertical line which marks the
main-proof area. It is within this area of the main proof body that we have to termi-
nate when we derive the indicated conclusion 𝓒. Subproof lines are to the right:
each one is initiated by an assumed premise. It is possible that we need a subproof
within a subproof; again, we move to the right and generate the subproof by means
of the assumed premise that is needed. The rules will guide us about how to dis-
charge the assumed premises and complete the corresponding subproofs. A crucial
restriction is that we cannot terminate a subproof before we have terminated all its
internal subproofs. Another way of putting it: the latest generated subproof has to be
completed before the preceding subproof is completed. Ultimately, no subproofs
can be left without completion before we come to the end of the main proof. It is
possible, however, to be returning to the main body of the proof before other sub-
proofs are generated subsequently. There are also vital restrictions that are needed
to regulate how lines can be moved from the main proof to subproofs and within
subproofs in all possible combinations – what such moves are permissible and what
moves are prohibited. If the requisite restrictions are disregarded, we would face the
calamity of being able to prove, from given premises or with no premises, conclu-
sions or theses that are not valid or tautologies in the corresponding semantic sys-
tem: in other words, we would be able to construct proofs for what should not be
provable! Indeed, one way to tweak the system, by imposing the appropriate restric-
tions, and to justify such restrictions, is by showing that something that should not
be provable can be proven without the restriction. Such proofs are called categori-
cal proofs.
To summarize the types of regulated procedural moves that are allowed in
this game:
1. vertical lines (strokes) to mark lines of proofs and subproofs (subderivations);
2. laying down assumed or posited premises to generate subproofs;
3. underlining of premises and assumed premises;
4. nesting or subordinating (to the right) the vertical lines for the subderivations
(with no limits as to nesting subderivations within subderivations);
5. completing subderivations to return to the superordinate subderivation (or to the
main body of the proof) in the reverse order in which the subderivations were
generated;
6. discharging the assumed premises that generate the subderivations by complet-
ing those subderivations (by means of applying the appropriate rule);
7. terminating the proof or derivation (always in the main body) with the line of the
conclusion-formula;
8. deriving categorical conclusion lines which, as we will see, are no-premise con-
clusions: in such proofs, only subordinate derivations are used.
Our system ∑|| will have the rules, all and only the rules, that are sufficient for
accomplishing the task of harmonizing with the semantic system and, thus (given
that this relation is transitive) with the characteristic logical consequence relation of
the standard logic of sentences. This means that proofs are more difficult to con-
struct in ∑||. The system can be conveniently liberalized by addition of other rules,
which is something one could do as a pastime. Our present purpose is to build a
proof system that is more austere, as we indicated already, and allows us to see more
clearly some of the deeper theoretical features that characterize the classical senten-
tial logic. The system we construct next, a sequence system, will also show us per-
spicuously such features – although, for the purist and, frankly, for the systematic
m. | 𝓐
∶ | ∶
n. | | 𝓑1
∶ | | ∶
u. | | 𝓐   Reit m
∶ | | ∶
z. | | 𝓑k
∶ | | | 𝓑2
∶ | | | ∶
∶ | | | 𝓑k   Reit z
∶ | | | ∶
v. | 𝓒
We may add a rule called “Repetition” and indicated by “Rep” in the justification
line: this rule allows rewrite of a formula within the same proof or subproof area. A
rule like this ought to be distinguished from Reiteration because there are restric-
tions that we need to apply to Reiteration, as we will have a chance to find out, but
the Repetition rule needs no restrictions or any tweaking to prevent proofs of invalid
argument forms from succeeding.
m. | 𝓐
∶ | ∶
n. | | 𝓑1
∶ | | ∶
u. | | 𝓑k
∶ | | ∶
z. | | 𝓑k   Rep u
∶ | | ∶
x. | 𝓐   Rep m
v. | 𝓒
It may be questioned why we would ever need a repetition rule like the one indicated, but this will surface as an issue when we consider constructing the proof of the thesis of self-implication, ⌜p ⊃ p⌝ (which certainly ought to be provable). We cannot attend to the details before we have legislated the rule for introduction of the implication symbol.
∙-Rules.
The introduction rule for the conjunction connective should naturally sanction
deriving a formula with the conjunction symbol as its main connective symbol only
when both conjunct formulas have been given. We could point to linguistic actions
to motivate adoption of this rule – although our proof-theoretic “game” ought to be
understood as self-standing. The competent speaker understands the meaning of
“and” to be such that she will permit correct assertion of “φ and ψ” only if it is cor-
rect to assert φ and also it is correct to assert ψ. Notice that the order of assertions
of φ and ψ does not matter; what matters is that both are regarded as correctly
assertable. We will be able to prove that conjunction is commutative by using our
rules for this connective. In doing so, we see clearly that the order in which the
conjunct formulas are given is not important; the two conjuncts can be joined by
means of the conjunction symbol according to the rule for conjunction introduction,
“∙I”. This and all the rules we lay down can be justified by moving over to the
semantic system of the truth table and checking that argument forms that have as
premises the rule’s given formulas (translated, of course, into the truthtabular sys-
tem) and have as conclusion the rule’s derived formula are valid. Indeed, reasoning
semantically, if φ is true and ψ is true, it is a matter of logical necessity that “φ and
ψ” is true, it cannot be false; if “φ and ψ” could be false, then, by definition of the
connective conjunction, at least one of φ and ψ ought to be false: but this is logically
impossible in this case since we are given that φ is true and ψ is true. Notice that, in
so reasoning, we are using a method called reductio ad absurdum (proof by contra-
diction, indirect proof). This is a classically valid rule of inference, but it is not valid
for other logics like one called intuitionistic. Of course, we are setting up a classical
sentential logic system here, so we don’t need to worry about this. The rules will
always be given by means of schemata: figures, shapes, which ought to be thought
of as instructions or recipes about how to proceed and, maybe, as something like
cookie cutter shapes that are betokened by the actual parts of the constructed proofs.
The rule schemata are presented in the metalanguage – hence, different symbols,
metalinguistic symbols, are used in writing the rules.
m. | 𝓐
∶ |∶
n. |𝓑
∶ |∶
| 𝓐 ∙ 𝓑 ∙I m, n
m. | 𝓑
∶ |∶
n. |𝓐
∶ |∶
| 𝓐 ∙ 𝓑 ∙I m, n
We indicate that the order does not matter, as we explained above. Strictly speak-
ing, we do not need to show this as it is implicit in the instructions that comprise
the rule.
The elimination rule for the conjunction symbol, “∙E”, authorizes deriving any
one, and both, of the conjuncts of the given conjunction formula. The linguistic
motivation and the semantic, truth-tabular justification can be given in the same vein
as above.
m. |𝓐∙𝓑
∶ |∶
n. | 𝓐 ∙E m
m. |𝓐∙𝓑
∶ |∶
n. | 𝓑 ∙E m
We can say that the elimination rule for conjunction has two parts in it or we
could stipulate two rule schemata separately, one designated as the left- and the
other as the right-elimination rule. We opt to consider this as one rule with two parts
and we dispense with a requirement for showing whether the elimination is from the
left or from the right of the conjunction symbol.
Now, as an example, we show that conjunction is commutative. Strictly speak-
ing, to show that we have to construct the derivation in both directions, from ⌜ p ∙
q⌝ to ⌜q ∙ p⌝ and from ⌜q ∙ p⌝ to ⌜p ∙ q⌝. Note the so-called Quine brackets we use
here, since we are mentioning our symbols in this metalanguage we are deploying.
The proof in the opposite direction goes through similarly. We keep in mind that,
after we have laid down our rules for the equivalence or biconditional symbol, we
can also prove commutativity of conjunction by constructing a proof for ⌜(p ∙ q) ≡
(q ∙ p)⌝.
∑||(p ∙ q ⊢ q ∙ p)
1. |p ∙ q
2. |p ∙E 1
3. |q ∙E 1
4. |q ∙ p ∙I 2, 3
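The semantic check described above – translating a rule or derivation into the truth-tabular system and testing the corresponding argument form for validity – can be carried out mechanically. Here is a minimal sketch in Python (an illustration only, not part of the book's formal apparatus; the helper name `valid` is ours) that enumerates all valuations and confirms the argument form just proven, and its converse, to be valid:

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff no valuation makes
    every premise true and the conclusion false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# p . q |- q . p  (commutativity of conjunction, one direction)
print(valid([lambda p, q: p and q], lambda p, q: q and p))  # True
# ... and the converse direction
print(valid([lambda p, q: q and p], lambda p, q: p and q))  # True
```

The same helper can be pointed at any rule schema: translate the given formulas into premises and the derived formula into the conclusion.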
∨-Rules.
The rules for the inclusive disjunction symbol, as expected, are of the introduction and elimination variety. The elimination rule for the vel or wedge (as we may refer to the symbol by name) is the only elimination rule we have that discharges assumptions. This breaks a certain symmetry, since all the other rules that can be used to discharge assumed premises (and require positing of such assumed premises for their application) are introduction rules. As we will see, for instance, the introduction rule for the implication symbol mandates that the antecedent of the conditional we aim to derive (φ in φ ⊃ ψ) be laid down as an assumed premise: after the line with ψ has been derived, we may invoke the rule for implication
appeal, not just as a matter of convenience but because we have metalogical proof
that this is the case – is to the truth-tabular method, if we are so inclined. And our
natural deduction system, with the irrelevantist rules and all, harmonizes with the
truth table system. It follows from this that, indeed, this logic does not observe rel-
evantist constraints, defined accordingly, but its defense will have to stand or fall by
appealing to the ability of the logical systems constructed for this logic to prove
what they ought to. If the controversy arises, it has to be carried out at a higher,
philosophically motivated level.
We could mandate, of course, certain relevantist restrictions: for instance, that no
formula can be derived from φ unless it shares with φ atomic variables. We can try
different variations of this, and we should because we might still end up, with the
revised rules, proving theses that we regard as irrelevantist. All this is interesting but
we must point out that by imposing such constraints we would have entered already
into the territory of alternative, non-classical logics, and this is something we cannot
do here.
Here is the schema or shape giving the recipe for the introduction rule for the
disjunction symbol: it has two parts to it. Perhaps we should regard this as two rules,
one for left-introduction and one for right-introduction of the symbol. We bypass so
tedious a request here, as it is customary to do. But we do so with a guilty con-
science because, as we have indicated so often, these are shapes or figures (“sche-
mata”) and they are not to be regarded as amenable to intuitive elaboration or
proximate application.
m. |𝓐
∶ |∶
n. | 𝓐 ∨ 𝓑 ∨I m
m. |𝓐
∶ |∶
u. | 𝓑 ∨ 𝓐 ∨I m
We could also present the schema as follows. In any case, there is a left and a
right variant, depending on whether the added formula is to the left or to the right of
the given formula.
m. |𝓐
∶ |∶
u. | 𝓐 ∨ 𝓑 ∨I m
m. |𝓑
∶ |∶
u. |𝓐 ∨ 𝓑 ∨I m
Even though it is a permissive or liberal rule – if by this we mean that any for-
mula can be joined to the given formula by the disjunction rule – still the rule does
not engender any anomalies for the logical system, which would be a problem.
Without getting into too many details, let us explain what is at stake. Arthur Prior,
aiming at a philosophic attack on the claim that the meanings of connectives are
defined by their introduction and/or elimination rules, coined a connective for which
he provided the following rules. He called this connective “tonk”; let us symbolize it by “τ”. The problem, as we will see, is that a connective like tonk trivializes
the whole system in the sense that anything is provable from anything: this is beyond
irrelevance in that it is an entirely open-ended license to derive whatever from what-
ever (not necessarily in one step, but that is not important.) We will then argue that
our inclusive disjunction introduction rule does not work like a tonk-like connective.
m. |𝓐
∶ |∶
u. | 𝓐 τ 𝓑 τI m
m. |𝓐 τ 𝓑
∶ |∶
u. | 𝓑 τE m
And yet, the tonk connective rules seem legitimately drawn. In fact, its introduc-
tion rule, as you should notice, is like that for our disjunction connective while its
elimination rule is like that of our standard conjunction connective. But we cannot
simply ban this combination by arguing that rules should be kept separate: after all,
we will see that the rules of the equivalence connective are organically connected
with those for the implication connective. Still, something is fundamentally wrong.
We can prove anything from anything, as we said, and here is how, as ought to be
evident from inspection of the rules.
1. | p
2. | p τ q   τI 1
3. | q   τE 2
The point is that, surely, this is a would-be connective whose meaning is nonsen-
sical: if introduction and elimination rules were sufficient for defining the meanings
of connectives, we would have, as a consequence, absurdity. Since we cannot, by
definition, have absurdity, it follows that connectives are not properly definable by
means of their introduction and elimination rules. One way to respond to this pro-
found challenge raised by Prior is by showing that the tonk symbol does not corre-
spond to any genuine connective because its presented rules violate some reasonable
and basic requirement which all proper connectives ought to obey. One such require-
ment is so-called conservativity, and we will briefly define this constraint on the
definitions of connectives by means of rules; tonk violates conservativity but our
inclusive disjunction connective, as defined by its rules, does not.
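Prior's point can also be examined semantically. The following Python sketch (an illustration only; the helper names are ours) searches all sixteen binary truth functions and finds that none of them validates both tonk rules at once, although inclusive disjunction validates the tonk-introduction pattern and conjunction validates the tonk-elimination pattern:

```python
from itertools import product

ROWS = list(product([True, False], repeat=2))

def validates_intro(f):
    # tonk-introduction: from A, derive A-tonk-B
    # (sound iff A's truth forces f(A, B) true for every B)
    return all(f(a, b) for a, b in ROWS if a)

def validates_elim(f):
    # tonk-elimination: from A-tonk-B, derive B
    # (sound iff f(A, B)'s truth forces B true)
    return all(b for a, b in ROWS if f(a, b))

# search all 16 binary truth functions
tables = [dict(zip(ROWS, outs)) for outs in product([True, False], repeat=4)]
both = [t for t in tables
        if validates_intro(lambda a, b: t[(a, b)])
        and validates_elim(lambda a, b: t[(a, b)])]
print(len(both))  # 0: no truth function makes both tonk rules sound

disj = lambda a, b: a or b
conj = lambda a, b: a and b
print(validates_intro(disj), validates_elim(conj))  # True True
```

In truth-tabular terms, then, tonk has no possible interpretation, while each of its two rules, taken separately, belongs to a perfectly respectable connective.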
m. | 𝓐 ∨ 𝓑
∶  | ∶
n. | |𝓐
∶  | |∶
u. | |𝓒
v. | |𝓑
∶  | |∶
w. | |𝓒
z. | 𝓒   ∨E m, n-u, v-w
The formula that corresponds to the law of excluded middle, ⌜p ∨ ~ p⌝, cannot
be derived yet based on the rules we have. We will need to have at our disposal the
rules for negation and, as we will see, not only the introduction and elimination
rules for negation but more than that. As we will recount subsequently, there are
deep logical-philosophical issues involved in this. We are, however, already in a
position to prove theses that express such properties of inclusive disjunction as com-
mutativity and associativity and we show such proofs below, opting for the com-
mutativity proof while we leave the associativity proof as an exercise.
∑||(p ∨ q ⊢ q ∨ p)
1. |p ∨ q
2. | |p
3. | |q ∨ p   ∨I 2
4. | |q
5. | |q ∨ p   ∨I 4
6. |q ∨ p   ∨E 1, 2–3, 4–5
The logical consequence (indicated metalinguistically by the turnstile symbol)
also proceeds in the opposite direction. We can prove, in the same way, that ⌜p ∨ q⌝
is derivable from ⌜q ∨ p⌝. This is as it ought to be because, by characterizing the
disjunction as commutative, we are asserting that the two sides of the formula are
logically equivalent (or, as we should put it in the context of a proof-theoretic sys-
tem, they are inter-derivable.)
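If we render each subproof of the ∨E rule as an implicational premise (our own device for the semantic check, not the official proof-theoretic reading of the rule), the truth-tabular test confirms validity. A minimal Python sketch, with helper names of our own choosing:

```python
from itertools import product

def valid(premises, conclusion):
    """No valuation may make all premises true and the conclusion false."""
    return all(conclusion(*v)
               for v in product([True, False], repeat=3)
               if all(prem(*v) for prem in premises))

implies = lambda x, y: (not x) or y

# vE rendered semantically: A v B, A -> C, B -> C  entail  C
premises = [
    lambda a, b, c: a or b,          # the given disjunction
    lambda a, b, c: implies(a, c),   # first subproof: C from assumed A
    lambda a, b, c: implies(b, c),   # second subproof: C from assumed B
]
print(valid(premises, lambda a, b, c: c))  # True
```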
⊃-Rules.
The introduction rule for the implication (or conditional) symbol is a subproof-
based rule. Let us recall that φ is called the antecedent and ψ is called the consequent
of the implicational formula φ ⊃ ψ. The antecedent of the target implicational for-
mula (which is to be derived by means of the rule ⊃I) must be laid down as an
assumed premise, in this way generating a subproof or subderivation: after the con-
sequent has been derived within the subproof, the rule for introduction of the impli-
cation symbol compels us to exit the subproof, which is considered completed, and
we may now derive φ ⊃ ψ. The assumed premise, the antecedent φ, is discharged
properly by application of the rule. This is also evident in saying that the subproof is
completed – because no such declaration can be made in cases in which the assumed
premise has not been discharged. The intuitive justification of this rule seems quite
irresistible, provided we think in focused fashion about implication – because every-
day linguistic practices and possible psychological biases may be prejudicing the
way we tend to think about if-then statements. For instance, although it ought to be
clear that the antecedent is not given as true when a conditional statement is given as
true, it may be that a competent user of the language could be assuming that the
antecedent is also given. But there is no warrant for making such an assumption. For
some reason, the implication rules may not be as crystal clear as we might be tempted
to claim in laying down these rules – for introduction and elimination of implica-
tion – but, once we turn our undivided attention to the subject, we should be able to
find the rules obvious. So, if, on the assumption that φ is true, ψ is true, then it follows
that “if φ then ψ,” although, of course, this does not license us to think that φ is true
or ψ is true: it is the if-then statement that has to be true. Of course, for our formal
purposes, the justification can also be obtained by repairing to the truthtabular system
and checking the definition of the implication connective. We are ready, then, to pres-
ent the schema or shape of the introduction rule for the implication connective.
m. | |𝓐
∶  | |∶
n. | |𝓑
u. | 𝓐 ⊃ 𝓑   ⊃I m–n
The justification line indicates that the subproof extends from the line labeled m (on which the assumed premise is posited) through the line labeled n. The subproof has to be nested: we cannot start in the main body of the proof, since the rule works only with a subproof and completion of that subproof, as we can see. Of course, the nested subproof we generate by laying down the assumed premise can be within another nested subproof.
The elimination rule for the implication connective is a staple of symbolic logic.
The Latin name for this rule, which we used in an earlier section for a different natural deduction system, is Modus Ponens. The rule sanctions that we may derive the consequent of a given implication insofar as its antecedent is also given. The corresponding argument form, which we may check by means of the truth table system, checks out as valid, for sure. Accordingly, the elimination rule schema for the implication connective symbol is as follows.
m. |𝓐⊃𝓑
∶ |∶
n. |𝓐
∶ |∶
u. | 𝓑 ⊃E m, n
m. |𝓐
∶ |∶
n. |𝓐⊃𝓑
∶ |∶
u. | 𝓑 ⊃E m, n
Let us consider how we may derive the thesis ⌜p ⊃ p⌝. This is a proof constructed
for what would correspond to a tautology formula in the truth table. This is a no-
premises proof, which makes sense when we think of a logical truth as being, in a
natural deduction system, derivable from no premises (even from the empty set of
premises.) Although not a natural way of speaking, there is elegance in this way of
thinking about the truths of logic. Apparently, we have to rely on rules that work
with assumed premises (and discharge such premises for completion of the sub-
proof) in order to be able to construct proofs for no-premises formulas. It is at this
juncture that we have the opportunity to reflect on the use of the rule of repetition,
although it could be so arranged that we do not need that rule to effectuate the deri-
vation successfully.
∑||(⊢ p ⊃ p)
1. | |p
2. | |p Rep 1
3. | p ⊃ p ⊃I 1–2
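As the correspondence mentioned above suggests, a thesis provable from no premises matches a truth-tabular tautology. A minimal Python sketch (an illustration only; the helper names are ours) confirms this for the thesis of self-implication:

```python
from itertools import product

def is_tautology(formula, nvars):
    """True iff the formula is true on every row of its truth table."""
    return all(formula(*row) for row in product([True, False], repeat=nvars))

implies = lambda x, y: (not x) or y

# The thesis of self-implication corresponds to a tautology:
print(is_tautology(lambda p: implies(p, p), 1))  # True
```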
Alternatively, we could accept, and give reasons to support, what we may call
degenerate discharge of the posited assumption (which, of course, would only apply
in this type of case, when the posited assumption is “collected” or discharged by
itself.) We may argue that, laying down a formula as an assumption, and thus initiat-
ing a subderivation, we already have the formula in the subproof (obviously) and,
therefore, we are within our derivational rights, so to speak, to complete the subde-
rivation by using the introduction rule for the implication symbol. The catch is that
introduction of the conditional symbol requires a subderivation and return to the
main body once the subderivation has been completed (thereby discharging the
assumed premise). Since we have placed the formula within the subderivation, we
may sanction as a way of completing the subderivation the introduction of the con-
ditional symbol connecting the assumed (now discharged) premise with itself. In
this way we dispense with use of the rule of repetition. The proof looks like this:
1. | |p
2. | p ⊃ p ⊃I 1–1
It may seem odd but the lines on which the introduction rule was applied should
include the assumption line and the line of the consequent of the derived implica-
tion: accordingly, another way of justifying this construction is by saying that the
antecedent (the assumed premise) line and the consequent line coincide or are iden-
tical in this case. Still, since our rule schema (as a recipe or shape) requires the
subderivation lines as inclusive of the assumed premise as well as the consequent
line, we write the lines for the justification “1–1” instead of “1” even though it is
only the line labeled by the numeral 1 that is included in the subderivation: we sim-
ply follow the rule schema. It is important, of course, that we can make the case that
the introduction rule has been applied correctly for completion of the subproof and
discharging of the assumed premise.
≡-Rules.
The introduction rule for the equivalence (or biconditional) symbol requires two subproofs: one in which 𝓑 is derived from the assumed premise 𝓐, and one in which 𝓐 is derived from the assumed premise 𝓑.
m. | |𝓐
∶  | |∶
n. | |𝓑
u. | |𝓑
∶  | |∶
v. | |𝓐
z. | 𝓐 ≡ 𝓑   ≡I m-n, u-v
The elimination rule for the equivalence symbol also depends in its inception on
how the elimination rule for the implication works. Given the definition of logical
equivalence, we expect that, given φ ≡ ψ and φ, we may derive ψ (as we may do by
application of the ⊃E rule) and also that, given φ ≡ ψ and ψ, we may derive φ (also
by means of the ⊃E rule.) Thus, we expect that the ≡E rule has two parts (something
we also found in the cases of the ∙E rule and the ∨I rule.)
m. |𝓐≡𝓑
∶ |∶
n. |𝓐
∶ |∶
u. | 𝓑 ≡E m, n
m. |𝓐≡𝓑
∶ |∶
n. |𝓑
∶ |∶
u. | 𝓐 ≡E m, n
Of course, the vertical order in which the given premises are presented does not
matter. There is also a left-right factor here (as the case was for ∙E and ∨I). The rule
schema can also be given as follows:
m. |𝓐
∶ |∶
n. |𝓐≡𝓑
∶ |∶
u. | 𝓑 ≡E m, n
m. |𝓑
∶ |∶
n. | 𝓐 ≡𝓑
∶ |∶
u. | 𝓐 ≡E m, n
m. |𝓐
∶  |∶
n. |𝓑 ≡ 𝓐
∶  |∶
u. |𝓑   ≡E m, n
m. |𝓑
∶  |∶
n. |𝓑 ≡ 𝓐
∶  |∶
u. |𝓐   ≡E m, n
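Both parts of the ≡E rule correspond to valid argument forms, as the truth-tabular check confirms. A minimal Python sketch (an illustration only, reading the equivalence connective as truth-value identity; the helper names are ours):

```python
from itertools import product

def valid(premises, conclusion):
    """No valuation may make all premises true and the conclusion false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

iff = lambda x, y: x == y  # truth-tabular equivalence

# Both parts of the =E rule check out as valid argument forms:
print(valid([lambda p, q: iff(p, q), lambda p, q: p], lambda p, q: q))  # True
print(valid([lambda p, q: iff(p, q), lambda p, q: q], lambda p, q: p))  # True
```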
As we indicated, we must have a natural deduction system that can prove every-
thing that should be provable: this target can be characterized by shifting back to the
truth table method, which we have already investigated, and determining what argu-
ment forms are valid. All such argument forms, translated into the natural deduction
system, should have constructible proofs for the conclusions, and from the given
premises, of the proofs. The converse ought also to be the case: any proof we can
construct in the natural deduction system should correspond to an argument form
that tests valid by application of the trusted truth table decision procedure. We can-
not settle for anything less. In our Fitch-style proof system we aim to have a parsi-
monious collection of rules, as few as needed and no rules that can be proven from
other rules. To this effect, we have proceeded so far by assembling introduction and
elimination rules but, as we have anticipated, negation will pose a different chal-
lenge. This indicates a deeper underlying issue about our classical logic: if we
attempt to finish the system by deploying an introduction and an elimination rule for
the negation connective symbol (with the assistance, usually, of a symbol for logical
absurdity), then we do not have all the rules we need to be able to construct all the
proofs we ought to be able to construct. But we will have erected a formal system for natural deduction for minimal logic, a sublogic of the classical logic: everything that is provable in this minimal logic is provable in the standard logic but the converse does not hold. Next, we will add a rule that may at first seem too permissive –
to be called EFQ, after the Latin “ex falso quodlibet” or “from absurdity anything
may be proven”, and also called “explosion” in texts. At this point, we have reached
a natural deduction system for a popular, philosophically motivated, logic called
intuitionist or intuitionistic. But we do not have a system for the standard sentential
logic: characteristically, we are not able to derive in this system from double nega-
tion of a formula X the formula X itself. Of course, we could then add a rule for
double negation elimination and that addresses our problem: we ascend to the full
strength of the standard sentential logic. There are other ways we can proceed to
pass to the final step – other rules we can add to round out our building of the classical logic, including even some rule schemata for implication rather than for negation.
If we try to construct proofs corresponding to these rules, which we need to add, we
would find that we are frustrated in our efforts: specifically, we may find that we are
stuck in being unable to eliminate double negation. And, indeed, we cannot prove
elimination of double negation by using all the other rules we had in our system,
including the introduction and elimination rules for negation. Of course, we need a
rule that permits such elimination – of double negation – and we cannot rely on
what may seem “natural” or obvious: this is never done; the rules that are given to
us within the formal system is all we have and such rule – accurately, rule schemata
or rule shapes – are like cookie cutters, to indulge in a metaphor, that we implement
as we construct instantiations in the actual construction of proofs.
The lesson is that the observed symmetry (with pairs of introduction and elimina-
tion rules for the connective symbols) is broken when it comes to the negation sym-
bol. This is not accidental. We have briefly followed the travails of building layers of logical systems, from the minimal logic before we move to the negation symbols, and then on to intuitionistic logic when we add to the introduction and elimination rules for negation the rule we called EFQ (ex falso quodlibet), which permits derivation of any formula whatsoever from the absurdity symbol that we have in the system. In intuitionistic logic, as we intimated, we cannot derive elimination of double negation. We cannot construct proofs corresponding to many other classically valid
argument forms or tautologies (as can be checked by use of the truth table.) There are
philosophic reasons, which motivate a defense of intuitionistic logic, that make elimination of double negation inadmissible. In intuitionistic thinking, the concept of truth is to be defined so that “X is true” if and only if “there is a constructed proof of X,” with certain other strings attached. The game changes, so to speak. The fundamental notions, like truth, are defined differently. This reflects a certain way of thinking about mathematical truth – and also, extrapolated to linguistic matters, an anti-realist or constructivist view that requires that truth can be assigned only for what is procedurally verifiable or at least in principle verifiable. The meanings of the connectives are also different in this logic given the fundamental reshuffling of deep notions like that of truth. “X is false” ought to be defined, on the basis of this way of thinking, as “a proof is constructed by means of which an assumption of X can be transformed into a proof of logical absurdity – which is a derivation of the absurdity line in the proof.” Now, given these alternative concepts of truth and falsity, it may be the case that it is not the case that it is not the case that X and yet it is not the case that X: reading back into the intuitionistic concepts, this means: it may be that an assumption of not-X can be transformed into a proof of absurdity and yet this does not entitle us to infer that “there is a constructive proof that X.” In other words, double negation elimination ought to fail given these underlying and propelling philosophic motivations – and the same applies to the law of excluded middle, which also fails for intuitionistic logic: it is not necessarily the case that either X or not-X has to be true, since it may be that there is no proof that X and there is no proof that not-X: hence, neither X nor not-X need be true (in the intuitionistic sense of “true.”)
Following this logical-philosophic interlude, we may now lay down the negation
rules. We will allow ourselves the luxury of excess rules at this point: we will show
all the various rules that, as mentioned above, transition us to the classical logic.
First, the elimination and introduction rules for the negation connective are given.
m. |𝓐
∶ |∶
n. |~𝓐
∶ |∶
u. |⏊ ~E m, n
The elimination rule for negation requires interplay with the absurdity symbol.
This symbol, included in our grammar, can be thought of, if we repair to the
semantic view, as denoting a sentence whose fixed or constant truth value is false:
such a sentence is logically equivalent with a logical contradiction or the negation
of a tautology, and so on. Alternatively, the absurdity symbol can be thought of as a connective that is interpreted by a constant function (of any arity) whose fixed output value is false. We have not defined such connectives through our use of the truth table but we are able to do so. Such a connective – for instance, a binary connective – is one that takes the truth value false as output for every possible combination of inputs (or, on every row of the defining truth table). Notice, however, that if we opt for considering the absurdity symbol as denoting the false sentence, we are essentially treating it as a sentence: this is not a connective but we can conceptually think of it as a degenerate or zero-place, zeroary connective.
But we cannot use the truth table to define such a zeroary connective: we cannot
have atomic variables specified as inputs. There are metalogical implications to
all such moves we may take but discussing all this lies beyond our present scope.
Finally, we may think of the absurdity symbol as an exclamation mark, a symbol
for expressing that something has gone wrong in our proof construction but with-
out taking this to mean that our proof has to be abandoned. No matter how we
think of the absurdity symbol, it is arguable that we should not have introduction
and elimination rules for this symbol since we might find it hard to account for
what exactly we are doing in laying down rules for a symbol that denotes a logically
absurd state of affairs – thinking semantically or in terms of true and false. Some
texts, however, may actually lay down introduction and elimination rules for the
upside-down T symbol: in that case, our rule above would have to be an introduc-
tion rule for the absurdity symbol with negation rules adapted accordingly. This
can also be defended: if we think of our proof-theoretic system as one with rules
that govern the manipulation of symbols in building lines upon lines, then we
don’t need to account for our game with symbols by means of any narrative in
which true and false claims can be presented. All we have is a game with rules
about how to move symbols and move on. In that case, we may as well legislate
rules for the absurdity symbol as well. We opt for not doing this and we continue
with the negation rules now that we have elaborated on this needed upside-down
T symbol of our system.
m. | |𝓐
∶  | |∶
n. | |⏊
u. | ~ 𝓐   ~I m-n
The introduction rule for negation is also a subproof-based rule; we have seen this for the introduction rules for implication and for equivalence. The
only elimination rule we have found to require subproofs (two of them) is the
elimination rule for the inclusive disjunction connective. (The equivalence intro-
duction rule also requires two subproofs.) The rationale for the negation introduc-
tion rule can be readily provided; at any rate, shifting to the truthtabular system, we
may ascertain easily the validity of the matching argument form: notice how we
must render this argument form: the premise ought to be the implication “if 𝓐,
then 𝓑 and not-𝓑” and the conclusion is ~ 𝓐. (Mark also that, indeed, the license
for writing down the absurdity symbol, which we do not have in the truthtabular
system, is by having 𝓑 and also not-𝓑 and from this, of course, we have “𝓑 and
not-𝓑.”)
As of now, we have the minimal logic, as we explained above. With the rules we
have so far, we cannot prove “𝓐 or not- 𝓐”, an expression of what seems to be a
fundamental logical law, called Excluded Middle. We can be astute in constructing
a proof to this effect but our efforts are bound to be stymied as we will see. We are
able, however, to prove the formula corresponding to another seminal law of logic
(at least, thinking classically), which is the law of non-contradiction, “not (𝓐 and
not-𝓐).” Of course, the semantical correspondents of these expressions are tautolo-
gies or logically necessary truths: we know that we can construct such proofs in our
natural deduction system by having no given premises and by utilizing only appro-
priately selected assumed premises. (Thus, a tautology, translated into our natural
deduction system, is provable by means of a no-premises proof.)
1. | |p∙~p
2. | | p ∙E 1
3. | | ~ p ∙E 1
4. | | ⏊ ~E 2, 3
5. | ~ (p ∙ ~ p) ~I 1–4
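Shifting to the truth-tabular system, we can confirm mechanically that the formula just proven corresponds to a tautology. A minimal Python sketch (an illustration only; the helper name is ours):

```python
def is_tautology(formula):
    """True iff the one-variable formula is true for both truth values."""
    return all(formula(p) for p in (True, False))

# The law of non-contradiction, ~(p . ~p), checks out as a tautology:
print(is_tautology(lambda p: not (p and not p)))  # True
```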
But our effort to construct a proof for the excluded middle formula will stop
short of terminating. We will be left unable to eliminate the double negation! The
following fragment of a proof is not easy to conceive: it uses astutely the amenities
we have thanks to the rule for introduction of disjunction. We also utilize another
power we have: we can always generate a subproof within a subproof. We can make
formulas available within the subproof of the subproof, by reiterating. Every sub-
proof has to be completed appropriately, by using a fitting rule. The latest generated
subproof has to be completed first. We cannot use any formulas from subproofs that
have been completed already in subsequent derivations of lines. Given all this, we
proceed as shown below. The aim is to end up with the absurdity symbol so that we
can then prove our target formula once we have started with its negation. But we
don’t have a rule for eliminating double negation at this point. Nor do we have a rule
that allows us to derive 𝓐 in the main body of the proof if we have been able to
derive the absurdity symbol within a subproof for ~ 𝓐. It follows that we cannot
finish the proof. But with either one of the rules mentioned we would be able to
complete the proof.
∑||(⊢ ~ ~ (p ∨ ~ p))
1. | | ~ (p ∨ ~ p)
2. | | |p
3. | | | p ∨ ~ p ∨I 2
4. | | | ~ (p ∨ ~ p) Reit 1
5. | | | ⏊ ~I 3, 4
6. | | ~ p ~I 2–5
7. | | p ∨ ~ p ∨I 6
8. | | ⏊ ~E 1, 7
9. | ~ ~ (p ∨ ~ p) ~I 1–8
We have correctly constructed a proof of ⌜~ ~ (p ∨ ~ p)⌝ from no premises (the
empty premises-set.) But a proof of ⌜p ∨ ~ p⌝ is eluding us.
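Note that the truth-tabular system cannot register the difference: classically, both the doubly negated formula we proved and the plain excluded middle formula are tautologies; the obstacle is purely proof-theoretic. A minimal Python sketch (an illustration only; the helper name is ours):

```python
def is_tautology(formula):
    """True iff the one-variable formula is true for both truth values."""
    return all(formula(p) for p in (True, False))

# Classically, the truth table cannot tell these two theses apart:
print(is_tautology(lambda p: not not (p or not p)))  # True: ~~(p v ~p)
print(is_tautology(lambda p: p or not p))            # True: p v ~p
```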
At this point, we may add either one of the rules we need; we will allow inclusion of both rules. But we also need another rule, the rule we will call EFQ (ex falso quodlibet, or Explosion). With this rule, we move from the minimal logic to the logic we
called intuitionistic but we do not have full strength of the standard sentential logic
yet – and, to that end, we will move to the addition of one, or both, of the other rules
to complete our natural deduction system for standard logic.
m. |⏊
∶ |∶
n. | 𝓐 EFQ m
Could we avail ourselves of this rule to prove the excluded middle formula,
which we were unable to do above? This is not possible, alas. We may consider the
following tempting proof which, however, has a fateful error which we should be
able to spot.
1. | | ~ (p ∨ ~ p)
2. | | |p
3. | | | p ∨ ~ p ∨I 2
4. | | | ~ (p ∨ ~ p) Reit 1
5. | | | ⏊ ~I 3, 4
6. | | ~ p ~I 2–5
7. | | p ∨ ~ p ∨I 6
8. | | ⏊ ~E 1, 7
9. | | p ∨ ~ p EFQ 8
This proof, however, has not terminated! The assumed premise on line 1, which generated a subproof, has not been discharged. Generally, use of the EFQ rule will not discharge an assumed premise, as is obvious, and it is not useful when subproofs
are contained in the proof. The above non-proof has no given premises, as we know,
and has a subproof generated by the assumption on line 1: but without discharging
this assumption, the proof cannot be completed, it cannot terminate. Indeed, we are
unable to move back to the main body of the proof (since we have not applied a rule
that discharges the assumption and completes the subproof): without moving to the
main body of the proof, we can inspect visually that we have not brought the proof
to a proper termination. It follows that we have not proven anything; a fortiori, we
have not proven what we set out to prove either.
We continue with the rules, either one of which will allow us to complete our
proof. Each of these rules can be justified by using the other rule: so, we don't need
both rules, but we allow ourselves to have them both.
m. |~~𝓐
∶ |∶
n. | 𝓐 DNE
The name for this rule is double negation elimination, abbreviated as DNE.
m. | |~ 𝓐
∶ | ∶ | ∶
n. | |⏊
∶ |∶
u. | 𝓐 RAA m, n
This rule is called reductio ad absurdum and abbreviated as RAA. This is the
classical reductio rule and, as expected on the basis of everything we have said, it is
not valid in intuitionistic logic. This rule, along with the negation-introduction
rule we laid down above, regulates a proof-type called proof by contradiction or
indirect proof.
Let us see how application of either of these rules will allow termination of our
proof for the excluded middle formula.
∑|| ⊢ p ∨ ~ p
1. | | ~ (p ∨ ~ p)
2. | | |p
3. | | | p ∨ ~ p ∨I 2
4. | | | ~ (p ∨ ~ p) Reit 1
5. | | | ⏊ ~E 3, 4
6. | | ~ p ~I 2–5
7. | | p ∨ ~ p ∨I 6
8. | | ⏊ ~E 1, 7
9. | ~ ~ (p ∨ ~ p) ~I 1–8
10. | p ∨ ~ p DNE 9
1. | | ~ (p ∨ ~ p)
2. | | |p
3. | | | p ∨ ~ p ∨I 2
4. | | | ~ (p ∨ ~ p) Reit 1
5. | | | ⏊ ~E 3, 4
6. | | ~ p ~I 2–5
7. | | p ∨ ~ p ∨I 6
8. | | ⏊ ~E 1, 7
9. | p ∨ ~ p RAA 1–8
Let us avail ourselves of the opportunity to show what other rules we may have
added to gain full deductive strength for the standard sentential logic.
• We could add, as a no-premises rule, the excluded middle formula itself,
allowing us to introduce it at any point as a given premise!
∶ |∶
n. | 𝓐 ∨ ~ 𝓐 tnd
tertium non datur (a third case, besides true and false, is not allowed!)
• We could add the following rule, with two subproofs generated by the assump-
tions 𝓐 and not-𝓐. If, in both subproofs 𝓑 is derived, then we may derive 𝓑 in
the main proof body while considering the assumptions as discharged and the
subproofs terminated. We can see that the standard rule for disjunction elimina-
tion is appealed to in this case but, in addition, it is assumed that there are indeed
exactly two cases, 𝓐 and not-𝓐, which points to the rule we considered previ-
ously and which is precisely the excluded middle formula.
∶ |∶
m. | |𝓐
∶ | |∶
n. | |𝓑
u. | |~𝓐
∶ | |∶
v. | |𝓑
z. | 𝓑 Dil m–n, u–v Dilemma
• Interestingly, there is also a rule for implication which can be added to give us the
full standard-logic natural deduction system. It does not have intuitive appeal,
but it is classically valid, as can be shown by conducting a truth-tabular check.
∶ |∶
n. | |𝓐 ⊃ 𝓑
∶ | |∶
u. | |𝓐
v. | 𝓐 PR n–u Peirce's Rule (Peirce's Law)
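As an illustrative aside (the Python rendering below is ours, not part of the ∑ systems), the truth-tabular check vindicating this rule can be mechanized by evaluating Peirce's Law, ⌜((𝓐 ⊃ 𝓑) ⊃ 𝓐) ⊃ 𝓐⌝, on every row of its truth table:

```python
from itertools import product

def implies(a, b):
    # Truth-tabular (material) implication: false only when a is true and b is false.
    return (not a) or b

# Peirce's Law ((p > q) > p) > p comes out true on all four rows.
rows = list(product([True, False], repeat=2))
assert all(implies(implies(implies(p, q), p), p) for p, q in rows)
print("Peirce's Law: classically valid on all", len(rows), "rows")
```

The exhaustive search over valuations is exactly the truth-tabular check: a formula is classically valid just in case no row falsifies it.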
The Dilemma rule can itself be simulated by appeal to ∨E, given the excluded
middle formula as a premise:
1. | p ⊃ q
2. | ~ p ⊃ q
3. | p ∨ ~ p /∴ q
4. | | p
5. | | p ⊃ q Reit 1
6. | | q ⊃E 4, 5
7. | | ~ p
8. | | ~ p ⊃ q Reit 2
9. | | q ⊃E 7, 8
10. | q ∨E 3, 4–6, 7–9
A proof that may cause anguish to the novice, because of the multiple nested
subproofs it requires, is the following.
∑|| (p ⊃ q) ⊃ q ⊢ ~ p ⊃ q
1. | (p ⊃ q) ⊃ q
2. | |~p
3. | | | p
4. | | |~ p Reit 2
5. | | |⏊ ~E 3, 4
6. | | |q EFQ 5
7. | | p ⊃ q ⊃I 3–6
8. | | (p ⊃ q) ⊃ q Reit 1
9. | |q ⊃E 7, 8
10. | ~ p ⊃ q ⊃I 2–9
As another example of a long proof, we proffer:
∑|| ~ (p ⊃ q) ⊢ p ∙ ~ q
1. | ~ (p ⊃ q)
2. | |~p
3. | | | p
4. | | |~ p Reit 2
5. | | | ⏊ ~ E 3, 4
6. | | | q EFQ 5
7. | | p ⊃ q ⊃I 3–6
8. | | ~ (p ⊃ q) Reit 1
9. | | ⏊ ~E 7, 8
10. |~ ~ p ~I 2–9
11. | p DNE 10
12. | |q
13. | | |p
14. | | |q Reit 12
15. | | p ⊃ q ⊃I 13–14
16. | |~ (p ⊃ q) Reit 1
17. | | ⏊ ~E 15, 16
18. | ~ q ~I 12–17
19. | p ∙ ~ q ∙I 11, 18
subproof has been completed. Using this rule would allow us to prove classically
invalid formulas.
• Γ ⊢ φ ⊃ ψ:: obviously, we need to start a subproof with assumed premise φ
(possibly with additional nested subproofs) and complete it by invoking (applying)
the ⊃I rule.
• Γ ⊢ ~ φ:: we generate a subproof by positing φ as assumed premise (possibly
with additional nested subproofs); the completion has to be done with application
of the ~I rule.
• Γ ⊢ φ:: we generate a subproof by positing ~ φ as assumed premise (possibly
with additional nested subproofs); we complete by applying the RAA rule,
or DNE if we have first applied the ~I rule.
• Often, a subordinate proof proceeds to the application of the rule EFQ after the
absurdity symbol has been derived: this is needed when we first require derivation
of an implicational formula: what we posited as assumed premise becomes the
antecedent of the implicational formula and what we derive by EFQ becomes its
consequent. Often, this implicational formula, derived in this way, stands in
contradiction with another formula we already have: this subsequent derivation of
the absurdity symbol may then be used to apply ~I.
• Γ ⊢ φ ≡ ψ:: two subordinate proofs are needed: from assumed premise φ, the
line with ψ must be derived for the first subproof; and, for the second subproof,
φ must be derived from the assumed premise ψ.
• If we have a line with inclusive disjunction, we need, again, two subproofs in
parallel (but written one under the other in the vertical arrangement of the Fitch-
style system), both of which need to be completed with application of the
rule for ∨E.
• Thinking ahead, as we may say, is always a good idea.
4.5.b Consistency Check in ∑||.
The question arises naturally whether a proof-theoretic system like ∑|| can be deployed
appropriately to investigate whether a collection of sentences is consistent, which
means, by definition, that it is logically possible that all the given sentences are true
jointly. In the semantic or modeling approach afforded us through the truth table
method, we can check if at least one row of the truth table for the given formulas has
them all receive the truth value true; that row represents a logical possibility and the
test of consistency determines that the sentences are consistent if there is at least one
row with all the formulas true across it. But the prospect of using a proof-theoretic
method raises a different challenge. Proof-theoretic systems cannot be outfitted as
mechanical decision procedures whose termination automatically shows the targeted
result or the failure to obtain it – so that, for instance, a counterexample can even be
constructed upon termination of the procedure by marking the truth values for the
atomic variables that make all premises true and the conclusion false (checking for
invalidity). A natural deduction system cannot guide us to the mechanical extraction
of a counterexample. Nevertheless, there is a way in which we can check for
characteristics other than provability of a thesis (which corresponds to validity of the
argument form in the matching semantical system). Similarly, in the modeling
approach, we ought to show that there is at least one interpretation that satisfies the
formula (as in the truth-table case of the truth value true occurring on at least one
row) and there is at least one interpretation that satisfies the negation of the formula.
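By way of contrast, the semantic test just described is easy to mechanize. The sketch below (our own illustration, with invented helper names) represents each formula as a function from a valuation to a truth value and searches the joint truth table for a row on which all the given formulas come out true:

```python
from itertools import product

def consistent(formulas, atoms):
    """True if at least one row of the joint truth table makes every formula true."""
    for values in product([True, False], repeat=len(atoms)):
        valuation = dict(zip(atoms, values))
        if all(f(valuation) for f in formulas):
            return True  # this row witnesses the logical possibility
    return False

# {p, p > q, ~q} is inconsistent; {p, p > q} is consistent.
p      = lambda v: v["p"]
p_to_q = lambda v: (not v["p"]) or v["q"]   # material implication
not_q  = lambda v: not v["q"]
print(consistent([p, p_to_q, not_q], ["p", "q"]))  # False
print(consistent([p, p_to_q], ["p", "q"]))         # True
```

The exhaustive search is what makes the truth-table method a decision procedure, in contrast with the open-ended search for a derivation in ∑||.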
4.5.i Intuitionistic versus Classical Logic.
We examined, in the preceding section, how we can build our natural deduction
system so that it is sufficient for the purposes of proving formulas that are (semanti-
cally) valid for a logic called Intuitionistic. This was the case when we allowed
ourselves introduction and elimination rules for the connective symbols in {~, ∙, ∨,
⊃, ≡}, where as negation rules we deployed the rules we called ~I and ~E, which
were schematically shaped with inclusion of the symbol for absurdity, “⏊,” which
we have in our symbolic resources. The rule we called ex falso quodlibet was
allowed too, for convenience: it is derivable from the other rules (it is not, as we say,
independent), and it is intuitionistically a correct rule. At that point, we do not have
all the rules we would need – even parsimoniously, not allowing derivable rules but
only independent rules – to obtain the strength of classical logic: in other words,
what we have, which is an intuitionistic proof-theoretic system, cannot prove all
classically valid (and hence classically provable or derivable) formulas. We then saw what ways
there are available for ascending, as we might say, to classical logic or to obtaining
classical strength. We can see now how, indeed, the rules we added correspond to
proofs we cannot construct within our intuitionistic proof-theoretic system, which
we will call here ∑||ⅈ, with rules {∙I, ∙E, ∨I, ∨E, ⊃I, ⊃E, ≡I, ≡E, ~I, ~E, efq}.
Given that many classically provable formulas are not provable in ∑||ⅈ, we have
the immediate corollary that the connectives have different meanings – on the addi-
tional assumption that the meanings of the connectives are given proof-theoretically
by means of introduction and elimination rules or some combination of such rules.
Intuitionistic logic is motivated philosophically by views that do not accord with
basic assumptions on which classical logic is based. The initial impetus originated
in the philosophy of mathematics with the Dutch mathematician Brouwer, but
Michael Dummett has expanded the motivation of this logic to encompass
linguistic application. Intuitionists define truth as depen-
dent conceptually on the availability of means for checking or verifying or proving,
and so they see proof as time-dependent in a certain sense (although without turning
logic into an empirical science, which would be wrong.) For the intuitionist, “it is
true that φ” means “there is a constructed available proof or effective verification
method for deriving or showing that φ.” The meaning of negation is thereby directly
differentiated from the classical meaning. Now, “it is not the case that φ” means, intu-
itionistically, that “there is a constructed proof or effective verification method for
showing that φ is false or for refuting φ.” Accordingly, the intuitionistic meaning of
“not” becomes like “if-then”: “not-φ” means “there is an available constructed
proof or effective verification procedure such that, if φ is proven true then we have
a proof of absurdity.” Or, we may put it, there is a constructed and effective method
for deriving absurdity from the assumed premise that φ – which highlights our
felicitous inclusion in our formal system of the symbol for absurdity. It is right, of
course, that we would be defining negation on the basis of the derivation of the
absurdity symbol and not by deriving “φ and not-φ” since “not” would then be
presupposed. In our rules, though, we have as our rule for elimination of negation,
which yields the line with the symbol for absurdity, that we should obtain both φ
and not-φ. This is not a problem given that our schematic rules are presented in a
recursive way: we just play the game in this way, step by step. But as we talk about
what is happening, we may still seem to have a problem. We can adduce, however,
that absurdity is definable notionally in many ways, not necessarily depending on a
prior concept of negation: for instance, we could define absurdity by taking a neces-
sarily true sentence as being false!
The intuitionist can also speak of in-principle constructability of an effective
proof procedure but this should be understood to mean that the effective means for
constructing the proof are presently available, not that the proof is somehow avail-
able and waits to be discovered. The intuitionist is actuated by a deep antagonism to
the school of thought known as Platonic Realism, which accepts that there are mind-
independent abstract objects (it follows that this view is Realist) and there are to-be-
discovered truths about objects: this is the case for mathematical objects, but also
for concepts. The intuitionist, on the other hand, has a conceptualist or mentalist
view: the human mind constructs proofs rather than discovering them as eternally available.
Accordingly, the objects over which we speak ought to be effectively constructible:
this has profound implications for how we are to think about proofs and claims that
have to do with members of infinite series. Like Aristotle, in antiquity, who took
infinity to be an unactualized possible, the intuitionist bans claims about members
of infinite series, since such claims cannot be constructively and effectively shown.
The intuitionistic dependence of truth on an available proof means that there is a
tense or temporal element that is entwined with the notion of intuitionistic truth.
This does not mean, however, that logic is to be treated, in revisionist fashion, as an
empirical science. First, once a proof is constructed it is retained forevermore and
this indicates that we are not dealing with discovery of empirical or factual matters.
Second, the construction of the proof itself shows that this is indeed a truth that, as
conclusion of a constructed proof, is not dependent on factual developments that
can verify or falsify the claim; the verification is entirely dependent on the construc-
tion of a proof and not on any factual heuristics. Third, even though lack of proof for
φ enjoins us from assigning the truth value true to φ, this does not mean that we may
consider φ to be not-true or false; recall, of course, that the meaning of “not” is not
the same as the classical one and, as this case demonstrates, not being true does not
mean that a sentence is false. Instead, before being proven, φ is considered without
truth value (but, let us note, this does not mean that φ has a third, non-classical truth
value but it is, rather, truth-value-less.)
It is a fascinating exercise, which we will forego however, to provide intuitionistic
justifications for the theses that we can prove, and for those that we should be able
to prove, in intuitionistic logic. What we can do, however, is to give a (certainly
non-exhaustive) list of theses of intuitionistic logic (which have to be classical the-
ses as well) and also of theses that can be proven in classical logic but not in intu-
itionistic logic. Our preceding study of rules for natural deduction has provided
sufficient information as to what rules we may and may not apply in constructing an
intuitionistic, as opposed to a classical, systematic proof. We should be able in
principle to verify that, as we construct a classical proof of φ (or of φ from premises
{ψ1, …, ψn}), which is not intuitionistically available, we are always using some of
the rules that we have seen to be incorrect for intuitionistic logic. And, of course, as
we try to construct proofs corresponding to some of those intuitionistically unavail-
able rules, we should find ourselves having to use a rule like DNE or RAA (which,
as we have indicated, are not intuitionistically correct.)
Before we play around with such derivations (intuitionistic versus classical), we
briefly allow ourselves an excursion into the truth-table system, calling the intu-
itionistic truth table system ∑⊞ⅈ, as distinguished from the classical ∑⊞. We are
able to provide a mechanical way for checking if an argument form is intuitionisti-
cally valid or not for the limited case that includes the truth-tabular definitions of the
connectives. First we show the familiar definitions of the connectives in ∑⊞; next,
we show how, in ∑⊞ⅈ, we make appropriate replacements of symbols: there is a
systematic strategy in that we replace formulas on the rows by themselves if the
truth value is true and we replace them by their negations if the truth value is false.
What we see, then, in checking the pairs of the <∑⊞, ∑⊞ⅈ> truth tables is this: the
argument forms that have as premises the formulas in the columns for the atomic
variables, and as conclusions the formulas in the other columns, are all
intuitionistically valid!
[Truth table ∑⊞: the familiar truth-tabular definitions of the connectives.]
[Truth table ∑⊞ⅈ: replacements of formulas for truth values (φ/T and ~ φ/F); the
valid argument forms appear in the added column, with “⊩” as the symbol for
intuitionistic logical consequence.] The case of the equivalential formulas can be
approached accordingly.
The law of excluded middle is not provable intuitionistically, nor should it be:
given the definition of truth as constructed-proof, it is possible that neither φ nor its
negation is true for the intuitionist, insofar as there is no available constructed proof
of either.
• DN
n. ~ ~ φ
∶
n+m. φ DN (n)
• RAA
n. ~ φⁱ
∶
n+m. ⏊
∶
n+m+k. φ RAAⁱ (n–n+m)
• CD
n. φⁱ
∶
n+m. ψ
∶
n+m+k. ~ φⁱ
∶
n+m+k+l. ψ
∶
n+m+k+l+u. ψ CDⁱ (n–n+m, n+m+k–n+m+k+l)
• PL
n. (φ ⊃ ψ)¹
∶
n+m. φ
∶
n+m+k. φ PL¹ (n–n+m)
• TL
n. φⁱ
∶
n+m. χ
∶
n+m+k. ~ φⁱ
∶
n+m+k+l. χ
∶
n+m+k+l+u. χ TLⁱ (n–n+m, n+m+k–n+m+k+l)
4.5. Exercises
1. Construct proofs in ∑|| for the exercises in 4.4.2.e.
2. Construct the requested proofs in ∑||.
a. p ⊃ q, q ⊃ r ⊢∑|| p ⊃ r
b. p ∨ q ⊢∑|| q ∨ p
c. (p ∙ q) ∙ r ⊢∑|| p ∙ (q ∙ r)
d. (p ≡ q) ≡ r ⊢∑|| p ≡ (q ≡ r)
e. ~ (p ∨ q) ⊢∑|| ~ p ∙ ~ q
f. p ⊃ q ⊢∑|| ~ q ⊃ ~ p
g. (p ∙ q) ⊃ r ⊢∑|| p ⊃ (q ⊃ r)
h. p ⊃ (q ⊃ r) ⊢∑|| (p ∙ q) ⊃ r
i. p ∨ (q ∙ r) ⊢∑|| (p ∨ q) ∙ (p ∨ r)
j. (p ⊃ q) ∙ (p ⊃ r) ⊢∑|| p ⊃ (q ∙ r)
k. q ⊃ p, r ⊃ p ⊢∑|| (q ∨ r) ⊃ p
l. (q ∨ r) ⊃ p ⊢∑|| (q ⊃ p) ∨ (r ⊃ p)
m. p ≡ ~ q ⊢∑|| ~ p ≡ q
3. We can construct proofs deriving theses proof-theoretically in ∑|| from no
premises. This can be done for implicational theses (formulas that are theses,
semantically tautologies, and which have the implication connective symbol as
their main connective symbol). But we can also construct proofs for any theses, by
using the negation of the given formula. How is this to be done? Having
answered, attend to constructing proofs for the following theses.
a. ⊢∑|| ~ (p ∨ ~ p) ⊃ ⏊
b. ⊢∑|| (p ⊃ p) ⊃ ((p ⊃ p) ⊃ (p ⊃ p))
c. ⊢∑|| p ⊃ (q ⊃ (p ∙ q))
d. ⊢∑|| (p ⊃ (q ⊃ r)) ⊃ ((p ⊃ q) ⊃ (p ⊃ r))
e. ⊢∑|| ~ (p ∙ q) ≡ (~ p ∨ ~ q)
f. ⊢∑|| (p ⊃ q) ⊃ (~ q ⊃ ~ p)
g. ⊢∑|| (((p ⊃ q) ⊃ q) ∙ (q ⊃ p)) ⊃ p
h. ⊢∑|| (p ⊃ q) ⊃ ((r ⊃ p) ⊃ (r ⊃ q))
i. ⊢∑|| ((p ⊃ q) ∙ (p ⊃ r)) ⊃ (p ⊃ (q ∙ r))
j. ⊢∑|| p ⊃ ((p ⊃ q) ⊃ q)
k. ⊢∑|| p ≡ (p ∙ p)
l. ⊢∑|| p ≡ (p ∨ p)
m. ⊢∑|| ((p ∨ q) ∙ ~ p) ⊃ q
n. ⊢∑|| (p ∙ (q ∨ r)) ≡ ((p ∙ q) ∨ (p ∙ r))
o. ⊢∑|| ((p ∨ q) ∙ (p ⊃ r) ∙ (q ⊃ s)) ⊃ (r ∨ s)
4. Construct proofs for the following theses, all of which are considered paralogis-
tic: they are all logically necessary truths (tautologies) of the standard sentential
logic, as can be verified by their translation into the formal language of the truth
table system and application of the truth-tabular check; it can be argued, however,
that the intuitions of the competent user of language reject these as necessary
truths. Even though there is nothing paradoxical – and cannot be – about
these theses, since they are trivially valid on the basis of the defined meanings of
the connectives of the standard logic, the point is that the theses are paralogis-
tic – as explained in the preceding sentences. How do we construct proofs in the
Fitch-style system for theses? As you construct the proofs for each thesis, note
the rules that you use and discuss whether removing some rule, and thereby
making proof of the thesis fail, can be recommended; examine the rule itself,
how it is justified, and the consequences of removing the rule. Realize that the
justification of the rule provided by the truth table cannot be renegotiated unless
you are ready to accept redefinitions of the connectives; this, of course, would
change the logic you are dealing with.
a. ∑||⊢ ~ p ⊃ (p ⊃ q)
b. ∑||⊢ q ⊃ (p ⊃ q)
c. ∑||⊢ (p ⊃ q) ∨ (q ⊃ p)
d. ∑||⊢ (p ⊃ q) ∨ (p ⊃ ~ q)
e. ∑||⊢ (p ∙ ~ p) ⊃ q
f. ∑||⊢ p ⊃ (q ∨ ~ q)
g. ∑||⊢ (p ≡ q) ∨ (q ≡ r) ∨ (p ≡ r)
h. ∑||⊢ (p ⊃ q) ≡ (~ p ∨ q)
5. Construct proofs in ∑|| for the instances of logical consequence that were pre-
sented in 4.5.i as intuitionistically valid; mark that no rules that are not intuition-
istically valid are needed to effectively construct the proofs.
6. Now construct proofs in ∑|| for the intuitionistically invalid instances of logical
consequence from 4.5.i and identify the rules that are not intuitionistically valid,
but are classically valid, and are needed for the effective construction of the proof.
7. Construct proofs in ∑|| to determine if the given sets of formulas are consistent.
Does failure to construct the proper derivation sequence establish that the given
set is inconsistent?
a. ⊢∑||{p, p ⊃ q, ~ q}
b. ⊢∑||{(p ⊃ p) ⊃ q, ~ q}
c. ⊢∑||{(p ⊃ p) ⊃ p, ~ p}
d. ⊢∑||{p ⊃ q, p ⊃ r, p, ~ q ∨ ~ r}
e. ⊢∑||{p ⊃ ~ q, q, p}
f. ⊢∑||{p ⊃ r, q ⊃ r, p ∨ q, ~ r}
g. ⊢∑||{(q ∨ r) ⊃ p, q, ~ p}
h. ⊢∑||{(p ∙ q) ∨ (p ∙ r), ~ q ∙ ~ r}
4.6. A Sequent System for Natural Deduction: ∑⇒.
Our symbolic idiom for a sequent-type natural deduction system, designated as
∑⇒, retains the symbolic resources and grammatical arrangements we have been
implementing for all our ∑ systems. We ought to reiterate that these are different
symbolic languages and we should really regard the overlapping notation and gram-
mar as a felicitous coincidence that happens to be convenient.
Our set of connective symbols is from {~, ∙, ∨, ⊃, ≡}. Here we simplify matters
by dispensing with the triple bar but keeping in mind that we can, and we will as an
exercise, work out rules for the equivalence connective symbol. The grammar of
well-formed formulas is as before. The sequents are written with use of the added
symbol of an arrow, “⇒”, which separates a left from a right side (left and right to
the arrow symbol). This is significant, as we will explain. We can have multiple well-
formed formulas on either side, or on both sides, of the arrow. In our metalinguistic pre-
sentation of the rule-schemata (the recipes of “cookie cutter” shapes that instruct us
to carry out the legislated maneuvers for this game of derivation), we indicate col-
lections of well-formed formulas by using capital Greek letters, while we use small
Greek letters from {φ, ψ, …, φi, …} for the formulas that are manipulated in accor-
dance with the connectives-rules. As always, subscripts are from the countable set of
the positive integers (this set is countable because its members, albeit in an infinite
series, can be put in one-to-one correspondence with the natural numbers). We need
connectives rules, of course, but we will also require so-called structural rules. Such
rules dictate the manner of permissible manipulations of formulas on the two sides
of the sequence arrow symbol, including permissible additions and removals of for-
mulas and exchange or permutation of positions of the formulas. This is not an idle
matter or a bow to convenience: disallowing a structural rule may well mean that a
different logic – not the standard sentential logic – is obtained as we will have an
opportunity to find out when we briefly discuss issues pertaining to the distinction
between standard and intuitionistic logic – a subject that we pursued in the preced-
ing section and which serves as an opportunity for us to sketch, roughly, how alter-
native logics can be obtained based on changes in the rules of the derivation game.
We will find out that only introduction rules are provided for the connectives.
This may seem odd, as we have been habituated to having both introduction and
elimination rules for connectives in the type of natural deduction proof system we
pursued in an earlier section. There is a systematic fashion in which we may match
some of the introduction rules in ∑⇒ to corresponding elimination rules in the
previously constructed system ∑||, which can be undertaken as an exercise. As for
the philosophic question about how this type of rule constitutes the meaning of a
connective, we defer a deeper discussion of this intricate subject.
A sequent has the general schematic form:
φ, Γ, Δ, Γ ⇒ Θ, Λ, Λ, ψ.
We indicate that repetition of formulas is not excluded, which further means that
structural transformational rules are needed to remove redundant formulas, if that
be allowed, or, for that matter, to permit reiteration of formulas (since it is permitted
to have multiple occurrences of formulas as we show.) The formulas that are the
targets for the connective rules are placed outside, which means all the way to the
left for the left side of the arrow and all the way to the right for the right side of the
arrow. To move such formulas inside, we will need structural rules. The schematic
shapes are, as always, figure-like or – as the ancient etymology of the word
suggests – schematic: the moves that are permissible are shown in the schemata and
all those, and only those authorized maneuvers are permitted.
It is permitted that either side of the sequent can be empty. But the reading of the
empty side is different in each case: left-side emptiness indicates the empty set of
premises (if we may speak in terms of premise-conclusion nomenclature.) Thus, we
can also have:
⇒ψ
This can be read as: the formula ψ is provable from the empty set of premises
which we recognize as referring to what we know semantically as a tautology or
logical truth – and we have also covered in natural deduction as a no-premise
conclusion.
The right-side empty space indicates what we would characterize as logical
absurdity in semantic terms. Considering this, we notice that if we move ψ in the
left-empty sequent to the right side, we should have:
~ ψ ⇒.
If ψ represents, semantically interpreted, a tautology when it is to the right of a left-
empty sequent, then its move to the left should be prefixed with the negation symbol.
We can explain this: while a tautology, semantically speaking, is implied even by the
empty set of premises, a contradiction implies logical absurdity insofar as by absurdity
we understand the formula that has as its constant truth value falsehood. And, of
course, the negation of a tautology is a contradiction, and vice versa, as we know. This
move to the left shows us already what the rule for left-introduction of the negation
symbol should be in our system – and this expectation will not be disappointed.
Another observation we can make – which we could stipulate, but which we can also
prove from the other stipulations as to rules that we will be making – is this: the
formulas to the left of the arrow are to be considered as being joined by conjunction
while the formulas to the right of the arrow are to be considered as being joined by
inclusive disjunction. Accordingly, we have:
φ1, φ2, Γ, Δ ⇒ Θ, Λ, ψ1, ψ2 is the same as φ1 ∙ φ2, Γ, Δ ⇒ Θ,
Λ, ψ1 ∨ ψ2
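This conjunctive/disjunctive reading also yields a simple semantic test for sequents, which we sketch below as our own illustration (not part of the official machinery of ∑⇒): a sequent holds classically just in case every valuation that makes all formulas on the left true makes at least one formula on the right true.

```python
from itertools import product

def sequent_holds(left, right, atoms):
    """left, right: lists of formulas, each a function from a valuation dict to bool."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(f(v) for f in left) and not any(g(v) for g in right):
            return False  # countermodel: left side all true, right side all false
    return True

p      = lambda v: v["p"]
q      = lambda v: v["q"]
p_to_q = lambda v: (not v["p"]) or v["q"]

print(sequent_holds([p, p_to_q], [q], ["p", "q"]))  # True:  p, p > q => q
print(sequent_holds([p], [q], ["p", "q"]))          # False: p => q fails
```

Note how the empty left side (no constraint, so the right side must be satisfied on every row) and the empty right side (nothing to satisfy, so the left side must be unsatisfiable) fall out of this reading, matching the tautology and absurdity glosses given below.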
A proof in ∑⇒ is the sequence of lines populated with formulas written in
accordance with the grammar of the formal system and such that every line is either
an axiom (whose schema we will provide) or derived from preceding lines by
application of one of the rules (whether structural or connective rule). We produce the lines from
top to bottom and we label each line with a numeral from the positive integers while
to the right of each line we supply metalinguistically the justification by showing
the name of the rule applied and the line or lines on which the rule has been applied.
In this fashion, we follow the conventions we used for ∑||. It is more usual to follow
a different figurative arrangement which is, for certain purposes of metalogical anal-
ysis, more perspicuous (but also appears more cluttered and may even seem rather
counterintuitive in certain respects). We will briefly show, incidentally, this
alternative, which is encountered very frequently in texts.
The axiom-schema for our system, which is called the rule of assumptions, RA,
is as follows:
φ ⇒ φ RA
Thinning or Weakening.
Left.
m. Γ ⇒ Θ, ψ
∶
m+n. φ, Γ ⇒ Θ, ψ WL m
Thinning or Weakening.
Right.
m. φ, Γ ⇒ Θ
∶
m+n. φ, Γ ⇒ Θ, ψ WR m
Formulas can be added, either to the left or to the right. The standard logic does
not observe restrictions about relevance in the interconnections of formulas, as we
had the opportunity to observe especially when discussing the rule of introduction
for the disjunction connective in the case of the ∑|| system. Any formulas can be
introduced, left or right, in a sequent. The justification can be extracted from consid-
ering, as it was mentioned earlier, that the formulas to the left are regarded as joined
by conjunction and the formulas to the right are considered to be joined by inclusive
disjunction: if a formula is derivable from a set of formulas, then addition of any
formula in the left does not affect the efficacy of the derivation (since ψ is derived
from the sequence of formulas, it has been derived already regardless of adding an
extra formula and this addition does not change anything); on the right, since the
formulas are taken disjunctively, we may justify the addition of any formula in a
sequent in the same way we accounted for the inferential rule we called introduction
of disjunction in the ∑|| system (and which we called Addition in the initial natural
deduction system we constructed.) We can also provide the justifications by thinking
of premises, conjunctively, to the left, and of conclusions, disjunctively, to the right.
Another structural rule is what we call Exchange or Permutation, and this rule too
has a left and a right version. The order in which the formulas are written, to the left
or the right, does not matter and, accordingly, the order can be altered liberally and at
will by moving the formulas any way it is desired: the justification, keeping in mind
the conjunctive connection to the left and the disjunctive connection to the right, can
be given by appealing to the following fact, which we know by now: both conjunc-
tion and inclusive disjunction are commutative (which means that logical meaning, if
we speak semantically, is the same when the order of joined formulas is altered.)
Exchange or Permutation.
Left.
m. φ, ψ, Γ ⇒ Θ
∶
m+n. ψ, φ, Γ ⇒ Θ EL m
Exchange or Permutation.
Right.
m. Γ ⇒ Θ, φ, ψ
∶
m+n. Γ ⇒ Θ, ψ, φ ER m
The next rule we can justify by referring to the fact that both conjunction and
inclusive disjunction are characterized by the property, which gives its name to a
rule we used in the first natural deduction system we constructed: Idempotence.
This allows reiterated occurrences of the same formula to be removed as desired.
The corresponding rule in our present system is called Contraction and, again, it has
a left and a right version.
Contraction.
Left.
m. φ, φ, Γ ⇒ Θ
∶
m+n. φ, Γ ⇒ Θ CL m
Contraction.
Right.
m. Γ ⇒ Θ, φ, φ
∶
m+n. Γ ⇒ Θ, φ CR m
The final structural rule is based on the characterizing property of classical impli-
cation, which gives its name to the rule we called Hypothetical Syllogism in the
initial natural deduction system we constructed. As we survey the schema for this
rule, which is called Cut, we can discern this property of implication, which is also
called sometimes transitivity. Interestingly, there are no left and right versions for
this rule. The rule essentially connects a left and a right side with the same formula
and “cuts” this formula out. It can be shown, although it lies beyond our present
scope, that any derivation in a formal system like the one we are constructing, in
which cut is used at some move, can be reshaped as an alternative proof of the same
sequent without use of the cut rule.
Cut
k. φ, Γ ⇒ Θ
∶
l. Δ ⇒ Λ, φ
∶
m. Γ, Δ ⇒ Θ, Λ Cut k, l
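Purely as a syntactic illustration of the shape of Cut (our own sketch, with sequents represented as pairs of tuples of formula strings), the rule removes the outermost occurrence of the cut formula from each side and merges the remainders:

```python
def cut(seq1, seq2, phi):
    """From  phi, Γ ⇒ Θ  (seq1)  and  Δ ⇒ Λ, phi  (seq2)  derive  Γ, Δ ⇒ Θ, Λ."""
    (l1, r1), (l2, r2) = seq1, seq2
    assert l1[0] == phi and r2[-1] == phi, "cut formula must be outermost"
    return (l1[1:] + l2, r1 + r2[:-1])

s1 = (("p > q", "p"), ("q",))   # p > q, p ⇒ q
s2 = (("q",), ("p > q",))       # q ⇒ p > q
print(cut(s1, s2, "p > q"))     # (('p', 'q'), ('q',)) i.e. p, q ⇒ q
```

The requirement that the cut formula sit outermost mirrors the placement convention stated earlier; in practice Exchange would first be used to move it into position.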
Next we move to the rules for the connectives, with each connective having a
left-introduction and a right-introduction rule depending on whether the outcome of
the rule application is the generation of the formula with the connective symbol as
its main symbol on the left or on the right of the arrow. Some rules have more than
one part (as we saw in the case of the ∑|| system as well) related to the left/right
placement of the connected formulas (for binary symbol rules).
4.5 Other Natural Deduction Systems 253
~−rules.
left.
m. Γ ⇒ Δ, φ
∶
n. ~ φ, Γ ⇒ Δ ~L m
right.
m. φ, Γ ⇒ Δ
∶
n. Γ ⇒ Δ, ~ φ ~R m
∙−rules.
left.
m. φ, Γ ⇒ Δ
∶
n. φ ∙ ψ, Γ ⇒ Δ ∙L m
m. φ, Γ ⇒ Δ
∶
n. ψ ∙ φ, Γ ⇒ Δ ∙L m
right.
k. Γ ⇒ Θ, φ
∶
l. Δ ⇒ Λ, ψ
∶
m. Γ, Δ ⇒ Θ, Λ, φ ∙ ψ ∙R k, l
∶
n. Γ, Δ ⇒ Θ, Λ, ψ ∙ φ ∙R k, l
∨−rules.
left.
k. φ, Γ ⇒ Θ
∶
l. ψ, Δ ⇒ Λ
∶
m. φ ∨ ψ, Γ, Δ ⇒ Θ, Λ ∨L k, l
∶
n. ψ ∨ φ, Γ, Δ ⇒ Θ, Λ ∨L k, l
right.
m. Γ ⇒ Δ, φ
∶
n. Γ ⇒ Δ, φ ∨ ψ ∨R m
∶
u. Γ ⇒ Δ, ψ ∨ φ ∨R m
⊃−rules.
left.
k. ψ, Γ ⇒ Θ
∶
l. Δ ⇒ Λ, φ
∶
m. Γ, Δ, φ ⊃ ψ ⇒ Θ, Λ ⊃L k, l
right.
k. φ, Γ ⇒ Δ, ψ
∶
l. Γ ⇒ Δ, φ ⊃ ψ ⊃R k, l
3) _____, ____ ⇒ __ ⊃L 1, 2
4) p, p ⊃ q ⇒ q EL 3
5) ______ ⇒ ___, ___ ~R 4
6) ____, _____ ⇒ ___ ~L 5
7) p⊃q⇒~q⊃~p ⊃R 6
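The derivation just completed, ending in the contraposition sequent p ⊃ q ⇒ ~q ⊃ ~p, can be replayed mechanically. The following is a hedged sketch under assumptions of my own: a sequent is a pair of lists of strings, `>` stands in for ⊃, and the extra ER step reflects this encoding's assumption that ~L acts on the rightmost succedent formula.

```python
# A sequent is a pair (antecedent, succedent) of lists of formula strings.

def neg_right(seq):
    """~R: from (phi, Gamma => Delta) infer (Gamma => Delta, ~phi)."""
    ant, suc = seq
    return (ant[1:], suc + ['~' + ant[0]])

def neg_left(seq):
    """~L: from (Gamma => Delta, phi) infer (~phi, Gamma => Delta)."""
    ant, suc = seq
    return (['~' + suc[-1]] + ant, suc[:-1])

def exch_right(seq):
    """ER: swap the last two formulas of the succedent."""
    ant, suc = seq
    return (ant, suc[:-2] + [suc[-1], suc[-2]])

def cond_right(seq):
    """⊃R: from (phi, Gamma => Delta, psi) infer (Gamma => Delta, phi > psi)."""
    ant, suc = seq
    return (ant[1:], suc[:-1] + [ant[0] + ' > ' + suc[-1]])

s = (['p', 'p > q'], ['q'])      # p, p ⊃ q => q   (after ⊃L and EL)
s = neg_right(s)                 # p ⊃ q => q, ~p
s = exch_right(s)                # p ⊃ q => ~p, q
s = neg_left(s)                  # ~q, p ⊃ q => ~p
s = cond_right(s)                # p ⊃ q => ~q ⊃ ~p
print(s)                         # (['p > q'], ['~q > ~p'])
```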
2. Fill in the missing steps, either supplying names of rules and/or lines on the justifica-
tion lines or supplying formulas on the derivation lines or both, in the following
sequent derivations in ∑⇒. The proof lines are labeled using the
numeral-parenthesis format.
1) p ⇒ p RA
2) q ⇒ q RA
3) p ⊃ q, p ⇒ q ____ 1, 2
4) p, p ⊃ q ⇒ q EL ___
5) p ∨ q, p ⊃ q ⇒ q ____ 2, 4
6) p ∨ q ⇒ ________ ⊃R 5
1) p ⇒ p ______
2) q ⇒ q ______
3) r ⇒ r ______
4) p ⊃ r, p ⇒ r ⊃L ____
5) q ⊃ r, q ⇒ r ____ 2, 3
6) p, p ⊃ r ⇒ r EL ____
7) q, q ⊃ r ⇒ r ___ 5
8) p ∨ q, ______, _____ ⇒ r ∨L 6, 7
3. Why are we compelled to use structural rules, like Exchange (also called
Permutation), left and/or right, in carrying out the preceding derivations?
4. Devise left and right rules for the equivalence connective ⌜≡⌝ thus enhancing
∑⇒ to ∑⇒≡.
5. Can we prove empty-premises sequents in ∑⇒? Notice that in moving formulas
from one side of the arrow symbol to the other we may indeed derive lines with
no formulas (the empty sequence of formulas to the left). Consider the following
derivation and fill in what is missing. What is the characterization of a derivable
no-premises sequent? What is the semantic characteristic of the corresponding
translated formula in a truth-table system?
1) p ⇒ p RA
2) p ⇒ p ∨ ~ p ___ 1
3) ⇒ p ∨ ~ p, ~ p
4) ⇒ ~ p, p ∨ ~ p ___ 3
5) ⇒ p ∨ ~ p, p ∨ ~ p ∨R ____
6) ⇒ p ∨ ~ p ___ 5
a. What rule was used to derive the final line in the derivation?
b. Recall that in the alternative logic we have briefly studied, which is called intu-
itionistic logic, this should not be a derivable sequent: what does this tell you
about what rule may be unavailable in a sequent derivation system for intuition-
istic logic?
even if the objects are themselves abstract and rather removed from ordinary intuitions. Although the availability of a narrative may seem, at first glance, an impressive accommodation for an appeal to intuitions, the point is rather that semantic
approaches work with assignments of meanings rather than with procedural manip-
ulations that are managed by means of applications of rules. The true and the false –
the two truth values of the standard logic – are to be thought of as the referents or
denotata of the meanings of sentences that are assertoric (would be asserted cor-
rectly if true) or, as they are usually called, declarative. The broader notion, how-
ever, is that of designated value – accompanied ineluctably by the notion of
anti-designated value. This information is omitted from standard introductory textbooks; in the case of the standard – also called "classical" – logic, the omission does
not subtract anything that is needed for comprehension of the involved notions and
mechanics of the formal system. A deeper understanding of logic, however, and
especially the prospect of embarking further on the study of alternative (unorthodox, non-standard) logics, requires indispensably that we deal with the notions of
designation and anti-designation.
Briefly, a designated value can be thought of as the "winning" value for the determination of what is to be preserved when we speak of validity: recall, from everything we have been studying, that the deductive notion of validity is preservation of
truth. There are alternative views on what is to be preserved, but we bypass
such views for our present purposes. A valid argument is basically one with a valid
argument form, as we know, and a valid argument form is one in which truth is
preserved, as we have learned, in the sense that it is logically impossible to have all
true premises and a false conclusion in any instantiation of the form. Also, the con-
cept of tautology or logical truth, as we have learned by now, means that there is no
logical possibility of assigning truth values to the individual components of the
sentence so that the formula is not true – or, it is true for every logically possible
truth-value assignment (case, option, interpretation, or, indeed, also called model-
ing.) The designated value is the true in the standard logic. There may be initial
bafflement as to what else may be conceivable but there are alternative logics –
motivated even by specific considerations in various fields of inquiry – which have
more than one designated truth value. Designated truth values may be thought of as
species of being true – if the notion of more than one way of being true can be com-
prehended. If this appears puzzling, consider, to mention only one example, how, as
we have discovered, in the classical or standard logic, inconsistent premises imply
any conclusion whatsoever (since such premises can never be all true on any simultaneous truth-value assignment, and, hence, by the definition of validity, the argument has to be valid since it is impossible to have all premises true!) But it runs
against rudimentary experience that contradictions in the body of a narrative, a
theory, or a story should sanction inference to any conclusion whatsoever. One way to
develop an alternative logic that blocks such inferences utilizes more than two truth
values and such truth values have to be distributed between designated and anti-
designated. The designated can be considered as species of true and the anti-
designated as species of false.
In our tree language, cultivating a habit that would come in handy if one continues
with studies of non-standard logics, we will use the symbol "+" for designation
and "-" for anti-designation. Of course, the true is the only designated truth value
and the false is the only anti-designated truth value. It is particularly neat – and
inevitable – that the only designated and only anti-designated value of the classical
language are disequivalent with each other. This raises a claim – albeit a controversial one – that the notoriously recalcitrant sentences known as "liar-sentences" should
not be considered as receiving either one of these classical truth values. Sentences
like these (for instance, "someone who always lies admits that he lies") are true if
false and false if true: hence, they are true if and only if they are false. This has given
rise to a disturbing challenge since hoary antiquity. One
solution – not accepted by everyone – is that such sentences receive a designated
species of value which is not the classical true. Indeed, since such sentences seem
to be declarative, they would have a designated truth value, but this value should not
be considered as the classical true – for reasons intimated above. Hence, we confront here a case in which another species of true – or, we could say, another designated truth value – besides the classical true is available in the logic. This logic, of course, is
an alternative to the classical or standard bivalent logic. And, it is important to
remark, the meaning of true in this alternative language is no longer what it was in
the classical language: now, “true” means something like “only true” or “entirely
true” whereas the other designated truth value may be thought of as meaning “both
true and false but assertable.”
conjunction. This is what we are given: this is the root of our tree. We consider every
formula on every line of the tree as designated or true. Thus, we have in the root a
conjunction as true. We can go back to the truth table method to verify that a con-
junction is true if and only if both conjunct formulas are true. “If and only if” means
an “if-then” that goes in both directions. We may restate what we have just pre-
sented as follows: if a conjunction is true, then both conjuncts have to be true; and,
conversely, if the two conjuncts are both true, then the conjunction must be true too.
We list the two conjuncts vertically underneath the conjunction at the root. In other
words, we legislate that “true and true” is represented as a vertical path with the two
true formulas written the one on top of the other. We connect the root with the con-
juncts written beneath it by means of a vertical arrow. The designations, of which
we spoke in the opening section of this chapter, are placed to the right of each well-
formed formula. This is shown below.
p ∙ q +
↓
p +
q +
If we have an inclusive disjunction, which means that the main connective sym-
bol is the wedge, we know, and can check by using the truth table method, that the
whole formula is true if and only if either one or the other of the two disjuncts is
true: if ⌜p ∨ q⌝ is true then either ⌜p⌝ or ⌜q⌝ is true; and, conversely, if either ⌜p⌝ or
⌜q⌝ is true, then ⌜p ∨ q⌝ is true. To represent this we legislate that we append two
splitting branches underneath the given disjunctive formula. We can think of this as
representing one logically possible case or situation to the left and the other to the
right. This works but we cannot get into technical details to justify it. There is an
intuitive appeal to this – although it wouldn’t matter anyway, since we learn such
mechanical procedures as a matter of manipulating symbols. To connect the root
with the two splitting branches, we need a left-leaning and a right-leaning arrow.
p ∨ q +
↙  ↘
p +   q +
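The two rules just pictured can be stated as a mechanical recipe. Here is a sketch under illustrative assumptions of my own (formulas as nested tuples, a node as a (formula, sign) pair with `True` for "+"; neither the encoding nor the function name comes from the book):

```python
def expand(node):
    """Apply the ∙+ or ∨+ rule to a signed node, returning a list of
    branches, each branch a list of child nodes."""
    (op, left, right), sign = node
    if op == '.' and sign:      # designated conjunction: one vertical branch
        return [[(left, True), (right, True)]]
    if op == 'v' and sign:      # designated disjunction: two splitting branches
        return [[(left, True)], [(right, True)]]
    raise ValueError('only the two designated rules are sketched here')

print(expand((('.', 'p', 'q'), True)))   # [[('p', True), ('q', True)]]
print(expand((('v', 'p', 'q'), True)))   # [[('p', True)], [('q', True)]]
```

One branch in the returned list means a vertical extension of the path; two branches mean a left/right split.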
We can build a tree for one or more formulas. If we are checking consistency of
a set of formulas, then we will have to build the tree for all those formulas; we place
them in the root of the tree. They are all designated as true. The challenge is to see,
upon completion of the tree, if any path remains open: this means that there is a
logically possible assignment of truth values to the atomic variables such that the
formulas are all true (designated.) The open path records this logical possibility. At
least one path must be open for the verdict of consistency to be proffered. If we are
checking validity of a given argument form, we will need to build the tree for the
following formulas: the premises and the negation of the conclusion. We designate
all premises and anti-designate the conclusion. If any path remains open, this means
that there is a logically possible truth-value assignment to the atomic variables, such
that all the premises are true and the conclusion is false (this is how we have con-
structed the check.) This, however, means that there is a counterexample: the assign-
ment that makes all premises true and the conclusion false is called a counterexample,
as we have already learned in previous chapters. This means that the argument form
that is under examination is thus shown to be invalid. Indeed, it is significant that
this method generates the counterexamples, if any such are available. Another way
of thinking about the validity check by the tree method is this: an invalid argument
is one in which the premises are consistent with the negation of the conclusion. See
above how we represented the check of consistency. If at least one path remains
open in the validity check, this means that the premises (all of which we designate)
are consistent with the negation of the conclusion (which we anti-designate.) Hence,
the argument form is invalid. We can also use this tree method to check if one given
formula is a contradiction; in that case we construct the tree for that one given for-
mula. If no path remains open upon termination of the tree construction, that means
that the given formula is a logical contradiction. To check if a given formula is a
tautology or contingency, we need to do a little more; we will discuss that.
Suppose that we have two formulas, one conjunctive and one inclusive-
disjunctive. Having seen the above examples, let us discuss this rudimentary
construction.
p ∙ q +
p ∨ q +
This constitutes the root of our tree. The formulas will have to be given to you.
Our tree will show us all the logical possibilities for the given formulas to be all true.
Remember, any formula on a tree is recorded as true. An important lesson: if there
is a tilde in front of a formula, then the whole – negated – formula is true: therefore,
the formula that is negated is recorded as false. You might need to concentrate on
grasping this point and making sure that you are comfortable with it. In our
given tree above, we will work on the conjunction and on the inclusive disjunction.
Regardless of the order in which the root-formulas are given, it is a good strategy to
first do the vertical branches and subsequently apply the splitting branches. Students
often ask if this is crucial and decisive – in the sense that violating this instruction
changes the results: it does not change the results; this is only a strategic tip meant
to prevent too much cluttering with proliferating branches and multiple paths all
over. Try now to learn and retain another important instruction. When we work on a
formula, the information in this formula will have to go underneath every path that
is still available.
Here is what our tree for the formulas given above looks like.
p ∙ q +
p ∨ q +
↓
p +
q +
↙  ↘
p +   q +

p ∙ q +
p ∨ q +
↙  ↘
p +   q +
↓     ↓
p +   q +
In both cases, we have the same recording of logical possibilities. The tree shows
us the logically possible cases in which the given formulas are all true. By cases we
mean: assignments of truth values (true or false) to the ultimate individual variable
letters. Let us read this information up the paths of the tree above. The two paths,
from top to bottom, are:
<p ∙ q, p ∨ q, p, p> and <p ∙ q, p ∨ q, q, q>.
The root, and possibly other lines, are shared by more than one path. In the first
tree we constructed above, when we first operated on the conjunction, the paths were:
<p ∙ q, p ∨ q, p, q, p> and <p ∙ q, p ∨ q, p, q, q>.
In that case, both the root and <p, q> are shared by both paths. It is important to
become comfortable with this diagrammatic setup and to be able to discern the
paths of a tree.
We will now show another example of what a finished tree for a given formula
looks like. You will not yet be able to understand all of this, but look at the explanations given
in the margin to alert yourself as to what you need to learn.
We are given the formula:
~ (p ∨ ~ (q ∙ r)).
We will construct the tree for this formula, ∑↙↓↘ (~ (p ∨ ~ (q ∙ r))). But,
remember, we can construct the tree for more than one formula. The capital letter
“R” denotes a rule; the label for the rule is given by the entire metalinguistic sym-
bolic arrangement with “R” suffixed at the end. Following the label for the rule that
has been applied, within parentheses, we include the numerals labeling the lines on
which the rule has been applied. The designation symbol "plus" and the anti-designation symbol "minus" are considered as applied to the entire well-formed
formula to the left.
1. ~ (p ∨ ~ (q ∙ r)) + Root
↓
2. p ∨ ~ (q ∙ r) - ~+R (1)
↓
3. p - ∨−R (2)
4. ~ (q ∙ r) - ∨−R (2)
↓
5. q ∙ r + ~−R (4)
↓
6. q + ∙+R (5)
7. r + ∙+R (5)
For instance, reading the above: we obtained line 2 by applying the rule ~+ on
line 1. The given formula, designated, stands at the root of the tree. The formula has the
tilde as the main connective symbol: designating a tilde-formula means that we omit
the tilde and anti-designate the remaining formula: this is because negating true is
false, as we know.
Line 3 comes from application of ∨−R on line 2. This rule instructs us to produce vertical branches – two of them, indeed. Negating a disjunction yields two negated formulas which correspond to the disjuncts of the formula
that is negated: we have encountered this law of the standard logic as one of the
instances of the DeMorgan Law. Line 4 was obtained from application of the same
rule on the same line – this is the other negated disjunct. Line 5 is obtained from line
4: anti-designating negation yields a designated formula: indeed, negation of nega-
tion cancels – the double negation is eliminable, which a crucial characteristic of the
264 4 Sentential Logic Languages ∑
standard logic and if this rule to eliminate double negation is withdrawn, we gener-
ate an entirely different logic. Lines 5 and 7 are obtained from application of the
pertinent rule for designated conjunction: this is perhaps the most intuitively appeal-
ing and straightforward rule; if the conjunction is designated – true – then it is logi-
cally necessary that both conjuncts are designated – true. The only assignment of
truth values to the conjuncts that yields true for the conjunction is the assignment of
true to both the conjuncts. This rule tends to persist throughout a gamut of alterna-
tive logics: it seems, in other words, that conjunction tends to be a rather classically
behaving connective.
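Reading the single open path of the completed tree, the recorded assignment is p false, q true, r true. A one-line check in Python (an illustrative verification of my own, not part of the book's method) confirms that this assignment makes the given formula ~(p ∨ ~(q ∙ r)) true:

```python
# The open path records: p anti-designated (false), q and r designated (true).
# Verify the assignment against the original formula ~(p ∨ ~(q ∙ r)).
p, q, r = False, True, True
print(not (p or not (q and r)))   # True: the formula is satisfied
```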
Beginning with the setup of our formal system, we have first to specify the struc-
tural arrangements we make for this tree procedure. We have already said that we
put the given formula or formulas at the top and in what is considered to be the root
of the tree. The formulas are designated at the root; an exception is the conclusion
formula, if we are checking an argument form: the conclusion formula must be anti-designated. Also, to check if a given formula is a tautology, we anti-designate it;
remembering that a contradiction is the negation of a tautology, we anti-designate to
check for the status of tautology and designate to check for the status of contradiction. This requires attention and reflection to grasp the mechanics and the justification. The closure of all paths of the tree for a given formula means that the formula
is a contradiction: no logically possible assignment of truth values is available for
making this formula true (hence the designation at the root). Other ways of saying
this are: the formula is not satisfiable since all the paths of its tree are closed (and
closure is generated when some individual variable receives both designated and
anti-designated value on the path.)
In addition to structural instructions about how to commence, with construction
of the root, we move next to structural instructions about how to proceed: we have
said that we proceed by applying connectives rules, which we will have to learn. We
need rules for designated and for anti-designated connectives. We also accept that
some rules are vertical – generating branches downwards or vertically – and other
rules are horizontal or splitting – generating a branch to the left and a branch to
the right.
Finally, we need to know when the tree process is considered to have come to an
end. Here is the Termination Instruction: a tree is considered to be finished (ended,
completed, terminated, saturated) only when we have no more connectives rules to
apply and the formulas we have at the terminal lines are individual or atomic variables (letters, variable letters) that are either designated or anti-designated. Looking
at the tree above, it is indeed completed: we happen to have only one vertical path
(because the rules we had to apply were all vertical) and we have ended with desig-
nated and anti-designated letters; moreover, we have no more designated or anti-
designated connectives rules to apply. Therefore, the tree has been completed or
terminated or finished. The single path of the tree is open: no letter is both desig-
nated and anti-designated. Hence, the tree checks or verifies the given formula as
being satisfiable. It is not a contradiction. We don’t infer, though, that it is a tautol-
ogy. For that check, we would have to anti-designate the formula and construct the
tree for that anti-designated formula: if it remains open, that verifies that the
negation of the given formula is not a contradiction either: hence, the formula is
what we have called a logical contingency (logically contingent formula, logically
indeterminate, or logically indefinite formula.)
An important structural instruction has to do with determining if a path of a completed tree is open or closed. A path is closed if and only if a designated letter and the
same letter anti-designated appear on the path. A path is open if and only if it is not
closed. A tree is closed if and only if all its paths are closed. A tree is open if and
only if at least one path is open.
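These definitions translate directly into code. A sketch (paths as lists of (letter, sign) pairs with `True` for designated; an illustrative encoding of my own, not the book's):

```python
def path_closed(path):
    """A path is closed iff some letter occurs on it both designated (True)
    and anti-designated (False)."""
    return any((letter, True) in path and (letter, False) in path
               for letter, _ in path)

def tree_closed(paths):
    """A tree is closed iff all of its paths are closed."""
    return all(path_closed(p) for p in paths)

def tree_open(paths):
    """A tree is open iff at least one path is open."""
    return any(not path_closed(p) for p in paths)

print(path_closed([('p', True), ('q', True), ('p', False)]))   # True
print(path_closed([('p', True), ('q', False)]))                # False
```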
Now we can continue with the designated connectives and anti-designated con-
nectives rules. These rules of the system are to be thought of as recipes that instruct
you how to proceed. We are providing the rules in a “metalanguage”, not in our
officially adopted formal language. Hence, we are using variable symbols that are
not in our formal language. Those variable symbols (a box and a diamond) stand for
any formula, possibly a complex or compound formula. We need to be careful about
how substitutions into the rules work. This is an initial obstacle that proves to be a
stumbling block for learners. When we apply the recipe given by a rule, we have to
be very careful to figure out what goes in for the box and what goes in for the diamond. Examples will follow.
Connectives Rules for ∑↙↓↘.
⊃−rule
□ ⊃ ◊ −
↓
□ +
◊ −

⊃+rule
□ ⊃ ◊ +
↙  ↘
□ −   ◊ +

≡+rule
□ ≡ ◊ +
↙  ↘
□ +   □ −
◊ +   ◊ −

≡−rule
□ ≡ ◊ −
↙  ↘
□ +   □ −
◊ −   ◊ +
The above are all the connectives-rules we have for the Tree Method. Other rules
(structural rules) for the Tree Method include: closure – when any designated indi-
vidual variable and the same variable anti-designated are on the same path of a tree
(following paths from top to bottom), then we consider that path closed. A closed
path represents a logically impossible assignment of truth values (interpretation,
valuation, option, case, or modeling.) (Obviously, this path is impossible because it
records an individual variable – for instance, p – as both true and false!) A tree that
has all its paths closed is considered to be a closed tree. We indicate closure by plac-
ing the symbol “⊠” underneath the last or terminal node of the closed path. If a tree
has at least one open (non-closed) path, it is considered to be an open or satisfied
tree. A completed open path may be indicated by placing underneath the symbol “⊕”.
Termination of a tree is accomplished only when all connectives-rules that can be
applied have been applied and we have terminal nodes of the tree that are marked
only by individual variables and negated individual variables. It can be proven that
all sentential logic trees terminate – no infinite paths in any tree. In higher logics, we
do run into the issue of possibly getting infinite trees.
The tree method can be used to determine validity/invalidity of argument forms,
consistency/inconsistency of sets of formulas, and the status of a given formula (if
it is a tautology, a contradiction, or a contingency.) The tree method, used correctly,
is guaranteed to give us the right results for the standard sentential logic.
4.7.1 Validity Test by the Tree Method.
To determine if a given argument form is valid by using the tree method:
1. We place the premises at the root of the tree as designated formulas.
2. We add the conclusion formula, anti-designated, to the premises at the root of
the tree.
3. We apply connectives rules to construct the tree for the set of formulas we have
at the root (the designated premises and the anti-designated conclusion.)
4. We complete the tree: the tree terminates.
5. We inspect the tree. If there is any open path, that path gives us a counterexample
to the argument form: the given argument form is invalid. To have a counterex-
ample means: there are assignments of truth values to the individual letters of the
given formulas, for which all the premises are true and the conclusion is false. If
we read the information up the open path, we find those truth values. Clearly, this
is a counterexample: since we made all the premises true (designated) and the
conclusion false (anti-designated), we have shown that this is possible – we have
an open path, which shows the logical possibility that all the formulas are true (and
those formulas are the premises and the negated conclusion – hence, we can possibly have all true premises and a false conclusion!) To have a counterexample is
to have an invalid argument form. We can also say that having an open path (at
least one, one or more, even one) shows that the argument form is invalid.
6. If all the paths are closed – no paths are open – we say that the tree itself is closed.
This shows that the given argument form is valid: closed tree (all paths closed!)
shows validity. To justify this: the closing of all the paths shows that there is no
logical possibility of having all the formulas at the root be true: this means
that we cannot have all the premises and the negated conclusion be true
together. Think about it: this means that there is no logical possibility that we
have all premises true and the conclusion false.
7. Summing up: open tree (at least one path open) shows invalidity; closed tree (all
paths closed, no path open) shows validity.
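The procedure in steps 1-7 can be sketched end to end. The following is an illustrative implementation under assumptions of my own (formulas as nested tuples with operators `~`, `.`, `v`, `>`; `True` for designated), not the book's formal system itself:

```python
def expand(f, sign):
    """Return branches as lists of (subformula, sign) pairs; None for letters."""
    if isinstance(f, str):
        return None
    if f[0] == '~':
        return [[(f[1], not sign)]]          # ~+ / ~- : flip the sign
    op, a, b = f
    if op == '.':                            # conjunction
        return [[(a, True), (b, True)]] if sign else [[(a, False)], [(b, False)]]
    if op == 'v':                            # inclusive disjunction
        return [[(a, True)], [(b, True)]] if sign else [[(a, False), (b, False)]]
    if op == '>':                            # material conditional
        return [[(a, False)], [(b, True)]] if sign else [[(a, True), (b, False)]]

def open_paths(nodes):
    """Expand the first compound node; recurse until only signed letters remain."""
    for i, (f, s) in enumerate(nodes):
        branches = expand(f, s)
        if branches is not None:
            rest = nodes[:i] + nodes[i + 1:]
            return [p for br in branches for p in open_paths(rest + br)]
    closed = any((f, not s) in nodes for f, s in nodes)   # closure check
    return [] if closed else [nodes]

def valid(premises, conclusion):
    """Designate the premises, anti-designate the conclusion: valid iff
    the completed tree closes (no open path)."""
    root = [(p, True) for p in premises] + [(conclusion, False)]
    return not open_paths(root)

print(valid([('>', 'p', 'q'), 'p'], 'q'))   # True  (modus ponens)
print(valid([('>', 'p', 'q'), 'q'], 'p'))   # False (affirming the consequent)
```

Each open path returned by `open_paths` is a counterexample: a list of signed letters recording an assignment that makes all the premises true and the conclusion false.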
4.7.2 Consistency Test by the Tree Method.
The consistency test by implementation of the tree method requires applying all
the relevant rules for the connectives to the point of saturation (which means that no
connective symbols are left on which rules can be applied.) Speaking of consis-
tency, and bearing in mind that this is a property of sets of formulas, we can spell
out the criteria for consistency in terms of the tree of the formulas in the set: the set
is consistent if and only if there is at least one open path in the completed (saturated)
tree. The existence of an open path means that there is an assignment of truth values
to the atomic variables of the given formulas (which label the nodes at the root of
the tree), for which all the formulas are true. Given that we continue until comple-
tion of the tree, this means that we end with the atomic variables, whether desig-
nated or anti-designated: for those truth values assigned to the occurrences of the
atomic variables (as indicated, true for designation and false for anti-designation),
the given formulas are all true. In other words, there is a logical possibility, charted
by any open path upon completion, that all the formulas in the set are true, which we
recognize as defining logical consistency in accordance with the definition of this
concept we have been using repeatedly.
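Because an open path records a satisfying assignment, the tree verdict on consistency can be cross-checked against the truth-table definition: a set is consistent iff some assignment makes every member true. A brute-force sketch (the tuple encoding and function names are illustrative assumptions of mine, not the book's):

```python
from itertools import product

def eval_f(f, val):
    """Evaluate a tuple-encoded formula under an assignment dict."""
    if isinstance(f, str):
        return val[f]
    if f[0] == '~':
        return not eval_f(f[1], val)
    op, a, b = f
    x, y = eval_f(a, val), eval_f(b, val)
    return {'.': x and y, 'v': x or y, '>': (not x) or y, '=': x == y}[op]

def letters(f):
    return {f} if isinstance(f, str) else set().union(*(letters(s) for s in f[1:]))

def consistent(formulas):
    """True iff some truth-value assignment makes all the formulas true --
    the semantic fact that an open path of the completed tree records."""
    atoms = sorted(set().union(*(letters(f) for f in formulas)))
    return any(all(eval_f(f, dict(zip(atoms, vs))) for f in formulas)
               for vs in product([True, False], repeat=len(atoms)))

print(consistent([('>', 'p', 'q'), 'p', ('~', 'q')]))   # False (forces q and ~q)
print(consistent([('v', 'p', 'q'), ('~', 'p')]))        # True  (p false, q true)
```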
4.7.3 Status Test by the Tree Method.
By logical status we mean whether a given formula is a tautology, a contradiction, or a
contingency.
1. We complete the tree for the given formula, which is at the root of the tree
and designated if we are checking for the status of contradiction; or anti-
designated if we are checking for the status of tautologousness. A status
test is always for just one given formula.
2. If, upon termination of the tree for a designated formula, there is at least
one open path (if the tree is not closed), we cannot determine if the for-
mula is a tautology or a contingency. But we know that it is not a
contradiction.
3. If the tree is closed for a designated formula (all the paths are closed, no
path is open), then we determine that the given formula is a
contradiction.
We have to figure out, then, how to check for the status of being a tautology or a
logical contingency. Let’s lay out the mechanism for checking if the given formula
is a tautology.
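The status checks can also be mirrored by brute force over the truth table: a formula is a tautology if true on every row, a contradiction if true on none, and a contingency otherwise. An illustrative sketch (the encoding and names are my own assumptions, not the book's mechanism):

```python
from itertools import product

def ev(f, v):
    """Evaluate a tuple-encoded formula under an assignment dict."""
    if isinstance(f, str):
        return v[f]
    if f[0] == '~':
        return not ev(f[1], v)
    op, a, b = f
    x, y = ev(a, v), ev(b, v)
    return {'.': x and y, 'v': x or y, '>': (not x) or y, '=': x == y}[op]

def atoms(f):
    return {f} if isinstance(f, str) else set().union(*(atoms(s) for s in f[1:]))

def status(f):
    """'tautology' if true on every row, 'contradiction' if true on none,
    'contingency' otherwise -- what the two tree checks jointly decide."""
    ats = sorted(atoms(f))
    rows = [ev(f, dict(zip(ats, vs)))
            for vs in product([True, False], repeat=len(ats))]
    return ('tautology' if all(rows)
            else 'contradiction' if not any(rows)
            else 'contingency')

print(status(('v', 'p', ('~', 'p'))))   # tautology
print(status(('.', 'p', ('~', 'p'))))   # contradiction
print(status('p'))                      # contingency
```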
2. Determine if the following argument form is valid or invalid. Draw the tree on
the back of the page, as was done in the preceding example.
~ (p ∨ q), ~ p ⊃ s /.. s ∨ t
3. Finish the tree below and determine if the given set of sentential formulas is
consistent. Also write on the margin the explanation where the question mark is
placed. [A set of sentential formulas is consistent if and only if their tree is not
closed – or has at least one open path.]
1. ~ (p ⊃ q)
2. p ≡ ~ q
3. ~ q
↓ ⊃rule applied to line 1
4. p
5. ~ q
↙ ↘?
4. Which of the tree paths are closed? Place a “⊠” symbol underneath each closed
path – where the question marks are placed.
1. ~ ~ p
2. ~ ~ ~ ~ p
3. ~ (p ∨ (p ⊃ p)) ∨ ((p ⊃ p) ⊃ ~ p)
↓
4. p
↓
5. ~ ~ p
↓
6. p
↙ ↘
7. ~ (p ⊃ (p ⊃ p)) 7a. ((p ⊃ p) ⊃ ~ p)
↓ ↙ ↘
8. p 8a. ~ (p ⊃ p) 8b. ~ p
~ (p ⊃ p) ↓?
↓
9. p 9a. p
10. ~ p 10a. ~ p
??
5. Determine whether the following well-formed formulas of sentential logic are tautologies, contradictions or contingencies.
a. ⊩∑↙↓↘? ~ (((p ⊃ q) ⊃ p) ⊃ p)
b. ⊩∑↙↓↘? (~ p ⊃ q) ⊃ (~ q ⊃ p)
c. ⊩∑↙↓↘? ((p ⊃ q) ∨ (q ⊃ p)) ≡ ~ ((p ∨ q) ≡ ~ (~ p · ~ q))
d. ⊩∑↙↓↘? ((p ≡ q) ≡ r) ≡ (~ p ≡ (~ q ≡ r))
e. ⊩∑↙↓↘? (p ⊃ (q ⊃ r)) ⊃ (~ p ⊃ (r ⊃ ~ q))
f. ⊩∑↙↓↘? (p ⊃ (p ∨ q)) ⊃ (p ⊃ (~ q ⊃ p))
g. ⊩∑↙↓↘? (p ⊃ ~ q) ⊃ ~ (p ⊃ ~ ~ q)
h. ⊩∑↙↓↘? ((p · q) ⊃ r) ≡ ~ (q ⊃ (p ⊃ r))
i. ⊩∑↙↓↘? ~ (p ≡ q) ≡ ((p ∨ q) · (p ⊃ q))
6. Determine if the following argument forms are valid or invalid.
a. (p ⊃ q) ⊃ p ⊩∑↙↓↘? p
b. (p ⊃ q) ⊃ r, q ⊩∑↙↓↘? r
c. (p ⊃ q) ⊃ r⊩∑↙↓↘? (q ⊃ r) ∨ (r ⊃ (q ⊃ r))
d. (p ⊃ q) ⊃ q ⊩∑↙↓↘? p ∨ ((p ⊃ q) ⊃ p)
e. p ⊃ (p ⊃ (q ⊃ p)) ⊩∑↙↓↘? q ⊃ (q ⊃ (~ q ⊃ p))
f. p ≡ q, p ≡ ~ q ⊩∑↙↓↘? ~ (p ≡ q)
g. ~ (p ≡ q), ~ (q ≡ r) ⊩∑↙↓↘? ~ (p ≡ r)
h. p ⊃ q, ~ q ⊃ r ⊩∑↙↓↘? (p · r) ⊃ q
i. p ⊃ (q ∨ r), ~ q ⊩∑↙↓↘? (p ⊃ r) ∨ (p ⊃ q)
7. Explain how we can use the tree system as a decision procedure for determining
whether a given set of formulas is consistent or inconsistent, and then apply it to the
following given sets of formulas. Supply systematic truth-value assignments to
the atomic variables of the formulas, which satisfy all the formulas, for the sets
that are determined to be consistent.
a. ⊩∑↙↓↘? {p ⊃ r, p, q, ~ r}
b. ⊩∑↙↓↘? {p ⊃ (~ q ⊃ ~ p), (p ⊃ ~ q) ∨ (p ⊃ q)}
c. ⊩∑↙↓↘? {p ⊃ q, ~ p ⊃ q, q ⊃ ~ p, q ⊃ p}
d. ⊩∑↙↓↘? {(p · ~ q) ∨ (~ p · q) ∨ (q · ~ r) ∨ (~q · r) ∨ (p · ~ r) ∨ (~ p · r),
(p ∨ q ∨ r) · (~ p ∨ ~ q ∨ ~ r)}
e. ⊩∑↙↓↘? {~ (p ⊃ (q ⊃ ~ p)), q ⊃ (~ q ⊃ p), ~ p ⊃ (p ⊃ q)}
f. ⊩∑↙↓↘? {u ⊃ (t ⊃ (~ u ⊃ ~ t)), u ≡ (t · u), (t ∨ (u ⊃ t)) ⊃ (~ u ≡ t)}
g. ⊩∑↙↓↘? {((p ≡ p) ∨ r) ⊃ s, (p ≡ q) ∨ r, ~ s}
8. Construct proofs in ∑|| for the exercises in 4.3.e and 4.4.2.e.
For instance, translating into a formal language for sentential logic will not yield
a symbolic characterization of logically important internal parts of sentences: the
translation can only be of, as we call them, unanalyzed sentences. For instance,
“Socrates is wise” can only be translated by some atomic (individual, single) vari-
able (which, as stipulated by our grammar, is any capital letter possibly with a sub-
script from the positive integers.) If the validity of an argument form depends on the
role played by internal parts of sentences, our translation will not allow us to discern
this: from "Socrates is wise" we ought to be able validly to infer the conclusion
"at least one person is wise," but the argument form symbolized in sentential logic
is as below (with the key "S" for "Socrates is wise" and "A" for "at least one person
is wise"), and this is not a valid argument form since it admits interpretations
with a true premise and a false conclusion:
S /.. A
The translation is undertaken, accordingly, bearing in mind that its effectiveness
is directly dependent on the available stock of symbolic resources in the formal
language. Insofar as our purposes are proportioned to this availability (by restricting
ourselves to the analysis of logical characteristics that can be fully assessed within
sentential logic), then we reap the benefits of unambiguous and perspicuous
symbolization.
We will carry out translations into our metalanguage ℳ(∑), which means that
we have available to us the symbolic resources of ∑ in addition to fitting fragments
of English. We use capital letters, possibly with subscripts from the positive inte-
gers, to translate the meanings of declarative assertoric sentences of English; we
have symbols for connectives and parentheses as auxiliary symbols that must be
used for ensuring that no ambiguous reading of the formula is possible and may not
be used when no such risk exists. A treacherous enterprise is to determine what parts
of the logical structure of a sentence are truth-functional: such elements match
symbolic resources available within sentential logic, while non-truth-functional
elements cannot be translated; the situation is as if the formalism we have in
sentential logic cannot “see” such parts. Indeed, we can make some rough surmises
as to how our connectives can be matched with logic-words and logic-phrases in
language (like “not,” “either-or,” “if-then,” and “if and only if”), but we should
be able to translate any truth-functional linguistic item. To do this, we
bear in mind:
1. A connective symbol may be correctly matched to more than one logical phrase
in language. All that matters is: the truth conditions under which sentences with
such logical phrases are true/false. By conditions we mean assignments of true/
false (truth values) to the atomic sentences that are combined. For instance, the
binary connective that we use for conjunction has the following truth conditions:
the complex sentence generated by conjoining two sentences (for instance, ⌜A ∙
B⌝) is true if both components (⌜A⌝ and ⌜B⌝) are true; it is false in every other
case, for every other combination of assignments of true/false to the components
(with such combinations being true/false, false/true and false/false.) Any logical
phrase in the language that shows this logical behavior can be correctly translated, and must be translated, by the symbol dot. Such logical phrases abound.
4.5 Other Natural Deduction Systems 273
For instance, “however” connects the sentence that contains this expression with
the previous sentence so that the complex of the two sentences is true only if
both are true and it is false in every other possible case. This shows that “how-
ever” behaves as a logical phrase in the same manner in which “and” behaves.
The aspect of contrast that is created rhetorically by means of using “however,”
a contrast which is not generated by using “and,” is significant but not in such a
way that it can affect the logical characteristics (at least, the truth-functional
logical characteristics) of the compound sentence.
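The dot's truth conditions, and the claim that “however” shares them, can be tabulated mechanically. A minimal sketch:

```python
from itertools import product

def conj(a, b):
    """Truth function for the dot: the compound is true only when
    both components are true ("and" and "however" share this table)."""
    return a and b

# Tabulate all four combinations: exactly one row comes out true.
for a, b in product([True, False], repeat=2):
    print(a, b, conj(a, b))
```

Any logical phrase with this table, whatever its rhetorical coloring, is correctly translated by the dot.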
2. In our sentential formal language, we may combine formulas by placing connec-
tive symbols in grammatically appropriate places in order to generate further
compound or complex formulas – and this may continue on and on. We take
advantage of this to symbolize such truth-functional logical phrases as “not
both” or “neither---nor.” There are definable truth-functional connectives in the
sentential logic that can be chosen, which directly match the logical behaviors of
the above phrases. We show below how we can define them by using the truth-
tabular method. But these logical connectives (although truth-functionally defin-
able) are not available in our formal language: hence, we must use combinations
of the symbols we do have in order to carry out the translation. We show how we
can do this.
NOT-BOTH.
Not having the symbols {|, ↓} in our stock of symbols, we have to translate “not
both” and “neither-nor” periphrastically – by using combinations of available
symbols:
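The usual periphrases are ~(p ∙ q) for “not both p and q” and ~p ∙ ~q (equivalently, ~(p ∨ q)) for “neither p nor q.” A quick tabular check, as a sketch, that these behave as the stroke and the dagger would:

```python
from itertools import product

def not_both(p, q):
    """Periphrasis for "not both": ~(p . q)."""
    return not (p and q)

def neither_nor(p, q):
    """Periphrasis for "neither-nor": ~p . ~q."""
    return (not p) and (not q)

for p, q in product([True, False], repeat=2):
    # neither-nor can equally be rendered ~(p v q): same truth conditions
    assert neither_nor(p, q) == (not (p or q))
    print(p, q, not_both(p, q), neither_nor(p, q))
```

“Not both” is false only in the top row (both true); “neither-nor” is true only in the bottom row (both false).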
Use of Parentheses
• It is crucial to use parentheses to prevent any ambiguity from arising in reading
the formula that renders the translation. Ambiguity is a common linguistic phe-
nomenon: it may be used creatively but it represents a logical pathology.
Ambiguity arises when more than one meaning can be assigned to a sentence by
the competent user of language and it is not evident which one of the meanings
is intended by the presenter of the sentence.
• Parentheses (the auxiliary symbols that are available in our formal grammar)
are to be used to prevent ambiguity. They may not be used for any other rea-
son – which means that, if no risk of ambiguity arises, parentheses may be
omitted. Some formal grammars legislate conventions for the purpose of
economizing on the use of parentheses but we will not do this here.
• Consider the formula, ⌜ A ∙ B ∨ C⌝: this has the following possible logical
meanings, which are not mutually equivalent. More precisely, the logical
forms exemplified by the two possible translations below are not logically
equivalent to each other. To be logically equivalent, two formulas must
take exactly the same value (true or false) as output for all assignments of
true/false to their component individual sentence-variables. This is not the
case in this example (when ⌜A⌝ is false and ⌜C⌝ is true, the first reading is
false while the second is true); therefore, there are two distinct logical senses
and it is not clear which one is presented. Both translations must, then, be given:
this is what we have called “disambiguation.”
• A ∙ (B ∨ C)
• (A ∙ B) ∨ C
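Logical equivalence in this sense is mechanically checkable: compare the outputs of the two formulas on every assignment. A sketch, using as an illustrative pair the two readings of the ambiguous string ⌜A ∙ B ∨ C⌝, which turn out not to be equivalent:

```python
from itertools import product

def equivalent(f, g, n):
    """Two formulas are logically equivalent iff they take the same
    truth value on every assignment to their n sentence-variables."""
    return all(f(*vals) == g(*vals)
               for vals in product([True, False], repeat=n))

# Illustrative ambiguous string A . B v C and its two readings:
read1 = lambda a, b, c: a and (b or c)   # A . (B v C)
read2 = lambda a, b, c: (a and b) or c   # (A . B) v C

# They disagree whenever A is false and C is true.
print(equivalent(read1, read2, 3))  # False
```

Since the readings disagree on some assignment, the unparenthesized string is genuinely ambiguous and both translations must be offered.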
• We don’t write ⌜~ (A) ⌝: there is no ambiguity arising from this omission.
Notably, we must also say that we consider the tilde to be “binding” more
closely than any other connective symbol. To understand what this means
consider:
• ⌜~ A ∨ B ⌝ cannot be confused with ⌜ ~ (A ∨ B)⌝ because the scope of the
tilde is always taken as the individual variable symbol that follows (unless
parentheses are used to enclose symbols and “make them like one.”)
• We may omit external or outside parentheses since no risk of ambiguity arises
from this.
• Inclusive Disjunction, Conjunction and Equivalence (Biconditional) are asso-
ciative (have the associative property.) This has consequences for how we can
liberalize our symbolic conventions. To see what we mean by associativity (or
the associative property), let us consider that the familiar addition and multipli-
cation of standard arithmetic are associative, and this means that we may omit
parentheses without confusion as to the scope of each operation symbol. Let
us consider:
• 1 + (2 + 3) = (1 + 2) + 3 = 1 + 2 + 3
• 1 ⨯ (2 ⨯ 3) = (1 ⨯ 2) ⨯ 3 = 1 ⨯ 2 ⨯ 3
• Associativity can be thought of, itself, as a parentheses-shifting license.
Omission of parentheses can then be practiced without engendering ambi-
guity. Accordingly, we may omit parentheses for formulas that have as
main symbols the wedge, the dot or the triple bar (but not the horseshoe,
because material implication is not associative!) Notice that, in our logic, the
equality symbol is replaced by the symbol for equivalence
(biconditional.)
• (A ∨ (B ∨ C)) ≡ ((A ∨ B) ∨ C) ≡ (A ∨ B ∨ C)
• (A ∙ (B ∙ C)) ≡ ((A ∙ B) ∙ C) ≡ (A ∙ B ∙ C)
• (A ≡ (B ≡ C)) ≡ ((A ≡ B) ≡ C) ≡ (A ≡ B ≡ C)
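These parenthesis-shifting licenses for the wedge, the dot and the triple bar, and the failure of the corresponding license for the horseshoe, can be verified exhaustively (the property being checked is the one standardly called associativity). A sketch:

```python
from itertools import product

def associative(op):
    """Check op(a, op(b, c)) == op(op(a, b), c) on all eight inputs."""
    return all(op(a, op(b, c)) == op(op(a, b), c)
               for a, b, c in product([True, False], repeat=3))

wedge = lambda a, b: a or b           # inclusive disjunction
dot   = lambda a, b: a and b          # conjunction
tbar  = lambda a, b: a == b           # triple bar (biconditional)
hshoe = lambda a, b: (not a) or b     # horseshoe (material implication)

print(associative(wedge), associative(dot),
      associative(tbar), associative(hshoe))
# True True True False
```

The horseshoe fails: for instance, with all three variables false, ⌜A ⊃ (B ⊃ C)⌝ comes out true while ⌜(A ⊃ B) ⊃ C⌝ comes out false, so its parentheses may never be dropped.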
• If a sentence of English, which is offered for symbolic translation, is ambigu-
ous, then we must enforce disambiguation: this means that we are compelled
to offer all possible formal translations as alternatives without committing to
any one of those renderings.
• The number of right parentheses must be equal to the number of left parenthe-
ses in any well-formed formula of ℳ(∑).
276 4 Sentential Logic Languages ∑
A: Any triangle has three angles. (TRUE; NECESSARILY TRUE)
B: The capital of the US is Washington, DC. (TRUE; NOT NECESSARILY TRUE)
The sentence “The capital of the US is Washington, DC” is not determined as true or false either by the
meanings of the logic-words in it (or by virtue of its logical form, we might also
say), or by the meanings of its non-logical words like “capital” and the names
“Washington, DC” or “US.” Intuitively, we can think of alternative histories in
which some other city had been established as the capital of the US. It is not a matter
of logical necessity that Washington, DC, should be the capital. Special attention is
needed at this point because people show a marked and instinctive attachment to
what is actual, taking it also as “necessary”; but this species of necessity is not
logical necessity, since the sentence in our example could logically be false (even
if only in an alternative story or possible world) and thus is not logically necessarily
true. We should think, for our purposes, of actuality as one among an open number
of logical possibilities.
Now we can see the consequences of this distinction for our current purpose of
discussing truth-functionality. We continue to work with the sentences we have
symbolized by “A” and “B.”
A: Any triangle has three angles. (TRUE; NECESSARILY TRUE)
B: The capital of the US is Washington, DC. (TRUE; NOT NECESSARILY TRUE)

Assuming that “necessarily” can be captured by some function whose symbol is “∟”:

A    ∟A        B    ∟B
T    T         T    F
In one case, applying the operator symbol we should expect an output “T” since
it is the case that the meaning is necessarily true while also being true; but in the
other case, while true, the meaning is not necessarily true and, so, we must place “F”
for the output of the operator symbol.
If we take the negations of these sentences: they are false but, again as before, the
negation of the (meaning of the) triangle-sentence is necessarily false whereas the
negation of the meaning of the sentence about the capital of the US is not necessar-
ily false (for the same reasons that have been given.)
If we, then, try to construct the truth-tabular definition of the presumed truth-
functional operator for logical necessity, we run into an absurd result as shown below:
p    ∟p
T    T / F ?
F    T / F ?
Under the assumption that our operator is truth-functional, there is, by definition,
the expectation that a unique output is definable for every specified input from the
set {T, F}. Hence, we run into absurdity since we need to assign both T and F! It
follows that “logically necessarily” is not truth-functional. The examples below
show why “before” is not a truth-functional binary connective.
A: Ronald Reagan is President of the US in the period ----. (TRUE)
B: Bill Clinton is President of the US for the period ---. (TRUE)

A before B: TRUE
B before A: FALSE

Assuming that “before” can be captured by some function whose symbol is “β”:

A β B = T β T = T
B β A = T β T = F
In one case, when the inputs are <T, T>, the output is <T> but in the other case
when, again, the inputs are <T, T> the output is <F>. All that matters from the truth-
functional point of view is the specification of the truth values that are inputs in the
definition of the connective; according to this, we cannot establish a unique or truth-
functionally specific output.
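Computationally put, a truth-functional binary connective is a function from pairs of truth values to a truth value, so the same input pair can never yield two different outputs. A sketch of this check against the “before” data above (the encoding of the observations is illustrative):

```python
# Observed rows for "before": (input pair of truth values) -> output.
observed = [
    ((True, True), True),   # A before B: both inputs true, compound true
    ((True, True), False),  # B before A: both inputs true, compound false
]

def truth_functional(rows):
    """A connective is truth-functional iff every input pair
    determines a unique output."""
    table = {}
    for inputs, output in rows:
        if inputs in table and table[inputs] != output:
            return False    # same inputs, conflicting outputs
        table[inputs] = output
    return True

print(truth_functional(observed))  # False
```

The pair <T, T> would have to be mapped to both T and F, so no truth function for “before” exists.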
• It is interesting that, if we attempt to translate what seem like negated sentences,
with the negation inside the scope of a non-truth-functional logic-word, we
should not treat this as a negation after all. For instance, an example like “it is
necessarily not the case that a triangle have two angles” presents a sentence with
the “not” logical-particle inside the scope of “necessarily,” which is not a truth-
functional word, as we have explained. The meaning of this sentence is not logi-
cally equivalent with “it is not necessarily the case that a triangle have two angles”;
it is logically equivalent with the meaning of “it is not possible that a triangle
have two angles,” with “possibly” also being a non-truth-functional logical
word. We must, then, translate by using a variable letter, not a negated
variable letter.
• We cannot give an exhaustive list of non-truth-functional words but we may
indicate some characteristic cases in a tentative list:
(continued)
• It is not morally permissible to torture animals: ~ <T>m; translated as ~P.
(Here we have the option of “seeing” the negation, because the negation is
not within the scope of a NTF operator.)
• It is probable that you draw an ace: 𝑝A; translated as A.
• You might draw an ace: 𝑝A; translated as A.
• The earth shook before the building fell: E β F; translated as B.
• The building fell after the dog howled: F 𝛼 D; translated as A.
• Sadly, we had to leave: 𝜍⫪L; translated as S.
• It is physically impossible for a body to move with a speed that exceeds the
speed-of-light constant: ~ ◊pM; translated as I or as ~P. (We have the option
of translating with one sentence or by negating the symbol for the sentence
“a body moves with ….” Since we need to specify in our key what the negated
sentence is (notice that “any” changes to “some”), we might be compelled to
use the single-sentence translation.)
This is an appropriate juncture for approaching a subject that is vital for translat-
ing into a formal language like ∑. We need to be able to recognize whether we
are expressing through our translation a simple (individual, single, atomic) state-
ment or a complex (compound) one. Based on what we have already presented,
a meaning expressed by a sentence could count as compound if its non-truth-
functional parts were taken into account; but, if we have to ignore the non-truth-
functional items because we lack the symbolic resources, then we may well be
translating a simple statement. Recognizing whether a meaning expressed by an
English sentence is, for our purposes, simple or compound is a task that is, clearly,
indispensable and is part of the activity of symbolic translation. Here are examples.
There are two different senses carried by the single expression “either-or” in most
languages: there is the inclusive disjunction sense (for which at least one of the
disjoined sentences must be true, and both may be true too) and the exclusive dis-
junction sense (for which one of the disjoined sentences must be true but not
both can be true; a choice is demanded, or exactly one of the disjoined sentences
is true.) Context in linguistic presentations may remove ambiguity as to which
sense – inclusive or exclusive – is intended. We show schemata for translations and,
to rouse interest, schemata for logically equivalent expressions (although, it should
be noted, translations should track the logical form of the stated meaning as closely
as possible.)
• Inclusive: p ∨ q, ~ (~ p ∙ ~ q), ~ p ⊃ q, ~ q ⊃ p
• Exclusive: (p ∨ q) ∙ ~ (p ∙ q), (~ p ∙ q) ∨ (p ∙ ~ q), (p ∨ q) ∙ (~ p ∨ ~ q), ~ (p ≡ q),
~ ((p ⊃ q) ∙ (q ⊃ p))
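That the schemata within each group are pairwise logically equivalent can be confirmed by exhaustive tabulation. A sketch (the lambda encodings of the schemata are our own illustrative device):

```python
from itertools import product

imp = lambda a, b: (not a) or b   # horseshoe (material implication)

inclusive = [
    lambda p, q: p or q,                          # p v q
    lambda p, q: not ((not p) and (not q)),       # ~(~p . ~q)
    lambda p, q: imp(not p, q),                   # ~p > q
    lambda p, q: imp(not q, p),                   # ~q > p
]
exclusive = [
    lambda p, q: (p or q) and not (p and q),          # (p v q) . ~(p . q)
    lambda p, q: ((not p) and q) or (p and (not q)),  # (~p . q) v (p . ~q)
    lambda p, q: (p or q) and ((not p) or (not q)),   # (p v q) . (~p v ~q)
    lambda p, q: not (p == q),                        # ~(p ≡ q)
    lambda p, q: not (imp(p, q) and imp(q, p)),       # ~((p > q) . (q > p))
]

# Every schema in a group must agree with the first on all four rows.
for group in (inclusive, exclusive):
    for f in group[1:]:
        assert all(f(p, q) == group[0](p, q)
                   for p, q in product([True, False], repeat=2))
print("all schemata in each group agree")
```

The two groups differ from each other only on the row where both components are true: the inclusive schemata come out true there, the exclusive ones false.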
Observations Regarding the Horseshoe Symbol
• Our symbolic resource for material implication, the horseshoe, is not satisfactory
for translating the “if-then” of language in its usual senses but it should be con-
sidered adequate insofar as our purposes for using our formal instrument are
clearly circumscribed.
• A⊃B
• ⌜ A⌝ occupies the antecedent position and ⌜B ⌝ occupies the consequent posi-
tion: we need to become accustomed to using “antecedent” and “consequent”
respectively for the formulas to the left and right of the horseshoe symbol.
Exercises
Key:
--A: Aston comes to the party.
--B: Brenda comes to the party.
--C: Clara comes to the party.
--D: Dwayne comes to the party.
--E: Electra comes to the party.
a. A∙B∙~C∙D∙~E
b. ((A ∨ B) ∙ ~ (A ∙ B)) ⊃ ((C ∨ E) ∙ ~ (C ∙ E))
c. (A ∙ B ∙ C ∙ D ∙ E) ≡ ~ (~ A ∨ ~ B ∨ ~ C ∨ ~ D ∨ ~ E)
d. (A ∙ ~ C) ⊃ (B ≡ (D ∙ E))
e. (A ⊃ ~ B) ∙ (B ⊃ ~ C) ∙ (C ⊃ ~ A)
f. (D ∨ E) ⊃ ((~ A ∙ ~ B) ∙ (C ⊃ ~ E))
g. ~ (A ∨ B) ∨ ~ (C ∙ E)
h. (A ∙ (C ⊃ E)) ⊃ (~ B ⊃ ~ D)
i. (D ⊃ ~ B) ∙ (A ⊃ ~ C)
j. ~ (A ∙ E) ∙ ((B ∨ C) ∙ ~ (B ∙ C)) ∙ (D ⊃ A)
k. (~ C ∙ ~ E) ⊃ ~ (A ∙ B)
l. ~ D ⊃ (A ⊃ (B ⊃ C))
4. Provide translations into our symbolic language ∑. Specify your own key or
non-logical lexicon (symbols, from capital letters, for the statements.)
a. Build it and they will come.
b. The only option for avoiding bankruptcy so that we can overcome our present
difficulties is to adopt exactly one of the options of either borrowing or laying
off personnel.
c. The only option for avoiding bankruptcy and overcoming our present diffi-
culties is to adopt exactly one of the options of either borrowing or laying off
personnel.
d. Even if we fail, we don’t fail.
e. Cline and Slane are friends [of each other].
f. Cline is Slane’s friend but Slane has no friends.
g. If you lose the queen but you don’t lose both knights, you don’t have to forfeit
the game.
h. If premise 1 and premise 2 are both true but the conclusion can be false, then
the argument is not valid.
i. To go to work, you must take route A after you have taken route B.
j. Slack is unhappy only if he is contradicted.
5. Provide translations into our symbolic language ∑ by using the provided inter-
pretative key. It is understood that the meanings are stated for relevantly speci-
fied locations and time-periods: the statements are not affected, in other words,
by variations in contexts. Note also that “it rains” and “it is raining” are trans-
lated by the same letter because the translated meaning is: “it rains at---- and
during___.” Pay attention to the differences in meaning between “if-then,” “---
only if___” and “--- if and only if___.” Also pay attention to the placement of
“if”: what follows “if” is the antecedent no matter where in the English sentence
the “if” is found; but what follows “only if” is the consequent of the conditional
(implicative) statement. Some sentences may be expressing truth-functionally
simple statements: in such a case, use capital letter “X.”
• It rains: R
• The game is canceled: C
• The rules of the game dictate cancellation: D
• The ball floats in rain-water: F
• We get to go home: H
• Our team wins: W
a. If the ball floats in rain-water, then the game ought to be canceled.
b. If it rains but the ball does not float in rain-water, then the rules of the game
do not dictate cancellation.
c. The game was cancelled after it rained.
d. If it does not rain, then the game is not canceled.
e. The game is canceled if and only if it rains.
f. The ball does not float in rain-water but it is raining.
g. We get to go home only if the game is canceled.
h. Unless the game is canceled, we don’t get to go home. (If the game is not
canceled, then we don’t get to go home.)
i. Our team wins only if the game is not canceled.
j. Our team wins if the rules of the game do not dictate cancellation.
k. Raining is not a sufficient condition for game cancellation or for going home.
(It is not the case that (if it is raining, then (the game is canceled or we get to
go home.)))
Chapter 5
Formal Predicate Logic (also called
First-Order Logic) ∏
The construction of predicate logic systems constitutes a dramatic advance over the
categorical logic that was available for millennia, based on Aristotle’s seminal
achievement. Modern predicate logic solves the ancient problem of the expressibility,
and consequent investigation, of relations. The n-ary predicate symbols of modern pred-
consequent investigation, of relations. The n-ary predicate symbols of modern pred-
icate logic allow formalization of, correspondingly, n-ary relations – a feat that had
eluded ancient logicians. It is relevant that modern predicate logic bypasses meta-
physical considerations that bewildered thinkers and students of logic and were
deemed fundamental challenges that confronted even the systematic arrangement of
a logical apparatus. Of course, the mathematical amenities that are available to the
modern logician eluded the pioneers and practitioners of premodern logic. The
modern predicate logic treats, as it ought to do, predicate symbols as non-logical
symbols (also called, especially in older texts, non-logical constants.) This is appro-
priate: considering the formal character of deductive logic – its aloofness from
content-related meaning and its dependence on formal structures – there ought to be
no dependence on the meanings of predicates which can be attributed to objects
arbitrarily and variably – and, thus, present as liable to the type of discovery and
verification that characterize empirical endeavors. Let it be understood that a logical
predicate – not to be confused with the grammatical term – corresponds to verb-
phrases of language. For instance, consider the following meaningful sentence of
English: “Schmuck runs and laughs.” Even though the grammatical predicate is
complex, the sentence is grammatically simple; nevertheless, from a logical
standpoint, the meaning of the sentence is compound because the abstract
conditions under which it is true require that both
“Schmuck runs” is true and “Schmuck laughs” is true. Thus, the logic-word “and”,
which is modeled by our familiar conjunction connective of sentential logic, does
indeed carve the sentence so that we have a compound sentence. Indeed, logic-
words are invariable – which is essential for the machinery that allows us to extract
the logical form of a given sentence (or, more correctly, of its declarative meaning.)
The “and” ought to be fixed in the logical form of the sentence: the predicates are
To remind ourselves of the terms we use: we have atomic (also called simple and
individual) variables (or variable symbols or letters); we have symbols for the con-
nectives in our formal language; and we have parentheses to be used as auxiliary
symbols only for the purpose of preventing and removing ambiguity and for no
other reason.
The regulations we have laid down for how to form strings of symbols are still in
force. Thus, we have:
• If a symbolic expression φ is well-formed (a well-formed formula or wff), then
• ~ φ is also a wff.
• If symbolic expressions φ and ψ are wffs so are: φ ∙ ψ, φ ∨ ψ, φ ⊃ ψ, φ ≡ ψ.
• Parentheses are used only to prevent and remove ambiguity.
• Nothing else is admitted in ∑ as well-formed; any symbolic string not observing
the above grammatical regulations is not a wff of ∑ and cannot be read.
Some texts do not retain the individual variables of sentential logic. After all,
since we will now have symbols that allow us to express internal parts of sentences,
we may not need to symbolize a sentence as a whole. For our purposes, this is not
important and we might as well retain the atomic sentential variables.
Now we are ready to add symbols for ∏μ. Subsequently, in section 5.3, we
expand to the full idiom ∏ρ=. To begin with, we need symbols for two kinds of
terms, predicate letters (also called predicate constants and non-logical constants),
and two kinds of symbols for quantification that are called quantifier symbols. Our
formal idiom stands on its own but, for assisting comprehension, we will see how
these symbols can be matched with internal parts of sentences of a language like
English. As always, we are not talking about the grammatical structure of the lan-
guage but about its logical structure. We need to pay attention to every detail and not
make any assumptions based on how the English grammar itself works.
Let us take an English sentence like the one expressed in (S). We are really talk-
ing about the meaning expressed by this written sentence.
(S) John is a student.
To symbolize this sentence for the purpose of investigating its logical structure,
we need a type of symbol that allows us to symbolize names like “John.” We con-
sider such proper names to be like labels or tags that pin down the item or entity we
are referring to; we are labeling the entity or item. The meaning of a name, for us,
is simply the object or entity it refers to; nothing else. This might not accord with
other intuitions you may have about the meanings of names but it is important to
realize that logical meanings, in the basic logical systems we are studying, are con-
sidered to be the referents of the symbols – the things referred to. This condition
makes our logics extensional, as they are called. This is how we treat proper names
for formal predicate logic purposes: a name’s meaning is the thing to which the
name refers. (We take sentential symbols to refer to a truth value, True or False. We
will find out soon what referents we assign for predicate symbols. The variables of
predicate logic do not have referents in the system we are constructing.) We will use
small letters from the English alphabet for names. This is one kind of term we will
have but, recall from above, we need two kinds of terms. You will soon find out what
5.1 Grammar of our Formal Language of Predicate Logic: ∏ 293
the other type of term is. The names will be called within our formal language indi-
vidual constants. The other kind of term – the one without referents – is called
individual variable. It might be unfortunate that we have so many different terms but
you need to focus on learning and retaining them.
It is also apparent, from our example, that we need some type of symbol to
express the property of being-a-student which is attributed to the person named
John in the sentence. This type of symbol we will call predicate constant or logical
predicate constant or predicate symbol or predicate letter or non-logical constant.
This is the time to explain that logical predicates – what our predicate letters refer
to – are not the same as the grammatical species known as “predicate.” A kind of
magic has to happen here – and this has widespread implications. We don’t have use
for the verb “to be” (“is” in the sentence we are using as an example); we make it
disappear or, better, we reveal that this verb does not play a role in the logical struc-
ture of the meaning of the sentence. This applies to the use of “to be” as a copula or
connective tissue, as in the given example. The only meaning of the verb “to be”
which we will express is the one that means identity: when we get to the grammar
of ∏ρ=, we will introduce a symbol for it and we will then explain what we mean
by “identity”.
This disappearance of “to be” is significant. Many philosophic riddles in the his-
tory of thought have depended intrusively and recurrently on what seem to be con-
fusions around meanings of the verb “to be.” Modern logic takes the view that a
sentence like the one in (S) expresses the following meaning (which does not sound
idiomatically acceptable as English but, remember, this is the logical and not the
surface-linguistic grammar we are talking about.)
(S)-LOGIC: John students.
In our stipulated grammar, we will be placing the predicate symbol before the
name: thus, it is as if we are writing “Students John.” If we think in terms
of mathematical functions, consider “Student(John)” and then proceed to make a
convenient arrangement that allows you to drop the parentheses around the
name “John.”
Thus, our logical predicates are like verbs really, and we also have to think of
those verbs as not depending on a use of “to be.” Remember this in order to retain a
firm grasp of how we scan the meanings of meaningful English sentences for
formal-logical purposes.
There is a good reason why predicate symbols are sometimes called non-logical
constants or non-logical symbols. We can explain this simply although a good deal
of ingenuity and theoretical subtleties are hiding behind it. A good way to think of
a logical concept is by taking such a concept to be invariable – we cannot make it
change – by changing the context (for instance, a story we tell or factual and empiri-
cal matters that are introduced.) All our connectives in sentential logic are logical
concepts; regardless of what narrative we might concoct, the meanings of “not” and
“and” and the rest are not affected; they remain the same exactly. But, now, let us
consider the concept of the logical predicate “student.” We could make up a narra-
tive – a possible option or possible state or a model, if you will – in which John is a
student; we could also make up another story in which the same entity named John
is not a student. Thus, in the first story “student” as a concept includes the entity
named John; but the same concept, “student,” changes, as it does not include the
entity named John in the second story. Thus, this concept, the logical predicate,
shifts along with contexts; it is not invariable; it is not a logical but a non-
logical notion.
In the reflections we just introduced, something else deserves attention: it seems
that the meaning of a logical predicate is, for formal logic, the collection or set of
things/entities that we put in it when we build a model. In one context or model, the
meaning of “student” included the person named John but in the other model it did
not. We have seen so far that names have as meanings the items/entities they refer to
within specified models; logical predicates have as meanings the collections of
things from some model, which have the indicated property. It is noticeable that the
models we are talking about have to be defined in terms of some non-empty set of
things (which we take to be in the model); this is called the domain or universe of
discourse of the model. Additionally, the model requires, for its proper construction,
a valuation that assigns names (individual constants) to all the objects in the domain
and sets of domain-objects to each predicate constant. (We will see later what hap-
pens in the case of predicates that are n-ary. The logical predicate “is-a-student” is
monadic or, as we can also call it, unary or one-place.)
As an example of a model, we have:
𝔐 =<ⅅ, ||>.
ⅅ = {○, △, ▭}.
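The model-theoretic picture can be mirrored in a small data structure: a domain plus a valuation that assigns a referent to each name and a set of domain objects (an extension) to each monadic predicate letter. The particular domain, names and extensions below are illustrative assumptions only, not the book's worked example:

```python
# A toy model M = <D, v>: a domain D and a valuation v.
domain = {"circle", "triangle", "square"}

valuation = {
    # individual constants (names) refer to objects of the domain
    "j": "circle",
    # monadic predicate letters refer to subsets of the domain
    "S": {"circle", "triangle"},   # illustrative extension of "is-a-student"
}

def satisfies_atomic(pred, name):
    """An atomic sentence such as Sj is true in the model iff the
    referent of the name belongs to the extension of the predicate."""
    return valuation[name] in valuation[pred]

print(satisfies_atomic("S", "j"))  # True
```

This is exactly the extensional treatment described above: the “meaning” of the name is its referent, and the “meaning” of the predicate is a set of domain objects.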
We still need one other type of term – besides individual constants – and two
types of quantifier. We will work in the right direction by reflecting first on
the quantifier symbols we need. We continue by using another example to assist
our thinking.
We should have symbolic resources to express a word like “all.” This type of
symbol will be one of the two quantifier symbols; this one will be called the univer-
sal quantifier symbol. This is an internal-structure logic-word in a sentence. It is not
a monadic connective; indeed, it is not a truth-functional connective of any kind. We will see that it
can be analyzed as a connective – as an “and” – but that will be something we will
do later (and it applies only if the model we are playing with has a finite number of
things in it.) As it is, we can see that “all” is an internal part that we must render in
our formal language.
Standard predicate logic only allows symbols for two types of quantifiers: the
universal, which we have already presented, and the so-called existential quantifier.
Attention is needed here because the standard predicate logic’s existential quantifier
symbol refers to “some” in a special meaning: “at least one.” In the standard predi-
cate logic, we do not have quantifier symbols for other quantifier expressions of the
language – like “a few,” “many,” “most,” “exactly five,” and so on. Other logics can
handle such expressions but not our basic predicate logic, which is also the classical
or standard predicate logic, of which our formal language ∏ is an idiom.
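Over a finite domain, the universal quantifier behaves like a long conjunction over the domain's objects, and the existential (“at least one”) quantifier like a long disjunction, which is why Python's all and any give a faithful finite-domain sketch (domain and extension are illustrative assumptions):

```python
domain = {"circle", "triangle", "square"}
student = {"circle", "triangle"}   # illustrative extension of S

# (x)Sx: every object of the domain is in the extension of S.
universal = all(obj in student for obj in domain)

# (Ex)Sx, read "at least one": some object of the domain is in S.
existential = any(obj in student for obj in domain)

print(universal, existential)  # False True
```

Quantifiers like “most” or “exactly five” would need counting over the domain, which is precisely the expressive power standard predicate logic's two quantifier symbols do not provide.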
We are only left with one more type of symbol: a symbol for a type of term
besides the other type of term we have seen already – the individual constant. The
remaining type of term is called individual variable. To understand how this works,
we need to do something you might not expect. We need to look into the conven-
tions we follow in order to construct the formal grammar of a language like ∏. You
cannot second-guess this; you will need to be open to what follows and to absorb the
constructive details step by step.
Grammatical considerations for our formal language dictate that our individual
variables are like ambiguous pronouns of a language like English. This might seem
odd: we don’t tolerate ambiguity in a formal language, and, indeed, we will not end
up having ambiguity! The reason that we start with this notion of an ambiguous
pronoun has to do with how we build up our symbolic grammar: it should all come
together in the end and you should be able to see what has happened; but you need
to follow all the details patiently. This ambiguous pronoun notion is actually our
own concoction: in a language like English, we can have an ambiguous pronoun
(“he,” “she,” “it,” with the possibility that more than one item might be referred to)

but the context could well remove ambiguity. This might not always be possible, of
course. But we would hope that it is – that we can remove the ambiguity: “he wants
an excuse not to take the exam” is disambiguated in a context when we point to a
student or when we have it from the context that some student has just acted in a
specific way, so that this must be the referent of “he.” A logician’s ambiguous pro-
noun, on the other hand, is deliberately or systematically ambiguous: it is con-
structed as such and it is doomed, so to speak, to be ambiguous. It cannot be
disambiguated. You will see why this is done.
For individual variables, the convention is to use small letters starting from “x”
and continuing to the end of the alphabet; as with all the other symbols, we
may use subscripts from the positive integers. The small letters from the alphabet
(possibly with subscripts) that are before “x” are reserved to be used as symbols for
individual constants (the names.) Now, we need to learn something important.
(S″) x is a student.
Students(John)
If you check the model we built above, with John in the domain of the model and
specification of the meaning (valuation or extension) of the logical predicate is-a-
student, you can ascertain intuitively that both the universally quantified sentence
and the existentially quantified sentence are true in this model. Reflect that we are
at liberty to construct some other model in which these sentences are not true. This
is as it ought to be. We want only logical truths (or tautologies) to be true in every
model we can possibly construct. But a sentence like “John is a student” is a contin-
gent sentence and should not come off true in every possible model we can con-
struct. (Of course, contradictions or logical falsehoods should also come out as false
in every logically possible model we can construct.)
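The model-relativity of contingent sentences can be exhibited in a small sketch (the two models below are invented for the example; a model is represented here simply as a domain plus an extension, a set, for the predicate):

```python
# Sketch: the same sentence, "John is a student", is true in one model and
# false in another -- which is what makes it contingent, not a logical truth.
model_1 = {"domain": {"John", "Mary"}, "Student": {"John", "Mary"}}
model_2 = {"domain": {"John", "Mary"}, "Student": {"Mary"}}

def is_true_in(model, name):
    # "name is a student" is true iff the named object is in the extension
    return name in model["Student"]

assert is_true_in(model_1, "John")      # true in this model ...
assert not is_true_in(model_2, "John")  # ... false in this one: contingent
```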
(S″: “x is a student”) is not a logical sentence; it is what we have called an open
sentence. All the conversions we have shown, however, produced logical sentences.
Replacing the variable ⌜x⌝ by a name produces a sentence; also quantifying either
universally or existentially produces a sentence. This grammatical approach to con-
structing our strings of symbols accounts for the unexpected convention of treating
individual variables as lacking reference (or, which is the same for us, lacking
meaning.) We start with the open sentence (no references, hence no meanings, for
the individual variables) and we replace variables by constants (names) or we quan-
tify (or we combine both) in order to generate sentences.
Let us take an example from mathematics.
This is not the way we should be expressing ourselves. Unless ⌜x⌝ and ⌜y⌝ are
specific names, referring to items, we do not have a meaningful sentence. In ordi-
nary practice, we don’t bother about the details but here is the conversion to mean-
ingful sentences we have been talking about. Indeed, it is to be presumed that the
above sentence (T) really means (T’) below.
The above comments cover the terminology of our formal language and explain
how the construction works and why we need the symbols and the syntactical-
formation arrangements we impose on those symbols. You should memorize the
names of the concepts we have introduced along with their definitions and, patiently,
you should also pay attention to and reflect on the various subtle points that have
been made about all these notions. In summary, these are the resources we need to
provide symbols for:
Notice that we have sometimes placed “if-then” in the front; this helps make a
point, but our grammar from ∑ does not allow prefix notation for binary connec-
tives; we should rather expect to see the if-then placed in between the sen-
tences that it connects. Proceed to the examples and exercises to practice this topic
before you return for the formal grammar of ∏μ. As a rule of thumb, “all F are G”
expressions are implicative or conditional: “for all x, if x is an F, then x is a G.”
Expressions of the form “some F are G” are conjunctive: “there is at least one x,
such that x is an F and x is a G.”
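The rule of thumb can be checked over any finite model. A minimal sketch (domain and extensions invented for the purpose):

```python
# "all F are G"  ~>  for every x: if Fx then Gx    (conditional under "all")
# "some F are G" ~>  for at least one x: Fx and Gx (conjunction under "some")
domain = {1, 2, 3, 4}
F = {1, 2}     # invented extension of F
G = {1, 2, 3}  # invented extension of G

all_F_are_G = all((x not in F) or (x in G) for x in domain)   # Fx implies Gx
some_F_are_G = any((x in F) and (x in G) for x in domain)     # Fx and Gx

assert all_F_are_G and some_F_are_G   # both true in this particular model
```

Note that using a conjunction under “all” or a conditional under “some” would give the wrong truth conditions, which is why the pairing in the rule of thumb matters.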
5.2 Monadic Predicate Logic: The Formal Language ∏μ 301
5.1.1 Exercises
1. Determine if the statements expressed by the given sentences below are simple or
compound. Your understanding of what we called parsing in the preceding sec-
tion is relevant to this purpose. Also realize, however, that depending on context,
parsing may be disallowed. For instance, if Gee and Lee perform as a pair, then
“Gee and Lee performed” is not equivalent in meaning to “Gee performed and
Lee performed” but it is a single statement. In our earlier study of the standard
sentential logic, we discussed truth-functionality: non-truth-functional logic-
words are not visible to the formal resources of the standard logic (the sentential
as well as its extension that is the predicate logic we are examining): conse-
quently, if the main logic-word is non-truth-functional, we have a simple state-
ment. Negations, certainly, are logically compound statements since the
truth-functional negation word is a logic-word. Some of the statements are mul-
tiply quantified, something we will examine systematically in subsequent sec-
tions; it is, however, rather straightforward to examine such statements keeping
in mind that we take a statement like “a hates b” to mean that the pair comprising
a and b, <a, b>, taken in the left-to-right direction, is a member of the set of all pairs
such that the first hates the second.
a. If everyone is corrupt, then Schmuck is corrupt.
b. If everyone comes to the party, then we need more food.
c. Myrtle and Electra are married to each other.
d. Agamemnon is the king of Nowhere.
e. It is possible that someone will land on Mars someday.
f. It is possible that someone will land on Mars someday but it is not possible
that someone now alive will land on Mars someday.
g. No one came to the party.
h. It is not the case that if the building is built then clients will come.
i. If no one is happy then someone or other must be unhappy.
j. Everyone likes someone.
2. Identify the underlined words as connectives, individual constants, variables,
predicate constants or quantifier phrases.
a. Everyone is a student.
b. Everyone is a student.
c. Either John is a student or Mary is a student.
d. Either John is a student or Mary is a student.
e. Either John is a student or Mary is a student.
3. Parse the following English sentences, so as to show the logical structure of their
meanings. Specifically, pay attention as to whether the sentences are: conjunc-
tions, implicative sentences, disjunctions, quantified sentences, simple unana-
lyzed sentences (possibly due to the presence of non-truth-functional logic-words,
which we discussed already in the context of translations into our formal idiom
302 5 Formal Predicate Logic (also called First-Order Logic) ∏
for sentential logic). Some sentences may be multiply quantified, for which we
will develop grammatical conventions in a subsequent section. A multiply quanti-
fied sentence has quantifiers within the scopes of other quantifiers (like “every-
one likes everyone,” which can be characterized in terms of logical pattern as
“for every x and every y, x likes y.”)
a. If everyone is saved, then everyone is happy.
b. Everyone who is saved is happy.
c. Either everyone is happy or no one is.
d. Because everyone is happy, everyone is saved.
e. Someone is happy since everyone is happy.
f. It is a matter of logical necessity that it cannot be that everyone is happy and
someone is not happy.
g. There is someone who is loved by everyone else.
h. There is a unique king of France who is not bald.
We start with the formal language ∏μ, which is a formal idiom of the Monadic
Predicate Logic (also called Monadic First-Order Logic.) We will subsequently
extend this to the full formal logic idiom ∏ which is the Polyadic or Relational
Predicate Logic (also called Polyadic or Relational First-Order Logic) with Identity.
We could even add symbols for functions to ∏ but we defer this for more advanced
texts. Our first and more limited idiom of predicate logic has only predicate symbols
that are monadic – hence, the title of this logic. This system has certain metalogical
characteristics that are no longer available when we rise to the relational predicate
logic system. A predicate symbol is monadic (also called unary and one-place) if it
is accompanied only by one variable or individual constant. Based on what we
recounted in the preceding section, accompanied by an unbound variable, a predi-
cate symbol fails to symbolize a meaningful sentence and constitutes what we called
an open sentence or sentential function; if it is followed by an individual constant,
it can symbolize a meaningful sentence. Let us check linguistic examples:
x is a student. [open sentence]
y is a teacher. [open sentence]
John is a teacher. [meaningful sentence]
Mary is a student. [meaningful sentence]
Every x is such that x is a student. [x-variable bound – meaningful sentence]
At least one x is such that x is a teacher. [x-variable bound – meaningful sentence]
The predicates are monadic because they are applied on one variable or constant.
It was one of the most decisive and brilliant insights in the history of logic that we
could also symbolize relations if we use predicate symbols that can take more than
one variable or constant. Once again, variables that are not bound by quantifiers
condemn the whole expression to the status of an open sentence. For example.
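By way of a Python sketch (with an invented extension): a dyadic predicate such as “hates” is interpreted by a set of ordered pairs, and the left-to-right order of the pair matters.

```python
# "a hates b" is true iff the ordered pair ("a", "b") is in the extension.
hates = {("a", "b"), ("c", "a")}   # invented extension of the "hates" relation

assert ("a", "b") in hates         # a hates b
assert ("b", "a") not in hates     # b does not hate a: direction matters
```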
CONNECTIVES

Symbol   Name of the Symbol   What the Symbol Refers to
~        Tilde                Negation
∙        Dot                  Conjunction
∨        Wedge                Inclusive Disjunction
⊃        Horseshoe            Conditional / Implication
≡        Triple Bar           Biconditional / Equivalence
(, )     Parentheses          [also called Auxiliary Symbols]

---The tilde is the only Monadic or Unary or One-Place connective symbol.
All the other connective symbols are Binary or Dyadic or Two-Place.
ℳ(∑) == ℳ(∏μ).
φ, ψ, …, φ1, …
We are now addressing our metalanguage. We incorporate the metalan-
guage of ∑. We use these variables to talk about well-formed formulas of ∑.
But, in the case of our metalanguage for our predicate idiom, ℳ(∏μ), these
metalinguistic variables no longer have to be talking about simple or com-
pound sentences. The possibility of free variables, about which we have been
talking profusely, pushes us in this direction. The metalinguistic variables we
see above refer to well-formed formulas of ∏μ. As we have already intimated,
well-formedness and sentencehood come apart in the case of predicate logic.
It may be that a well-formed formula, let’s call it φ, has one or more free vari-
ables in it. In that case, we are not talking about a sentence; just about
a well-formed formula. A standard expression we use is: “x is free in φ.”
You may have noticed, incidentally, that we don’t use the corner symbols
(“⌜” and “⌝”) when we mention metalinguistic variables. This is because such
variables are by design presented as names. That is not the case for the
symbols of our properly constructed formal language – which is the so-called
Object Language. We can show the difference by an example.
Mary is her name. [This is fine, without any quotation marks.]
“Mary” has four letters. [Here we absolutely must have the quotation
marks. Consider the nonsensical implications of “Mary has four letters” if we
are trying to say something about the writing of Mary’s name.]
Accordingly: when we refer to symbols from ∏μ, we place them within
special metalinguistic symbols that are called “corners:” “⌜” and “⌝”. When
we talk about metalinguistic symbols, we don’t need such corners.
That is because those are functioning like names already within our
metalanguage.
When we don’t use but, instead, talk about or mention symbols, we place
quotation marks around them – but for symbols from the Object Language
∏μ we use corners, as we have said.
Next, we need to present the Grammar – or, more precisely speaking the
SYNTAX of ∏μ. We retain the Syntax of ∑, which we might repeat
summarily.
Applications of the proper syntax – and nothing but applications of the
proper syntax – yield well-formed formulas (wffs) of ∏μ. We can think of the
collection, or set, of all wffs of ∏μ as the set named WFF(∏μ) and we indi-
cate, metalinguistically, that a formula belongs to that set by the symbol “∊”.
So, “φ ∊ WFF(∏μ)” means that the formula φ is a member of the set of well-
formed formulas of ∏μ; this is a long way of saying that our formula is well-
formed, is a wff, given the grammar and syntax of ∏μ.
∏μ:: SYNTAX
• p, q, …, p1, … ∊ WFF(∏μ)
• if φ ∊ WFF(∏μ), then ~ φ ∊ WFF(∏μ)
• if φ and ψ ∊ WFF(∏μ), then
• φ ∙ ψ ∊ WFF(∏μ),
• φ ∨ ψ ∊ WFF(∏μ)
• φ ⊃ ψ ∊ WFF(∏μ)
• φ ≡ ψ ∊ WFF(∏μ)
• Ax, Bx, …, A1x, … ∊ WFF(∏μ)
• Aa, Ab, …, Ba, …, A1a, … ∊ WFF(∏μ)
• ∀xφ ∊ WFF(∏μ) insofar as x is free in φ -- see below for explanation
• ∃xφ ∊ WFF(∏μ) insofar as x is free in φ -- see below for explanation
• Nothing else belongs to WFF(∏μ).
∀xFx, ∃xFx: these are wffs and they can also be interpreted as sentences.
How did we proceed from the first wff to the quantified wffs?
Consider ⌜Fx⌝ as our φ:
Now, notice that ⌜x⌝ is free in φ: it is not within the scope of any quantifier sym-
bol; it is not bound by any quantifier symbol.
Next: we bind the free variable (in this case, the only variable and, importantly,
the only free variable):
∀xFx
∃xFx
The only free variable has been bound; it has been brought within the scope of
some quantifier symbol whose variable letter is the same as the initially free variable
letter. Now we have no free variables. But, remember, even if there are free vari-
ables, the formula can still be a wff.
Let us now show more examples of binding, and also examples of non-binding
which result in wffs that cannot be interpreted as sentences. Moreover, we begin to
see failures of syntactical correctness due to violations of our syntactical stipula-
tions. A symbolic expression that is not a wff is ill-formed or not well-formed and
it cannot be scanned by the user of the formal system – it is like what gibberish is in
the context of a spoken language.
Fx ∨ ~ Fx ⇒ ∀x(Fx ∨ ~ Fx)
Fx ⊃ Fy ⇒ ∃xFx ⊃ ∃yFy
Fx ≡ Gy ⇒ ∃xFx ≡ ∀yGy
p ⊃ Fx ⇒ ∀x(p ⊃ Fx) -- we have not forbidden this quantification.
∀x(∃xFx ≡ ∀yGy) -- not a wff: ⌜x⌝ is not free in ⌜∃xFx ≡ ∀yGy⌝!
∀x∃yFx -- not a wff: ⌜y⌝ is not free in ⌜Fx⌝!
Fx ∙ Gx ⇒ ∃xFx ∙ Gx -- partial binding.
We offer a few examples.
Examples of well-formed formulas that are open sentences. Consider that even
one unbound variable renders the formula an open sentence. Of course, individual
constants do not receive binding: they are semantically understood as referring to
some discrete individual object or entity. It is a common error for beginners to take
the case of individual constants as similar to that of individual variables when it
comes to quantification: but the two cases are drastically distinct. The individual
constants stand as they are in the formula – cannot be bound – and no quantifier can
be allowed to bind individual constants. We indicate the free variables within set
brackets for each formula. If we have more than one free variable of the same
letter, then we use superscripts to mark occurrences counting from left to right.
• Fx ≡ ~ Fx == free: {x1, x2}
• Gz ∨ ∃x(Ft ⊃ Gx) == free: {z}
• (Fa ⊃ ~ ~ Fa) ⊃ (~ Fx ⊃ ~ ~ ~ Fx) == free: {x1, x2}
• ∀x(Fx ∨ ∃y(Fy ∨ Gz)) == free: {z}
• ~ Tu ∙ ∀w(K21y ⊃ ∃z(Kt ∨ (Lw ∙ Fz))) == free: {u, y}
• ∃x(Fx ∨ Rx) ⊃ ~ ∀y(Fy ∙ ~ Gx) == free: {x3}
• ~ ∀w ~ ∀z ~ ∃y(Ft ≡ ~ (Fy ∙ ~ Lw ∙ ~ ~ Gz)) ∙ Lu == free: {u}
• Fa ≡ ~ ∃x (~ Fa ⊃ (Fx ∙ Gy)) == free: {y}
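The free-variable bookkeeping illustrated in the list can be mimicked mechanically. Here is a hedged sketch in Python, using a home-made tuple encoding of formulas (not the book’s notation); it recovers, for instance, the {z} result for the formula ∀x(Fx ∨ ∃y(Fy ∨ Gz)):

```python
# Sketch: formulas as nested tuples; free_vars collects the variable letters
# that occur free, treating letters from "x" onward as variables (the book's
# convention) and earlier letters as individual constants.
def free_vars(f):
    kind = f[0]
    if kind == "atom":                      # ("atom", "F", "x")
        t = f[2]
        return {t} if t >= "x" else set()   # constants contribute nothing
    if kind == "not":                       # ("not", sub)
        return free_vars(f[1])
    if kind in ("and", "or", "if", "iff"):  # ("and", left, right)
        return free_vars(f[1]) | free_vars(f[2])
    if kind in ("all", "some"):             # ("all", "x", sub)
        return free_vars(f[2]) - {f[1]}     # the quantifier binds its variable
    raise ValueError(kind)

# ∀x(Fx ∨ ∃y(Fy ∨ Gz)) -- only z should come out free.
f = ("all", "x", ("or", ("atom", "F", "x"),
                  ("some", "y", ("or", ("atom", "F", "y"),
                                 ("atom", "G", "z")))))
assert free_vars(f) == {"z"}
```

This simplified function returns the set of variable letters, without the occurrence-counting superscripts used in the list above.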
5.3 # Standard Logic and Existential Commitment 309
The Aristotelian logic of categorical syllogisms was built on exactly four types of
first-order sentences, which have been traditionally symbolized, metalinguistically,
by the letters A, I, E, and O. We can translate these sentential schemata into our
first-order symbolic language – and, to be precise, the translation (indicated below,
metalinguistically, by “↝”) is into our enhanced metalanguage for ∏μ which has
tokens of our symbols for quantifiers, variables, constants or names, and predicate
letter symbols.
• A: All S are P; All(S, P) ↝ ∀x(Sx ⊃ Px)
• I: Some S are P; Some(S, P) ↝ ∃x(Sx ∙ Px)
• E: No S are P; E(S, P) ↝ ~ ∃x(Sx ∙ Px)
• O: Some S are non-P; O(S, P) ↝ ∃x(Sx ∙ ~ Px)
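A small sketch of how these four schemata evaluate in a model (domain and extensions invented for the example):

```python
# Evaluating A, I, E, O over a finite model where S is a non-empty subset of P.
domain = {1, 2, 3}
S = {1, 2}
P = {1, 2, 3}

A = all((x not in S) or (x in P) for x in domain)    # ∀x(Sx ⊃ Px)
I = any((x in S) and (x in P) for x in domain)       # ∃x(Sx ∙ Px)
E = not any((x in S) and (x in P) for x in domain)   # ~∃x(Sx ∙ Px)
O = any((x in S) and (x not in P) for x in domain)   # ∃x(Sx ∙ ~Px)

assert A and I and not E and not O   # in this model: A, I true; E, O false
```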
The system constructed by Aristotle assumed that every class (what we regard as a
logical predicate) has members; no empty classes are to be contemplated. Perhaps,
one would appeal to a linguistically based commonsensical assumption: if we were
told that “all of the Myrmidons are soldiers” we would consider the utterance of the
statement to be committing to the existence of at least one Myrmidon. On the other
hand, “all unicorns have horns” strikes us as pointing in the opposite direction, of
discountenancing existence of any such entities, of which, nevertheless, we want to
predicate that they have horns. The technical issue is that Aristotelian logic cannot
determine argument forms to be valid unless this existential presumption, to the
effect that there are no empty predicate classes, is posited. Our predicate logic sys-
tem, on the other hand, as an idiom in the family of modern predicate logic lan-
guages, allows for empty predicate classes. We will see that the semantics of
predicate logic uses sets – collections of items that are discrete – that interpret our
predicate symbol letters: accordingly, the predicate “student” defined strictly for
some specified domain or universe of discourse is interpreted semantically as the set
of all the members of the specified domain, which are students. An empty set is
definable, as a set with no members whatsoever. If no students exist in our specified
domain, then we say that the predicate “being-student” is interpreted in this domain
by the empty set. Notwithstanding the technical flavor of this, there is some intuitive
basis to which we may appeal in supporting this setup. It is important that our
semantics is, in this way, extensional: logical meanings are understood as given by
what they refer to. The names or constants of our language refer to objects in our
domain and the predicate letters are taken to be referring to sets as indicated above.
Aristotle did not have this semantic machinery at his disposal. The modern logic of
predicates commits us, then, to having a non-empty specified domain but possibly
empty predicate sets. Another benefit, if it can be considered as such, is that thorny
traditional issues of existence are set aside: it appears now as ill-conceived and mis-
leading to be pondering over questions of existence when it comes to transactions in
logic: notice that the copula-use of the verb “to be” disappears as unnecessary and
redundant (the logical predicate “being-student,” which we may write as “student,”
absorbs the copula, so “John is a student” can be captured by “Student(John)”). The
other, perennially challenging, use of “to be”, in the sense of “exists”, is also treated
in a specific way now that we have selected our meaning-apparatus: this may also
sound open to challenges and controversial, but what happens with our predicate
semantics is that the quantifiers, all and some, applied over a specified domain,
generate existential commitment in a straightforward way that is not open to second-
guessing. Given that the domain is populated by members (it cannot be empty, as we
said), existential privileges are conferred by fiat, as it were, to those members. This
might sound arbitrary but the idea is this: from the point of view of logic, what
exists and what does not exist are not proper subjects of inquiry; those are, rather,
empirical questions but deductive logic is not based on factual or empirical investi-
gation. We may consider, for example, a sentence like “Centaurs exist”: although
the reference is to some fantastic entity, the quest for ascertaining existence of such
entities or not is not the business of logic. This means that we may construct a
domain with members, some of which are members of the “Centaur” set. This does
not mean that the empirical question as to whether such creatures exist has been
settled in the affirmative: we simply have a narrative with Centaurs in it. But we
should not worry that this can affect our logical analysis: we can always construct a
different narrative, a different model, in which the set of Centaurs is empty. This is
as it should be since it is a factual and not a logical question as to whether Centaurs
exist. Of course, this applies to other entities, of which we are certain that they exist.
For instance, we may build a model with a domain and predicate interpretations so
that the set of humans is empty. It is not true that there are no humans but it is not
logically necessary that humans exist! It is, sadly, logically possible that there are
no humans.
Aristotle’s assumption that the predicates commit to existence, which our mod-
ern logic abandons, means that there are argument forms that are valid in the
Aristotelian system but which can check as valid for our modern predicate logic
only if we add, as extra-logical meaning-postulates, that such-and-such kinds of
things indeed exist. If we check the sentential types on which the Aristotelian logic
is based, A-sentences imply I-sentences and E-sentences imply O-sentences. This,
however, fails in modern logic – unless we stipulate for the S-predicates in the types
that they are instantiated (that there is at least one S-thing or that the S-class is not
empty.) We may say, then, that, given the way the modern logic of quantifiers (pred-
icate logic) is set up, we can show that this addition of existential commitment for
the predicates is needed to render those inferences valid.
Let us try the case of A-I implication. Assume that the A-sentence is true: all S
are P. Now we check if we can make the I-sentence false without running into logi-
cal absurdity: if that is the case, then, we have a counterexample to the A-I conver-
sion claim, which is valid for Aristotelian logic but not for the modern logical
semantics. Let us assume then that it is false that: some S are P. If this sentence is
false, then its negation must be true: not(some S are P.) The negation of “some S are
P” is logically equivalent with “all S are not-P.” Both the A sentence and the nega-
tion of the I sentence are universally quantified: this means that they are true of any
object in our domain. Let us take any chance object of our domain, and call it “t.”
We have: it is true that “if t is S then t is P” (notice that universally quantified state-
ments are indeed if-then or implicational statements: “all F are G” says indeed that
“for any x, if x is F, then x is G.”) For the negation of the I-statement, we have: “if t
is S, then t is not-P.” Let us take stock of what we have so far:
1. A: all S are P.
2. not-I: not(some S are P.)
3. not-I: all S are not-P. (2: converting not-some to all-not)
4. A – applied for any object named t: if t is S, then t is P. (1: t)
5. not-I – applied for any object, and so, for our object named t: if t is S, then t
is not-P.
6. from 5, by the classical rule we know as contraposition, and also by applying
double negation elimination: if t is P, then t is not-S.
7. from 4 and 6 by the classical rule we know as hypothetical syllogism or transitiv-
ity: if t is S, then t is not-S.
8. from 7: in standard logic this is equivalent with the following (which we can
derive by applying the rule we studied in natural deduction by the name “condi-
tional exchange” and then apply the rule we know as Idempotence): not-S.
If the A-I inference is valid, then our assumption of the A-sentence together with
the negation of the I-sentence ought to have ended in a logical absurdity. The final
line we derived states that there are no things that are S. Clearly, if this must be
false (if there can be no classes without members), we have shown the argument to
be valid. If, on the other hand, we may have empty classes, as is the case in modern
logic, then there is no logical absurdity in claiming that there are no things in the
domain that are S: the S-set is empty. We have no absurdity in that
case, and we have produced a counterexample (or, better, a counter-model) to the
argument we have been examining. We see, then, plainly, that we can bring modern
logic in alignment with Aristotelian logic, in this particular case of the validity of
the A-I conversion, only by stipulating that any chance predicate letter S, interpreted
by a set, cannot be empty.
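The counter-model can be exhibited concretely. In this sketch (an invented two-member domain), S is interpreted by the empty set, so the A-sentence is vacuously true while the I-sentence is false:

```python
# Counter-model sketch: with an empty S-set (allowed in modern semantics),
# A is vacuously true and I is false, so A does not imply I without an
# extra existential stipulation on S.
domain = {1, 2}
S = set()   # empty predicate class -- permitted in modern logic
P = {1}

A = all((x not in S) or (x in P) for x in domain)  # ∀x(Sx ⊃ Px): vacuously true
I = any((x in S) and (x in P) for x in domain)     # ∃x(Sx ∙ Px): false

assert A and not I   # A true, I false: the A-I inference fails in this model
```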
Having examined this issue regarding existential commitments, we need to notice
also that, when it comes to existential claims about existing entities, modern logic, or
at least the standard version we are presenting, makes a commitment to existence of
all the entities that are being talked about. Although predicate-sets can be empty, the
entities in the domain that are being talked about are taken to definitely exist.
Although we have not formally introduced symbols and semantic accommodations
for identity yet, (which we will do as we extend from ∏μ to ∏ρ=), we can informally
make some sense, here in our metalinguistic ruminations, of the following symbolic
expression which checks as a necessary or logical truth of the standard predicate
logic (with relational symbols and with the identity symbol added.)
∀x∃y(x = y).
We read this: for all things named x (referring to objects in our specified domain,
which we must always have as given), there is at least one thing named y such that x
and y co-refer (refer to one and the same thing in the domain.) If we think of what this
is saying, we have: anything you can name refers to some object in the domain, which
exists. Accordingly, existential claims, as we have already intimated, are simply man-
aged by quantificational claims that range over the given domain: and by placing
something in the given domain, we are committed to its existence. We will try to
prove, informally, this claim. We may also make another deep observation about our
formal language: even though it is an old and profoundly significant philosophic claim
that there is no logical predicate “exists” we will see that this can be circumvented.
Immanuel Kant criticized the famous ontological argument purporting to prove God’s
existence (due to Anselm of Canterbury) by pointing out that “exists” is not a logical
predicate; this means that when predicates are defined, whether the things that have
the predicate predicated of them exist or not does not do anything to the definition of
the concept. If you have the definition of the concept “triangle of the Euclidean geom-
etry,” for instance, then whether anything existent has this predicate predicated of it
does not matter. It turns out that, in modern relativistic physics, there are no Euclidean
triangles (which are supposed to have 0 degree of curvature) in existence in the uni-
verse. Nevertheless, your grasp of the concept “Euclidean triangle” is not affected by
this cancellation of existential claims at all. And if it turned out (very unlikely but logi-
cally possible) that Euclidean triangles exist, after all, that too would not change any-
thing for the definition of this predicate. This is what Kant had in mind in claiming
that existence is not a logical predicate. (Because Anselm’s ontological argument for
the existence of God treats existence as a predicate attaching to, and detachable from,
things, it is said to fail. Short of this flaw, the ontological argument is perplexing: it
should fail because it derives an existential, hence empirical, claim as a conclusion in
a deductive argument that uses only logical premises and definitions: hence, the exis-
tential claim about God’s existence emerges as a necessary truth if the argument is
sound – but no existential claim can be a necessary or non-empirical truth! It is hard
to tell what is wrong with this argument, however, unless Kant’s critique is accepted.)
It turns out, however, that the expression we studied above can show us a way to
define a logical existential predicate: we can say of a thing named t that it has the
existence-predicate if and only if:
∃x(x = t).
5.4 Expanding ∏μ to ∏ρ =/∏μπφ=: Polyadic (or Relational) Predicate Logic… 313
i.e. there is at least one thing named x such that x and t co-refer or refer to the
same thing: in other words, something exists such that the thing named t refers to
that thing. The standard modern predicate logic dispenses this existential privilege
to everything we have in our domain. We can remedy this in different ways but this
would generate a different, non-standard predicate logic within a family of systems
that are called collectively Free Logic languages.
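Both claims of this passage can be checked over a finite model. In this sketch (domain invented), ∀x∃y(x = y) comes out true, and the defined existence-predicate holds of every domain member while failing for a label with no referent in the domain, which is exactly the case a Free Logic would want to accommodate:

```python
# Sketch: over any non-empty domain, ∀x∃y(x = y) checks as true, and the
# "existence predicate" ∃x(x = t) holds of every named object in the domain.
domain = {"a", "b", "c"}   # invented non-empty domain

everything_is_something = all(any(x == y for y in domain) for x in domain)
assert everything_is_something     # ∀x∃y(x = y) is true in this model

def exists(t):
    return any(x == t for x in domain)   # ∃x(x = t)

assert exists("a")            # every domain member "exists" by fiat
assert not exists("pegasus")  # a label with no referent in the domain fails
```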
Let us try to prove, informally, that everything we talk about (everything we give
a name to in the domain) in the standard predicate logic exists. And, let us keep in
mind, we need to name or label all our objects in the domain. A name too is under-
stood as a labeling device here: whether one thinks that this is the right theory of
what a name is or not, a name has meaning referentially – the meaning of a name is
simply the thing that is named by the name. This is in alignment with the strictly
extensionalist character of this logic – in that this logic treats meaning as a matter of
what is being referred to or denoted, and not as a matter of what is connotatively
associated with the sentential part.
Let us assume, to see if we can reach logical absurdity, the negation of what we
are trying to prove. We see below, step by step, how we may proceed:
1. ~ ∀x∃y(x = y) negation of the existential-commitment statement
2. ∃x∀y(x ≠ y) from 1: converting the quantifiers to each other and negating
the identity: there is something named x such that nothing
in the domain has a name referring to the same thing as x:
in other words, there is some name for a non-exis-
tent thing.
3. ∀y(t ≠ y) from 2: call the thing we are talking about by the name t.
4. t ≠ t from 3: but for “all” we can be talking of any object
whatsoever: so, why not the one we named t? But now
we have reached logical absurdity: whatever we may
think of identity, it is surely logically absurd that the thing
referred to by the name t is not … identical with itself.
Having reached a logical absurdity, we must determine the claim that was negated
as being a logical truth: basically, everything in the domain exists or, more precisely
and in detail, everything we have a name for in the domain refers to an existent
object. This proof would fail to issue in logical absurdity if we did not allow passage
from a universal statement (line 3) to an instantiation (what we get in line 4 when
we claim that the thing named t can be said to exemplify the universal claim.) It
might seem, though, that this cannot be done: if we have a universal claim, then this
claim must surely apply to every object in the domain – and so, to the expression
with any name that names some object in the domain. Nevertheless, we could indeed
stipulate that, because not all things that are being talked about should be taken as
existing, we do not permit such instantiation at liberty: we could demand that only
if a thing is indeed existent can we use a name to refer to it. To move from line 3 to
line 4 we would be requiring an express statement that a thing named by t indeed
exists. Of course, this would not be a standard logic idiom – it would be some
species of what is called Free Logic. On the other hand, think what you will of this, our
standard logic has a machinery attached to it, which makes conferral of existence
automatic for everything that is being talked about. It follows, then, that, in this
standard logic, it is absurd to say of anything that it does not exist – and it is redun-
dant to say of anything that it exists. This seems to add support to the Kantian claim
that existence is not a logical predicate, but we realize that this is rather a feature of
a specific formal-logical apparatus like the one we have at our disposal in the stan-
dard predicate logic.
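In fact, the logical truth just established can be checked mechanically over small finite domains. Here is a minimal sketch in Python (the helper name is ours, not part of the formal apparatus):

```python
def forall_exists_identity(domain):
    # ∀x∃y(x = y): for every object x there is some object y identical with it.
    return all(any(x == y for y in domain) for x in domain)

# Every non-empty finite domain satisfies the statement: the witness for y is
# always x itself - which is exactly why the negation collapsed into t ≠ t.
for size in range(1, 6):
    assert forall_exists_identity(list(range(size)))
```

On the empty domain the universal claim would hold vacuously; standard predicate logic, however, presupposes a non-empty domain.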
We will now add symbols, and make appropriate arrangements for grammatical
formation, to expand our monadic logic system ∏μ to a formal idiom for what we
call Polyadic or Relational Predicate Logic (∏ρ=) with function symbols and the
identity symbol. From a metalogical point of view, this is a radical extension because
we now lose certain desirable features of the monadic logic – when it comes to
reaching effective decisions about logical properties. We do not engage in analytical
presentation of any such issues. Let us keep in mind that Predicate Logic is also
called First-Order Logic and, especially in older texts, Quantification Theory.
Showing more detail, we can designate relational or polyadic logic (with function
symbols and identity) as ∏πφ=. We concentrate on the grammar of ∏πφ=, our formal
language for Relational Predicate Logic with functions and identity; in a subsequent
section, we will learn how to construct models for predicate logic.
It was a brilliant stroke of ingenuity in the history of logic that detected an option
of managing the formal study of relations by means of logical predicates. Relations
had presented formidable challenges to metaphysical thinkers of antiquity. The
study of the logic of relations eluded Aristotle and baffled scores of astute logicians
of later periods. The insight that allows us to treat relations by means of the formal
processing of special predicate symbols can now be explained. Notice that the pred-
icate symbols we have included in our grammar so far are monadic (also called
unary and one-place) predicate symbols. A monadic predicate symbol must be
accompanied, as dictated by our grammar, by one term: this can be a variable – in
which case we have a sentential function, not a meaningful sentence representa-
tion – or it can be accompanied by an individual constant – in which case we have
a sentence that can express a meaning and, when we construct models, can be true
or false.
The following are well-formed formulas, although the first is not a sentence but
a sentential function:
Fx, Fa
5.4 Expanding ∏μ to ∏ρ =/∏μπφ=: Polyadic (or Relational) Predicate Logic… 315
The variables and the constants are called TERMS. The predicate symbol is
accompanied, then, as we can say, by one term symbol. This can be called the
input – sometimes also called the argument of the predicate. It is confusing to use
the word “argument” here and we will avoid it. We can indicate the arity or degree
of the predicate symbol by using an appropriate superscript from the set of positive integers.
Of course, monadic predicate symbols have arity or degree 1. This can be indicated
perspicuously as shown below although, to avoid clutter, it can be omitted since no
ambiguity results from such an omission. We may also use subscripts for our predi-
cate symbols, as allowed by our grammar. The subscripts are taken from the set of
positive integers.
Such formulas are formed by concatenation of the predicate symbol and variable symbols. The
formula to which quantifiers may be attached is called the matrix of the formula. As
we have pointed out, this is also a wff (well-formed formula) insofar as it is formed
correctly according to our grammatical instructions. By following this recipe for
formation, we are able to avoid what we call vacuous quantification – quantification
in which the attached quantifier binds no free occurrence of its variable in the
formula, as in ⌜∀xFa⌝ or ⌜∃x∀y(Fy ⊃ Gy)⌝.
It is to be understood that the grammar we specify for ∏πφ= and the symbolic
resources we are managing by means of this grammar are simply squiggles – speci-
fied symbolic shapes that are manipulated in a strictly specified grammatical fash-
ion. We are appealing to linguistic interpretations for illustrative purposes. When we
take to building models for (∏πφ=), as we already intimated when we caught a
glimpse of the process above, then we can formally fashion interpretations of our
grammatical symbols so that predicate symbols are interpreted by linguistic rela-
tions, constants are interpreted by names, and so on.
We still have to add function symbols, and also the identity symbol, to our gram-
mar. To arrive at the point where we can construct narratives – models – for inter-
pretations of these symbols, we have to wait for the section on semantic models.
The main issue is that we have a separation between grammatical manipulation of
symbolic resources, which is what we lay down at first, and the semantic or meaning-
based approach: in the latter case too, of course, we use the formal symbolic
resources of our formal language but we also model so that we are able, coherently,
to speak of meaningful sentences which can be true or false (within some specified
model.) For as long as we are addressing the grammatical side of our formal setup,
we speak, instead, not of meaning (or truth and falsehood) but of well-formedness
(proper or correct grammatical or syntactical formation according to our specified
formal grammar.)
• FUNCTION SYMBOLS = {f, g, h, …, f1, …}
• fx ∊ WFF(∏πφ=)
• fxy ∊ WFF(∏πφ=)
• fa ∊ WFF(∏πφ=)
• fax ∊ WFF(∏πφ=)
• fxb ∊ WFF(∏πφ=)
• fx1x2…xn ∊ WFF(∏πφ=)
• fa1x2…xn ∊ WFF(∏πφ=)
• fa1a2…xn ∊ WFF(∏πφ=)
• fa1a2…an ∊ WFF(∏πφ=)
• ⋯
We can see that function symbols have degrees or arity, like predicate symbols.
A function symbol is accompanied by constants and/or variables. We can use super-
scripts to indicate the arity of the function but this is unnecessary as we can tell the
degree of the function by counting the number of constants/variables that accom-
pany the function symbol. Those symbols of constants/variables are called inputs of
the function. We could, but we don’t, symbolize as follows:
f(x), f(a), …
You might be familiar with functional notation from Mathematics. Notice how
we can have unary functions, binary functions, and so on to n-ary functions. (We
always take the arity to be finite.) We omit from our symbolization the parentheses
to avoid clutter. We also have the option, as with predicate symbols, of using super-
scripts to indicate degree but we may as well dispense with such superscripts
because there is no ambiguity: the number of symbols accompanying the function
symbol, in well-formed formulas, shows the degree of the function symbol itself.
• unary (also called monadic, one-place, singulary) functions: fx, gy, ha, fa,
gb, gz, …
• binary (also called dyadic, two-place) functions: fxx, fxy, gzw, fab, gcc,
hxa, fby…
• n-ary (also called n-place) functions: fx1x2…xn, fa1…xn, fa1a2…xn, …
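Since the degree of a function symbol is read off from the number of accompanying term symbols, the counting can be mimicked by a toy scanner (a sketch of ours, assuming single-character symbols as in the examples above):

```python
def degree(term):
    # Toy convention: the first character is the function symbol (f, g, h)
    # and each following character is one accompanying variable or constant.
    head, inputs = term[0], term[1:]
    assert head in "fgh", "not a function symbol in our stock"
    return len(inputs)

assert degree("fx") == 1    # unary
assert degree("gzw") == 2   # binary
assert degree("fab") == 2   # binary, with two constants as inputs
```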
Considering our selection of symbols for functions, and to avoid clashes, we
must withdraw those symbols from the stock of symbols for constants. Thus, the
letters ⌜f⌝, ⌜g⌝, ⌜h⌝, and so on are reserved for function symbols and are no
longer available as individual constants.
Let us point out – and insist that this be committed to memory – that function
symbols are TERMS, like the individual constants and the variables. Like the con-
stants, and unlike the variables, function symbols that are accompanied by constants
and/or bound variables have referents: they refer, in the modeling we will be doing,
to specified objects in the domain. This means that function symbols that are accom-
panied by constants and/or bound variables can be parts of meaningful sentences. If,
however, we have any free variables accompanying the function symbol, then the
formula must be a sentential function. To obtain an intuitive grasp of how our func-
tion symbols will work in our semantic modeling, let us consider linguistic exam-
ples in which we find what we will model as functions:
• father of___ monadic function
• mother of___ monadic function
• sum of __ and --- binary or dyadic or degree-2 function
• father of John – the function has a name as input: this is a term that
refers to someone.
• The sentence “the father of John is Bill” is meaningful. This alerts us that we will
need a symbol for identity, which symbol we will be adding to our grammar
shortly. Notice that “is” in this sentence stands for the use of the verb “to be” in
the sense of identity.
• In contrast: “the father of x is y” is a sentential function. We have identity here
too but the input, and the variable with which we identify the function, are
unbound variables.
• “Someone is the father of Bill” is a sentence: we read, “there is an x, such that x
is the father of Bill.” Here the variable ⌜x⌝ is bound by the existential quantifier.
Hence, we have a sentence – not a sentential function.
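The intended behavior of these linguistic functions can be previewed with a finite lookup table; the domain and the "father of" assignments below are invented purely for illustration:

```python
# A toy domain and an interpretation of the monadic function "father of"
# as a total function on the domain (every object gets a value).
domain = {"john", "bill", "mary"}
father = {"john": "bill", "mary": "bill", "bill": "bill"}  # invented data

# "The father of John is Bill": an identity between two referring terms.
assert father["john"] == "bill"

# "Someone is the father of Bill": there is an x such that x = father of Bill.
assert any(x == father["bill"] for x in domain)
```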
Finally, we add the identity symbol. We anticipated its need by means of the
linguistic illustrations we deployed. In predicate logic, we mean by identity between
two terms that the two terms co-refer or refer to the same (identical) object in the
domain of our model. Of course, we first introduce the identity symbol syntactically
and specify its grammar, as we have been doing with our symbols. Looking ahead
to the semantic modeling, let us repeat again that identity of two terms means that
the two terms name or pick out exactly one and the same object in the domain. Thus,
identity has to be understood as a binary relation that relates two terms. You might
wonder as to why we make a special arrangement for an identity symbol instead of
treating the matter by using a dedicated binary predicate symbol. For instance, we
could be specifying our grammar as follows:
The binary predicate symbol ⌜I⌝, or any other predicate symbol accompanied by
two terms, could do the trick. Nevertheless, there is a good reason why we do not
manage identity by means of a relational predicate. Any predicate (unary or n-ary)
is a non-logical constant. We have already pointed this out and here is, again, what
we mean by this. We have said that a predicate is to be interpreted or valuated, in a
model, as referring to the set of all the domain-objects that have the property repre-
sented by the predicate. Thus, the semantics of a binary identity-predicate would
make the symbol refer, and have as meaning, the set of all pairs of things that are
identical with each other. Now, for any predicate – it being a non-logical constant –
we should always be able to build some other model in which the set of the predicate
has different members; we can make one model in which the predicate set has all the
members of the domain in it and we can make another model in which this predicate
has the empty set as its referent. For instance, take the predicate “human”: we can
make a model in which we have all humans – thus the set for “human” is the entire
domain; but we can also make – why not? – another model in which there are no
humans, so the set that is the referent for “human” in that model is the empty set (the
set without members.) This applies for the case of every model, and for any arity of
predicate. There is a world – a narrative, a model – in which Jack loves Jill and there
is another perfectly fine (coherent, consistent, logically possible) narrative or model
in which it is not true that Jack loves Jill. Thus, the binary predicate “love” can have
the pair <Jack, Jill> as member in one model but not have this pair in it in some
other model. But, when it comes to identity, we notice that something different hap-
pens. In every model, the pair of any object with itself (let’s say, the pair <≬, ≬>, for
any object ≬) has to be in the identity-set! This shows us that identity does not
behave like a non-logical constant: unlike non-logical constants, an identity set
would have to have those pairs made of objects and themselves – for all the objects.
Approached in a different way, we can say that we actually have logical truths
about identity. For instance, “everything is identical with itself” is a logical truth; it
must be, if anything is. But this is not what happens with non-logical or predicate
constants: take any predicate, or relational predicate, and you should never expect
to find a logical truth depending on the meaning of that predicate. For instance,
“Jack loves Jill” cannot be a logical truth. Even if it is true, actually true, and even
if the two members of the pair swear to eternal devotion to each other, this is not a
truth of logic! Sad to say, it is logically possible that “Jack loves Jill” is not true: an
alternative logical possibility, a model or possible narrative exists, can be con-
structed, in which it is not the case that Jack loves Jill.
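The contrast can be made vivid computationally: over any finite domain, the extension of identity is forced to be the diagonal, while a non-logical predicate like "loves" may receive any extension whatsoever. A sketch (the names are invented):

```python
from itertools import product

def identity_extension(domain):
    # In every model, identity denotes exactly the pairs <o, o>, for all o.
    return {(x, y) for x, y in product(domain, repeat=2) if x == y}

domain = {"jack", "jill"}
assert identity_extension(domain) == {("jack", "jack"), ("jill", "jill")}

# A non-logical binary predicate varies freely across models:
loves_in_one_model = {("jack", "jill")}  # Jack loves Jill here ...
loves_in_another = set()                 # ... but not here. Both are coherent.
# No such variation is possible for identity: the diagonal pairs are forced.
```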
The symbol for identity we will use is “=”. We read this as “identity” and not as
“equality.” We don’t have a notion of equality in our symbolic language. We are not
engaged in using algebraic means. Even though we might be used to calling “=” the
equality sign, it is for us in predicate logic the identity sign. We use infix notation,
which means that we place the identity sign between the related terms (the identified
terms which are related, and can be called relata.)
• λ = μ ∊ WFF(∏πφ=)
where λ and μ ∊ TERMS = {x, y, z, …, x1, …, a, b, …, a1, …, fx, fa, …, f1x, …}.
Notice that the terms include the functions. Thus, the identity symbol can relate
function symbols. To use a linguistic illustration, “the mother of John is the mother
of Mary.” An identity formula represents, semantically, a sentence if there are no
free variables related. In other cases, the formula represents a sentential function.
Thus, “someone is the father of Jill” is a sentence (but notice that we don’t know
how to symbolize this yet, although we can reflect already that the “is” in this sen-
tence has the meaning of identity: “there is an x, such that x is identical with the
father of Jill.”) Here are examples:
• a=b sentence; the constants ⌜a⌝ and ⌜b⌝ refer to the same object.
• x=y sentential function.
• x=a sentential function – free variable x.
• ∀x∀y(x = y) sentence: everything is identical with everything.
• ∃x(x = x) sentence: something is identical with itself.
• ∃x(x = b) sentence: something is (identical with) the object referred
to by b.
• ha = hb sentence: if ⌜hx⌝ stands for “mother of x” then this means that
the mother of the entity referred to by ⌜a⌝ is the same as the
mother of the entity referred to by ⌜b⌝.
• ∀x∃y(hx = hy) sentence: everyone’s mother is someone’s mother.
• ∃x(fx = j) sentence: someone’s father is John. (In better idiomatic English:
“John is someone’s father.” Strictly speaking, if we are scan-
ning our grammar without regard for English idiomaticity, we
can say: “there is at least one x, such that the father of x is iden-
tical with the person denoted or referred to by the name ‘John’.”)
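Some of the identity sentences above can be evaluated in a toy model; the domain and the mother-of table are invented for illustration (with ⌜hx⌝ read as "mother of x"):

```python
domain = {"ann", "beth", "cara"}
mother = {"ann": "cara", "beth": "cara", "cara": "ann"}  # hx, invented data

# ∃x(x = x): something is identical with itself.
assert any(x == x for x in domain)

# ha = hb, reading a as "ann" and b as "beth": they have the same mother.
assert mother["ann"] == mother["beth"]

# ∀x∃y(hx = hy): everyone's mother is someone's mother (witness: y = x).
assert all(any(mother[x] == mother[y] for y in domain) for x in domain)
```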
We can compose function symbols in many ways. Consider, matching linguistic
examples with symbolizations (retaining our key of symbolization as in the preced-
ing examples):
• The father of the father of John: ffj
• The father of the mother of Mary: fhm
• The mother of the mother of someone is Mary: ∃x(hhx = m)
• We can formalize “the grandmother of Mary” in two different ways, by using
functional symbols (adding “h1” for “grandmother”): ⌜h1m⌝ or ⌜hhm⌝
• The mother of Mary is related by the R-relation to the father of John: Rhmfj
• The mother of the father of the mother of John: hfhj
Whatever we can formalize or symbolize by use of function symbols we can also
formalize by the use of relational predicate symbols – and the other way round. This
can be done provided that there exists a unique y to which x is R-related: in that
case, and only in that case, ⌜fx = y⌝ can replace ⌜Rxy⌝. It is interesting to note what
happens to the arity and degree of the symbols. Let us attempt this. We use the trans-
lation key: “Fxy” and “Mxy” as predicate symbols for “x is the father of y” and “x
is the mother of y” respectively. We retain in this key, “fx” and “hx” as, respectively,
function symbols for “father of x” and “mother of x.” We can tell that we are dealing
either with predicate symbols or with functional symbols. The predicate letters are
capital letters and the functional symbols are small letters from the part of the alpha-
bet in the set {f, g, h, …}. We have stipulated all this in our official symbolic
grammar. The number of accompanying term symbols shows us the arity of the
predicate symbol or the degree of the functional symbol.
• Mary is the mother of John: Mjm
• Mary is the mother of John: hj = m
We can show the arity and degree by superscripts. We have indicated that we can
dispense with such symbols but we are also at liberty to introduce them for some
purpose.
• Mary is the mother of John: M2jm
• Mary is the mother of John: h1j = m
We notice that the arity of the predicate, n, is related to the degree of the function,
d, by means of the formula:
n = d + 1
We also mark the use of the identity sign, which is needed when the functional
symbol is used. We now proceed to show more examples of relational-predicate
versus functional-symbol formalization.
• There is someone who is John’s father: ∃xFxj
• There is someone who is John’s father: ∃x(fj = x)
• Mary is John’s mother: Mjm
• Mary is John’s mother: hj = m
• John has no mother: ~ ∃xMxj
• John has no mother: ~ ∃x(hj = x)
• Everyone who has a mother also has a father: ∀x(∃yMyx ⊃ ∃zFzx)
• Everyone who has a mother also has a father: ∀x(∃y(hx = y) ⊃ ∃z(fx = z))
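The interchangeability of a binary predicate symbol and a unary function symbol, given uniqueness, can be tested over a toy model (the family data are invented; ⌜Fxy⌝ reads "x is the father of y" and ⌜fx⌝ "the father of x"):

```python
domain = {"john", "sam", "tram"}
father_of = {"john": "tram", "sam": "tram", "tram": "tram"}  # fx, invented

# Build the extension of the binary predicate F from the unary function:
# the predicate has arity n = d + 1, where d is the degree of the function.
F = {(father_of[y], y) for y in domain}

# Fxy holds exactly when fy = x, for every pair of objects:
for x in domain:
    for y in domain:
        assert ((x, y) in F) == (father_of[y] == x)
```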
Our well-formed formulas can look daunting to the uninitiated and unpracticed
eye. Many texts use parentheses within which they enclose the term symbols (sym-
bols for variables, constants, functions and also the symbols for the inputs of func-
tions – all these are term symbols.) We dispense in our grammar with such
parentheses; our facility for tracking what term symbols accompany what symbols
can develop as a result of concentration and practice; we allow ourselves, according
to our grammatical arrangements, to use superscripts from the natural numbers to
show arities of predicate symbols and of function symbols. Using those superscripts
facilitates learning and practice in the grammar and recognition of the composi-
tional parts of the formulas: of course, prolonged practice should allow us to dis-
pense with the use of superscripts without running astray in keeping track of what
symbols go with what symbols in the well-formed formulas. For now, we will work
on examples patiently in order to make clear how this grammar works. It is to be
kept in mind, as well, that the grammatical accommodations and arrangements that
were laid down for the grammar of our sentential formal idiom we learned are still
in effect: after all, our predicate logic formal systems have been extending, as we
say, the sentential formal language.
5.5 Parsing Trees of Well-Formed Formulas of ∏πφ=: ℑ(∏πφ=) 323
• f3xab - the superscript indicates that the function is ternary or three-place; three
input symbols are needed for well-formedness; one is a variable symbol (this is
allowed by the grammar), and the other two are individual constant symbols;
although this formula cannot be interpreted semantically as a sentential formula,
it is still well-formed; let us concentrate on understanding why semantic inter-
pretation as a sentential formula is not available: this is because this is an open
sentence or sentential function insofar as it has a free (unbound) variable. But
now compare the following example of a well-formed formula that is also avail-
able to be semantically interpreted as a sentential formula.
• ∃xf2xa - if we take a semantic reading, we have: at least one x is f-related to the
item named ⌜a⌝. In a more concrete interpretation, if ⌜fxy⌝ stands for “x is the
father of y”, then we have: at least one person is the father of the person named
⌜a⌝. Of course, exactly one person can be the father of someone. It is also true
that at least one person (the weaker statement) is the father of someone named
insofar as the right person is indicated. We are not interested, actually, in empiri-
cal confirmations – something that we will continue to stress and explicate, espe-
cially when we turn to the working out of semantic models. If the semantic
interpretation yields a false sentence – relative to some specified context or
model – that is irrelevant for our discussion of how the grammar works.
• ∀x∀y∃zR3f2axg2ayz - the predicate symbol is ternary or three-place: three
accompanying terms are needed; one is the binary function ⌜f⌝ which is appro-
priately accompanied by two input symbols, ⌜a⌝ and ⌜x⌝, both of which are,
appropriately terms (remember, terms include variables, constants and functions
and, certainly, also inputs of functions); the second accompanying term that is
input to the predicate symbol, is another binary function symbol, ⌜g⌝, which is
itself, appropriately, accompanied by two input symbols, ⌜a⌝ and ⌜y⌝; the final
input symbol to the ternary predicate symbol is the variable ⌜z⌝. All the variables
are bound – no variable is free: hence, in terms of characterizing this formula as
an open sentence or a sentential formula – it is a sentential formula.
• P1f1x ⊃ ~ R2ag2f1aa - this is an open sentence since there are unbound variables but
it is also a well-formed formula; the main connective symbol is the horseshoe; we
can think of quantifier symbols themselves as being aligned with connectives
(assuming that we interpret our formulas semantically over finite domains); if the
largest-scope symbol is a quantifier symbol, that certainly supersedes connective
symbols; or, to put it otherwise, connective symbols can be within the scope of some
quantifier symbol; but in the present formula, the largest-scope symbol is a connec-
tive symbol and that should be remembered because ignoring it can lead to errors
when it comes to applications of rules in proof-systems. Notice that the predicate
symbol in the antecedent (to the left of the horseshoe) is monadic: it is accompanied
by one term, a function symbol that is itself monadic and has one symbol (a vari-
able) as its input; in the consequent – to the right of the horseshoe – we have a binary
predicate symbol accompanied by two term symbols: ⌜a⌝, a constant symbol, and a
binary function symbol ⌜g⌝ which has two symbols as inputs – the monadic func-
tion ⌜f⌝ with input ⌜a⌝ and, as second input symbol to ⌜g⌝ the constant symbol ⌜a⌝.
Anticipating our parsing or decomposing diagraming of the next section, let us
show perspicuously how this grammatical arrangement of symbols works:
P1f1x ⊃ ~ R2ag2f1aa
        ⇙          ⇘
   P1f1x           ~ R2ag2f1aa
    ⇙⇘                  ⇓
  P1    f1x         R2ag2f1aa
        ⇙⇘          ⇙   ⇓   ⇘
      f1    x     R2    a    g2f1aa
                             ⇙  ⇓  ⇘
                           g2   f1a   a
                                ⇙⇘
                              f1    a
• Regarding the identity symbol, we take it to be binding more strongly than other
symbols: this allows us to dispense with parentheses; notice that we have essen-
tially adopted such a convention for the negation symbol. We don’t write ⌜~ (p)⌝
but ⌜~ p⌝. But we need ⌜~ (p ⊃ q)⌝, because ⌜~ p ⊃ q⌝ has the negation sign
bound strongly to the first following atomic variable. Compare similarly:
a = b ⊃ ∃x∃y(x = y)
instead of writing (although it would not count as an error to write):
(a = b) ⊃ ∃x∃y(x = y)
We observe the same convention for the symbol “≠”, which we conveniently
incorporate in our symbolic notation.
As we did for well-formed formulas of our sentential logic idiom, we also subject
well-formed formulas of ∏ρ= to a parsing or decomposition by means of a tree-
based procedure. The parsing gives us decomposition of the well-formed formula ψ
into all the subformulas of ψ (including ψ itself which is a degenerate case, meaning
that every formula is its own subformula in a sense.)
The rules for the parsing tree, which we legislated for the connective symbols of
the sentential logic in 4.2, are transferred here; we add parsing rules for the new
symbols we have in the predicate logic idiom. Our predicate logic language has
extended our sentential logic idiom and, accordingly, we have had additional forma-
tion rules introduced on top of those we were implementing in the sentential logic
idiom. We repeat, for convenience, the decomposition rules of the sentential logic
and then we show the schematic (figure-like) representations of the rules for the
symbols that have been added to obtain ∏ρ=. Since we have the option of using
superscripts to mark the arity of predicate and function symbols, we may as well
take advantage of this at will: doing so facilitates visual recognition of arity and also
assists in ascertaining that the formation of the given formula is grammatically cor-
rect; finally, the parsing process may also be facilitated (although practice should
allow one to dispense with such visual assistance.)
The quantifier rule leaves a formula in which the variable of the quantifier is left
unbound or free; given the formation rules we have legislated in our formal gram-
mar, this is expected because the quantifier symbol is presumed, for grammatical
correctness, to be attached to a formula in which the quantifier’s accompanying
variable is free. Notice, though, that variables other than the one that is attached to
the quantifier symbol may or may not be free: as we know, we count a formula that
has free variables as well-formed (although it is not a sentential formula but what
we have called an open sentence or sentential function.)
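The quantifier rule can be imitated by a toy routine that peels quantifier prefixes off a formula string, collecting the formulas the parsing tree would display (our own simplification: single-character variables and no connectives):

```python
def peel_quantifiers(formula):
    # Each application of the quantifier rule strips one quantifier symbol
    # together with its accompanying variable, leaving that variable free.
    stages = [formula]
    while formula and formula[0] in "∀∃":
        formula = formula[2:]  # drop quantifier + variable
        stages.append(formula)
    return stages

assert peel_quantifiers("∃y∀xRyx") == ["∃y∀xRyx", "∀xRyx", "Ryx"]
```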
We continue the decomposition at a level below that of well-formed subformu-
las; this means, that we break predicate symbols away from their accompanying
term (variable or constant) symbols; we do the same with function symbols.
Because of this, our determination of the set of subformulas of the given formula
will require focused attention to extract from the information of the parsing tree.
We use the metalinguistic symbols “Φn” and “ℊn” for predicate and function sym-
bols of arity n. Also, we use “λ” as a metalinguistic symbol for a term (which can be a
variable or a constant symbol.)
The subformulas we extract from the parsing tree do not include, of course, the iso-
lated symbols for predicates, functions, variables and constants since these are not, by
themselves, grammatically correct or well-formed formulas of our formal system. The
benefits we gain from parsing beyond subformulas have to do with tracking the arities
of predicate and function symbols and matching with the number of accompanying
term symbols (terms include the variables, the constants and the function symbols.)
5.5.1 Exercises
∃y∀xRyx ⊃ ∀x∃yRyx
        ⇙ ⊃R ⇘
  ∃y∀xRyx        ∀x∃yRyx
    ⇓ ∃R           ⇓ ∀R
   ∀xRyx          ∃yRyx
    ⇓ ∀R           ⇓ ∃R
    Ryx            Ryx
  ⇙⇓⇘ PredR      ⇙⇓⇘ PredR
  R y x          R y x
Reading off the symbols for which rules are applied, from the left and exhausting
each branch vertically before we move to the right, we have first a Polish notation
that uses, metalinguistically, our symbols; subsequently we render it into the
proper Polish notation.
⊃∃y∀xRyx∀x∃yRyx ⤇ CΣyΠxRyxΠxΣyRyx
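The final transliteration step – from our symbols, already in prefix order, to Polish notation – is purely mechanical, as a short sketch shows (the mapping covers the connectives we use; N, K, A, E are the standard Polish letters for negation, conjunction, disjunction and equivalence):

```python
POLISH = {"⊃": "C", "~": "N", "∙": "K", "∨": "A", "≡": "E", "∀": "Π", "∃": "Σ"}

def to_polish(prefix_formula):
    # Assumes the formula is already written operator-first, as read off the
    # parsing tree; predicate letters and variables are carried over as-is.
    return "".join(POLISH.get(ch, ch) for ch in prefix_formula)

assert to_polish("⊃∃y∀xRyx∀x∃yRyx") == "CΣyΠxRyxΠxΣyRyx"
```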
Next, discover the Polish notation for the given formulas below.
a. ∀x∀yRxy ⊃ ∃x∃yRxy
b. ∃x∀y(Fy ≡ y = x)
c. ∃x∃y(fx = y ≡ fy = x)
d. Rab ⊃ ~ ∀x∀y(Rxy ∨ Ryx ∨ x = y)
e. ∀x∃yRxy ⊃ (∃xRxx ⊃ ∃y∀xRxy)
f. ~ ∀x∃yRxy ≡ ∃x∀y ~ Rxy
Chapter 6
Translations from English into ∏πφ= (also
called Symbolizations, Formalizations)
We translate the meanings of sentences of English into our formal symbolic lan-
guage ∏πφ=. The symbolic resources we have at our disposal, regulated by the gram-
mar we have instituted, can be shown compendiously:
The symbolic language of predicate logic (also called first-order logic or quanti-
fication theory) is minimally adequate for translating statements from the language
of mathematics and, historically, this is a factor that motivated the development of
this formalism. Nevertheless, translations of completely descriptive statements of
certain mathematical laws may well require a higher-order language, which allows
for quantification over predicate variables: we do not study higher-order logics in
the present text.
We have available: the symbols for the standard connectives; predicate symbols
of any arity; variable symbols, possibly with subscripts from the denumerable set of
positive integers; individual constant symbols also with subscripts as needed; the
quantifier symbols; the identity symbols; and function symbols with subscripts as
needed. We may omit superscripts that indicate arity. We also have the auxiliary
symbols of parentheses, to be used only to prevent ambiguity of the formulas.
The observations for translation into sentential logic, which we studied in an earlier
chapter, still apply, of course, since the connective symbols of predicate logic are
the same as those of sentential logic – the predicate logic extends the sentential
logic. We need next to turn to the intricate subject of providing guidance for how to
symbolize (formalize, translate) into ∏πφ=.
Rx: x is a resident of Lilliputia.
Bx: x is benevolent.
Ex: x is evil.
Tx: x is a teacher in Lilliputia.
Sx: x is a student in Lilliputia.
Wx: x is wise.
Vx: x is a visitor to Lilliputia.
1. Every resident of Lilliputia is either benevolent or evil and cannot be both. ↝
∀x(Rx ⊃ ((Bx ∨ Ex) ∙ ~ (Bx ∙ Ex)))
2. All teachers in Lilliputia are neither benevolent nor evil but they are wise.
↝ ∀x(Tx ⊃ (~ Bx ∙ ~ Ex ∙ Wx))
3. Some students are wise only if they are evil. (We can agree to omit “in Lilliputia”
since the context has been set in this respect. We may also take it, in continuing
from the remark in the preceding example, that, unless otherwise stated in
explicit way, the statements are about the residents of Lilliputia; this allows us
to dispense with the disambiguation we did in the preceding example.)
↝ ∃x(Sx ∙ (Wx ⊃ Ex))
4. If no one is both student and teacher, then no one is benevolent unless they
are wise.
↝ ~ ∃x(Sx ∙ Tx) ⊃ ~ ∃x(Bx ∙ ~ Wx)
5. All students are teachers if and only if no teachers are evil.
↝ ∀x(Sx ⊃ Tx) ≡ ~ ∃x(Tx ∙ Ex)
6. Since no teacher is evil, then either all students are not wise or some teachers
are both wise and benevolent. (Here, we need to make a judgment as to whether
the disjunction is inclusive or exclusive. The English expression “either-or” is
inherently ambiguous between the two senses. The meaning here is rather
garbled, to test our translational skills, but it appears that a minimal condition
for drawing a distinction is established by the disjunction and, if this is the case,
we may take it to be inclusive disjunction.)
↝ ~ ∃x(Tx ∙ Ex) ⊃ (∀x(Sx ⊃ ~ Wx) ∨ ∃x(Tx ∙ Wx ∙ Bx))
7. Some visitors are evil but only benevolent visitors can be teachers in Lilliputia.
↝ ∃x(Vx ∙ Ex) ∙ ∀x((Vx ∙ Tx) ⊃ Bx)
8. No one can be both a visitor and a student if they are evil and vice versa.
↝ ∀x(Ex ⊃ ~ (Vx ∙ Sx)) ∙ ∀x((Vx ∙ Sx) ⊃ ~ Ex)
9. Every person in Lilliputia is either green or red and cannot be both.
10. Those who love only things do not love people.
11. In Lilliputia, every red person gets whatever they like.
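Because, as we will see, semantic evaluation quantifies over the objects of a domain, translations such as the ones above can already be checked in a miniature model built from Python sets (the Lilliputian population below is entirely invented):

```python
# A toy Lilliputian model (all data invented for illustration).
domain = {"ann", "bob", "cat"}
R = {"ann", "bob"}   # residents
B = {"ann"}          # benevolent
E = {"bob"}          # evil

# Example 1: ∀x(Rx ⊃ ((Bx ∨ Ex) ∙ ~ (Bx ∙ Ex)))
holds = all(
    (x not in R) or (((x in B) or (x in E)) and not ((x in B) and (x in E)))
    for x in domain
)
assert holds  # every resident is benevolent or evil, and not both
```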
We are ready next to move to the task of translating relational statements and
also statements whose precise translation, within the limits of our symbolic
resources, requires also the use of symbols for functions and for identity. We consider
the symbol ⌜≠⌝ as part of our symbolic language, inter-substitutable with ⌜~ (x =
y)⌝. We also omit parentheses around the equation and inequation symbolic
expressions trusting that no ambiguity can result from this as to scopes. It is nota-
ble that predicate symbols can be used to effectuate translations that can be car-
ried out with function symbols (provided that there exists a unique y that is
R-related to x), and the other way round. The predicate symbol has to be of arity
n + 1 if the function symbol that can be used for the translation (rendering the
same meaning) is of arity n.
340 6 Translations from English into ∏πφ= (also called Symbolizations…
We add the symbol “∃!” for the uniquely quantifying existential quantifier (and
we will learn in a subsequent section how to develop this symbol.) For instance,
∃!xFxy ↝ there is exactly one x such that x is the father of y
corresponds to:
fy = x ↝ the father of y is x
• Tram is the father of Sam (with “t” and “s” as the individual constants or names
for Tram and Sam respectively):
↝ ∃!x(Fxs ∙ x = t) or ↝ fs = t
• D = {x / persons and objects in the fictitious city of Lilliputia}
Key:
Px: x is a person.
Gx: x is a green person of Lilliputia.
Rx: x is a red person of Lilliputia.
Tx: x is a thing.
Gxyz: x gives z to y.
fx: the father of x.
mx: the mother of x.
Fxy: x is the father of y.
Mxy: x is the mother of y.
Lxy: x likes y (in some of the examples below, x loves y; the English sentence disambiguates).
Kxy: x likes y.
u: Urutaga.
w: Watataga.
1. Every person likes at least some thing. ↝ ∀x(Px ⊃ ∃y(Ty ∙ Lxy))
Consider that we can have, equivalently, the following. There are rules we can
specify, as we will see, for moving quantifiers in and out of the scopes of
connective symbols; this move is legitimate – the expressions are logically
equivalent.
↝ ∀x∃y(Px ⊃ (Ty ∙ Lxy)).
Beware, on the other hand, of ⌜∀x∃y((Px ∙ Ty) ⊃ Lxy)⌝ and its exportation
variant ⌜∀x∃y(Px ⊃ (Ty ⊃ Lxy))⌝ (recall the rule we called Exportation-
Importation in the natural deduction system we constructed): these are not
correct translations, for, given a person x, we may pick as y any object that is
not a thing, whereupon the conditional comes out vacuously true; the formula
then fails to say that x likes some thing.
We will be showing such alternative renderings below but there is always an
imperative to opt for the translation that shows more closely the structure of the
logical meaning of the English sentence we are translating.
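The claimed equivalence of the two renderings of example 1 can itself be confirmed by brute force over small models; a sketch, ranging over every extension of P, T, and L on a two-element domain (the domain itself is an arbitrary illustrative choice):

```python
from itertools import combinations, product

D = [0, 1]
pairs = list(product(D, D))
subsets = [set(c) for r in range(len(D) + 1) for c in combinations(D, r)]
relations = [set(c) for r in range(len(pairs) + 1) for c in combinations(pairs, r)]

for P, T in product(subsets, subsets):
    for L in relations:
        # ∀x(Px ⊃ ∃y(Ty ∙ Lxy))
        a = all(x not in P or any(y in T and (x, y) in L for y in D) for x in D)
        # ∀x∃y(Px ⊃ (Ty ∙ Lxy))
        b = all(any(x not in P or (y in T and (x, y) in L) for y in D) for x in D)
        assert a == b
print("the two renderings agree on all 4 x 4 x 16 interpretations")
```

A two-element domain does not prove the equivalence in general, but the quantifier-movement rule discussed below does; the check is a sanity test, not a proof.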
2. Every person likes some other person but there is no person who is liked by
everyone other than himself or herself. ↝
∀x(Px ⊃ ∃y(Py ∙ y ≠ x ∙ Lxy)) ∙ ~ ∃x(Px ∙ ∀y(Py ⊃ (y ≠ x ⊃ Lyx)))
3. A person gives another person a thing, only if he or she likes that other person.
↝ ∀x∀y((Px ∙ Py ∙ x ≠ y ∙ ∃z(Tz ∙ Gxyz)) ⊃ Lxy)
6.1 # Tips for Translation (Symbolization, Formalization) 341
4. No green person likes himself or herself if there is not some red person whom
he or she also likes. ↝
~ ∃x((Px ∙ Gx ∙ Lxx) ∙ ~ ∃y(Py ∙ Ry ∙ Lxy)).
∀x((Px ∙ Gx ∙ Lxx) ⊃ ∃y(Py ∙ Ry ∙ Lxy)).
∀x(~ ∃y(Py ∙ Ry ∙ Lxy) ⊃ ((Px ∙ Gx) ⊃ ~ Lxx))
5. If there is at least one red person who likes all green persons, then there is at
least one green person who likes all red persons. ↝
∃x(Px ∙ Rx ∙ ∀y((Py ∙ Gy) ⊃ Lxy)) ⊃ ∃x(Px ∙ Gx ∙ ∀y((Py ∙ Ry) ⊃ Lxy)).
The sentence “If there is a red person who likes all green persons, then there is
at least one green person who likes all red persons” would be conveying its
meaning ambiguously: it is not clear whether “some” or “at least one” is meant,
or whether, instead, “exactly one” or “one specific person” is meant. And we also
have all the combinations that take “a” as “at least one” for one part of the
sentence and as “exactly a specific one” for the other part of the sentence. We would
then disambiguate: this means that we present all the possible translations but it
is linguistic context, possibly not accessible to us, that would have to be used in
picking which one of the possible translations should be applied. In the next
section we learn how to translate expressions involving a uniquely specified
entity – and, indeed, also numerical expressions like “at most two” or “exactly
four” and so on.
6. No one likes those who don’t like themselves. ↝
~ ∃x(Px ∙ ∃y(Py ∙ ~ Lyy ∙ Lxy)).
Notice that it is not specified whether this applies also in the first-person case:
that is, whether no one likes even himself or herself insofar as they do not like
themselves; this sounds trivially true. If it were to be excluded, by adding
“unless it is about themselves,” we would be expressing a contradiction: “no one
likes those who don’t like themselves, unless it is about themselves” would then
mean that such persons both like and do not like themselves. Translatability is
another matter, however; the meaning would still be translatable. As translated
above, the meaning that is rendered does not exclude the first-person case.
Otherwise, we ought to render as follows:
~ ∃x(Px ∙ ∃y(Py ∙ y ≠ x ∙ ~ Lyy ∙ Lxy))
7. Green persons do not like their mothers while red persons do not like their
fathers. ↝
∀x((Px ∙ Gx) ⊃ ~ Lxmx) ∙ ∀x((Px ∙ Rx) ⊃ ~ Lxfx)
8. No one gives the same thing to their father and mother. ↝
∀x∀y∀z((Px ∙ Ty ∙ Gxfxy) ⊃ ((Tz ∙ Gxmxz) ⊃ z ≠ y))
9. Red persons do not like red things if and only if green persons give only red
things to green persons. ↝
∀x((Px ∙ Rx) ⊃ ~ ∃y(Ty ∙ Ry ∙ Lxy)) ≡ ∀x∀y∀z((Px ∙ Gx ∙ Py ∙ Gy ∙ Tz) ⊃
(Gxyz ⊃ Rz))
10. Urutaga likes no one but Watataga, and Watataga likes no one but himself. ↝
∀x(Lux ≡ x = w) ∙ ∀x(Lwx ≡ x = w)
11. Those who love only things do not love people.
↝ ∀x((Px ∙ ∀y(Lxy ⊃ Ty)) ⊃ ~ ∃z(Pz ∙ Lxz)).
We can lay down systematic rules for how to move quantifier symbols in and out of
parentheses and nested scopes of other quantifier symbols, as well as how to reletter
quantifier variables while we do so, all the while generating logically equivalent
formulas. It is provable that these shifts generate symbolic formulas that are
logically equivalent with the ones so transformed. There are decision procedures
that require implementation of such shifts, although we do not study such methods
in the present text. With respect to translating, since logically equivalent formulas
are available as alternative translations of one another, we can seize the opportunity
to show how quantifier symbols can be moved – and also, specifically, how they can
be extracted from and inserted within parentheses and how relettering can also be
carried out at the same time. Let us continue with translating English sentences that
constitute discourse about the model we used for our preceding examples; we do
this to motivate the present subject of showing rules of quantifier symbol
extraction-insertion and relettering. The most rigid type of relettering assigns a
different variable to each quantifier symbol, respecting the variables that are bound
accordingly. At this point, however, any pretense that the translation appeals to our
intuitions (assuming some basic training in translations, of course) seems to
dissipate. The relettering is useful for certain decision procedures, which, as
already indicated, we do not pursue in this text.
18. Red people love their parents only if green people do not.
↝ ∀x∀y((Px ∙ Rx ∙ Py ∙ (Fyx ∨ Myx)) ⊃ Lxy) ⊃ ∀x∀y((Px ∙ Gx ∙ Py ∙ (Fyx ∨ Myx)) ⊃ ~ Lxy).
↝ ∃x∃y∀z∀w(((Px ∙ Rx ∙ Py ∙ (Fyx ∨ Myx)) ⊃ Lxy) ⊃ ((Pz ∙ Gz ∙ Pw ∙ (Fwz ∨ Mwz)) ⊃ ~ Lzw))
19. If anyone likes his father, Watataga does.
↝ ∃x(Px ∙ Kxfx) ⊃ Kwfw.
↝ ∀x((Px ∙ Kxfx) ⊃ Kwfw)
• ∀x(φ ∙ ψ) ≡ (∀xφ ∙ ∀xψ) [x is free in φ and in ψ before bound by the quantifiers]
• ∃x(φ ∨ ψ) ≡ (∃xφ ∨ ∃xψ) [x is free in φ and in ψ before bound by the quantifiers]
• ∀x(φ ∨ ψ) ≡ (∀xφ ∨ ψ) [x is free in φ but not in ψ before bound by the quantifiers]
• ∀x(φ ∨ ψ) ≡ (φ ∨ ∀xψ) [x is free in ψ but not in φ before bound by the quantifiers]
• ∃x(φ ∙ ψ) ≡ (∃xφ ∙ ψ) [x is free in φ but not in ψ before bound by the quantifiers]
• ∃x(φ ∙ ψ) ≡ (φ ∙ ∃xψ) [x is free in ψ but not in φ before bound by the quantifiers]
• (∃xφ ⊃ ψ) ≡ ∀x(φ ⊃ ψ) [x is free in φ but not in ψ before bound by the quantifiers]
• (∀xφ ⊃ ψ) ≡ ∃x(φ ⊃ ψ) [x is free in φ but not in ψ before bound by the quantifiers]
• (φ ⊃ ∀xψ) ≡ ∀x(φ ⊃ ψ) [x is free in ψ but not in φ before bound by the quantifiers]
• (φ ⊃ ∃xψ) ≡ ∃x(φ ⊃ ψ) [x is free in ψ but not in φ before bound by the quantifiers]
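These equivalences can be spot-checked by exhausting all interpretations over a small domain; a minimal sketch (two elements suffice for illustration, though the laws hold over every non-empty domain):

```python
from itertools import product

D = [0, 1]  # a two-element domain; nothing hangs on the choice

def forall(pred): return all(pred(d) for d in D)
def exists(pred): return any(pred(d) for d in D)

# Every unary condition on D, encoded as a tuple of truth values.
conds = [lambda d, t=t: t[d] for t in product([False, True], repeat=len(D))]

for phi, psi in product(conds, conds):
    # ∀x(φ ∙ ψ) ≡ (∀xφ ∙ ∀xψ) and ∃x(φ ∨ ψ) ≡ (∃xφ ∨ ∃xψ)
    assert forall(lambda d: phi(d) and psi(d)) == (forall(phi) and forall(psi))
    assert exists(lambda d: phi(d) or psi(d)) == (exists(phi) or exists(psi))

for phi in conds:
    for q in [False, True]:  # ψ with x not free behaves as a fixed truth value
        # (∃xφ ⊃ ψ) ≡ ∀x(φ ⊃ ψ)  and  (∀xφ ⊃ ψ) ≡ ∃x(φ ⊃ ψ)
        assert (not exists(phi) or q) == forall(lambda d: not phi(d) or q)
        assert (not forall(phi) or q) == exists(lambda d: not phi(d) or q)
        # (ψ ⊃ ∀xφ) ≡ ∀x(ψ ⊃ φ)  and  (ψ ⊃ ∃xφ) ≡ ∃x(ψ ⊃ φ)
        assert (not q or forall(phi)) == forall(lambda d: not q or phi(d))
        assert (not q or exists(phi)) == exists(lambda d: not q or phi(d))
print("all listed equivalences hold on the two-element domain")
```

Note how the last four checks depend on ψ (here the fixed value q) not containing x free; that is exactly the bracketed side condition in the list above.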
cannot translate. Our formal system does not allow for construction of such
quantifiers.
2. There are many students in this class. There may well be some ambiguity in
expressions that ordinarily use “many,” which depend on context to remove
ambiguity as to whether what is meant is to assert that a significant number of
students exist or to deny a contextually based claim that very few students exist. In either
case, the symbolic expressivity that we need to have at our disposal requires that
we have defined non-standard quantifier symbols. This can be done – by using
set-theoretic concepts in our semantics – but such devices are not available to a
formal idiom like ours, within the standard first-order logic of quantifiers.
3. Half of the residents of Patatonia are brave. If we have access to a metalinguistic
set-theoretic apparatus, we can specify that the semantics of such an expression
requires for truth that the extension of the brave-predicate has as cardinality
(number of members) exactly one half of the number that is the cardinality of the
semantic model’s domain. Such a maneuver would define a quantifier – indeed,
generalized to any predicate symbol, the definition would be setting the seman-
tics for the quantifier symbol for one-half. Nevertheless, the standard predicate
logic does not have access to such resources.
4. More than enough students volunteered to make the trip financially feasible. We
would have to implement semantic definitions for the appropriate quantifier
symbol. Since formal languages do not permit or tolerate ambiguity or – in the
case of the standard logic – vagueness, the inherently vague (and contextually
ambiguous) “more than enough” would have to be fixed by specifying what
counts as “more than enough.” For instance, one possible way of defining such
a quantifier could be: the sentence “more than enough F are G” is true if and only
if the intersection or overlap of the sets that are the semantic values or extensions
of F and G in a model has cardinality (number of members) that exceeds n,
where n is a constant number. The number n can be expressed as a ratio of some
number m over the number that is the cardinality of the domain of the model.
Extra-logical considerations obviously enter into all this, but the formalism,
once settled, is to be deployed systematically and in such a way that the results
we reach by applying it are not liable to instability. We can say that whatever
pragmatic or contextually specified considerations enter into the fixing of the
values of the numbers we referred to above are mere motivations for the formal
system; the system itself, once constructed, stands on its own. The standard
predicate logic does not permit construction of such unorthodox quantifier symbols.
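The set-theoretic definitions gestured at in examples 3 and 4 can be written out directly in a metalanguage; a hedged sketch (the domain, extensions, and threshold n are all invented for illustration, and this is the metalinguistic semantics only, not a quantifier of the object language):

```python
D = {"p1", "p2", "p3", "p4"}          # residents of Patatonia (made up)

def half_are(domain, P):
    # "Half of the domain are P": |P ∩ D| is exactly half of |D|
    return 2 * len(P & domain) == len(domain)

def more_than_enough(F, G, n):
    # "More than enough F are G": |F ∩ G| exceeds the fixed threshold n
    return len(F & G) > n

brave = {"p1", "p2"}
students = {"p1", "p2", "p3"}
volunteers = {"p1", "p2", "p3"}
assert half_are(D, brave)                         # 2 is half of 4
assert more_than_enough(students, volunteers, 2)  # 3 exceeds the threshold 2
```

The pragmatic choice of n sits outside the formalism, exactly as the text says: once n is fixed, the truth condition is applied mechanically.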
5. There are some students – so that it is implied that there are also some people
who are not students. We cannot have as a valid inference, in our system, the step
from “there are some students” to “there are also some who are not students.” The
meaning of “some” is, as we know by now, “there is at least one,” and it is
possible that “there are some students” and “everyone is a student” are both true,
which then excludes the possibility that “there are some who are not students.”
Of course, we could define, alternatively and in some non-standard logic, a
quantifier symbol for “some” (not the same meaning as our quantifier symbol for
“some”), so that the inference from “there are some students” to “there are some
who are not students” would go through, in the specified semantics of that
system, as valid.
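The point admits a concrete countermodel in which the domain contains only students; a minimal sketch (domain names invented):

```python
D = ["alice", "bob"]   # a made-up domain in which everyone is a student
S = set(D)             # the extension of "student" is the whole domain

some_students = any(x in S for x in D)          # ∃xSx: true
some_non_students = any(x not in S for x in D)  # ∃x~Sx: false
assert some_students and not some_non_students  # premise true, conclusion false
```

One such model suffices to show the inference invalid in the standard semantics.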
6. If being brave is virtuous and Sama is brave, then there is at least one virtue that
someone has. This would require the resources of second-order logic, which
permits quantification over predicates. We may paraphrase to translate into our
formal language, but the resulting formula would not be a logical truth even
though the statement seems to be logically (necessarily, trivially) true. This should
not be surprising. Ascending from the standard sentential logic to the extension
that is the standard predicate logic, as we have done in this text, we similarly
faced the prospect of argument forms that ought to check as valid but whose
symbolic expressions in sentential logic are not valid – although, when rendered
with the symbolic resources made available in predicate logic, they do indeed
come out as valid. Second-order logic has been studied extensively and would be
covered in a more advanced text. There are certain metalogical results about
second-order logic, which are problematic, but this lies beyond our current scope.
Translations of Numerical Statements.
In addition to our cache of symbolic resources, which we make liberally avail-
able to metalinguistic uses, we will need some novel metalinguistic symbolic nota-
tion to present generalizations of translations of numerical statements to the case of
n objects. We introduce this convenient notation here before we proceed.
• ⋀ᵢ₌₁ⁿ φᵢ = (φ1 ∙ … ∙ φn)
• ⋀ᵢ,ⱼ₌₁ⁿ; ᵢ≠ⱼ (xᵢ ≠ xⱼ) = (x1 ≠ x2 ∙ x1 ≠ x3 ∙ … ∙ xn−1 ≠ xn)
• ⋁ᵢ₌₁ⁿ φᵢ = (φ1 ∨ … ∨ φn)
• ⋀ᵢ,ⱼ₌₁ⁿ; ᵢ≠ⱼ (λᵢ * λⱼ) = (λ1 * λ2) ∙ (λ1 * λ3) ∙ … ∙ (λ2 * λ3) ∙ … ∙ (λn−1 * λn) / [* ∊ {=, ≠}]
• ⋁ᵢ,ⱼ₌₁ⁿ; ᵢ≠ⱼ (λᵢ * λⱼ) = (λ1 * λ2) ∨ (λ1 * λ3) ∨ … ∨ (λ2 * λ3) ∨ … ∨ (λn−1 * λn) / [* ∊ {=, ≠}]
Numerical Translations:
• There is at least one F: ∃xFx
• There are at least two Fs: ∃x∃y(Fx ∙ Fy ∙ x ≠ y) ≡ ∃x1∃x2(Fx1 ∙ Fx2 ∙ x1 ≠ x2) ≡
∃x1∃x2(⋀ᵢ₌₁² Fxᵢ ∙ ⋀ᵢ,ⱼ₌₁²; ᵢ≠ⱼ (xᵢ ≠ xⱼ))
• There are at least n Fs: ∃x1…∃xn(⋀ᵢ₌₁ⁿ Fxᵢ ∙ ⋀ᵢ,ⱼ₌₁ⁿ; ᵢ≠ⱼ (xᵢ ≠ xⱼ))
• There are exactly n Fs: ∃x1…∃xn(⋀ᵢ,ⱼ₌₁ⁿ; ᵢ≠ⱼ (xᵢ ≠ xⱼ) ∙ ∀xₙ₊₁(Fxₙ₊₁ ≡ ⋁ᵢ₌₁ⁿ xₙ₊₁ = xᵢ))
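The counting translations can be tested against their intended semantics on a finite model; a sketch (domain and extension are invented):

```python
from itertools import product

D = ["a", "b", "c"]

def at_least_two(F):
    # ∃x∃y(Fx ∙ Fy ∙ x ≠ y)
    return any(x in F and y in F and x != y for x, y in product(D, D))

def exactly_two(F):
    # at least two, and no three pairwise-distinct F-witnesses
    at_most_two = not any(
        x in F and y in F and z in F and len({x, y, z}) == 3
        for x, y, z in product(D, D, D))
    return at_least_two(F) and at_most_two

assert at_least_two({"a", "b"}) and exactly_two({"a", "b"})
assert not exactly_two({"a", "b", "c"})   # three Fs: "exactly two" fails
```

The nested quantifiers of the symbolic renderings become nested loops over the domain, which is exactly how the counting formulas earn their names.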
j n i
6.1.4 Exercises
o. If there is any country that has both a king and a queen, then Utopia is not
that country.
p. Alice sees the queen of Utopia but the queen of Utopia sees only the king
of Utopia.
q. There is at most one queen of Utopia.
r. There are exactly two people whom Alice sees.
s. There are at least three countries that have no king or queen.
t. Royals admire only themselves.
u. Royals admire none but themselves.
v. No royal is the king or the queen of every country.
w. Everyone except for the king of Utopia can see anyone.
x. Anyone but the king of Utopia can see everyone.
y. Only the king of Utopia does not see anyone but Alice.
z. The king of Utopia is the only royal of any country, who sees Alice.
5. Translate into English the following well-formed formulas of ∏πφ=. Of course, it
is not presumed that the statements made by the symbolic expressions are
always true.
Key: j: Jocasta; o: Oedipus; a: Antigone; fx: the father of x; mx: the mother of
x; Mxy: x is the mother of y; Fxy: x is the father of y.
a. ∀x∀y∀z(Fyx ⊃ ~ Myx)
b. ∃x∀y(x ≠ y ∙ Myx)
c. ∀x∃yFyx
d. ∃x∃y(fx = y ∨ fy = x)
e. ~ ∃x(x = mx) ⊃ ~ ∃x(x = fx)
f. ∃y∀x∃w∀u((Mxy ≡ x = y) ∙ (Fuw ≡ u = w))
g. ∀x∃y∀z((Fzy ≡ z = y) ∙ Fyx)
h. ~ ∃x∃y∃z(mx = y ∙ mx = z ∙ y ≠ z)
i. ∀x∀y∀z((fx = y ∙ fx = z) ⊃ y = z)
j. ∀x∀y(y = mx ⊃ y ≠ x)
k. fa = o ∙ ma = j
l. Mjo ∙ Mja ∙ Foa
m. ∃x∃y∃z((x = my ∙ y = mz ∙ x = mz) ∙ x = j ∙ y = o ∙ z = a)
n. ∃x∀y((mfy = my ≡ y = x) ∙ x = a)
Chapter 7
Semantic Models for ∏: ∏⧉
We retain the grammatical resources we have made available for formal languages
of predicate logic (first-order logic), ∏, and implement those resources, with the
grammar we have legislated, as the semantic formal system ∏⧉. We will pursue
two different approaches, for reasons that will soon become apparent. Let us recall,
from the earlier section introducing predicate logic, that a semantic model is an
abstract structure that comprises a non-empty domain, also called a universe of
discourse, which may or may not be finite; and a function (meaning that it assigns
unique outputs to inputs), which has different names you may come across in the
literature: valuation, interpretation, signature, value-assignment, semantic valuation.
This function we symbolize by “||”; other symbols used in texts include:
σ, 𝑣, 𝔣.
Let us remark, then, that the semantics of predicate logic is carried out by means
of models. It may be asked whether it makes sense to speak in this way with respect
to the basic sentential logic. The answer is affirmative: a model encapsulates a logical
option that we can construct and make sense of on the basis of objects that we spec-
ify, so we can talk about it in a meaningful way. Attention is needed, though: this is
not, and should not be, an empirically constituted or factually verifiable construction.
The game is set up as an abstract enterprise, and we may of course use it to build
a narrative about items that seem to reflect empirically or factually recognizable
affairs – but it is important to grasp that our items are abstract: they are constructed
entirely from scratch, and the availability of things to talk about tracks whether we
can meaningfully carry out this discourse with meanings (semantics) understood,
again, on the basis of what has been constructed. The possible expression of items
and issues that seem familiar from actual experience, if it arises, is only an auxiliary
aid that may facilitate the enterprise we are carrying out; but this enterprise itself is
settled exclusively by means of the formal devices we use and cannot depend on, or
be liable to be checked by, reference to any concrete or actual conditions. To return to the
case of the sentential logic: our familiar truth table method, which has provided the
semantics for the standard sentential logic, can be thought of as a modeling. The
symbols we have in sentential logic are rather uncomplicated: sentential variables are
thought of as being assigned meaning (hence, the semantics again), as is done with
modeling: and, notice, the meanings (or interpretations) are not, of course, actual or
factual events or objects, but they are truth values (of which we have exactly two in
the standard sentential logic – true and false.) Each row of the truth table can be
thought of as modeling exactly one specific state of affairs or logical possibility (but
notice again that the appeal to intuitions in a phrase like “state of affairs” is only an
incidental assistance to our imagination, whereas the things we are talking about are
abstract.) Certain conditions are imposed, which the student of logic would need to
investigate closely as a greater degree of precise and focused analysis is provided: for
instance, it is important that each row of the truth table (each logical possibility, logi-
cally possible option or case, interpretation, valuation, value assignment, logically
possible state of affairs) is independent of every other row; the entire schedule (map,
collection) of the rows establishes all and only the logically possible value assign-
ments to the atomic sentential variables; and so on. This is what an analysis on the
basis of the semantic modeling is like in the case of sentential logic. But in the case
of predicate logic, the plot thickens.
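The compositional, row-by-row semantics just described can be made concrete; a minimal sketch that generates the rows of a truth table and evaluates a compound formula compositionally at each row:

```python
from itertools import product

# Each row of the truth table is one valuation: an assignment of truth values
# to the atomic sentential variables (one "logical possibility").
def rows(atoms):
    for values in product([True, False], repeat=len(atoms)):
        yield dict(zip(atoms, values))

# Example: (p ⊃ q) ≡ (~q ⊃ ~p), evaluated compositionally from the atoms up.
def formula(v):
    implies = lambda a, b: (not a) or b
    return implies(v["p"], v["q"]) == implies(not v["q"], not v["p"])

# True in every row, i.e. under every valuation: a tautology (logical truth).
assert all(formula(v) for v in rows(["p", "q"]))
```

The truth value of the compound is computed purely from the truth values of the atoms at each row, which is the compositionality that, as the text notes, no longer comes for free once quantifiers are added.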
In predicate logic we have added symbolic resources and these new formal ame-
nities must be managed semantically. A marked contrast to the case of the unex-
tended sentential logic is that compositionality of logical meaning is not available in
a trivial way anymore: by compositionality of meaning (certainly, a semantic cate-
gory since we are talking about logical meaning) we are referring to the character-
istic of the sentential logic by which the logical meaning (which is the truth value)
of a compound sentential formula is precisely and uniquely (functionally) determin-
able if the logical meanings (truth values) of the atomic variables in the formula are
given. The meanings of the atomic variables themselves are set by fiat (laid down,
legislated) as being either the true of the false truth value. We may think of the set
with members true and false as the domain of our model, with all constructible enti-
ties (sentential formulas) as being one or the other. But in the case of predicate logic,
we have quantified expressions like:
∀xφ, ∃xφ (with variable x free in φ)
Removing the quantifier, we have now variable x free in the symbolic expression
φ, which we call the matrix of the quantifier (when the quantifier symbol is attached.)
We know, however, that the remaining symbolic expression, φ, is not a sentence: it
does not have a semantic referent (a logical meaning, which is true or false as the
case was of course also in sentential logic): this expression is, rather, as we have
called it, an open sentence or a sentential function. It is like a sentence of natural
language (speaking only analogically, of course) containing pronouns for which
no contextual information can be used to specify the referents. The pronouns are
systematically ambiguous (the grammar of our predicate logic formalism dictates
this), and, accordingly, there can be no meaning (true or false) assigned to the
grammatical sentence. Although it may be well-formed grammatically, the open
sentence has no logical meaning. It follows, then, that, in
the case of predicate logic, we lack the ready computational facility – semantic
compositionality – that allowed determination of the meaning (truth value) of the
sentential formula on the basis of the meanings (truth values) of its component parts
(atomic variables.) A different approach is needed. Let us now return to the begin-
ning and lay down the basics for predicate-logic models. The model has a non-
empty domain, not necessarily finite, and a function that uniquely assigns to symbols
their logical meanings (also called interpretations, valuations, signature.)
𝔐 = <ⅅ, ||>.
The domain of the model has to have at least one member; empty domains are
not allowed, by foundational stipulation, in the standard predicate logic semantics.
An alternative approach to predicate logic, within a family of logics known as Free
Logic (logic free of existential presuppositions) permits empty domains. A common
approach to constructing such a logic is to construct models with a pair of domains
(inner and outer domain, as they are usually called), with one domain comprising all
and only the objects that are considered as actually existing while the other domain
has as members the things that are not deemed to be actually existing (perhaps fic-
tive or mythical entities.) But in the standard approach to predicate logic, which we
are following, an empty domain is disallowed to begin with. This alerts us to the fact
that a translation of “nothing exists” into our formal language would have to be
checked as a logical falsehood. Or, “every named thing exists” must be a logical
truth. Moreover, we must name objects – on the first approach we undertake, while
on the second approach we follow below, temporary assignments of variables as
names of objects are made available by stipulation. Therefore, the impression is
created that our standard logic makes it a matter of logic that “everything exists”
is necessarily true and “nothing exists” is necessarily false. This may be
thought of as a serious defect since it is rather an empirical, not a logical, matter
whether things exist or not. On the other hand, the commitment to this position
about existence can be hailed as a success for the reason that it removes from the
territory of logical analysis issues that are metaphysical and have proven histori-
cally perplexing and vexing: it is like saying “let whatever your theory commits
you to be granted existence;” in that case, the metaphysical issue is displaced onto
the task of examining whether theories are good or not – and, assuming that
scientific theories pass the proper tests as to what a good theory is, we then let
science replace idle metaphysical speculation. Nevertheless, there may well be lingering doubts
about all this and, after all, the development of alternative predicate logics, men-
tioned already, is motivated at least in part by deep philosophical considerations as
to how the standard logic handles existential commitment. Another related subject
is this: there is a view, due to Immanuel Kant, that existence is not a logical predi-
cate. Of course, we will not argue otherwise: the only predicate that is treated as
logical is the binary relational predicate of identity. Now, Kant’s point is that exis-
tence as a predicate that can be predicated of concepts is idle notionally – it does not
add or take anything away from the concept whose characterization is provided
exclusively and exhaustively by definition. If you know what is meant by the
concepts “dollar” and “five,” then the statement “five dollars” is not affected by
whether someone does or does not have five dollars in her pocket. Or, if you con-
sider the concept of a Euclidean triangle, provided definitionally in the Euclidean
geometry, nothing changes about what we mean by “Euclidean triangle” when it
turns out, as it has turned out, that no such things exist in the universe (since the
geometry of relativistic physics is most assuredly non-Euclidean and we take rela-
tivistic theory to be the “correct” physics of our universe.) There is a famous proof
of God’s existence, known as Anselm’s ontological argument, which purports to
prove the existence of God, defined as that entity than which nothing greater can be
conceived to exist. The proof is provocative, for better or worse, in that it concludes
in an empirical statement (an existential statement) even though there is not a single
empirical premise in the proof itself! And yet, it is not easy to find where the
mistake is in the proof. But the proof does treat existence as a logical predicate, in
the above-mentioned sense in which this is defined by Kant: hence, we have the mistake.
Notice now, on the basis of what was said above, that for our predicate logic,
with its prohibition of the empty domain and its assignment of meanings (names)
to entities, “everything exists” is trivially true (a logical truth) and
“nothing exists” is trivially false (a logical contradiction.) This seems to corroborate
Kant’s view but, it should be pointed out, we might actually be misled in so thinking
because what we are rediscovering in this way may well be just our choice of logical
formalism. Can we really claim a heuristic (discovery-related) success for our pred-
icate logic? Can a logic have such characteristics? These are deep questions and we
cannot pursue them here but let us see briefly how we can show that existence is
granted automatically, trivially, to everything we are talking about.
1. ~ ∀x∃y(x = y) -- we assume, for proof by contradiction, that it is not the
case that for every x there is at least one y whose name co-refers with x (in other
words, that everything is, in meaning, identical with the meaning of an existent
thing – meanings are referents…)
2. ∃x∀y ~ (x = y) -- we convert, as we examine in detail in other sections: the
quantifiers are inter-convertible: accordingly, we have: there is some x such that
for all y, it is not the case that the names of x and y co-refer.
3. ∀y ~ (t = y) -- we give a specific name to the y, since it is some specific
thing: this constant, t, is what we call an eigenparameter; it has to be new,
and it is.
4. ~ (t = t) -- since we are now dealing with universal quantification,
in line 3, we can choose any name whatsoever; we choose t again; this is what
we obtain, which is surely absurd since it negates a fundamental law, which we
regard as a logical law, the law of self-identity: we have proven, by indirect proof
or proof by contradiction (reductio) that everything exists…
5. ∀x∃y(x = y)
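The same result can be reproduced in a proof assistant; a minimal sketch in Lean 4, where the (non-empty) type α stands in for the domain and the witness for y is x itself:

```lean
-- "Everything exists": every object is identical with some object.
-- The existential witness is x itself; rfl closes the goal x = x
-- (the law of self-identity appealed to in step 4 of the reductio above).
theorem everything_exists {α : Type} : ∀ x : α, ∃ y : α, x = y :=
  fun x => ⟨x, rfl⟩
```

The direct proof is even shorter than the reductio: no conversion of quantifiers is needed once the witness is supplied.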
The valuation function assigns logical meanings to the symbols, specified as fol-
lows. We will pursue two alternative options, for the second one of which all terms
(including individual variables) are assigned referents with the variables receiving
temporary referents. In general, we leave this option out at first, considering
terms: the referent, what is named, is certainly the object in the domain. Since we
cannot place objects in sets notationally, we find ourselves in the position of having
to use symbols, separate from the names, for representing the domain objects
themselves.
• ℳ = <ⅅ, ||>
• ||: CONSTANTS = {a, b, …, ai, …} ↦ ⅅ
• ||: FUNCTION SYMBOLS = {f¹1, …, g¹k, …, fⁿi, …} x ⅅⁿ ↦ ⅅ
• ||: PREDICATE SYMBOLS = {A¹1, …, A¹n, …, Aⁿ1, …} ↦ ℘(ⅅⁿ) :: ⅅⁿ = ⅅ x … x ⅅ (n times)
• ||: SENTENCES ↦ {T, F}
As an example, we may consider the case (model) represented as follows:
• ℳ = <ⅅ = {⦅, ⦆}, ||>
• ||: CONSTANTS = {l, r} ↦ {⦅, ⦆} :: |l| = ⦅, |r| = ⦆.
• ||: FUNCTION SYMBOLS = {f, g} x ⅅ¹ ↦ {⦅, ⦆} :: |f|(⦅) = ⦆, |g|(⦆) = ⦅.
• ||: PREDICATE SYMBOLS = {L, R} ↦ ℘(ⅅ²) :: |L| = {<⦅, ⦆>}, |R| = {<⦆, ⦅>}.
We remark that the model is remarkably abstracted from any recognizable nar-
rative although one could be provided to assist imagination and intuitive grasp. It
is crucial that the formal character of the model as structure is particularly high-
lighted by the fact that nothing is gained, except psychologically perhaps, by
tending to more concrete narratives to accompany the model. In the present case,
a discourse could be constructed to guide the investigator of the model in a pal-
pable fashion: for instance, the objects could be said to be left-looking and right-
looking mirrors, with names as “l” and “r”; the function symbols can be interpreted
(using a different term “interpretation”, rather confusedly, since we do have an
interpretation or valuation for any model) as “to the left of” and “to the right of”
(with the conditions of existence and uniqueness of the output satisfied in this case
for the definition of the function symbols to be meaningful in the model); finally,
the binary predicate symbols can be concretized further as “to the left of” and “to
the right of.” For this narrative, even topographical attributions are made (regarding
the relative placement of the mirrors and not only their presumed inherent
characteristics as left- and right-looking.) The impression may be gained that we
have a modeling that is in some ways more reliable for purposes of study but this
is an entirely unsubstantiated surmise: it may be only a matter of facilitating the
psychology of examination, appealing to intuitions and providing related (possi-
bly pedagogically helpful) amenities but nothing changes for the mechanics of
using semantic models in this way. We will be using the example we have con-
structed in subsequent analysis.
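The two-mirror model can also be written out concretely; a minimal sketch (the Python names are stand-ins for the objects ⦅ and ⦆, and making the functions total on the whole domain is an added assumption, since the bullets above specify them only on one argument each):

```python
# A concrete rendering of the two-object model ℳ = <ⅅ, ||>.
LEFT, RIGHT = "mirror-left", "mirror-right"   # stand-ins for ⦅ and ⦆
D = {LEFT, RIGHT}

constants = {"l": LEFT, "r": RIGHT}
functions = {                       # totalized by assumption on the whole domain
    "f": {LEFT: RIGHT, RIGHT: RIGHT},
    "g": {RIGHT: LEFT, LEFT: LEFT},
}
predicates = {"L": {(LEFT, RIGHT)}, "R": {(RIGHT, LEFT)}}

def atomic_true(pred, *const_names):
    """Evaluate an atomic sentence such as Llr against the model."""
    tup = tuple(constants[c] for c in const_names)
    return tup in predicates[pred]

assert atomic_true("L", "l", "r")        # Llr is true: <⦅, ⦆> ∈ |L|
assert not atomic_true("L", "r", "l")    # Lrl is false in this model
assert functions["f"][constants["l"]] == constants["r"]   # |f|(⦅) = ⦆
```

Nothing in the evaluation depends on the mirror narrative; swapping the object names changes nothing, which is the point made in the paragraph above.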
We will next examine the two distinguished cases: one in which every member
of the model’s domain is named (possibly by more than one co-denoting name or
constant) and the case in which no such assumption is made and an alternative
modeling approach, initially conceived in a certain version by Alfred Tarski, is
implemented. The truth conditions will be laid out specifically for each case.
7.1 ∞ Countermodels with Infinite Domains, ∏⧉∞ 357
Unlike the case of sentential logic, the semantics for predicate logic cannot preclude
cases in which we produce an infinite countermodel – a model in which a formula
φ is false (thus showing that φ is not a logical truth), or in which formulas φ1, …, φn
are all true (thus showing that the set {φ1, …, φn} is consistent), or in which an
argument form has true premises and a false conclusion (showing that the argument
form is invalid.) The availability of infinite
domains for models alerts us to the fact that, unlike with sentential logic, we cannot
rely generally on applying some mechanical decision procedure (like the truth table)
that can be executed by programming arrangements and is guaranteed to terminate
within a finite number of steps. We may need to rely on producing a countermodel
in refuting claims but, as we have hinted, we run also into the prospect of counter-
models with infinite domains. We examine as an example the claim that we can
prove that relation that is characterized by seriality also has to be characterized by
symmetry; this is not a valid inference as we can show by configuring a counter-
model but, as we will see, the countermodel may well have an infinite domain.
• A relation R is serial iff: ∀x∃yRxy
• A relation R is symmetric iff: ∀x∀y(Rxy ⊃ Ryx)
1. ∀x∃yRxy	Assumption
2. ~ ∀x∀y(Rxy ⊃ Ryx)	Negation of the conclusion – for reductio, but it will turn out to be unsuccessful, showing invalidity
3. ∃x∃y(Rxy ∙ ~ Ryx)	Converting the quantifiers and negating the implication (2)
4. Ra1a2 ∙ ~ Ra2a1	Instantiating the existential quantifiers – new constants for instantiation of existential quantifier symbols (which are called eigenparameters) (3)
5. ∃yRa1y	Universal instantiation (1): we continue by saturating, which means that we instantiate for each eigenparameter we have so far; the removal of quantifier symbols is undertaken from left to right
6. ∃yRa2y	Universal instantiation (1)
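The behavior of such a countermodel can be illustrated computationally. The following sketch (Python; the successor relation on the natural numbers is an assumed illustration, not the text's own) shows a relation that is serial but not symmetric, and shows why no finite initial segment of the chain suffices: the last element of any segment reaches nothing inside it.

```python
# A sketch: R = {(n, n+1)} on the natural numbers is serial but not
# symmetric; every finite initial segment fails seriality at its last
# element, which is why saturation keeps demanding new objects.

def R(x, y):
    return y == x + 1

def serial_on(domain):
    return all(any(R(x, y) for y in domain) for x in domain)

def symmetric_on(domain):
    return all((not R(x, y)) or R(y, x) for x in domain for y in domain)

segment = range(0, 5)          # the finite segment {0, 1, 2, 3, 4}
print(serial_on(segment))      # False: 4 has no successor in the segment
print(symmetric_on(segment))   # False: R(0, 1) holds but R(1, 0) does not
# Enlarging the segment never repairs seriality at the top end; only the
# full, infinite set of naturals yields a serial, non-symmetric R.
```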
358 7 Semantic Models for ∏: ∏⧉
• 𝔐 = <ⅅ, ||>
We have a semantic model with a non-empty domain and the interpretation function (signature, valuation, value assignment). Our transactions are carried out, with generous symbolic enhancements of our metalinguistic resources, in the predicate logic language ∏⧉⌹ for models with all named domain objects. Since the modeling is the instrumentality we employ for semantic purposes – related to logical meaning, and hence to truth and falsehood – we need to present precisely and conspicuously what the truth conditions are, and we do this by recursive definition. Logical meaning is a matter of truth value – true or false – for sentential formulas, as was the case in sentential logic, in which sentential formulas are unanalyzed (there we have no symbolic resources for representing internal components of a sentence). The
meanings of individual constants and function symbols are objects of the domain
and the meanings of the predicate symbols are presented by means of sets. We also
have the identity symbol as a logical symbol and this must be included in our seman-
tic conditions (truth conditions.) We will be using metalinguistic variables in our
symbolically enhanced fragment of English that provides the metalanguage for our
system ∏⧉: ℳ(∏⧉). After we lay out the truth conditions we return to the exam-
ple of the previous section to test our semantic definitions. The logical consequence
relation for the logical system as modeled by a model 𝔐 is symbolized metalinguis-
tically by “⊩𝔐, σ” or “𝔐, σ⊩” as we include a symbol for the valuation or assign-
ment, which we symbolize metalinguistically by “σ” (with the symbol “g” instead
being quite common in the literature). Accordingly, the subscripts for model and value assignment are placed on the interpretation function as well. If context prevents any ambiguity as to such matters, however, the model and assignment metalinguistic subscripts may be omitted.
We say that a model and value assignment in the model satisfy the formulas to
the right of the turnstile. Thus, we coordinate satisfaction conditions (or satisfaction
constraints) with the semantics of valuation or value assignments.
Truth Conditions for Semantic Model – case of model with all named objects
• 𝔐, σ⊩ φ if and only if (iff) |φ|𝔐, σ = T ∈ {T, F} / φ ∈ SENTENCES
• 𝔐, σ⊩ ~ φ iff |~ φ| = T iff |φ|𝔐, σ = F / φ ∈ SENTENCES
• 𝔐, σ⊩ φ1 ∙ φ2 iff |φ1 ∙ φ2| = T iff |φ1|𝔐, σ = |φ2|𝔐, σ = T / φi ∈ SENTENCES
• 𝔐, σ⊩ φ1 ∨ φ2 iff |φ1 ∨ φ2| = T iff |φ1|𝔐, σ = T or |φ2|𝔐, σ = T / φi ∈ SENTENCES
• 𝔐, σ⊩ φ1 ⊃ φ2 iff |φ1 ⊃ φ2| = T iff |φ1|𝔐, σ = F or |φ2|𝔐, σ = T / φi ∈ SENTENCES
• 𝔐, σ⊩ φ1 ≡ φ2 iff |φ1 ≡ φ2| = T iff |φ1|𝔐, σ = |φ2|𝔐, σ / φi ∈ SENTENCES
• 𝔐, σ⊩ Φnλ1λ2…λn iff <|λ1|𝔐, σ, |λ2|𝔐, σ, …, |λn|𝔐, σ> ∈ |Φn|𝔐, σ / Φn ∈ PREDICATES, λi ∈ CONSTANTS or FUNCTION SYMBOLS
• 𝔐, σ⊩ ∃υΦυ iff |λj|𝔐, σ ∈ |Φn|𝔐, σ for at least one ○ ∈ ⅅ, such that |λj|𝔐, σ = ○ / λj ∈ CONSTANTS or FUNCTION SYMBOLS
• 𝔐, σ⊩ ∀υΦυ iff |λj|𝔐, σ ∈ |Φn|𝔐, σ for all ○ ∈ ⅅ, such that |λj|𝔐, σ = ○ / λj ∈ CONSTANTS or FUNCTION SYMBOLS
• 𝔐, σ⊩ 𝔣λ1…λn = λn+1 iff |λn+1|𝔐, σ = ○n+1 such that: |λ1|𝔐, σ = ○1, …, |λn|𝔐, σ = ○n, and |𝔣|𝔐, σ: <○1, …, ○n> ↦ ○n+1
• A model and value assignment in the model satisfy the identity of an n-ary function symbol accompanied by constants with a constant symbol λ if and only if the object assigned by the function to the ordered n-tuple of objects denoted by the constants is the same as the object denoted by λ. The restriction is that an existent and unique object is assigned by the function to any n-tuple of objects as inputs. It is important to avoid a common mistake: the function symbol does not have a truth value as a referent; it is a term symbol. Note that we have provided truth conditions for the identity of the function symbol with a unique constant symbol (name) of the domain of the model.
• The identity formula for any two constants is satisfied in a model and value
assignment if and only if the constants have the same domain object as their
referent or denotatum.
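These two identity clauses can be sketched computationally. In the following Python sketch the function and predicate names are hypothetical, and the parenthesis characters stand in for the two mirror objects of our running example:

```python
# A sketch of the two identity clauses (hypothetical Python names; the
# parenthesis characters stand in for the two mirror objects).
den = {'l': '(', 'r': ')'}    # constants -> domain objects
f   = {'(': ')'}              # |f|: maps the left mirror to the right one

def sat_fun_identity(fn, arg, const):
    """Satisfy f(arg) = const iff |f|(|arg|) is the object |const|."""
    return fn[den[arg]] == den[const]

def sat_const_identity(c1, c2):
    """Satisfy c1 = c2 iff both constants denote the same object."""
    return den[c1] == den[c2]

print(sat_fun_identity(f, 'l', 'r'))   # True: |f|(|l|) = |r|
print(sat_const_identity('l', 'r'))    # False: the two mirrors differ
```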
As an example, we consider the model we set up earlier.
ℳ = <ⅅ = {⦅, ⦆}, ||>
||: CONSTANTS = {l, r} ↦ {⦅, ⦆}::
|l| = ⦅.
|r| = ⦆.
||: FUNCTION SYMBOLS = {f, g} x ⅅ1 ↦ {⦅, ⦆}::
f(⦅) = ⦆.
g(⦆) = ⦅.
||: PREDICATE SYMBOLS = {L, R} ↦ ⅅ2::
|L| = {<⦅, ⦆>}.
|R| = {<⦆, ⦅>}.
We determine the truth value of certain presented formulas in the model. Unless
these are logical truths or logical falsehoods, the formulas can obtain different truth
values in different models and, indeed, a type of semantic exercise is available,
which requires building a model in which a given, logically contingent, formula is
true or false, as specified. Note below the steps we unfold, obeying the definitions of the semantic conditions that have been laid down. As we proceed in implementing the semantic conditions for the removal of quantifier symbols in nested-quantifier formulas, we move from the inside out (or from right to left).
• 𝔐, σ⊩ ∃x∀yLxy iff
• |∃x∀yLxy|𝔐, σ = T iff
• |∀yLay|𝔐, σ = T for at least one object ○a ∈ ⅅ, such that |a| = ○a iff
• |Lab|𝔐, σ = T for some object ○a and for all objects ○b ∈ ⅅ
Now that we have determined the outlook of the semantic conditions for validation of the formula (for the formula to be true in the given model for some valuation), we work backwards, as shown below, to confirm or disconfirm whether the semantic conditions are satisfied. We check each one of the objects in the domain and see if it is L-related to all the objects of the domain (including itself): because the formula has the existential quantifier as its outermost symbol, with the universal quantifier symbol nested in, the conditions for truth are satisfied if at least one of the objects is L-related to all the objects in the domain.
• ⦅: |Lll|𝔐, σ = F because <⦅, ⦅> ∉ |L|𝔐, σ --
the ordered pair with the two occurrences of the object denoted by l is not a member of the extension of L in the model.
|Llr|𝔐, σ = T because <⦅, ⦆> ∈ |L|𝔐, σ --
the ordered pair with members the objects denoted by l and r is a member of the extension of L in the model.
|∀xLlx|𝔐, σ = F because one of the two cases for this object fails satisfaction.
Notice how we can develop the given formula in a certain way (expressing the universal quantification as a conjunction and the existential quantification as an inclusive disjunction over this domain with a finite number of members). This development is correct in any such model.
|∃x∀yLxy| = |∃x(Lxl ∙ Lxr)| = |(Lll ∙ Llr) ∨ (Lrl ∙ Lrr)|.
So far we have established that:
|Lll ∙ Llr|𝔐, σ = |F ∙ T|𝔐, σ = F.
We repeat the same procedure for the other object, which is denoted, via the interpretation function of the model, by "r".
• ⦆: |Lrl|𝔐, σ = F because <⦆, ⦅> ∉ |L|𝔐, σ
• |Lrr|𝔐, σ = F because <⦆, ⦆> ∉ |L|𝔐, σ
• Returning to the developed expression of the given multiply quantified formula:
• |Lrl ∙ Lrr|𝔐, σ = |F ∙ F|𝔐, σ = F.
• Already, we had:
• |Lll ∙ Llr|𝔐, σ = |F ∙ T|𝔐, σ = F.
• Putting these together, we have:
• |∃x∀yLxy| = |∃x(Lxl ∙ Lxr)| = |(Lll ∙ Llr) ∨ (Lrl ∙ Lrr)| 𝔐, σ = |F ∨ F|𝔐, σ = F.
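The hand computation above can be mechanized. The following sketch uses a hypothetical tuple encoding of formulas (not the book's notation): since every object is named, the quantifier clauses simply range over the constants, and the evaluator confirms the value F for ∃x∀yLxy in the two-mirror model.

```python
# A sketch of the named-objects truth conditions (hypothetical encoding).
# Assumes bound variables have distinct names (no capture handling).

def subst(phi, v, c):
    """Replace every occurrence of variable v in phi by constant c."""
    if isinstance(phi, str):
        return c if phi == v else phi
    return tuple(subst(part, v, c) for part in phi)

def true_in(phi, M):
    op = phi[0]
    if op == '~':
        return not true_in(phi[1], M)
    if op == '&':
        return true_in(phi[1], M) and true_in(phi[2], M)
    if op == 'v':
        return true_in(phi[1], M) or true_in(phi[2], M)
    if op == '->':
        return (not true_in(phi[1], M)) or true_in(phi[2], M)
    if op == 'A':   # ('A', var, body): every named instance is true
        return all(true_in(subst(phi[2], phi[1], c), M) for c in M['names'])
    if op == 'E':   # ('E', var, body): some named instance is true
        return any(true_in(subst(phi[2], phi[1], c), M) for c in M['names'])
    # atomic predication: the tuple of denotations is in the extension
    return tuple(M['den'][t] for t in phi[1:]) in M['ext'][op]

# The two-mirror model: l and r name the two objects.
M = {'names': ['l', 'r'],
     'den':   {'l': '(', 'r': ')'},
     'ext':   {'L': {('(', ')')}, 'R': {(')', '(')}}}

print(true_in(('E', 'x', ('A', 'y', ('L', 'x', 'y'))), M))   # False
```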
The formula that was examined in the preceding example presents all nested quanti-
fier symbols (quantifier symbols within the scope of other quantifier symbols) in a
special type of form that is called prenex form. Examination of the truth values of formulas in given models is facilitated significantly by first converting given formulas into equivalent formulas in prenex form.
A prenex form of a well-formed formula φ is an equivalent well-formed formula ψ with all quantifiers pulled out to the front (the remaining part of the formula is the matrix). Pulling the quantifier symbols to the front so as to generate a prenex form of the given formula is feasible by using certain rules, which we will examine. In the course of executing the shifts, we may reletter the variables.
The equivalences that permit shifts of quantifier symbols for the construction of
the equivalent prenex form of a given formula are presented below. The list includes
related information we were compelled to present in the section on translations, in
which case challenges also arise, understandably, regarding the rendering of multi-
ply quantified formulas with nested quantifier symbols.
• ∀x(φ ∙ ψ) ≡ (∀xφ ∙ ∀xψ) [x is free in φ and in ψ before bound by the quantifiers]
• ∃x(φ ∨ ψ) ≡ (∃xφ ∨ ∃xψ) [x is free in φ and in ψ before bound by the quantifiers]
• ∀x(φ ∨ ψ) ≡ (∀xφ ∨ ψ) [x is free in φ but not in ψ before bound by the quantifiers]
• ∀x(φ ∨ ψ) ≡ (φ ∨ ∀xψ) [x is free in ψ but not in φ before bound by the quantifiers]
• ∃x(φ ∙ ψ) ≡ (∃xφ ∙ ψ) [x is free in φ but not in ψ before bound by the quantifiers]
• ∃x(φ ∙ ψ) ≡ (φ ∙ ∃xψ) [x is free in ψ but not in φ before bound by the quantifiers]
• (∃xφ ⊃ ψ) ≡ ∀x(φ ⊃ ψ) [x is free in φ but not in ψ before bound by the quantifiers]
• (∀xφ ⊃ ψ) ≡ ∃x(φ ⊃ ψ) [x is free in φ but not in ψ before bound by the quantifiers]
• (φ ⊃ ∀xψ) ≡ ∀x(φ ⊃ ψ) [x is free in ψ but not in φ before bound by the quantifiers]
• (φ ⊃ ∃xψ) ≡ ∃x(φ ⊃ ψ) [x is free in ψ but not in φ before bound by the quantifiers]
We can approach this as a matter of first converting implications to inclusive
disjunctions and then observing the following distributive equivalences regarding
quantifier symbols distributing over disjunctions and conjunctions. Subsequently,
we can convert back to implicational formulas. We should find that we obtain the
results above. First we show how conversions to and from implicational formulas
and inclusive disjunction formulas can be executed.
(φ ⊃ ψ) ≡ (~ φ ∨ ψ)
(∃xψ ∨ ∃xφ) ≡ ∃x(ψ ∨ φ)
(∀xφ ∙ ∀xψ) ≡ ∀x(φ ∙ ψ)
For example:
(∀xFx ⊃ ∃xGx) ≡ (~ ∀xFx ∨ ∃xGx) ≡ (∃x ~ Fx ∨ ∃xGx)
≡ ∃x∃y(~ Fx ∨ Gy) ≡ ∃x∃y(Fx ⊃ Gy).
We could reach the same result following straightforwardly the rules for the
implication symbol given above.
(∀xFx ⊃ ∃xGx) ≡ ∃x∃y(Fx ⊃ Gy)
As examples of conversions to prenex form, we offer:
1. ∀y(∃xRxy ⊃ ∃zRzy) ≡ ∀y∀x(Rxy ⊃ ∃zRzy) ≡
∀y∀x∃z(Rxy ⊃ Rzy)
2. ∀x∀yRxy ⊃ ∀z(∃wRzw ∨ Rzz) ≡
∃x∃y(Rxy ⊃ ∀z(∃wRzw ∨ Rzz)) ≡
∃x∃y(Rxy ⊃ ∀z∃w(Rzw ∨ Rzz)) ≡
∃x∃y∀z∃w(Rxy ⊃ (Rzw ∨ Rzz))
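Conversions such as these can be spot-checked by brute force over a small finite domain. The following sketch (hypothetical helper code) compares (∀xFx ⊃ ∃xGx) with its prenex form ∃x∃y(Fx ⊃ Gy) under every extension of F and G over a two-element domain:

```python
# Brute-force spot check of (AxFx -> ExGx) against ExEy(Fx -> Gy):
# try every extension of the predicates F and G over D = {0, 1}.
from itertools import product

D = [0, 1]
ok = True
for F_ext, G_ext in product(product([False, True], repeat=len(D)), repeat=2):
    F = dict(zip(D, F_ext))
    G = dict(zip(D, G_ext))
    lhs = (not all(F[x] for x in D)) or any(G[x] for x in D)   # AxFx -> ExGx
    rhs = any((not F[x]) or G[y] for x in D for y in D)        # ExEy(Fx -> Gy)
    ok = ok and (lhs == rhs)
print(ok)   # True: the two formulas agree in all 16 such models
```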
We have neatly treated the semantics of predicate logic, providing models, for the
case in which all objects in the model’s domain are named. We did this in the formal
system ∏⧉⌹ but now we move laterally – as we do not change anything about the
symbolic resources and grammatical arrangements – to the system ∏⧉⌻, allow-
ing that not all objects in the domain have names. We cannot relax the other assump-
tion, that the domain cannot be empty, without transporting ourselves to a different
area of alternative predicate logics, which we do not address in this text (barring
some fleeting references to Free Logic); but we must confront the pressing need of
handling the case in which not all objects of the domain are named. There may be
more objects than there are names. Indeed, nothing should prevent us from applying
this remarkable type of formal system, which has sufficient expressive power to
translate mathematical statements, to narratives (models, indeed) in which the
domain is the set of natural numbers, for instance. Moreover, we cannot have
decomposition of formulas as we enjoy in sentential logic: removing quantifier
symbols, we are left with what we have called open sentences which do not have
truth values as referents (which is to say that they do not have logical meanings,
although, of course, they are grammatically correct – rather like sentences of a lan-
guage, which are systematically constructed with pronouns that are irremediably
ambiguous, having no possible referent and, hence, unable to contribute toward
determining a truth value for the composite sentence.) The stipulation as to the nam-
ing of all objects ensures that all open sentence formulas can be converted to sen-
tences by substitutions of individual constants that name objects of the domain. This
stratagem, however, only obscures the fact that there is no compositional truth-
functional structure to the syntactically formed sentences (which are open sentences
before they have quantifiers attached to bind the free variable letters.) Our present
task, accordingly, is to face the dire necessity of dealing with the semantic disposi-
tion of formulas without counting on having sufficiently available constant letters to
use. This requires a strategic breakthrough in our thinking, and the standard approach
is due initially to Alfred Tarski, although the system used is not exactly what Tarski
deployed. Indeed, there is a wide variety across textbooks in the specific treatment of this type of model, but the essential ideas are the same.
To make a long story short, the key tactic is to generate series of assignments for the variables (and for the constants, which have to be fixed across all series, of course). The catch is this: how can we assign meanings – objects from the domain – to the variables, since the variables are taken by construction to be non-referring (meaningless) symbols? The response, which makes the breakthrough possible, is this: we assign temporary referents (objects of the domain), which then confer temporary meaning on the variables. We may analogize to the case of a natural language, and specifically to the case in which variation of context may assign specific meanings to pronouns like "he" or "she" or "it." What is needed is to deal with such open-ended assignments, which are transient but precisely specified or specifically constructed, in a systematic way. The sophisticated approach to
variants differ from each other in at most one matching between terms (here including three variables) and domain objects. When we say that the variation is for at most one pairing, this implies that we can also have two variants that are identical.
σ1: σ1(a) = ①, σ1(b) = ②, σ1(f(①)) = ②, σ1(x) = ②, σ1(y) = ②, σ1(z) = ①
σ2: σ2(a) = ①, σ2(b) = ②, σ2(f(①)) = ②, σ2(x) = ②, σ2(y) = ②, σ2(z) = ②
σ3: σ3(a) = ①, σ3(b) = ②, σ3(f(①)) = ②, σ3(x) = ②, σ3(y) = ②, σ3(z) = ③
σ4: σ4(a) = ①, σ4(b) = ②, σ4(f(①)) = ②, σ4(x) = ②, σ4(y) = ②, σ4(z) = ①
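The variant construction can be sketched directly. In the following Python sketch (a hypothetical dict encoding; the integers 1, 2, 3 stand in for the circled numerals), a z-variant of an assignment agrees with it everywhere except possibly at z:

```python
# A sketch: an assignment maps terms to objects; a z-variant agrees with
# it everywhere except possibly at z.

def variants(sigma, var, domain):
    """All assignments differing from sigma at most at var."""
    return [{**sigma, var: obj} for obj in domain]

sigma1 = {'a': 1, 'b': 2, 'x': 2, 'y': 2, 'z': 1}
for v in variants(sigma1, 'z', [1, 2, 3]):
    print(v['z'])        # 1, 2, 3 -- the sigmas listed above
# The variant sending z to 1 is identical to sigma1 itself, which the
# "at most one" formulation explicitly allows.
```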
We can now provide the semantics for models with domains that have unnamed
objects; we cast in bold the clauses of the recursive definitions, which are altered to
accommodate the new case by using the temporary assignments variant method.
Truth Conditions for Semantic Model – case of model with unnamed objects
• 𝔐, σ⊩ φ if and only if (iff) |φ|𝔐, σ = T ∈ {T, F} / φ ∈ SENTENCES
• 𝔐, σ⊩ ~ φ iff |~ φ| = T iff |φ|𝔐, σ = F / φ ∈ SENTENCES
• 𝔐, σ⊩ φ1 ∙ φ2 iff |φ1 ∙ φ2| = T iff |φ1|𝔐, σ = |φ2|𝔐, σ = T / φi ∈ SENTENCES
• 𝔐, σ⊩ φ1 ∨ φ2 iff |φ1 ∨ φ2| = T iff |φ1|𝔐, σ = T or |φ2|𝔐, σ = T / φi ∈ SENTENCES
• 𝔐, σ⊩ φ1 ⊃ φ2 iff |φ1 ⊃ φ2| = T iff |φ1|𝔐, σ = F or |φ2|𝔐, σ = T / φi ∈ SENTENCES
• 𝔐, σ⊩ φ1 ≡ φ2 iff |φ1 ≡ φ2| = T iff |φ1|𝔐, σ = |φ2|𝔐, σ / φi ∈ SENTENCES
• 𝔐, σ⊩ Φnλ1λ2…λn iff <||λ1||𝔐, σ, ||λ2||𝔐, σ, …, ||λn||𝔐, σ> ∈ |Φn|𝔐, σ / Φn ∈ PREDICATES, λi ∈ VARIABLES, CONSTANTS or FUNCTION SYMBOLS
• 𝔐, σ⊩ ∃υΦυ iff ||λj||𝔐, σ ∈ |Φn|𝔐, σ for at least one ○ ∈ ⅅ, such that ||λj||𝔐, σ = ○ / λj ∈ VARIABLES, CONSTANTS or FUNCTION SYMBOLS
• 𝔐, σ⊩ ∀υΦυ iff ||λj||𝔐, σ ∈ |Φn|𝔐, σ for all ○ ∈ ⅅ, such that ||λj||𝔐, σ = ○ / λj ∈ VARIABLES, CONSTANTS or FUNCTION SYMBOLS
• 𝔐, σ⊩ 𝔣λ1…λn = λn+1 iff |λn+1|𝔐, σ = ○n+1 such that: |λ1|𝔐, σ = ○1, …, |λn|𝔐, σ = ○n, and |𝔣|𝔐, σ: <○1, …, ○n> ↦ ○n+1 / 𝔣 ∈ FUNCTION SYMBOLS, ○i ∈ ⅅ, λi ∈ CONSTANTS / constraint: ○n+1 exists and is unique
• 𝔐, σ⊩ λ1 = λ2 iff ||λ1||𝔐, σ = ○ ∈ ⅅ and ||λ2||𝔐, σ = ○ ∈ ⅅ / λi=1, 2 ∈ VARIABLES, CONSTANTS, FUNCTION SYMBOLS.
We can see the semantics in action for an example with a model that does not
have names for its objects.
ℳ = <ⅅ, ||, || ||>.
ⅅ = {①, ②, ③}
|f(①)| 𝔐, σ = ②
|F| 𝔐, σ = {①, ③}
|R|𝔐, σ = {<②, ②>, <③, ③>}
• There is, moreover, at least one temporary assignment variant for x, such that:
• ||∃y(Fx ⊃ Ryx)||𝔐, σ(x) = ② = T.
• Accordingly, we have determined for this model:
• |∃y(Fx ⊃ Ryx)|𝔐, σ = T.
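The computation just performed can be sketched with assignment variants. In the following Python sketch (hypothetical encoding; 1, 2, 3 stand in for the circled objects of the model above), ∃y(Fx ⊃ Ryx) is true under σ iff the matrix is true under at least one y-variant of σ:

```python
# A sketch of the Tarski-style clause in action.
D = {1, 2, 3}
F = {1, 3}
R = {(2, 2), (3, 3)}

def matrix(sigma):
    """Fx -> Ryx under the assignment sigma."""
    return (sigma['x'] not in F) or ((sigma['y'], sigma['x']) in R)

def holds_Ey(sigma):
    """Ey(Fx -> Ryx): true under some y-variant of sigma."""
    return any(matrix({**sigma, 'y': obj}) for obj in D)

print(holds_Ey({'x': 2}))   # True: with x -> 2, Fx is false, so the matrix holds
print(holds_Ey({'x': 1}))   # False: 1 is in F but nothing is R-related to 1
```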
7.1.4 Exercises
1. Determine the values of the formulas within the given models in ∏⧉. The
domain objects are all named. Generally, when it is not otherwise specified,
∏⧉ is to be understood as comprising the narrower case of named-
objects models.
2. We construct a model in which a binary relational predicate R is both serial and transitive. Notice the complementary graphical representation of the extension
of the binary relation R. Constructibility of such a model shows that the two
properties of relations (seriality and transitivity) are consistent with each other.
We use the approach as in the case of unnamed objects, making temporary
assignments to individual variables. Fill in the details as indicated.
Seriality: ∀x∃yRxy.
Transitivity: ∀x∀y∀z((Rxy ∙ Ryz) ⊃ Rxz).
𝔐 = <ⅅ = {○, ◻, ♢}, |R| = {<○, ◻>, <◻, ♢>, <♢, ♢>}>.
○⟶◻⟶♢⟲
a. First we show that R is serial. We make each possible assignment to x (assign-
ing an object in the domain) and it suffices to show that for each such assign-
ment of an object # there is at least one temporary assignment to y such that:
|| ∃yRxy||𝔐, σ(x) = # = T,
for # being any object in the model's domain.
I. ||x||𝔐, σ = ○
i. ||y||𝔐, σ = ○ ⤇
<||x||𝔐, σ(x) = ○, ||y||𝔐, σ(y) = ○ > ∉ |R|𝔐.
⤇ ||Rxy||𝔐, σ(x) = ○, σ(y) = ○ = F
ii. ||y||𝔐, σ = ____ ⤇ <_____, _____> ∈ |R|𝔐
⤇||Rxy||________ = _______
iii. ||y||𝔐, σ = ____ ⤇ <_____, _____> ∉ |R|𝔐
⤇||Rxy||________ = _______.
Decision: ||x||𝔐, σ = ○.
⤇ || ∃yRxy||𝔐, σ(x) = ○ = _____
II. ||x||𝔐, σ = ◻ ⤇ ---
Decision: ||∃yRxy||𝔐, σ(x) = ◻ = ___
III. ||x||𝔐, σ = ♢ ⤇ ---
Decision: || ∃yRxy||𝔐, σ(x) = ♢ = ___
Decision: |∀x∃yRxy|𝔐 =?
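Constructions of this kind can also be checked mechanically. The following sketch defines generic seriality and transitivity tests; the relation shown is an illustrative assumption (numerals standing in for shapes), not necessarily the extension given in the exercise:

```python
# Generic checkers for seriality and transitivity over a finite domain,
# applied to a sample relation (an illustrative assumption).
D = {1, 2, 3}
R = {(1, 2), (2, 3), (1, 3), (3, 3)}

def is_serial(rel, dom):
    """Every object is rel-related to at least one object."""
    return all(any((x, y) in rel for y in dom) for x in dom)

def is_transitive(rel, dom):
    """Every two-step rel-path has a direct shortcut."""
    return all(((x, y) not in rel) or ((y, z) not in rel) or ((x, z) in rel)
               for x in dom for y in dom for z in dom)

print(is_serial(R, D))      # True: every object reaches something
print(is_transitive(R, D))  # True: every two-step path has a shortcut
```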
d. a = b, ~ Rab ⊩ ~ ∀xRxx
e. ∃xFx, fa = b, Rba ⊩ ∃xRfaa
f. ∃y∀xRyx ⊩ ∀x∃yRyx
g. ∀x∀yRxy ⊩ ∀xRxx
h. ∀x(Fx ∨ ~ Fx) ⊩ ∀x∃y(x = y)
6. Construct models that satisfy the given formulas in ∏⧉⌹ (models in which the
formulas are true). Does this prove that the formula is a logical truth? What step
ought to be taken subsequent to constructing a satisfying model to determine if
the formula is or is not a logical truth?
a. ⊩ ∀x(Fx ⊃ Gx) ⊃ ∀xFx
b. ⊩ ∀xFx ⊃ Rab
c. ⊩ ∀xFx ⊃ ∃yRay
d. ⊩ (∀x(Fx ⊃ Gx) ∙ ∃x ~ Fx) ⊃ ∃x ~ Gx
e. ⊩ (∃xFx ∙ ∃xGx) ⊃ ∃x(Fx ∙ Gx)
f. ⊩ ~ ∃x(Fx ∙ Gx) ⊃ (∃x(~ Fx ∙ Hx) ∙ ∃x(Gx ∙ Hx))
g. ⊩ ∃xRfafbx ≡∀x∀y∀zRxyz
h. ⊩ ∀x∀y(x = y ⊃ (Rxy ∨ Ryx))
7. Construct satisfying models for the given sets of formulas in ∏⧉⌹. Does suc-
cessful construction of such a model prove that the given set is consistent?
a. ⊩ {∀x∀yRxy, ∀x∀y(Rxy ⊃ ~ Ryx)}
b. ⊩ {∀x∀y∀z((Rxy ∙ Ryz) ⊃ Rxz), ∀x∀y((Rxy ∙ Ryx) ⊃ x = y)}
c. ⊩ {∀x ~ Rxx, ∀x∀y∀z((Rxy ∙ Ryz) ⊃ Rxz)}
d. ⊩ {Rfafb, ∃x∃y(Rxy ⊃ ((x ≠ fa) ∙ (x ≠ fb)))}
e. ⊩ {~ ∃xFx, ∀xFx ≡ Rab}
f. ⊩ {∀x∀y((Fx ∙ Fy) ⊃ x = y), ∃x∀y(Fy ≡ x = y), ∃x∃yRxy}
g. ⊩ {Rab, Rbb, ~ ∃x∀yRxy}
h. ⊩ {∀x∀y∀z((Rxy ∙ Ryz) ⊃ Rxz), ∃xRxx, ∃x∀yRxy}
8. Extract the prenex forms of the given well-formed formulas of ∏⧉.
Apply relettering when pulling quantifier symbols to the front, as indicated.
a. ∀x(Fx ∨ ∃yFy)
b. ∃yFy ⊃ ∀xDx
c. ∃x(Fx ∙ ∀y(Fy ⊃ y = x))
d. ∀xFx ≡ ∀xGx [consider: (φ ≡ ψ) ≡ ((φ ⊃ ψ) ∙ (ψ ⊃ φ))]
e. ∀z(∃xRxz ⊃ ∃wRzw)
f. ∃x(∃yGyx ⊃ ∀zRxz)
g. ∀xFx ∨ ∀y(fy = a)
9. Construction of models gives us a means for checking whether any one of a given number of formulas is, as we call it, independent of the others: this means that the specified formula φ cannot be proven from the other given formulas. This is crucial in establishing that an axiom, from a given set of axioms, is independent and, hence, indispensable in the axiomatization of a system. The way to
To obtain a formal proof-theoretic system for natural deduction, we may extend any one of the formal systems we deployed for sentential logic. To avoid expansive use of space in this text, we present a system that extends ∑∎ with appropriate additions of symbolic resources and regulative schematic rules for derivation, so that we generate ∏πφ=∎. The Fitch-style proof-theoretic system for natural deduction, ∑||, which we have also constructed, can be expanded accordingly – and the same is the case for the sequent system.
First, we lay out the symbols and grammatical arrangements for well-formedness (what counts as a well-formed or grammatically correct formula in the system) for ∏πφ=∎, adding to the preexisting resources and applying the syntactical arrangements to the newly introduced symbols.
• If φ ∊ WFF(∏πφ=∎), then ∀uφ ∊ WFF(∏πφ=∎) / u ∊ VARIABLES(∏πφ=∎)
• If φ ∊ WFF(∏πφ=∎), then ∃uφ ∊ WFF(∏πφ=∎) / u ∊ VARIABLES(∏πφ=∎)
• If λi ∊ TERMS(∏πφ=∎) and λj ∊ TERMS(∏πφ=∎), then λi = λj ∊ WFF(∏πφ=∎)
• fjλ1… λn ∊ TERMS(∏πφ=∎) / λ1, …, λn ∊ TERMS(∏πφ=∎), fj ∊ FUNCTION SYMBOLS(∏πφ=∎)
• Nothing else is a member of the set WFF(∏πφ=∎) -- <closure clause>
All the derivation rule schemata of ∑∎ are included in the proof-theoretic system ∏πφ=∎; to these we add rules for the newly introduced symbols. The predicate symbols are non-logical constants;
constraining them with postulates as to how specific such symbols may be manipu-
lated would constitute an extra-logical maneuver. Notably, and as it ought to be
expected, there can be no replacement of predicate symbols by other predicate sym-
bols. Thinking in terms of the semantics (models) for predicate logic, we can appreci-
ate that the extension of a predicate (the members of the set that defines the meaning
of the predicate symbol in the model) is a non-logical happenstance: another model
can be constructed by open-ended license, in which separate model the same predi-
cate symbol can be given a different extension. Of course, since what we are dealing
with is predicate or first-order logic (and not second-order logic), we have no quanti-
fication symbols over predicate symbols; semantically speaking, we cannot produce
a model, for instance, for inferring that “at least one color is bright” from “green is a
bright color." It follows, again, that we cannot quantify over predicate symbols – any more than we can inter-substitute them. The quantifier symbols are the logical symbols that
have been added. Moreover, the identity symbol is distinguished as a logical symbol: it may be noticed that identity could be understood as a binary or two-place relation; but the identity symbol is the only relational symbol whose regulated schematic management is not a matter of extra-logical stipulation; rather, it is considered that we are actually dealing with a logical symbol for which we must provide schematic rules. If we think of a finite domain, we can define the universal quantifier symbol as a conjunction (with ∀xFx as logically equivalent to the conjunction of Fλi for each term λi, which, semantically speaking, represents the name of a member of the domain).
Similarly, the existential quantifier can be defined as an inclusive disjunction of Fλi for each term λi. Given this, the quantifier symbols are determinately related and can be inter-converted: this is obvious once we apply the De Morgan laws, which we have included, in their expressions as rule schemata, in ∑∎ and by extension in ∏πφ=∎:
• ∀xFx ≡ (Fa ∙ … ∙ Fm) ≡ ~ (~ Fa ∨ … ∨ ~ Fm) ≡ ~ ∃x ~ Fx
• ~ ∀xFx ≡ ~ (Fa ∙ … ∙ Fm) ≡ (~ Fa ∨ … ∨ ~ Fm) ≡ ∃x ~ Fx
• ~ ∃xFx ≡ ~ (Fa ∨ … ∨ Fm) ≡ (~ Fa ∙ … ∙ ~ Fm) ≡ ∀x ~ Fx
• ∃xFx ≡ (Fa ∨ … ∨ Fm) ≡ ~ (~ Fa ∙ … ∙ ~ Fm) ≡ ~ ∀x ~ Fx
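Over a finite domain these dualities reduce to De Morgan's laws on the expansions, which can be confirmed by exhausting the extensions of F; a sketch:

```python
# Brute-force sketch over a three-element domain: on the finite
# expansions the quantifier dualities are exactly De Morgan's laws.
from itertools import product

D = [0, 1, 2]
for ext in product([False, True], repeat=len(D)):
    F = dict(zip(D, ext))
    # AxFx == ~Ex~Fx  and  ExFx == ~Ax~Fx on the expansions
    assert all(F.values()) == (not any(not v for v in F.values()))
    assert any(F.values()) == (not all(not v for v in F.values()))
print("dualities confirmed over all 8 extensions of F")
```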
This observation suggests adding schematic rules for the interconvertibility
(exchange, interchange) of the quantifier symbols, as a matter of convenience even
though our other rules for derivation ought to be sufficient for deriving the conver-
sion instances.
We provide the rules of introduction and elimination of the quantifier symbols.
This hints at the mechanics of the Fitch-like system we examined as ∑||, which can
8.1 A System of Natural Deduction for ∏πφ=: ∏πφ=∎ 375
8.1.1 ∃I: Rule for Introduction of the Existential Quantifier Symbol (Existential Generalization)
The same rule, for which important constraints apply, as we will specify, can also
be presented with a shape that presents a well formed formula and regulates how the
existential quantifier symbol may be attached to it: the variable x of the quantifier
∃x is to replace uniformly an individual constant letter in the formula.
8.1.2 ∀I: Rule for Introduction of the Universal Quantifier Symbol (Universal Generalization)
The basic idea is that the constant that is to be uniformly replaced by the quantifier variable is arbitrary: if we have proof or asserted verification that F is true of an object that has not been selected for any reason whatsoever but is any arbitrary item – and is presumed specifically to be any item – then we may correctly infer that F is true of every object: hence, we can introduce the universal quantifier symbol.
The definability of the universal quantifier in terms of conjunction over a finite
domain allows us to justify this rule as follows considering a domain {b1, …, bn}.
We use the familiar rule we have in our natural deduction system, which we have
named Conjunction.
1. Fbi	generic constant symbol: it could be any constant symbol
2. Fb1	since bi can be any constant symbol (1)
3. …
n+1. Fbn	(1)
n+2. Fb1 ∙ … ∙ Fbn	∙I(2, …, n+1)
n+3. ∀xFx	definition(∀)(n+2)
Alternatively, the schema may be as follows – with both schematic shapes sanc-
tioning the same derivations as correct.
Constraints:
1. the letter λ that is replaced by the variable in ∀x does not occur free in any line
of the proof, on which ℱ… λ… depends.
2. every occurrence of λ in ℱ… λ… must be replaced uniformly by the vari-
able in ∀x.
8.1.3 ∀E: Rule for Elimination of the Universal Quantifier Symbol (Universal Instantiation)
k. ∀xℱ…x…
⋮
n. ℱ…λ… [λ/x] ∀E(k) λ is any constant letter
k. ∀x□.
⋮
n. □ [λ/x] ∀E(k) λ is any term letter
Constraints:
1. every occurrence of the variable in ∀xℱ…x… must be replaced uniformly by the
same letter in ℱ… λ….
Violation of the restriction would permit erroneous derivations to go through.
1. ∀xLxx
2. Lab ∀E(1) WRONG! [violation of restriction 1]
Certainly, we should not be able to infer from “all people like themselves” that
“Abe likes Beth.”
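The uniformity constraint can be illustrated in miniature. In the following sketch (a hypothetical model where each object likes only itself), ∀xLxx holds, so the uniform instance Laa is true, while the non-uniform Lab is not:

```python
# A miniature illustration (hypothetical model): each object likes only
# itself, so AxLxx holds; Laa follows uniformly, Lab does not.
D = {'abe', 'beth'}
L = {(d, d) for d in D}
den = {'a': 'abe', 'b': 'beth'}

assert all((d, d) in L for d in D)     # AxLxx is true in the model
print((den['a'], den['a']) in L)       # True: Laa, the uniform instance
print((den['a'], den['b']) in L)       # False: Lab, blocked by restriction 1
```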
8.1.4 ∃E: Rule for Elimination of the Existential Quantifier Symbol (Existential Instantiation)
k. ∃xℱ…x…
⋮
n. ℱ… λ… [λ/x] ∃E(k) λ is a new letter
k. ∃xℱx
k+1. ℱλ1 ∨ ℱλ2 ∨ … ∨ ℱλn	definition(∃)(k)
| ℱλ1	(the λi are the constant letters)
| ⋮
| □
| ℱλ2
| ⋮
| □
| ⋮
| ℱλn
| ⋮
| □
n. □	∃E(k, subproofs)
As with the implementation of the rule for elimination of the inclusive disjunction symbol in the Fitch-style proof-theoretic system we studied, subordinate proofs are arranged for each of the formulas of the predicate letter along with each of the constant letters, since, by definition, the existential quantifier symbol is taken as a finite inclusive disjunction. Each subproof yields, when completed, a formula; the outcome of the derivation, by application of the elimination rule, is the formula that has accrued in all the subproofs. The completion of the derivation means that, by application of the elimination rule, the assumed premises for all the subproofs are discharged. Adopting this as our rule for elimination of the existential quantifier symbol results in more cumbersome derivations, but it benefits from showcasing the deep theoretical affinity with the derivation rule for the corresponding logical connective – inclusive disjunction. If, as an exercise, the extension of the Fitch-like system we have constructed is undertaken, it would be elegant and appropriate to adopt this schematic shape for the rule for elimination of the existential quantifier symbol. Instead, we opt for the less elaborate schema we presented above. As a justification by reference to rules for the logical connective of inclusive disjunction, we may appeal to the rule we included in the formal system for natural deduction, which we call Disjunctive Syllogism.
380 8 Proof-Theoretical System for Predicate Logic: ∏πφ=
1. ∃xFx 1
2. Fb1 ∨ Fb2 ∨ … ∨ Fbi ∨ … ∨ Fbn definition(∃)(1) == at least one is true
3. ~ Fbi 3 == presuming verification for bi
4. Fb1 ∨ Fb2 ∨ … ∨ Fbn ∨ Fbi	Assoc(∨)n-i(2)
5. Fb1 ∨ Fb2 ∨ … ∨ Fbn	DS(3, 4)
6. …
7. Fbj DS(…)
Let us analyze this proof. This is a schematic proof sketch and should properly be written – even if pedantically – with metalinguistic symbols. On line 4, we indicate multiple applications of association (with the accompanying parentheses, not shown, used for grouping and indications of shifting), and subsequently we apply the rule of Disjunctive Syllogism. This continues for all disjuncts that are verifiably false until one disjunct remains. The point is not to privilege an irrelevant epistemic
test about what we know verifiably about the disjuncts. The point is rather to show
how at least one disjunct is true as other disjuncts may turn out to be false. The affin-
ity with rules for inclusive disjunction is evident in the crucial application of the DS
rule. Since this is not about verification, the disjunct that is presumed true at the end
(as being at least one disjunct that is true) has to be understood as being a specific
but unknown disjunct. This consideration dictates the specific constraint on application of this rule: that the constant letter used for implementation of the rule for elimination of the existential quantifier symbol is a new symbol, not in the proof up to that point.
It is philosophically important that, in proceeding in this fashion, we consider
this rule to yield at least one true disjunct regardless as to whether our constructive
proof-theoretic arrangements can specify the disjunct or present specific conditions
under which this disjunct could be verified as being true: this is a deep characteristic
of the standard logic (and the standard predicate logic), which would have to be
absent from an alternative logic – one like the logic we have called Intuitionistic as
we have briefly examined it – which understands commitments to asserting as true
to depend strictly on the actual availability of a constructive method for producing
and verifying the formula that is said to be true.
Alternatively, the schema may be as follows – with both schematic shapes sanc-
tioning the same derivations as correct.
k. ∃x□.
⋮
n. □ [λ/x] ∃E(k) λ is a new term letter
Constraints:
1. the letter that is introduced can be distinguished by a technical name sometimes found in the literature, eigenparameter: it is a new letter that
8.1 A System of Natural Deduction for ∏πφ=: ∏πφ=∎ 381
does not occur on any line of the proof; in the justification line, the introduction
of the eigenparameter can be marked by use of a superscript “ℇ” on the term
letter that is the new or eigenparameter letter.
2. the replacement of the variable letter by the eigenparameter letter is uniform
throughout the formula that is the matrix of the existential quantifier symbol.
1. ∃x∀yLxy 1
2. ∀yLay ∃E(1)
3. Laa ∀E(2)
4. ∃xLxx ∃I(3)
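The freshness constraint on the eigenparameter is mechanical enough to enforce by a scan of the proof so far. The sketch below is our own illustrative rendering, not part of the formal system: it assumes an ASCII encoding of formulas (E and A standing in for the quantifier symbols) and the convention that the lower-case letters a–e are the constant letters.

```python
import re

def constants_in(formula: str) -> set:
    """Collect constant letters; by our illustrative convention a-e are
    constants/eigenparameters while x, y, z are bound variables."""
    return set(re.findall(r"[a-e]", formula))

def is_fresh(eigenparameter: str, proof_lines: list) -> bool:
    """An eigenparameter for existential elimination must not occur
    on any earlier line of the proof."""
    return all(eigenparameter not in constants_in(line) for line in proof_lines)

proof = ["ExAyLxy"]              # line 1: the existential premise
assert is_fresh("a", proof)      # 'a' is new, so EE may instantiate to 'a'
proof.append("AyLay")            # line 2, by existential elimination
assert not is_fresh("a", proof)  # 'a' is now used up; a second EE needs 'b'
```

A proof-checker would run this test before licensing any application of the ∃E rule.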
Returning to the initial example of the pseudo-proof, and executing the moves without violations of the restrictions, we arrive at a conclusion that ought to be correctly derivable: given that “everyone likes someone,” we should be able to infer correctly that “someone likes himself or herself” only on the additional assumption that there is exactly one person in the discourse (which is provided by the second given premise, which states that all names or constant letters co-refer.)
1. ∀x∃yLxy 1
2. ∀x∀y(x = y) 2
3. ∃yLay ∀E(1)
4. Lab ∃E(3) -- CORRECT: an eigenparameter (new constant letter) is used for elimination of the existential quantifier symbol
5. ∀y(a = y) ∀E(2)
6. a = b ∀E(5)
7. Laa =E(4, 6)
8. ∃xLxx ∃I(7)
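That the second premise is genuinely needed can be confirmed by brute-force semantic search over small finite models. The encoding below is our own, for illustration: it enumerates every binary relation L on a small domain and checks whether every model of the premises satisfies the conclusion.

```python
from itertools import product

def entails(domain, premises, conclusion):
    """Brute-force semantic check over all binary relations L on the domain:
    does every model of the premises also satisfy the conclusion?"""
    pairs = list(product(domain, repeat=2))
    for bits in product([False, True], repeat=len(pairs)):
        L = {p for p, b in zip(pairs, bits) if b}
        if all(prem(domain, L) for prem in premises):
            if not conclusion(domain, L):
                return False
    return True

everyone_likes_someone = lambda D, L: all(any((x, y) in L for y in D) for x in D)
someone_likes_self     = lambda D, L: any((x, x) in L for x in D)

# Without the one-person premise the inference fails: on a two-element domain,
# L = {(1, 2), (2, 1)} makes the premise true and the conclusion false.
assert not entails([1, 2], [everyone_likes_someone], someone_likes_self)
# With a singleton domain (the effect of premise 2, "all names co-refer"),
# the conclusion does follow.
assert entails([1], [everyone_likes_someone], someone_likes_self)
```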
• Restriction 2: Quantifier symbols are introduced from right to left.
• Restriction 3: Banning Vacuous Quantification: We enforce a restriction that also made sense when we contemplated and structured the syntactical or grammatical setup for a predicate logic system. We ban vacuous quantification in a broad sense, defined as: a) having a quantifier symbol that has no variable to bind, or b) a (vacuously) iterated quantifier attempting to re-bind a variable that is already bound by another quantifier of the same kind (universal or existential.) Given this restriction, the following proof is blocked, as it should be.
• Restriction 4: Main-Connective Restriction. Regardless of whether we treat quantifiers as connectives or not – based on considerations about the cardinality of the domain, whether it is finite or not – we impose a restriction that is worth institutionalizing. We call it the Main-Connective Restriction. It prohibits making replacements of variables bound by quantifiers when the formula has a main connective other than the quantifier (assuming that we are thinking of the quantifier as a connective.) A corollary of this restriction, which itself imposes a relevant restriction, is this: no elimination of a quantifier symbol is permitted when the quantifier symbol does not have in its scope all the occurrences of its variable. (For instance: in ⌜∀xFx ⊃ ∃xGx⌝ elimination can lead to the pseudo-derivation (incorrect derivation) of ⌜∀xFx ⊃ Ga⌝, succeeded by ⌜Fa ⊃ Ga⌝ and then ⌜∃x(Fx ⊃ Gx)⌝: this is patently incorrect, as we can see from an interpreted instantiation: from “if everyone is saved, then there is a savior” we would be deriving that “there is someone who, if he or she is saved, then he or she is a savior.”)
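The Main-Connective Restriction is easy to check syntactically once formulas are given a structured representation. The nested-tuple encoding below is our own illustrative assumption, not the book's notation:

```python
def main_connective(formula):
    """Formulas are nested tuples whose first element names the connective;
    atoms are plain strings."""
    return formula[0] if isinstance(formula, tuple) else None

def may_eliminate_quantifier(formula, kind):
    """The Main-Connective Restriction: a quantifier may be eliminated
    only when it is the formula's main connective."""
    return main_connective(formula) == kind

all_f = ("forall", "x", ("F", "x"))
cond  = ("imp", ("forall", "x", ("F", "x")), ("exists", "x", ("G", "x")))

assert may_eliminate_quantifier(all_f, "forall")      # main connective: OK
assert not may_eliminate_quantifier(cond, "forall")   # main connective is imp
```

The second assertion blocks exactly the pseudo-derivation discussed above: in ⌜∀xFx ⊃ ∃xGx⌝ the main connective is the conditional, so neither quantifier may be eliminated.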
that we allow ⌜∀x(x = x)⌝ as insertable by the identity introduction rule. This formula, however, is derivable from the identity rule, as restricted to constant and function symbols, by subsequent application of the ∀I rule: for any random constant or function symbol, we can use the identity introduction rule and then use ∀I, since the randomness of the constant or function letter allows use of ∀I, as we know. The rule is not applied on any line; the line is insertable at any point: as a self-justifying line, it is akin to a given premise and this is, accordingly, indicated in the justification line by marking the line numeral, as we do with given premises in this formal system.
⋮
n. λ = λ =I n λ is a constant or function symbol
The elimination rule for the identity symbol has a schematic shape like the one for inter-substitutivity of equivalents: indeed, since the meaning of a constant or function symbol (to think semantically for a moment) is the referent of the symbol, identity of the symbols means that we have the same referent and, hence, the same logical meaning. For the elimination rule for the identity symbol, the rule schema permits replacement of any letter by any other letter that is identical with the given letter within the matrix of a predicate symbol or a function symbol. Accordingly, alternative rule schemata can be presented, both of which yield the same results when implemented in proofs.
Function symbols are terms in the predicate logic system. They are not logical symbols, and the rules that can be provided for managing this symbolic resource capitalize on the type of symbol this is (given that it is a fixed matter what type of symbol the function symbol is) and on how identity, which is a logical symbol,
386 8 Proof-Theoretical System for Predicate Logic: ∏πφ=
applies – in which case the identity rules that have been provided ought to be sufficient already for regulating how to play with function symbols, but the related, and redundant, function rule may be provided anyway for the sake of convenience.
Regarding the type of the symbol, rules for elimination and introduction may be provided in such a way as to respect the function symbol’s type as a term (and specifically a constant) but also, importantly, to place appropriate constraints that observe that function symbols require, by definition, unique outputs for specified inputs. Since it is a term, the function symbol may be eliminated by being replaced by a unique constant symbol: accordingly, the constant symbol used to replace the function symbol must be new, not occurring at any point in the proof and not available subsequently except for universal instantiations (eliminations.) The introduction rule for a function symbol ought to be so established as to stipulate that the function symbol may replace an individual constant under the proviso that the constant occurs uniquely in all relational predicate symbols in which it occurs along with other term symbols. Since the existential assumption is trivial for constant terms in the standard predicate logic, we do not need to impose an existential constraint in addition to the uniqueness constraint.
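The uniqueness-of-output constraint that drives these restrictions can be stated as a simple check. A minimal sketch of our own, with a set of (input, output) pairs standing in for an interpreted function symbol:

```python
def is_functional(pairs):
    """Check the defining constraint on functions: each input is assigned
    exactly one output."""
    outputs = {}
    for x, y in pairs:
        if x in outputs and outputs[x] != y:
            return False  # two distinct outputs for the same input
        outputs[x] = y
    return True

# many-one assignments are fine; one-many assignments are not
assert is_functional({("a", "b"), ("c", "b")})
assert not is_functional({("a", "b"), ("a", "c")})
```

Failing this check corresponds exactly to the error diagnosed in the pseudo-proofs below: instantiating the same function term to distinct constants without identifying them.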
Accordingly, applications of universal and existential elimination rules with
matrices that are formulas with function symbols ought to be treated as compound
quantifier-function rules.
k. □ fλ1…λm occurs in □
⋮
l. □ [λn / fλ1…λm] fE
provided that λn is identical with the output and unique.
k. □ λn occurs in □
⋮
l. □ [fλ1…λm / λn] fI
provided that λn is unique or identical with any other terms for which the function symbol is also substitutable.
Failure to observe the uniqueness constraint results in error, as illustrated by the following pseudo-proof in which presumably we have a derivation from “everything is the origin of b” to the putative conclusion “b is the origin of everything.” If we symbolize and attempt to carry out the proof by using relational predicate symbols, the proof does not go through, as it should not. But the pseudo-proof provided below succeeds, erroneously, because it fails to observe the uniqueness constraint. In the first pseudo-proof, this is in evidence in the failure to identify all the constants with one another, as ought to be done considering that they are all presented as outputs of the same function. In the second pseudo-proof, which is by indirect proof or reductio, the violation is in evidence in the fact that more than one application of elimination of the function symbol takes place, instantiating to different constant terms (of which it is not stipulated that they are equal to one another.)
Although we cannot expatiate on this interesting subject, we can still briefly draw contrasts between intuitionistic and classical predicate logic, as we did in an earlier section with respect to sentential logic. Predicate logics extend sentential logics; accordingly, the meanings of the connective symbols for intuitionistic predicate logic are the same as those for intuitionistic sentential logic. The crucial question that arises is how the definitions of the intuitionistic quantifier symbols are to differ from the corresponding ones for the classical predicate logic. Intuitionistic logic understands truth as
constructive actual verification by means of an available proof: this means that, if
for instance we are dealing with an infinite domain like that of the natural numbers,
we may have a proof that it is not the case that all numbers possess property F and
yet be unable to produce a specific case in which we prove that some number lacks
this property F: therefore, the inference from ~ ∀xφ to ∃x ~ φ should not be intu-
itionistically valid, and it is not; this gives us a flavor of the stunning disagreements
between classical and intuitionistic logic – because, certainly and as we have seen,
the above inference is classically valid. As another example, before we proceed to enumerate instances of intuitionistically valid and intuitionistically invalid inference schemata, we may consider this: it may be possible to convert a proof of ψ into a proof of ∃xφ over an infinite domain, like that of the natural numbers; and yet, we may be unable to construct a proof that converts a proof of ψ
to a proof of φ for any specific number: this means that the inference from ψ ⊃ ∃xφ
to ∃x(ψ ⊃ φ) is intuitionistically invalid (although the converse is intuitionistically
valid.) This inference is, of course, classically valid and it is one of the schemata we
rely on to produce prenex formulas as we explained earlier. A corollary is that a
process of extracting a prenex formula that is equivalent with any given well-formed
formula is not generally available in intuitionistic logic.
The following are inference schemata that are not intuitionistically valid.
• ~ ∀x ~ Fx ⊬∑ⅈ ∃xFx
• ⊬∑ⅈ ∀x(Fx ∨ ~ Fx)
• ~ ∀xFx ⊬∑ⅈ ∃x ~ Fx
The following are intuitionistically valid instances of logical consequence.
• ∀xFx ⊢∑ⅈ ~ ∃x ~ Fx
• ∀xFx ⊢∑ⅈ ∃xFx
• ∀x ~ Fx ⊢∑ⅈ ~ ∃xFx
• ∀x ~ Fx ⊢∑ⅈ ∃x ~ Fx
• ∃x ~ Fx ⊢∑ⅈ ~ ∀xFx
• ∃xFx ⊢∑ⅈ ~ ∀x ~ Fx
• ∃x(Fx ⊃ φ) ⊢∑ⅈ ∀xFx ⊃ φ -- the converse inference does not hold
• ∃x(φ ⊃ Fx) ⊢∑ⅈ φ ⊃ ∃xFx -- the converse inference does not hold
• ∀xFx ∨ φ ⊢∑ⅈ ∀x(Fx ∨ φ) -- the converse inference does not hold
• [the following inferences are valid in both directions in ∑ⅈ]
• ∃x(Fx ∙ φ) ⊣⊢ ∃xFx ∙ φ
• ∀x(Fx ∙ φ) ⊣⊢ ∀xFx ∙ φ
• ∃x(Fx ∨ φ) ⊣⊢ ∃xFx ∨ φ
• ∀x(Fx ⊃ φ) ⊣⊢ ∃xFx ⊃ φ
• ∀x(φ ⊃ Fx) ⊣⊢ φ ⊃ ∀xFx
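For contrast, the schemata rejected intuitionistically are classically valid, and over finite domains this can be confirmed by exhaustive search. A sketch of our own for two of the monadic cases, enumerating every extension of a single predicate F on small domains:

```python
from itertools import product

def classically_entails(premise, conclusion, max_size=4):
    """Exhaustively check entailment over all extensions of one monadic
    predicate F on small finite domains -- enough for monadic schemata."""
    for n in range(1, max_size + 1):
        domain = range(n)
        for bits in product([False, True], repeat=n):
            F = {x for x, b in zip(domain, bits) if b}
            if premise(domain, F) and not conclusion(domain, F):
                return False
    return True

not_all_F  = lambda D, F: not all(x in F for x in D)
some_not_F = lambda D, F: any(x not in F for x in D)
excluded   = lambda D, F: all((x in F) or (x not in F) for x in D)
true_prem  = lambda D, F: True

# ~AxFx / Ex~Fx: intuitionistically rejected, but classically valid:
assert classically_entails(not_all_F, some_not_F)
# Ax(Fx v ~Fx): classically true in every model:
assert classically_entails(true_prem, excluded)
```

The intuitionistic rejections, by contrast, cannot be exhibited by finite countermodels of this classical kind; they rest on the unavailability of constructive proofs, as discussed above.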
8.1.11 Exercises
1. Are the following derivations of lines correct or incorrect? Discuss. What rule is presumably being applied, and is the rule applied correctly or not?
P1 ∀x∃y(Rxy ∙ Sxy) P
P2 ∃y(Ray ∙ Sby)
P1 ∀x∀y(Rxy ⊃ Ryx) P
P2 ∀y(Ray ⊃ Raa)
P3 Rab ⊃ Raa
P4 ∀y(Ryb ⊃ Ryy)
P5 ∀y∃x(Ryx ⊃ Ryy)
P1 ∃x∃yRxy P
P2 ∃x∃ySxy P
P3 ∃yRay
P4 Rab
P5 ∃ySay
P6 Sab
P7 Rab ∙ Sab
P8 ∃y(Ray ∙ Say)
P9 ∃x∃y(Rxy ∙ Sxy)
P1 Rab P
P2 ∃yRyy
P1 ∃xRafx P
P2 Rafa
P3 ∃xRxfx
P1 ∃x∃yRxy P
P2 ∃yRay
P3 Rab
P4 ∃xRax
P5 ∃y∃xRyx
P1 Fab ⊃ ∀xFax P
P2 Fab ⊃ Faa
P3 Fab ⊃ ∀xFxx
2. Identify and discuss the errors in the following pseudo-proofs. Also give
instances of the invalid inferences that correspond to the pseudoproof. For
example, for the first pseudo-proof below: There is at least one student; there-
fore, everyone is a student. (!)
P1 ∃xFx P
P2 Fa ∃E(1) – ae (a is an eigenparameter)
P3 ∀xFx ∀I(2)
P1 ∃xFx ∙ ∃xGx P
P2 ∃xFx ∙E(1)
P3 ∃xGx ∙E(1)
P4 Fa ∃E(2) - ae
P5 Ga ∃E(3) - ae
P6 Fa ∙ Ga ∙I(4, 5)
P7 ∃x(Fx ∙ Gx) ∃I(6)
P1 ∀x∃y(x ≠ y) P
P2 ∀x(x ≠ a) ∃E(1) - ae
P3 a ≠ a ∀E(2) – a
P4 ∃x(x ≠ x) ∃I(3)
P1 ∀x(Fx ∨ Gx) P
P2 Fa ∨ Ga ∀E(1)
P3 Fa ∨ ∀xGx ∀I(2)
P4 ∀xFx ∨ ∀xGx ∀I(3)
P1 ∀xRxx P
P2 Raa ∀E(1)
P3 ∀yRay ∀I(2)
P4 ∀x∀yRxy ∀I(3)
3. Construct proofs in ∏πφ=∎ for the given argument forms.
a. ∀xFx ⊢ ∏πφ=∎ ∀yFy
b. ∀xFx ∨ ∀xGx ⊢ ∏πφ=∎ ∀x(Fx ∨ Gx)
c. ∃x(Fx ∙ ~ Gx), ∀x(Gx ∨ Hx) ⊢ ∏πφ=∎ ∃x(Fx ∙ ~ Hx)
d. ∀x∀y(Rxy ∨ Ryx)⊢ ∏πφ=∎ ∀xRxx
e. a = b, b = c ⊢ ∏πφ=∎ a = c
f. ∀x(x = fb), ∀xRxx ⊢ ∏πφ=∎ Rfbb ∙ Rbfb
g. ∃x∀yByx ⊢ ∏πφ=∎ ∀y∃xByx
h. ∃x∃yRxy ⊢ ∏πφ=∎ ∃y∃xRxy
i. ∀x(Fx ⊃ Gx)⊢ ∏πφ=∎ ∃xFx ⊃ ∃xGx
j. ∀x∃y∀zRxyz ⊢ ∏πφ=∎ ∀x∀z∃yRxyz
k. ∀x(Ffx ⊃ x = a) ⊢ ∏πφ=∎ ∃xFfx
l. ~ ∃x∀yRxy ⊢ ∏πφ=∎ ∀x∃y ~ Rxy
m. ∃x(∀y(Fy ⊃ x = y) ∙ Fx)⊢ ∏πφ=∎ ∃x∀y(Fy ≡ x = y)
4. Construct proofs in ∏πφ=∎ for the given empty-premise argument forms (hence,
proofs of theses.)
a. ⊩∏πφ=∎ ∀x(Fx ⊃ Gx) ⊃ (∀xFx ⊃ ∀xGx)
b. ⊩∏πφ=∎ ∀xFx ⊃ ∀yFy
c. ⊩∏πφ=∎ ∀xFx ⊃ ∃yFy
d. ⊩∏πφ=∎ (∀x(Fx ⊃ Gx) ∙ ∃xFx) ⊃ ∃xGx
e. ⊩∏πφ=∎ ∃x(Fx ∙ Gx) ⊃ (∃xFx ∙ ∃xGx)
f. ⊩∏πφ=∎ ~ ∃x(Fx ∙ Gx) ⊃ (∃x(~ Fx ∙ Hx) ∙ ∃x(Gx ∙ Hx))
5. Recalling the parallel task for the mechanics of constructing natural deduction
proofs in sentential logic, discuss how ∏πφ=∎ can be deployed to determine con-
sistency of a set of given formulas of predicate logic. Then determine, if possi-
ble, consistency of the given formulas.
a. ⊩∏πφ=∎ {}
The formal system ∑↙↓↘ was presented in an earlier chapter as a decision procedure for sentential logic. We extend that system and adopt the grammar of ∏πφ=∎ to generate ∏ρ=↙↓↘ by adding rules for the quantifier symbols and identity. The predicate logic trees may be non-terminating if there is an infinite number of available letters for implementing the rule for the universal quantifier symbol.
8.2 A Tree System for Polyadic Predicate Logic: ∏ρ =↙↓↘ 391
Unlike the sentential logic tree system, which can be proven in metalogical analysis
to be a correct decision procedure that harmonizes with the truth table system in
terms of determining the same logical properties for the same formulas, the predi-
cate logic tree can be effective only if restricted to fragments of the predicate logic
language. We cannot enter into details about this subject in the present text.
The rules of ∑↙↓↘ are retained. Familiarity with that system is presupposed
before continuing with the present section. Briefly we recall that we use plus and
minus signs to designate and anti-designate formulas. Rules for the connective sym-
bols are provided, some of which rules are splitting while others are vertical. The
basic structural rules regulate how the implementation of the tree procedure begins,
proceeds and terminates properly. Because of the operation of the splitting or
branching rules, a tree may have branches issuing downward from the root which is
placed at the top; branches that are connected downward to other branches form
paths. The tree is completed or terminated when no more rules can be applied to
it – except for the prospect of non-terminating trees which arises for the first time in
the case of predicate logic. An open path of a terminated tree indicates that all the
formulas along the path can be true together: accordingly, the formulas at the root
of the tree are co-satisfiable or consistent as a set, and this suggests the decision
procedure for checking consistency. A closed path is one that has a terminal closed
branch: closure occurs when any formula appears both as designated and as anti-
designated on the same path. Closure may be executed immediately or deferred
depending on whether the rule is liberalized accordingly. A closed path indicates a
logically impossible state of affairs when we consider the value assignments to the
atomic variables of the formulas on the path as comprising what we may regard as
states of affairs or logical possibilities. If the root of the tree is labelled by formulas,
closure of all the paths of the terminated tree shows that the set of formulas, for
which the tree has been constructed, is inconsistent. If the root is labeled by one
formula, the formula is a contradiction if, and only if, all the paths of its completed
tree are closed. The check for the logical status of tautology requires placing at the
root of the tree the negation of the given formula: since tautologies are the negations
of contradictions, the closed tree (a tree with all paths closed when it is terminated)
shows that the formula that has been negated is a tautology. A contingency is a for-
mula that has neither its tree nor the tree for its negation closed. The validity check
through the tree method requires negating the presented conclusion and attaching it
to the premises of the argument form to constitute the root of the tree: the argument
form is valid if and only if the tree with the premises and negated conclusion at the
root is closed: since all paths are closed, there is no logically possible case or value
assignment to the atomic variables, for which all the premises are true and the con-
clusion is false – which matches precisely our standard and familiar definition of
validity of argument form. Checks for determining relations between formulas can
be devised accordingly.
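The sentential core of the tree procedure recalled above (signed formulas, splitting versus vertical rules, closure on a designated/anti-designated clash) can be sketched compactly. The following toy implementation is our own illustrative rendering, covering only the connective rules, with True/False playing the roles of the plus and minus signs:

```python
def expand(sign, f):
    """Signed-tableau expansion: return the branches (each a list of signed
    formulas) that replace (sign, f). Formulas are nested tuples."""
    op = f[0]
    if op == "not":
        return [[(not sign, f[1])]]
    if op == "and":
        return [[(True, f[1]), (True, f[2])]] if sign else \
               [[(False, f[1])], [(False, f[2])]]
    if op == "or":
        return [[(True, f[1])], [(True, f[2])]] if sign else \
               [[(False, f[1]), (False, f[2])]]
    if op == "imp":
        return [[(False, f[1])], [(True, f[2])]] if sign else \
               [[(True, f[1]), (False, f[2])]]

def satisfiable(signed):
    """A path closes when a formula is both designated and anti-designated."""
    atoms = {(s, f) for s, f in signed if isinstance(f, str)}
    if any((not s, f) in atoms for s, f in atoms):
        return False                      # closed path
    rest = [(s, f) for s, f in signed if not isinstance(f, str)]
    if not rest:
        return True                       # completed open path
    s, f = rest[0]
    remaining = [x for x in signed if x != (s, f)]
    return any(satisfiable(remaining + branch) for branch in expand(s, f))

def valid(premises, conclusion):
    """Designate the premises, anti-designate the conclusion; the argument
    form is valid iff every path of the tree closes."""
    return not satisfiable([(True, p) for p in premises] + [(False, conclusion)])

# modus ponens is valid; affirming the consequent is not
assert valid([("imp", "p", "q"), "p"], "q")
assert not valid([("imp", "p", "q"), "q"], "p")
```

Adding the quantifier and identity rules on top of this skeleton is what the present section does schematically; as noted, in the predicate case the procedure may fail to terminate.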
All the above conceptual and mechanical specifications are preserved and all we
need to do is to add the rules for the quantifier symbols and for the identity symbol –
both for designated and anti-designated formulas.
Specifically, closure is effectuated for any branch that has any formula □ as both
designated and antidesignated. Invalid argument forms are correctly determined by
the tree system: any open branch in the completed tree for the validity check records
the truth values of the atomic variables (true for designated and false for antidesignated) and the values (referents) of terms for which there is a counterexample (better, a countermodel) to the given argument form: we may say, by a slight abuse of terminology and for the sake of convenience, that an open path is a counterexample to the given argument form. It is provable, although we will not prove it here, that any argument form that is checked as invalid by the tree method is invalid by the truth table method, and any argument form that is valid by the truth table method is determined as valid (all paths of the completed tree are closed) when checked by the tree method.
k. ∃x□ +
⋮
n. □ + [λ/x] ∃+(k) λ is a new letter (eigenparameter)
----------------------------------------------------------------------------------------.
k. ∃x□ −
⋮
n. □ − [λ/x] ∃−(k) λ is any letter
----------------------------------------------------------------------------------------.
k. ∀x□ +
⋮
n. □ + [λ/x] ∀+(k) λ is any letter
----------------------------------------------------------------------------------------.
k. ∀x□ −
⋮
n. □ − [λ/x] ∀−(k) λ is a new letter (eigenparameter)
Next we lay down the rule schemata for the identity symbol. We use designation
and anti-designation again. The designated rule allows for intersubstitutivity of the
identicals into any matrices of quantified formulas. The antidesignated formula,
antidesignating self-identity, is a closure formula: the branch is closed when the
self-identity formula receives antidesignation. We likewise need designation and anti-designation rules for the symbol of negated identity, “≠”, which we have made available in our grammar.
k. λ1 = λ2 +.
⋮
l. □
⋮
n. □ [λ1/λ2] =+(k, l)
⋮
u. □ [λ1/λ2] =+(k, l)
----------------------------------------------------------------------------------------.
k. λ = λ −.
⋮
l. λ ≠ λ + =−
⊠ ≠+
Here is how we can prove that identity has the commutative and transitive
properties.
1. a = b + Premise /.. b = a
2. b = a − Negated Conclusion
↓
3. a = a − =+(1, 2) -- a/b in (2), given (1), according to the rule =+
↓
4. a ≠ a + =−(3)
⊠ ≠+(4)
1. a = b + Premise
2. b = c + Premise /.. a = c
3. a = c − Negated Conclusion
4. a = c + =+(1, 2)
⊠
Function symbols are treated as constant letter symbols with the added proviso that formulas expressing existence and uniqueness of the function output are granted and have to be explicitly appended to the tree. This makes the use of function symbols rather complicated and engenders certain tricky conditions that create opportunities for errors and for spurious proofs to go through – as we examined in some detail when dealing with the natural deduction system for polyadic logic in the preceding section.
Regulations for the order of nested quantifier symbol eliminations are also needed. Eliminations proceed from the outside, or from the quantifier symbols with broader scope, toward the inner, or narrower-scope, quantifier symbols. We can check that our tree system correctly validates a legitimate quantifier shift (as exemplified in the derivation from “there is at least one person who likes everyone” to “everyone is liked by at least one person”); and the system also checks as invalid the illicit universal/existential quantifier symbol switch (as exemplified in the derivation from “everyone likes at least one person” to “there is at least one person who is liked by everyone.”) We present the tree for the second, illicit quantifier symbol shift, producing a countermodel. We leave the other tree construction as an exercise.
1. ∀x∃yLxy + Premise/.. ∃y∀xLxy
2. ∃y∀xLxy − Negated Conclusion
↓
3. ∃yLay + ∀+(1)
↓
4. Lab + ∃+(3)
↓
5. ∀xLxb − ∃−(2)
↓
6. Lcb − ∀−(5)
⊕ -- the only path remains open, providing a countermodel
The countermodel can be read off the open path of the tree for the argument form
expressing an illicit quantifier symbol switch. We use the metalinguistic symbolic
conventions we have introduced earlier to indicate the countermodel. The tree we
have constructed is not saturated – we have not instantiated the universal quantifier
symbol for all constant letters. To produce the countermodel, we take into consider-
ation the information in the premises and in the conclusion and accordingly produce
extensions of the predicate symbol. The countermodel is for a three-member
domain. From the tree open path we read that the ordered pair of the objects named by a and b belongs to the extension of the predicate symbol whereas the ordered pair of the objects named by c and b does not. We supply more information for the countermodel as is fitting.
𝔐 = <ⅅ = {①, ②, ③}, ||>.
|a| = ①.
|b| = ②.
|c| = ③.
|L| = {<①, ②>, <②, ③>, <③, ①>}.
Every object is related to at least one object but there is no object to which all objects are related. Reading off the open path of the tree, <①, ②> belongs to the extension of L and <③, ②> does not.
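The countermodel 𝔐 can also be verified mechanically. A quick sketch, writing 1, 2, 3 for ①, ②, ③:

```python
D = {1, 2, 3}
L = {(1, 2), (2, 3), (3, 1)}   # extension of L read off (and extending) the open path

premise    = all(any((x, y) in L for y in D) for x in D)   # AxEyLxy
conclusion = any(all((x, y) in L for x in D) for y in D)   # EyAxLxy

assert premise and not conclusion        # a genuine countermodel
assert (1, 2) in L and (3, 2) not in L   # the facts read off the open path
```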
Notably, we do not need rules of the introduction and elimination variety: the type of tree we are implementing, known as a negation tree, requires negation of formulas (negation of the conclusion for the validity check, negation of the formula for the tautology check); accordingly, to examine the validity of an argument form, we place its premises and negated conclusion at the root and check whether the tree closes.
8.2.1 Exercises
1. Determine by use of the ∏ρ=↙↓↘ tree system whether the following argument
forms are valid or invalid. Can we determine invalidity by this method? Can we
extract the countermodel values (truth values of atomic formulas, referents of
individual constants, extensions of predicate symbols) by this method?
a. ∀xFx, ∃xGx ⊢ ∏ρ=↙↓↘ ∃x(Fx ∙ Gx)
b. ∀x(Fx ⊃ Gx), ∃x ~ Gx ⊢ ∏ρ=↙↓↘ ∀x ~ Gx
c. ∃x(x = fx), ∀xRfxx ⊢ ∏ρ=↙↓↘ ~ ∃xRxx ⊃ ~ ∃xfx
d. ∃x∀y(y = x ≡ x = x) ⊢ ∏ρ=↙↓↘ ∀x∃y(y = x ≡ x = x)
e. ∃x(Fx ∨ ~ Fx) ⊢ ∏ρ=↙↓↘ ∀xFx ∨ ∀x ~ Fx
f. ∃xFx ∙ ∃xGx⊢ ∏ρ=↙↓↘ ∃x(Fx ∙ Gx)
g. ∃x∃yRxy ⊃ ∃y∃xRxy, ∀x∀y(Rxy ⊃ (x = y ∨ fx = fy)), ∀x∃y(fxy ≡ Rxy) ⊢
∏ρ=↙↓↘ ∃x∃y(Rfxfy ∨ Rxy)
descriptions. This view originated with the German logician and one of the founders
of modern logic, Gottlob Frege, who opted for taking the meaning of a name to
consist in its extensionalist dimension – a logical predicate that can be predicated of
the entity that is referred to, and, as we know by now, with the predicate being
extensionally defined as the set of entities that are members of the set that is the
denotation or logical meaning of the predicate: with the additional constraint, in the
case of a name, or of a definite description, that the set is a singleton – meaning that
it has exactly one member. Thus, to use the classical example deployed by Frege
himself, the name “Aristotle” should be fixed through its extensional meaning as
“the teacher of Alexander the Great,” with the predicate “the teacher of Alexander the Great” denoting (not connoting) the singleton set with the entity in question as the set’s only member. Interestingly, it might be objected that the name “Aristotle”
could also be fixed extensionally as “the student of Plato” and in many other ways
(including by using combinations of predicates), which shows dependency on vari-
able context and, as such, falls outside the scope of the standard extensional logic.
The response to this is twofold: for one, the dependence on context is a so-called
pragmatic matter that is not properly handled through the resources of formal logic;
we may compare, for instance, the manner in which we approach logical predicates
as essentially non-logical symbols in the sense that the extensions of predicates are
semantically open to endless possibilities across constructions of models; this does
not vitiate the logical project because the provability of logical truths can never depend on the incidental assignments of extensions to the predicates in specific models.
Similarly, names are to be understood as model-dependent in the logical construc-
tions within the formal system but this has no bearing, as it should not have any
bearing, on the characteristics of the logic (such as the collection of logical truths
provable in the logic or the relation of logical consequence that characterizes the
logic.) The second point to make is that names, and the definite descriptions to
which we will turn next, are incomplete terms for a reason that will soon become
visible.
A definite description is generated by making a unique attribution of a predicate to an entity. Keeping natural language in mind, we may also render this as “the F,” when the context makes it precisely given that this is a unique F. The definite article is usually deployed to indicate uniqueness in natural language but, as always, there are other usages as well. Sometimes it means “all,” as in “the soldier can know no fear.”
The standard textbook example of a definite description case, from Russell’s work,
speaks of “the present King of France.” The sentence that is queried, as to its truth
value, is: “the present King of France is bald.” Russell’s view is that “the present
King of France” is not to be treated as a logical constant but to be treated instead in
regimentational fashion: it has to be rendered in quantificational language as “there
is a unique entity, such that it is the present King of France.” We may not see imme-
diately how to express uniqueness but this apparent challenge is easily circumvented
because the symbolic resources we have at our disposal in predicate logic allow us
to express uniqueness claims like this. The main thrust is that this regimentational
rendering of the definite description is the correct approach to capturing the logical
meaning of this linguistic component, and also this is the right way for representing
9 Definite Descriptions: ∏πφ=⍳ 399
the logical structure of any meaning conveyed by any meaningful sentence that has
in it definite descriptions. Reliance on linguistic grammatical structure is fatally
misleading. It appears indeed that we should rather treat a definite description like
the one in our example as a name, but this is wrong according to Russell. There also
seems to be a poignant challenge that would arise if we attempted to treat the defi-
nite description as a name: there is no referent, we want to say, since there is no
“present” King of France. Of course, we could construct semantically a model in
which the name would have a referent in the specified domain but, as we have
already indicated, this is a pragmatic issue: it is as crucial that we can always con-
struct a model in which the definite description does not refer – and, indeed, this is
always possible for any definite description. There seems to be a problem about this:
what should we say that the truth value of the sentence is when we are dealing with
a non-referring or non-denoting definite description?
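The uniqueness claim that Russell's regimentation requires can be spelled out and evaluated in a finite model. A minimal sketch of our own, in which the domain elements and predicate extensions are purely illustrative: “the K is B” becomes ∃x(Kx ∙ ∀y(Ky ⊃ y = x) ∙ Bx).

```python
def russell(D, K, B):
    """Russell's regimentation of 'the K is B': there is exactly one K,
    and it is B -- Ex(Kx & Ay(Ky -> y = x) & Bx)."""
    return any(
        x in K and all(y == x for y in K) and x in B
        for x in D
    )

D = {"louis", "macron", "eiffel_tower"}
# No present King of France: K is empty, so the sentence comes out false,
# with no need for a truth-value gap.
assert russell(D, K=set(), B={"louis"}) is False
# With a unique king who is bald, the sentence is true.
assert russell(D, K={"louis"}, B={"louis"}) is True
```

Note that the falsity of the empty-K case falls out of the quantificational structure itself, which is exactly Russell's point: no special semantic stipulation is needed.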
There is, however, a way around this particular issue but the solution, initially
contemplated by Frege himself, comes across as ad hoc and, some might say,
“unnatural” although this is a rather vague way of speaking for our purposes. We
could specify that one distinct and, inevitably, unique element of our domain is what we may call the “nullary object.” Any non-referring constant in our language would then be stipulated as having this nullary entity as its referent, and in our semantics we would accordingly mandate that a sentence with such a constant has to be false. Bivalence dictates this approach since we have only the truth values true and false, and we ought to refrain, for obvious reasons, from assigning the truth
value true to a sentence that contains a non-referring term. Of course, this may be
philosophically contentious but the formal resources that would be required to
accommodate, for instance, a view that such sentences are neither true nor false are
simply not available in our formalism. The discussion can then be pushed to another
level, as a logical-philosophical discussion, about the aptness and limitations of the
standard bivalent logic, or about the motivations for choosing a logic or, indeed, it
could be seen as an opportunity to discuss the overarching issue about possible
motivations and supports for alternative or non-classical logics. As it is, the stipula-
tion of a nullary object fixes our problem but this solution is one of the most unpop-
ular fixes ever considered. But Russell’s theory of definite descriptions does not
originate from a perceived difficulty with handling such expressions as names; his point, rather, is that the deep structure of definite descriptions is quantificational and ought to be represented through regimentation in the language of predicate logic rather than by treating the descriptions as names. When this is done, as we will see, opportunities arise for disambiguating with respect to the scope of the quantifier – which has to be used in rendering definite descriptions in Russell’s theory – and this may be presented as an additional benefit of the approach: we now have additional opportunities for detecting and removing such logically anomalous interferences as arise from ambiguity of scope, which are otherwise undetected and can cause confusion in ordinary discourse. Objections have been raised against Russell’s view of definite descriptions, having to do with the role played by existential presuppositions in assigning truth values to claims about entities, but such matters tend to be either pragmatic – to be relegated to analysis of linguistic usage rather than
formally in the family of logics within which we are operating: the truth value of the
sentence ⌜B⍳xKx⌝ in the model would have to be unassignable or indeterminate,
neither one of the truth values true and false. This, however, cannot be contem-
plated. Every meaningful sentence is to take exactly one truth value from the set of
truth values {T, F}; otherwise, it cannot be a meaningful sentence, and yet, our
sentence ought to register as carrying meaning and as being amenable to our seman-
tic analysis. If we constructed a deviation, a formal system with an added third truth
value (for indeterminate) and accordingly presented connective symbols that are
defined differently from the way they are defined in ∏πφ=⍳, we could also attempt a
different management of this subject; this option is also unavailable, however, since
we are working strictly within the confines of our standard bivalent logic. There are two solutions we can briefly examine, both of which render the truth value as false in the case of a non-denoting or non-referring definite description sentence (when there is no referent in the domain of the model for the term formed for the presumably uniquely referring expression.) One approach (the Frege approach) tinkers with the
semantics of our language and the other approach (the Russell or regimentation
approach) radically pushes for the systematic elimination of the iota-quantified
symbolic expressions.
1. Frege’s solution, to which we alluded earlier and dismissed as possibly ad hoc
and unintuitive, enhances our semantic arrangements in a certain way: a special
object, the nullary or nil object, symbolized by “⓪”, is added to each domain;
every non-referring term is taken as referring to this nothing-object with talk
about “nothing” managed systematically in the semantics, so that sentences with
terms referring to this non-object object are defined as being false. Frege thought that this tinkering can be justified: it is a limitation of natural language that it permits grammatical constructions of names that do not refer; in a perspicuous symbolic notation, in which ambiguity cannot be tolerated, the absent referent must be rendered explicit. This does not mean that we commit ourselves to a nonsensical way of speaking, in the semantics presented in our metalanguage, about a thing that is and is not a thing; instead, the mysterious thing is demystified by the always explicatory and perspicuous specifications of our formal language and its semantics: it is treated as an honorary object (as a member of the things we are talking about), but references to it are rewarded – so to speak – by giving the expression the truth value false. In this way, the truth values do the work, and no mystification about an elusive object need arise, since the mechanism for including such an object in the domain is merely one of systematically managed stipulation. Given the Fregean stipulation,
we have that the statement “the present King of France is bald” is false since the
referent of the term “the present King of France” is the nullary object of the
domain. Thus, the nullary object, symbolized by “⓪”, is the referent of ⌜⍳xKx⌝
and the symbolic expression, the formula, that contains this term has to take the
truth value false.
|B⍳xKx|ℳ, σ = F iff |⍳xKx|ℳ, σ = ⓪
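The Fregean stipulation can be illustrated with a small programmatic sketch (this is not the author's formalism; the toy domain, predicate extensions, and function names are all hypothetical): a designated null object plays the role of "⓪", every non-denoting description is sent to it, and atomic sentences whose term denotes it come out false because the null object belongs to no predicate's extension.

```python
# A toy sketch of the Fregean semantics: a nullary object is adjoined to the
# domain, and non-referring definite descriptions are taken to denote it.
NULL = object()                      # the nullary object standing in for "⓪"

domain = {"Alice", "Bob", NULL}      # a hypothetical model domain, with ⓪ adjoined
bald = {"Bob"}                       # extension of B ("is bald"); ⓪ never belongs
king_of_france = set()               # extension of K: satisfied by nothing

def iota(extension):
    """Referent of 'the (unique) K': the sole satisfier if there is exactly
    one; otherwise the nullary object."""
    return next(iter(extension)) if len(extension) == 1 else NULL

the_king = iota(king_of_france)      # non-denoting, so the referent is ⓪
print(the_king in bald)              # False: "the present King of France is bald"
```

Since membership in an extension is all the truth definition consults, no special clause for "nothing" is needed: the stipulated object does the work, exactly as the passage describes.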
9.1 Exercises
1. The symbols from {x, y, z, u, v, w, …, x1, …}, with subscripts from the countable
natural numbers, have been legislated in our formal grammars for predicate logic
to be used for individual variables. Yet, ⌜⍳x⌝ accompanied by an open predicate letter (a predicate letter with an individual variable matching the symbol in ⌜⍳x⌝) is not an individual variable although it is of course a term. What kind of term is it?
How do you explain this?
2. What is Frege’s solution to the problem of characterizing definite descriptions,
and specifically non-denoting definite descriptions? Discuss this suggested solu-
tion. Compare to Russell’s response and discuss further. Are there pros and cons
to both?
3. Let us introduce the symbol "E!", defined as follows:
E!xFx ≝ ∃x(x = a ∙ Fx).
Does this symbolize unique existence? Do we need this in a formal system of the
standard predicate logic? (Recall and discuss the existential commitments gener-
ated automatically in the standard predicate logic.) How is this symbol different from the symbol ⌜∃!⌝, which we have also introduced?
4. Discuss a proposal that we treat non-denoting definite descriptions as having no
truth value: they are not true and they are not false.
a. Distinguish two different cases: that non-denoting definite descriptions do
not have any truth value whatsoever; and, an entirely different approach, they
do have a truth value but this truth value is not “seen” by the standard logic:
it is the truth value “neither true nor false.”
b. What is the difference between the two cases?
c. What does it mean to compel a revision of logic in order to treat non-denoting
definite descriptions as having a non-standard third value? (Recalling an ear-
lier discussion in this text, the formal system with the three truth values is not
an extension of but a deviation from the standard logic.)
d. Write out truth table definitions of the connective symbols for a language
with the three truth values, {T, N, F}. Surely, there is not just one way to do this, but define the connectives as you see fit, justifying your choices. (For instance, the negation of N should also be N; N and N should yield N; and so on.)
e. Are the connectives in your three-valued logic the same as the standard
connectives?
f. What happens in assessing Russell’s famous example about the bald king in
this three-valued system?
g. Discuss objections. Discuss also the broader implications for logic of the
acceptability of motivating formal systems with more than the standard two
truth values.
We have to define the concept of a set first. We could rely on an intuitive sense and present a "set" as a collection of any objects regardless of how those objects are related. Offering a more systematic definition, we define a set as follows: A set is a collection of discrete objects of any kind, regardless of whether those objects are related to each other or not, under the sole proviso that for any given item u and a specified set x it is possible to determine whether the item labeled (denoted, referred to by) u is in x or not. It follows from this definition also that a set is completely
characterized by the objects that are included in it. A set is a collection of objects,
not of the symbols we will be using to refer to such objects. As we will see, sets
themselves can be members of sets. After all, we take sets to be objects without
belaboring any concerns about what kinds of objects they are. This shows that the
view of what counts as an object – the metaphysics we employ – is quite permissive.
We can regard sets themselves as abstract objects without bothering about the meta-
physical implications of this position.
An alternative view, due to von Neumann and not much in use, reserves the term
“set” only for collections that share all their members with a specified larger set; all
other collections are called "classes." On this view, and for the nomenclature that is developed for this purpose, there are properly so-called ultimate classes, which are not members of any other classes. We will disregard this view here.
We say that a set has members or elements. The notion of belonging to a set is
fundamental and the symbol we use is “∈” while we symbolize sets by variables
from the set {x, y, z, …, xi, …}, where i is a positive integer or natural number. The
stock of our variables is, accordingly, countable or denumerable (the infinitarian
size of the natural numbers.) In texts, capital letters are quite often used to symbolize sets, but we use small variable letters, also permitting ourselves quantification over such variables by using the formal language of predicate logic which we developed in chapters 5–8. We can use small letters from the earlier part of the English alphabet, {a, b, …}, possibly with subscripts from the positive integers, as variables
denoting distinguished elements of sets. Our notational symbolizations are pre-
sented in a metalanguage – not a formally constructed idiom but a convenient
assortment of English words and symbols that enrich the metalanguage. The sym-
bolic expressions that enrich our metalanguage are constructed grammatically in
accordance with the conventions we established for predicate logic.
For the sake of providing more detail, we point out that quantifying over set-
variables, as we do in our metalanguage, generates a higher-order logic as a proper
logical formal system: we are not presenting such a logic – properly called second-order logic – but we are simply employing first-order logic variables for a metalinguistic presentation of set-theoretical statements. Incidentally, we catch a glimpse in
all this of the fact that set theory is not the same as logic; set theory is used as an
instrument in the construction of logical formalisms but it is a different species. This
10.1 Definitions: Set, Membership, Distinguished Types of Sets; Ways of Defining… 407
sentence in our predicate logic notation, since these are two free variables. It is
standard in mathematical textbooks to present definitions in this fashion.
Let us recall that sets can be members of sets. Sets of sets can be members of
sets, and so on. The building of sets can go on like this. Notice, however, that, offi-
cially, we cannot speak yet of infinite sets and we will need to make appropriate
stipulations legislating that infinite sets are indeed constructible insofar as such sets
can be said to exist in the appropriate sense. As we will see, there is an empty set
which is a set with no members whatsoever. The symbol used to refer to the empty set (also called the Null or Nullary Set) is "∅". (Other symbols in the literature are "{}" and, in older texts, "Λ" or "0".) Only one such set can exist since there is only one way of having no members: if we proposed that some other empty set exists besides the first empty set we have, then the presumed two sets would have exactly the same members – none! – and, therefore, by Extensionality, they would be the same set. Thus,
no second empty set can exist; therefore, only one empty set exists. Another type of
set we define is the universal set, which we symbolize by “ω”. As we will see, this
set is not to be considered as the set of absolutely everything but rather as a specified
set of all the contextually relevant objects. This set could be infinite, of course: we
could be talking about the set of natural numbers, for instance.
We can show step by step that there can only be one empty set.
1. ∅ ≠ ∅1 (assuming a second empty set)
2. ∃x(x ∈ ∅1 ∧ x ∉ ∅) (charged assumption, for reductio 1)
3. ∀x(x ∈ ω ⊃ x ∉ ∅) (by definition of the empty set)
4. ∀x(x ∈ ω ⊃ x ∉ ∅1) (by definition of the empty set)
5. t ∈ ∅1 ∧ t ∉ ∅ (existential instantiation for random t 2)
6. t ∈ ∅1 (Conjunction Elimination 5)
7. t ∉ ∅ (Conjunction Elimination 5)
8. t ∈ ω ⊃ t ∉ ∅ (Universal Instantiation 3)
9. t ∈ ω ⊃ t ∉ ∅1 (Universal Instantiation 4)
10. t ∈ ω (Fundamental Stipulation for all members)
11. t ∉ ∅1 (Conditional Elimination 9, 10)
12. ⊥ (Absurdity 6, 11 – reductio success!)
13. ~ ∃x(x ∈ ∅1 ∧ x ∉ ∅) (reductio)
14. ∀x(x ∈ ∅1 ⊃ x ∈ ∅) (Exchange of Quantifiers and Conditional Exchange Rule)
---The proof can be repeated in the opposite direction, proving:
∀x(x ∈ ∅ ⊃ x ∈ ∅1)
Thus,
∀x(x ∈ ∅1 ≡ x ∈ ∅)
Therefore,
∅1 = ∅
To ensure that our intuitively based theory does indeed permit constructability of
the empty set we need an axiom that stipulates the existence of such a set and such
an axiom is indeed included among those of the Zermelo-Fraenkel theory. We will
briefly scan some of those axioms.
410 10 Basics of Set Theory
Some further preliminary comments are in order. The order in which the symbols of the set members are written is unimportant. This does not apply to a special category which we will call ordered n-tuples, which we will introduce below; but it does apply to sets: accordingly, you could rewrite a set by altering the order of the symbols of its members and you would still have a symbolization of the same set.
{1, 2, 3} = {1, 3, 2} = {2, 1, 3} = {2, 3, 1} = {3, 1, 2} = {3, 2, 1}.
Additionally, each symbol may be written only once.
{1, 2, 2} = {1, 1, 2} = … = {1, 2}.
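The two remarks above can be checked directly with Python's built-in sets (used here purely as an illustration), which likewise disregard order and repetition of the listed members.

```python
# Python sets behave extensionally: order of listing and repeated symbols
# make no difference to which set is symbolized.
assert {1, 2, 3} == {3, 1, 2} == {2, 3, 1}   # order of listing is immaterial
assert {1, 2, 2} == {1, 1, 2} == {1, 2}      # repeated symbols add nothing
print(len({1, 2, 2}))                        # → 2
```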
The familiar equality sign “=” is used in our metalanguage to symbolize equality
of sets – which, by the Principle of Extensionality, means that the two sets share all
their members – and it is also used to define a set by means of enumerating, within
the set brackets, the symbols of all its members. Context removes ambiguity. There
is another subtle issue here worth mentioning briefly. Does the symbol for the set
represent the set symbolically or does it name the set? It is the former; to name the
set we should use a special notation – known as the lambda-notation. For instance,
x = {a, b, c}
is a symbolic representation of the set with members named by the letters
whereas the name of this set is,
λx.((x = a) ∨ (x = b) ∨ (x = c))
Let us see examples of sets and of membership, which is a binary relationship
symbolized by “∈”. We may also use “∉” as the symbol indicating non-membership.
Alternatively, we could indicate non-membership by negating a sentence indicating
membership. This reminds us that we allow ourselves metalinguistic usage of the
whole gamut of symbols we have constructed for our formal languages: {~, ∨, ∙, ⊃,
≡, ∀, ∃}. It is not the symbol but the object that we take as being a member of a set or not; to indicate the value of the symbol – the thing to which the symbol refers – we use "| |", placing the symbol within the parallel vertical bars: this says that the object denoted or referred to by the symbol is, or is not, a member of the set.
x = {1, 3, 5} |3| ∈ x, |2| ∉ x -- relative to a universal set {1, 2,
3, 4, 5}
y = {▭, △, ◊, ○} |△| ∈ y, |♦| ∉ y -- to be precise, we must have some universal set that has as member the object symbolized by "♦". We will also see, however, that we will not permit construction of a universal set in the absolute sense – no set of all things whatsoever! This is done to prevent set-theoretical paradoxes from arising. Hence, we can only speak of a relatively universal set, similarly to how we defined a universe or domain of discourse when we studied predicate logic.
z = {{♠}, {♡}} -- indeed, sets can be members of sets! – {♡} ∈ z; notice that |♡| ∉ z: the set is a member of the set z, not the object.
w = {∅, {∅}} -- the empty set is a member of w; the set with the empty set as its only member is also a member of w; these are two distinct sets, of course: the empty set has no members but the set with the empty set as its only member does have a member.
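The membership examples above can be replayed as a sketch in Python. One caveat: ordinary Python sets cannot contain other (mutable) sets, so `frozenset` stands in for the inner sets; this is an implementation detail of Python, not a set-theoretic one.

```python
# Membership versus objects: it is the set {heart}, not the object heart,
# that is a member of z; and ∅ is distinct from {∅}.
x = {1, 3, 5}
assert 3 in x and 2 not in x

z = {frozenset({"spade"}), frozenset({"heart"})}   # a set of two singleton sets
assert frozenset({"heart"}) in z     # the SET {heart} is a member of z
assert "heart" not in z              # ...but the object itself is not

w = {frozenset(), frozenset({frozenset()})}        # analogue of {∅, {∅}}
assert len(w) == 2                   # the two members are distinct sets
```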
10.1.1 Exercises
3. Provide examples of pairs of sets which are equal with each other even though
they are defined (by abstraction) by means of different properties. In what sense
do such sets capture the meaning of the properties? Can we explain why they are
equal even though they are defined by means of different properties? Why isn’t
this a concern when it comes to the study of predicate logic?
4. Determine if the denoted objects are members of the given sets or not. The values of the symbols should be notationally indicated by the use of "| |": those values are the denoted objects. (For instance, |2| denotes the number two.) Nevertheless, bowing to standard notational conventions, we omit the vertical bar symbols.
[“∈?” as a metalinguistic symbol in this exercise asks the question about
membership.]
a. 2 [∈?] {x/ x is a natural number}
b. 3 [∈?] {{3}, 1, 2, 3}
c. 4 [∈?] {1, 2, 3, {4}}
d. 5 [∈?] {x/ either x is 2, or x is 5}
e. φ [∈?] {ψ, χ}
f. ▭ [∈?] {△, ▭, ◊}
g. ↭ [∈?] {↫, ↬}
h. {∅} [∈?] {∅}
i. {∅} [∈?] {{∅}}
j. {{∅}} [∈?] {∅}
k. ∅ [∈?] ∅
l. ∅ [∈?] {∅}
5. Define the following sets, given in extensional or enumeration definition, by
abstraction and by recursion.
a. s1 = {1, a, 2, b}
b. s2 = {1, 2, …}
c. s3 = {2, 4, 8, 16, 32, …}
d. s4 = {□, ▭, △, ◊, ○}
e. s5 = {True, False}
f. s6 = {(,)}
g. s7 = {w, x, y, z}
h. s8 = {{∅}}
i. s9 = {∅}
j. s10 = {1, {1}}
k. s11 = ∅
l. s12 = {∅}
6. Define the following sets, given by abstraction, by enumeration and by recursion.
a. s1 = {x/ x is a natural number}
b. s2 = {x/ x is a vowel of the English alphabet}
c. s3 = {x/ x is an object in front of you as you are reading this}
A concept we need to define is that of subsethood. Other words used for this notion
are “containment” and “inclusion.” A set can be a subset of another set, including of
itself. Objects cannot be subsets of sets; only sets can be subsets of sets. Objects are
members of sets, not subsets of sets. Subsethood is a relation between sets. We may
wonder about constructability of a scheme by which a set can be a member of itself.
This appears feasible notionally: since we have permitted abstraction as a method
for defining a set, and we have invoked a postulate of specification, the only catch is
whether self-inclusion or self-membership is a meaningful concept: and it seems
that it is. Nevertheless, self-membership, and the inevitably definable parallel notion of lacking self-membership, land us in notorious set-theoretical paradoxes. We
will attend to a brief examination of this topic subsequently. We could, indeed, and
we might as well, block by fiat construction of a set that has itself as a member. The
theoretical challenge remains, however, and we will see how it arises. It is more
complicated, and beyond our present scope, to describe and discuss proposed
10.2 Subsethood, Power Set, Cardinality 417
○○ x ⊆ y ≡ ∀z(z ∈ x ⊃ z ∈ y) ≡ ~ ∃z(z ∈ x ∙ z ∉ y)
○○ If a set x is a subset of a set y, then we say that y is the superset of x.
We may stipulate symbols for this too: x ⊆ y ≡ y ⊇ x
○○ x ⊂ y ≡ ∀z((z ∈ x ⊃ z ∈ y) ∙ ∃w(w ∈ y ∙ w ∉ x))
○○ A proper superset can be defined and symbolized analogously to the
case of the general superset.
Notice that, in defining the general notion of subset, nothing is said about y hav-
ing any additional members besides those that are in x. It may or may not. Of course,
this means that every set is a subset of itself. All its members are, trivially, members
of … itself. Also, the empty set is a subset of every set. Let us try to see why: consider the meaning of the sentence, "if t is a member of the empty set, then t is a member of some set – any set – x." But the empty set has no members! Thus, the
○○ ∅ ⊆ y --- the empty set is a subset of any set. In this case, the empty set
happens also to be a member of the set y. The set with the empty set as
member is a subset of y. This example illustrates the distinction between
membership and subsethood neatly.
¾¾ The set that has as members all, and only, the subsets of a given set x is called
the power set of x and is symbolized by “℘(x)”. The set x itself, and the empty
set, are also members of the power set since they are subsets of x.
¾¾ Example: x = {⫯, ⫰}
¾¾ ℘(x) = {{⫯, ⫰}, {⫯}, {⫰}, ∅}
¾¾ ℘({1}) = {∅, {1}}
¾¾ ℘(∅) = {∅}
¾¾ ℘({∅}) = {∅, {∅}}
The number of objects a set has is called the Cardinality of the set. If the cardinality of a set x is n, then the cardinality of its power set ℘(x) is 2ⁿ.
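The power set and the 2ⁿ cardinality fact can be checked with a short sketch (the helper name `power_set` is ours, not the text's); `frozenset` is used so that the subsets can themselves be collected into a set.

```python
from itertools import combinations

def power_set(s):
    """All subsets of s, as frozensets (so they can be collected in a set)."""
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

x = {"a", "b"}
ps = power_set(x)
assert frozenset() in ps and frozenset(x) in ps   # ∅ and x itself are subsets
assert len(ps) == 2 ** len(x)                     # cardinality 2^n
print(sorted(len(s) for s in ps))                 # → [0, 1, 1, 2]
```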
10.2.1 Exercises
s1 = {∅}.
s1 = {{∅}}.
s1 = {∅, {∅}}.
s1 = {∅, {∅}, {{∅}}}
11. Construct examples to show that it is possible for three sets x, y and z that:
a. x ∈ y, y ∈ z, x ∉ z
b. x ∈ y, y ⊆ z, x ∉ z
c. x ⊆ y, y ∈ z, x ⊈ z
d. x ∈ y, y = z, x ⊈ z
12. Is subsethood an equivalence relation? Justify your answer.
13. Can we define a relation of non-subsethood, presumably symbolized by "⊈"? How would it be defined, if it is definable? What about a relation of non-proper-subsethood, presumably symbolized by "⊄"? If such relations are definable, are they equivalence relations or not?
An ordered pair <x, y> can be written as, and is considered identical
with, the set: {{x}, {x, y}}.
It can also be written as: <x, y> = {x, {x, y}}
Intuitively, we can account for this set-theoretic arrangement, by consid-
ering a specified order, from left to right, of two distinguished objects
denoted by “a” and “b” respectively.
[Diagram: the two objects a and b, displayed in left-to-right order]
One arrangement that can accommodate a notational convention for showing order is by considering how many objects are to the left of each object (including the object itself in the tally): thus, to the left of the object denoted by "a" we have only that object itself; in the case of the object denoted by "b" we have both objects covered as we move to the left. Thus,
• left-a: {a}
• left-b: {a, b}
• ORDER(a, b) = {{a}, {a, b}}
• We may also write, as previously: {a, {a, b}}
• We can show that the condition laid down for determining equality of
pairs is satisfied in this way. This is crucial for showing that this is a
permissible notational arrangement.
• <x, y> = <u, v> Assumption
• x = u and y = v By definition of ordered pair 1
• {x} = {u} By definition of set equality 2
• {y} = {v} By definition of set equality 2
• {x, y} = {u, v} By def. of set equality 2
• {{x}, {x, y}} = {{u}, {u, v}} Def. set equality 3, 5
• Proceeding in the opposite direction, from bottom to top, we have:
• {{x}, {x, y}} = {{u}, {u, v}} Assumption
• {x} = {u} By definition of set equality 1
• {x, y} = {u, v} By def. set equality 1
• x = u By def. set equality 2
• y = v By def. set equality 2, 3, 4
• <x, y> = <u, v> By definition of ordered pair 4, 5
As a corollary we have:
○○ <x, x> = {x, {x}}
○○ <x, x> ≠ <x>
○○ <x, y, y> ≠ <x, y>
99 In the case of an ordered triplet, we have:
○○ <x, y, z> = <<x, y>, z>
○○ Generalizing to the case of the ordered n-tuple, we have:
<x1, …, xn> = {{x1}, {x1, x2}, …, {x1, x2, …, xn}}
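The defining condition for pairs can also be checked mechanically, using the first of the two definitions given above, <x, y> = {{x}, {x, y}} (a sketch; the helper name `pair` is ours, and `frozenset` stands in for sets so that pairs can be compared for equality).

```python
def pair(x, y):
    """Kuratowski's ordered pair <x, y> = {{x}, {x, y}}."""
    return frozenset({frozenset({x}), frozenset({x, y})})

# The defining condition: pairs are equal iff their components are equal,
# componentwise and in order.
assert pair(1, 2) == pair(1, 2)
assert pair(1, 2) != pair(2, 1)          # order matters, unlike in {1, 2}

# Under this definition <x, x> = {{x}, {x, x}} collapses to {{x}}.
assert pair(0, 0) == frozenset({frozenset({0})})
```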
99 We define as the Cartesian Product of two sets x and y, symbolized as “x ⤬ y”,
the set of all definable ordered pairs such that the first member of each pair is
from set x and the second member of each pair is from set y, taken in all possible
combinations:
x ⤬ y = {<u, w> / (u ∈ x) ∙ (w ∈ y)}
Let us examine an example:
○○ {1, 2} ⤬ {a, b, c} = {<1, a>, <1, b>, <2, a>, <2, b>, <1, c>, <2, c>}
○○ Notice that, {a, b, c} ⤬ {1, 2} = {<a, 1>, <a, 2>, <b, 1>, <b, 2>, <c, 1>,
<c, 2>} ≠ {1, 2} ⤬ {a, b, c}
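The example above, including the non-commutativity of the product, can be reproduced with Python tuples as the ordered pairs (a sketch; the helper name `cartesian` is ours):

```python
from itertools import product

def cartesian(x, y):
    """The set of all pairs <u, w> with u from x and w from y."""
    return {(u, w) for u in x for w in y}

a = cartesian({1, 2}, {"a", "b", "c"})
b = cartesian({"a", "b", "c"}, {1, 2})
assert len(a) == len({1, 2}) * len({"a", "b", "c"})   # |x ⤬ y| = |x| · |y|
assert a != b                                          # the product is not commutative
assert set(product({1, 2}, {"a", "b", "c"})) == a      # same set via itertools
```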
10.3 Ordered Pairs and Cartesian Products 423
10.3.1 Exercises
We can now define operations on sets or, as we call them, set-theoretic operations.
These operations are interpretations of corresponding functions of the standard
Boolean algebra: the same is the case, as we have pointed out already, for the truth
functions or connectives that are defined in the classical sentential logic we have
studied. It is an interesting exercise to attempt to match set-theoretic operations with
their corresponding sentential connectives which are also interpretations of the
same Boolean functions; the connectives are defined with inputs from the product
of the set of truth values {T, F} by itself ({T, F} ⤬ … ⤬ {T, F}) and with outputs
from the set of truth values {T, F} as range. The product of the set of truth values by
itself is taken n times when we speak generally of a connective of arity or degree n.
For our familiar cases of unary functions the domain is just the set {T, F} and for the
binary functions the domain is {T, F} ⤬ {T, F}. In the case of binary set-theoretic operations, the domain is the Cartesian product of the set of the given sets by itself and the range is the set of the given sets.
We continue with defining certain set-theoretical operations. One unary opera-
tion is defined and the rest are binary operations. The unary operation is called
Complementation and symbolized by a “c” as superscript to the set symbol. (Other
symbolizations include a prime as superscript on the set symbol and a straight line
on top of the set symbol.)
99 Monadic operations are from the power set of the universal set to the power
set of the universal set (meaning that the single input of the operation is a
subset of the universal set and outputs are also subsets of the universal set.)
○○ ℘(ω) ↦ ℘(ω)
99 Binary operations are from the Cartesian product of the power set of the universal set by itself to the power set of the universal set (meaning that inputs are subsets of the universal set and outputs are subsets of the universal set.)
The generalized case of n-place operations is defined accordingly.
○○ (℘(ω) ⤬ ℘(ω)) ↦ ℘(ω)
10.4 Set-Theoretic Operations 425
ωc = ∅
(xc)c = xcc = x Involution Property
• The Difference (also Relative Difference) between sets x and y, symbolized as “x
− y”, is defined as the set z which contains as members all and only the members
of x which are not members of y.
○○ x – y ≝ {z/ z ∈ x ∙ z ∉ y}
The unary operation of complement we defined earlier can now be con-
ceptualized as the relative or binary complementation operation so that,
• xc = ω – x
Still, it is to be noted that complementation is a unary operation whereas
difference or relative complementation is a binary operation.
The complement of a set x is the difference of that set from the universal
set: we can call this Absolute Difference.
If the two sets x and y have no members in common, then the difference between x and y is the set x; if, on the other hand, the first set has no members that are not in the second set, then the difference is the empty set. Obviously, if the first set is the empty set, then the difference is the empty set. The difference of a set y from the universal set always yields the same set as the complement of y. The difference between any set x and the empty set is the set x.
• Examples:
○○ x = {1, 2, 3}, y = {{1}, {2}, 3} -- x – y = {1, 2}
○○ x = {a, 1, {1}}, y = {{1}, a} -- x – y = {1}
○○ x – y ≠ y – x -- in the preceding example, y – x = ∅
○○ x = {a, b}, y = {1, 2} -- x – y = ∅
○○ x = ω, y = ω -- x – y = ∅
○○ x = ∅, y = ∅ -- x – y = ∅
○○ x = ∅, y = {1, 2, 3, …} -- x – y = ∅
○○ x = {1, 2, 3, …}, y = ∅ -- x – y = {1, 2, 3, …}
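The behavior of relative difference noted above can be replayed (with small variations on the sets) using Python's set difference operator `-`:

```python
# Relative difference with Python sets: x - y keeps the members of x
# that are not members of y.
assert {1, 2, 3} - {3} == {1, 2}
assert {"a", "b"} - {1, 2} == {"a", "b"}          # disjoint sets: x - y = x
assert set() - {1, 2, 3} == set()                 # ∅ - y = ∅
assert {1, 2, 3} - set() == {1, 2, 3}             # x - ∅ = x
assert {1, 2} - {1, 2, 3} != {1, 2, 3} - {1, 2}   # difference is not commutative
```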
99 The next binary operation we define is called Union of sets x and y and
symbolized as “x ∪ y”: the union of sets x and y is the set z which contains
as its members all the members of x and all the members of y: but notice
how we are to express this (showing in this way the correspondence
between the set-theoretic operation of union and the sentential connective
inclusive disjunction, both of which are interpretations of the same Boolean
function.) A set z is the union of sets x and y if and only if for any member
of z, it is either a member of x or a member of y or both.
x ∪ y ≝ {z/ z ∈ x ∨ z ∈ y} -- inclusive disjunction, symbolized
by the wedge, means “either one or the other or both.”
The union of the universal set with any set yields the universal set.
The union of the empty set with any set x yields the set x. [Notice
that we are speaking of the union both as the operation and as the
result of the operation: we trust that the ambiguity is removed on
the basis of the context; if we were building a strict formal language, we would have to take pains to occlude such sources of ambiguity.]
The union of any set with itself is the set itself. After all, the cor-
responding sentential connective of inclusive disjunction has the
property we have called idempotence in other sections of this text:
the same ought to be expected for the set-theoretic operation that
interprets the associated Boolean function.
○○ Examples:
x = {1, 2}, y = {a, b} -- x ∪ y = {1, 2, a, b}
x = {△, ◊}, y = {○, △} -- x ∪ y = {△, ○, ◊}
x = {w/ w is an even positive integer}, y = {w/ w is an odd
positive integer} -- x ∪ y = {w/ w is either an odd or an
even positive integer} = I+
x = ω, y = {□, ▭} -- x ∪ y = ω (regardless of what the
universal set is!)
○○ x ∩ (y ∪ z) = (x ∩ y) ∪ (x ∩ z) Distributive Property
(intersection over union)
○○ x ∪ (y ∩ z) = (x ∪ y) ∩ (x ∪ z) Distributive Property
(union over intersection)
○○ x ∩ (x ∪ y) = x ∪ (x ∩ y) = x Absorptive Property
○○ x ∪ ∅ = ∅ ∪ x = x Neutral Element ∪
○○ x ∩ ω = ω ∩ x = x Neutral Element ∩
○○ x ∪ ω = ω ∪ x = ω Universal Set with ∪
○○ x ∩ ∅ = ∅ ∩ x = ∅ Empty Set with ∩
○○ x ∪ xc = ω Excluded Middle
○○ x ∩ xc = ∅ Non-Contradiction
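The laws listed above can be spot-checked over a small universal set (the sets chosen here are arbitrary, and complements are taken relative to that universal set):

```python
# A quick check of the distributive, absorptive, neutral-element, excluded
# middle, and non-contradiction laws with Python sets.
omega = {1, 2, 3, 4}

def comp(s):
    return omega - s          # complement relative to the universal set

x, y, z = {1, 2}, {2, 3}, {3, 4}
assert x & (y | z) == (x & y) | (x & z)   # distribution of ∩ over ∪
assert x | (y & z) == (x | y) & (x | z)   # distribution of ∪ over ∩
assert x & (x | y) == x | (x & y) == x    # absorption
assert x | set() == x and x & omega == x  # neutral elements
assert x | comp(x) == omega               # excluded middle
assert x & comp(x) == set()               # non-contradiction
```

Of course, a check over one choice of sets is no proof; but it makes the duality pairings in the list easy to see at a glance.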
vice versa, since duality is a symmetric notion. The dual of the universal
is the empty set, and vice versa. One of the equations explicates the
characteristic property of distributivity – and, by duality, we have both
distributivity of union over intersection and of intersection over union.
Or, notice the interchange of universal with empty in: the complement
of the universal is the empty set and, by duality, the complement of the
empty set is the universal set. (As indicated above, the complement is its own dual or it is self-dual, as we say: we can think of it as being interchanged degenerately, if we wish…) Also, the universal set serves as a
unit and the empty set serves as a zero for the corresponding equations:
the union of a set x with the empty set yields the set x; the intersection
of a set x with the universal set yields the set x. We detect duality here
too. And so on…
○○ x ∪ xc = xc ∪ x = ω [dual] x ∩ xc = xc ∩ x = ∅
○○ ωc = ∅ [dual] ∅c = ω
○○ x ∪ ∅ = ∅ ∪ x = x [dual] x ∩ ω = ω ∩ x = x
○○ x ∪ ω = ω ∪ x = ω [dual] x ∩ ∅ = ∅ ∩ x = ∅
○○ (x ∪ y)c = xc ∩ yc [dual] (x ∩ y)c = xc ∪ yc
○○ x ∪ (y ∩ z) = (x ∪ y) ∩ (x ∪ z) [dual]
○○ x ∩ (y ∪ z) = (x ∩ y) ∪ (x ∩ z).
99 Another binary operation is the one called Symmetric Difference, symbolized by
“⊖” (or sometimes by “⊕”, by “+”, or by “△”), and corresponding to the sen-
tential connective of disequivalence (usually omitted from textbook presenta-
tions): these are interpretations of the associated Boolean operation which in
chapter 9 we called addition modulo-2. (In the context of the sentential logic,
which also serves, as we know by now, as an interpretation or model of the
Boolean algebra, the Boolean function can also be interpreted as negated equiva-
lence and as a matter of linguistic rendering it corresponds to the exclusive
either-or of a natural language like English.)
○○ The symmetric difference of sets x and y is the set z which is the union of the
difference of y from x and of the difference of x from y: otherwise, but equiva-
lently defined, the symmetric difference of x and y is the set z which is the
union of the intersection of x and of the complement of y and of the intersec-
tion of the complement of x and of y. Another way of defining symmetric
difference – equivalently and perhaps more easily comprehensible at first
glance – is as the set of the members of x and y, which are in the union of x
and y but not in the intersection of x and y.
x ⊖ y ≝ (x ∩ yc) ∪ (xc ∩ y) = (x − y) ∪ (y − x) = (x ∪ y) − (x ∩ y)
The dual of the symmetric difference is equality of sets! This gives us
another way, by duality, for defining set equality:
• x = y ≝ (x ∪ yc) ∩ (xc ∪ y)
• The symmetric difference of any set and itself is the empty set.
○○ x ⊖ x = (x ∩ xc) ∪ (xc ∩ x) = ∅ ∪ ∅ = ∅
○○ One of the definitions of symmetric difference requires specification
of the universal set, since the complements of the sets are in the
definition: if we are using the definition based on the differences of
the sets x and y, then we take as universal set the union of the set
x and y.
Examples:
x = {1, 2}, y = {1, 2, 3} -- x ⊖ y = (x − y) ∪ (y − x) = ∅ ∪ {3} = {3}
x = y = {a, b} -- x ⊖ y = ∅
x = ω, y = ∅ -- ω ⊖ ∅ = (ω ∩ ∅c) ∪ (ωc ∩ ∅) = (ω ∩ ω) ∪
(∅ ∩ ∅) = ω ∪ ∅ = ω
○○ x ⊖ y ≝ (x – y) ∪ (y – x)
x ⊖ y = y ⊖ x Commutative Property of Symm. Diff.
x ⊖ (y ⊖ z) = (x ⊖ y) ⊖ z Associative Property of Symm. Diff.
x ⊖ x = ∅ Symmetric Self-Difference
x ⊖ ∅ = ∅ ⊖ x = x The Empty Set as Neutral Element for
Symmetric Difference (SD)
x ⊖ ω = ω ⊖ x = xc Complementation defined in terms of SD
x = y if and only if x ⊖ y = ∅ Equality and Symmetric Difference
x ⊖ y ⊖ z = ∅ if and only if x = y ⊖ z if and only if y = x ⊖ z if and only if z = x ⊖ y Equations with SD
x ∪ y = x ⊖ y ⊖ (x ∩ y) Union Defined in Terms of SD and
Intersection
x ⊖ y = (x ∩ yc) ∪ (xc ∩ y) = (x − y) ∪ (y − x) =
(x ∪ y) − (x ∩ y) Alternative Definitions of SD
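The alternative definitions of symmetric difference, and several of the properties above, can be verified with Python's `^` operator (again relative to a small, arbitrarily chosen universal set):

```python
# Symmetric difference: all three equivalent definitions coincide.
omega = {1, 2, 3, 4}

def comp(s):
    return omega - s

x, y = {1, 2}, {2, 3}
assert x ^ y == (x - y) | (y - x)              # union of the two differences
assert x ^ y == (x | y) - (x & y)              # union minus intersection
assert x ^ y == (x & comp(y)) | (comp(x) & y)  # via complements
assert x ^ x == set()                          # symmetric self-difference
assert x ^ set() == x                          # ∅ is the neutral element
assert x ^ omega == comp(x)                    # complementation via ⊖ with ω
```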
Relationships between Symmetric Difference and other Set-Theoretic Operations
¾¾ x ∩ (y ⊖ z) = (x ∩ y) ⊖ (x ∩ z)
99 There is a deep relationship between subsethood and the set-theoretic operations we have defined. The following open sentences (with unbound variables) are all equivalent to one another.
x ⊆ y.
x ∩ yc = ∅.
xc ∪ y = ω.
x ∩ y = x.
x ∪ y = y.
yc ⊆ xc.
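That the conditions in the list are indeed equivalent ways of saying x ⊆ y can be confirmed by brute force over all pairs of subsets of a small universe (a sketch, not a proof for arbitrary sets):

```python
# For every pair of subsets of omega, the six conditions either all hold
# or all fail together -- i.e., they are pairwise equivalent.
from itertools import combinations

omega = {1, 2, 3}
subsets = [set(c) for r in range(4) for c in combinations(omega, r)]

def comp(s):
    return omega - s

for x in subsets:
    for y in subsets:
        conditions = [
            x <= y,                    # x ⊆ y
            x & comp(y) == set(),      # x ∩ yᶜ = ∅
            comp(x) | y == omega,      # xᶜ ∪ y = ω
            x & y == x,                # x ∩ y = x
            x | y == y,                # x ∪ y = y
            comp(y) <= comp(x),        # yᶜ ⊆ xᶜ
        ]
        assert all(conditions) or not any(conditions)
```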
1 ~2 3
2 ~2 2 ~2 0
3 ~2 1
4 ~2 2 ~2 0
etc.….
• ~2 is an equivalence relation:
○○ ∀x(x ∈ Z+0 ⊃ x ~2 x)
○○ ∀x∀y((x ∈ Z+0 ∧ y ∈ Z+0) ⊃ (x ~2 y ⊃ y ~2 x))
○○ ∀x∀y∀z((x ∈ Z+0 ∧ y ∈ Z+0 ∧ z ∈ Z+0) ⊃ ((x ~2 y ∧ y ~2 z) ⊃
x ~2 z))
○○ ~2 induces a partition on Z+0, which can be represented as:
• Z+0 = {[0], [1]}
∀x(x ∈ Z+0 ⊃ (x ∈ [0] ∨ x ∈ [1]))
Z+0 = [0] ∪ [1]
[0] ∩ [1] = ∅
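The partition induced by ~2 can be illustrated on a finite initial segment standing in for Z+0 (a simplification on our part, since Z+0 is infinite):

```python
# Equivalence classes of congruence mod 2 on {0, ..., 9}.
Z = range(10)
classes = {}
for n in Z:
    classes.setdefault(n % 2, set()).add(n)   # n ~2 m iff n % 2 == m % 2

assert classes[0] == {0, 2, 4, 6, 8}          # the class [0]
assert classes[1] == {1, 3, 5, 7, 9}          # the class [1]
assert classes[0] | classes[1] == set(Z)      # Z = [0] U [1]
assert classes[0] & classes[1] == set()       # [0] and [1] are disjoint
```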
10.4.1 Examples
10.4.2 Exercises
1. For the given sets, carry out the specified set-theoretic operations.
ω = {1, 2, 3, 4}.
x = {1, 2, 3}.
y = {3, 4}.
s = {1, 4}.
t = {2, 3}
a. x ∪ (s ∩ t)
b. x ∪ yc
c. (x ∪ y)c ∩ sc
d. (s ∩ t) ∪ (xc ∪ yc)
e. (x ∪ y) – (s ∩ t)
f. (y ∩ t)c ∪ (x ∪ s)c
g. (s ∩ (y ∪ t))c
h. (x – yc) ∩ y
i. (x ∪ (yc – x)) ⊖ (s ∪ t)
j. xc ⊖ (y ∪ (s ∩ t))
2. Carry out the specified set-theoretic operations.
a. ∅c ∪ {∅}
b. {{∅}} ∩ {∅}
c. ∅ ⊖ ∅
d. {∅} – ∅
e. ω ⊖ ∅
f. ({∅} ∩ ∅)c ∪ (∅ ∪ {∅})
3. Prove that the intersection of a set x and the empty set is the empty set.
4. Prove that the intersection of a set x and the universal set is the set x.
5. Prove that the union of a set x and the empty set is the set x.
6. Prove that the union of a set x and the universal set is the universal set.
7. Prove the distributive laws for union and intersection.
8. What are the answers? Justify.
a. ∅ – ∅ =?
b. x – ∅ =?
c. ∅ ∩ {∅} =?
d. ∅ ∪ {∅} =?
e. {∅} ⊖ x =?
f. {∅} – ∅ =?
g. {∅, ∅} – {{∅}} =?
9. Does relative difference distribute over union? In other words, is either of the
following true? What about the case of distribution of relative difference over
intersection?
x – (y ∪ z) = (x – y) ∪ (x – z).
x – (y ∩ z) = (x – y) ∩ (x – z)
10. Prove the following inclusions. For an inclusion of x in y, we have the conditions:
x ⊆ y if and only if x ∩ yc = ∅, iff x ∪ y = y, iff x ∩ y = x, iff xc ∪ y = ω.
a. x – y ⊆ x
b. x – (y ∪ z) ⊆ x –(y ∩ z)
c. (x ∩ y) – z ⊆ (x ∪ y) – z
11. Proper inclusion can be defined as:
x ⊂ y if and only if (x ⊆ y ∙ x ≠ y).
Prove the following.
a. If x ⊂ y, then ~ (y ⊆ x)
b. ~ (x ⊂ x)
c. If x ⊂ y and y ⊂ z, then x ⊂ z
d. If (x ∩ y) ⊂ x, then x ≠ y
e. If x ∩ y ≠ ∅, then (x – y) ⊂ x
12. Prove the following equations. Some of them involve the Cartesian product,
whose definition was given, and about which properties were proven, in the
preceding section.
a. (xc)c = x
b. ωc = ∅
c. x ∪ (y ∩ z) = (x ∪ y) ∩ (x ∪ z)
d. x ⊖ y = y ⊖ x
e. x – (x – y) = x ∩ y
f. (x – y) – x = ∅
g. (x – y) – y = x – y
h. x ∩ (y ⊖ z) = (x ∩ y) ⊖ (x ∩ z)
i. (x – y) – z = (x – y) – (x – z)
j. (x ∪ y) ⤬ s = (x ⤬ s) ∪ (y ⤬ s)
k. (x ∩ y) ⤬ (s ∩ t) = (x ⤬ s) ∩ (y ⤬ t)
l. (x – y) ⤬ s = (x ⤬ s) – (y ⤬ s)
m. (x ∩ y) ∪ z = x ∩ (y ∪ z) if and only if z ⊆ x (z ∩ xc = ∅)
13. Use the Boolean equations to simplify the following expressions. Eliminate all
but the symbols for complementation, union and intersection as you proceed.
Also ensure that complementation is applied only to atomic variables. (What
you obtain at the end of a successful transformation is called a normal form.)
Because union and intersection are associative, we can omit parentheses when
we have successive union or intersection symbols, since no ambiguity results.
For example, we may write
10.5 The Zermelo-Fraenkel Systematization of Set Theory 435
x∪y∪z
instead of
(x ∪ y) ∪ z
a. x ∪ (xc ∩ y)
b. xc ∩ (x ∪ y)
c. (x ∪ y)c ∪ (xc ∩ yc)
d. (x ∩ yc) ∪ (x ∩ y)
e. (x ∩ yc) – (yc ∩ (x – y))
f. (xc ∩ y ∩ zc)c ∪ (x ∩ y ∩ z)c
Let us take stock of what has been stipulated, adding as needed more stipulations
that are foundational. The Zermelo-Fraenkel theory axioms, which often appear
in advanced textbooks, make such stipulations on the basis of intuitive notions
about sets. You will notice that the existence – which is constructability – of
certain types of sets is stipulated axiomatically; of course, we think that we can
appeal to intuitions in order to justify such stipulations (as, for instance, in the
case of the existence of an empty set which is by definition the set without any
members.) Some of the other existential stipulations might strike you as unnec-
essary but this is only because you have not yet embarked on a sustained study
of set theory. In the course of presenting some of these stipulations, you will also
become familiar with some of the definable set-theoretic operations – but there
are more to be defined later. A detail we should mention is that some of the axi-
oms might be deducible from other axioms - which means that they are not inde-
pendent, as we say – but this theoretical refinement is of no concern given our
present purposes.
Sets are abstract objects, definable by the methods we have indicated: enumera-
tion, abstraction, and recursion.
The members of a set need not be related to each other.
For any specified object whatsoever, it is possible in principle to determine
whether this object is or is not a member of a defined set.
An object can only either be or not be a member of a given set: the notion of
vague membership – degrees of being a member – is not understood in this the-
ory! The law of sentential logic we have examined as the Law of Excluded
Middle has in this context a set-theoretic expression:
○○ ∀x∀y(x ∈ y ∨ x ∉ y)
Any object, including a set, can be a member of a set. But a restriction on this
follows.
A set cannot be a member of itself. This restricts the preceding postulate, so that
now we have to say that any object, except for the set itself, can be a member
of a given set.
○○ ∀x(x ∉ x)
○○ Another way of stating the restriction is this: for any set x which is not the
empty set, there is some member of x none of whose members is a member of the set x.
∀x(x ≠ ∅ ⊃ ∃y(y ∈ x · ∀z(z ∈ y ⊃ z ∉ x)))
Axiom of Foundation
This restriction blocks a foundational paradox, known as the Russell
Paradox; discovery of this paradox undermined ambitious projects in the
history of mathematics, whose completion depended on using a reliable,
paradox-free set theory. The Russell paradox is generated once we admit
that we can use abstraction – which is intuitively admissible as a method
for defining sets – and we define by abstraction a set whose members are,
by definition, sets that are not members of themselves. Self-membership is
itself an apparently well-behaved notion: for instance, the book that com-
prises as texts the texts of all books – regardless of the practical difficulties
of constructing it – is conceivable and this book is conceptually a member
of itself (the book of all books.) Abstraction allows us to capitalize imme-
diately on conceivability in order to claim that such a set can be defined.
The problem, however, emerges once we similarly start by defining the set
of all sets that are not members of themselves and then take a further
step, which appears perfectly legitimate – asking whether this set is itself
a member of itself.
Then, we come across the infamous paradox: if the statement “the set of all
sets that are not members of themselves is a member of itself” is true, then
it follows that this statement is also false, because having the property
“not being a member of itself” entails that the set is not a member of itself! But it
is essential also that we can proceed in the contrary direction: if the state-
ment “the set of all sets that are not members of themselves is a member of
itself” is false, then it is true since, after all, our set is said to have the
property of not being a member of itself. The problem then is: our state-
ment, which we can symbolize metalinguistically as “⋌”, is false if it is
true; and it is true if it is false: but this logical equivalence of a true with a
false statement is unacceptable since we have exactly two distinct truth
values (true and false.) There are proposed solutions, which are controver-
sial and discussed extensively. You may guess from the last sentence in
characterizing the problem that one such solution assigns to the problem-
atic statement a non-classical truth value (something other than true or
false, and something that is still to be accepted semantically as a value.)
• ⋏ = {x / x ∉ x}
○○ ⋌: ⋏ ∈ ⋏ ⇒ ⋏ ∈ {x / x ∉ x} ⇒ ⋏ ∉ ⋏ ⇒ ~ ⋌
○○ ~ ⋌: ⋏ ∉ ⋏ ⇒ ⋏ ∉ {x / x ∉ x} ⇒ ⋏ ∈ ⋏ ⇒ ~ ~ ⋌ ⇒ ⋌
• ((⋌ ⊃ ~ ⋌) ∙ (~ ⋌ ⊃ ⋌)) ≡ (⋌ ≡ ~ ⋌) [!]
Another method that is available for preventing the mischief
caused by paradoxical self-referential cases of abstraction is
x xc
∈ ∉ Logical possibility 1: the specified object is a member of x ⇒ it is not a member of the complement.
∉ ∈ Logical possibility 2: the specified object is not a member of x ⇒ it is a member of the complement.
We can use truth tables, constructed again by deploying the membership and
non-membership symbols, to define by truth-tabular means the other, binary, set-
theoretic operations we have introduced. All logically possible combinations are
represented and the convention applied here is, starting with the rows from top
to bottom:
the specified element belongs to the first set and belongs to the second set;
the specified element belongs to the first set and does not belong to the second set;
the specified element does not belong to the first set and belongs to the second set;
the specified element does not belong to the first set and does not belong to the
second set;
since this is an arbitrary, not a specialized, item that we are discussing, we can sub-
stitute the expression “for all x, such that---.”
10.6 Use of Truth Tables in Set Theory 441
○○ Thus, for the third row and for the definition of the set-theoretic union of sets
x and y, we have, as appropriate: the specified object does not belong to the
first set but belongs to the second, and so it belongs to the union of x and y.
Notice the parallel to the sentential connective of inclusive disjunction, which
has the same truth table for its definition once we replace the set variables by
sentential variables, the symbols for membership and non-membership with T
and F respectively, and the symbol of set-theoretic union with the symbol of
inclusive disjunction.
○○ The Boolean function interpreted as material implication and symbolized by
the horseshoe is of no interest in the set-theoretic interpretation of Boolean
algebra. We can, of course, define the set-theoretic interpretation, which we do
below, but there is no standard symbolization of this set-theoretic operation
because it lacks significance.
○○ On the other hand, the relative difference of sets is theoretically significant
while the corresponding sentential connective is usually missing from
textbooks.
10.6.1 Exercises
the same members in every possible modeling. The only exception is the binary
relation of identity which is treated as a logical notion but we will not enter into
the interesting philosophic and logical implications here. It suffices to draw our
attention to what has emerged from these comments: logicality or what is char-
acteristic of logical notions has to do with invariability across possible models.
What we are studying here overlaps with the lessons of chapter 5, as ought to
be the case.
○○ |R| = {<x, y> / x is R-related to y} ⊆ 𝕯 ⤬ 𝕯
○○ |R| ∈ ℘(𝕯 ⤬ 𝕯)
For example, we can set model 𝔐 = <𝕯, ||>. The metalinguistic positing
of the model itself uses an ordered pair, by the way, with the domain as first
member and the valuation (also called interpretation and signature) as sec-
ond member of the pair. Valuations must be provided for the symbols of the
model: objects of the domain and predicates understood as sets of pairs are
the values of the symbols which, as we know from chapter 5, are individual
constants and predicate symbol letters. We can think of monadic predicates
as degenerate relations (thus, as sets of one-tuples) and, of course,
we define binary relational predicates, and so on, as sets of ordered
n-tuples. In our example, we have one monadic and one binary predi-
cate symbol.
𝕯 = {⊲, ⊳}.
|a| = ⊲.
|b| = ⊳.
|F| = {<⊲>, <⊳>} ⊆ 𝕯.
|R| = {<⊲, ⊳>, <⊳, ⊲>} ⊆ 𝕯 x 𝕯.
The binary relation is obviously symmetric. For every pair <x, y> that is a
member of the relation, <y, x> is also a member. The relation is not reflex-
ive: <x, x> is not a member for all x in the domain.
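Both claims about |R| can be checked directly. In the Python sketch below, the strings 'L' and 'R' stand in for the two triangle symbols of the domain (our ASCII substitution):

```python
D = {'L', 'R'}                      # stand-ins for the two domain objects
Rel = {('L', 'R'), ('R', 'L')}      # the valuation |R|

assert all((y, x) in Rel for (x, y) in Rel)    # symmetric
assert not all((x, x) in Rel for x in D)       # not reflexive
```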
We can use the language of predicate logic to write out formulas that can
then be evaluated as true or false insofar as the formulas do not have any
free variables in them. Here are some examples of such formulas, whose
assessment as to truth value is left as an exercise.
Rbb
Rab
Fa ⊃ ~ Rbb
∃xRbx
∃x∀y(Rxy ≡ ~ Fy)
Notably, for every member of the domain, either it is a member of a
monadic predicate's extension or it is not. There are no indeterminate
cases, and no degrees of membership are allowed – in fact, the notion of
degree of membership is nonsensical, being undefined, in this context. The set theory that
is being utilized is the classical theory, also called Crisp or Sharp; it is not
of a more recent variety called Fuzzy Set Theory which permits degrees of
set membership.
10.7 Relations and Functions; Inverses and Relative Products of Relations; Converses… 445
there is some member z such that <x, z> is a member of R and <z, y> is a
member of S.
If a given relation 𝓡 is defined as a subset of the Cartesian product 𝓐 ⤬ 𝓑 and a
given relation 𝓢 is defined as a subset of the Cartesian product 𝓑 ⤬ 𝓓, then the
relative product 𝓡 | 𝓢 must be definable as a subset of the Cartesian prod-
uct 𝓐 ⤬ 𝓓.
Examples of determining the relative product of given relations are shown below.
Needless to say, it is possible that the relative product is the empty set. It is notable
that the operation of constructing the relative product is not commutative but it is
associative. The degenerate case of the relative product of a relation R and the
identity relation is the relation R itself. (We omit the vertical bars of the valuation
notation, “| |,” in the set-theoretic definition of a relation.)
R = {<1, 1>, <2, 1>, <2, 2>, <3, 2>}
S = {<1, 3>, <2, 3>, <3, 1>}
R | S = {<1, 3>, <2, 3>, <3, 3>}.
Since <1, 1> ∈ R and <1, 3> ∈ S (note the bold type used for illustrative pur-
poses), it follows that <1, 3> ∈ R | S.
We do not expect generally the relative product S | R to be equal to R | S.
S | R = {<1, 2>, <2, 2>, <3, 1>}
We can also determine the relative product of a relation and the inverse of some
relation, and the inverse of a relative product of two given relations – and so on.
We show examples with R and S defined as above.
R−1 = {<1, 1>, <1, 2>, <2, 2>, <2, 3>}
S−1 = {<3, 1>, <3, 2>, <1, 3>}
R−1 | S−1 = {<1, 3>, <2, 1>, <2, 2>}
(R | S)−1 = {<3, 1>, <3, 2>, <3, 3>}
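The relative product and the inverse are easy to compute by brute force, so the examples above can be verified mechanically. The helper names below are ours:

```python
def rel_product(R, S):
    # R | S = {<x, y> : for some z, <x, z> in R and <z, y> in S}
    return {(x, y) for (x, z) in R for (w, y) in S if z == w}

def inverse(R):
    return {(y, x) for (x, y) in R}

R = {(1, 1), (2, 1), (2, 2), (3, 2)}
S = {(1, 3), (2, 3), (3, 1)}

assert rel_product(R, S) == {(1, 3), (2, 3), (3, 3)}
assert rel_product(R, S) != rel_product(S, R)      # not commutative
assert inverse(rel_product(R, S)) == {(3, 1), (3, 2), (3, 3)}
# the general law (Q | R)^-1 = R^-1 | Q^-1, checked on this instance
assert inverse(rel_product(R, S)) == rel_product(inverse(S), inverse(R))
```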
¾¾ Set-theoretic operations, many of which were defined in 10.4, can be adapted
notionally to the case of defining operations on relations: we continue to restrict
our attention to binary relations.
Thus, for instance, the relative difference of two relations R and S is defined –
analogously to the set-theoretic operation of the relative difference of two speci-
fied sets – as follows:
R ~ S = {<x, y> / <x, y> ∈ R ∙ <x, y> ∉ S}
Similarly, union and intersection of binary relations can be defined.
R ∪ S = {<x, y> / (<x, y> ∈ R) ∨ (<x, y> ∈ S)}
R ∩ S = {<x, y> / (<x, y>) ∈ R ∙ (<x, y> ∈ S)}
For example, using the previous examples’ R and S defined relations:
R ~ S = R (since R ∩ S = ∅; in general, if R ∩ S = ∅, then R ~ S = R)
R ∪ S = {<1, 1>, <2, 1>, <2, 2>, <3, 2>, <1, 3>, <2, 3>, <3, 1>}
R ∩ S = ∅
(R ∩ S)−1 = ∅
(R ~ S)−1 = R−1
(R | S) ∪ R = {<1, 3>, <2, 3>, <3, 3>} ∪ {<1, 1>, <2, 1>, <2, 2>, <3, 2>} =
{<1, 3>, <2, 3>, <3, 3>, <1, 1>, <2, 1>, <2, 2>, <3, 2>}
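Viewed simply as sets of pairs, the same R and S submit to the ordinary set operations, and the results above can be re-derived in a line each (a sketch with Python's `-`, `&`, `|` as relative difference, intersection, and union):

```python
R = {(1, 1), (2, 1), (2, 2), (3, 2)}
S = {(1, 3), (2, 3), (3, 1)}

assert R & S == set()          # R and S share no pairs
assert R - S == R              # hence R ~ S = R
assert R | S == {(1, 1), (2, 1), (2, 2), (3, 2), (1, 3), (2, 3), (3, 1)}
```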
¾¾ Generally:
○○ R | S ≠ S | R
○○ Q | (R | S) = (Q | R) | S
○○ (Q | R)−1 = R−1 | Q−1
Relations have characteristic properties, which we now define. All members x are
presumed to be members of the domain of the relation.
99 A relation ρ is reflexive if and only if:
∀xρxx
99 A relation ρ is irreflexive iff:
∀x ~ ρxx
99 A relation ρ is non-reflexive iff:
∃x ~ ρxx
99 A relation ρ is symmetric iff:
∀x∀y(ρxy ⊃ ρyx)
99 A relation ρ is asymmetric iff:
∀x∀y(ρxy ⊃ ~ ρyx)
99 A relation ρ is non-symmetric iff:
∃x∃y(ρxy ∙ ~ ρyx)
99 A relation ρ is anti-symmetric iff:
∀x∀y((ρxy · ρyx) ⊃ x = y)
99 A relation ρ is transitive iff:
∀x∀y∀z((ρxy · ρyz) ⊃ ρxz)
99 A relation ρ is non-transitive iff:
∃x∃y∃z(ρxy · ρyz · ~ ρxz)
99 A relation ρ is intransitive iff:
∀x∀y∀z((ρxy · ρyz) ⊃ ~ ρxz)
99 A relation ρ is serial iff:
∀x∃yρxy
99 A relation ρ is weakly connected iff:
∀x∀y(x ≠ y ⊃ (ρxy ∨ ρyx))
99 A relation ρ is strongly connected iff:
∀x∀y(ρxy ∨ ρyx)
99 A relation ρ is dense iff:
∀x∀y(ρxy ⊃ ∃z(ρxz · ρzy)).
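On a finite domain, each of these defining conditions can be decided by exhaustive quantification. The following Python sketch encodes several of them (the function names are ours) and tests ≤ on a three-element domain:

```python
def reflexive(D, R):        return all((x, x) in R for x in D)
def irreflexive(D, R):      return all((x, x) not in R for x in D)
def symmetric(D, R):        return all((y, x) in R for (x, y) in R)
def asymmetric(D, R):       return all((y, x) not in R for (x, y) in R)
def antisymmetric(D, R):    return all(x == y for (x, y) in R if (y, x) in R)
def transitive(D, R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)
def serial(D, R):           return all(any((x, y) in R for y in D) for x in D)
def strongly_connected(D, R):
    return all((x, y) in R or (y, x) in R for x in D for y in D)

D = {1, 2, 3}
LE = {(x, y) for x in D for y in D if x <= y}   # the relation <= on D

assert reflexive(D, LE) and antisymmetric(D, LE) and transitive(D, LE)
assert serial(D, LE) and strongly_connected(D, LE)
assert not symmetric(D, LE) and not asymmetric(D, LE) and not irreflexive(D, LE)
```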
Depending on what characterizing properties they have, relations impose certain
types of ordering on sets on which they are applied. Since these are seminal con-
cepts in the study of Logic, and of various branches of Mathematics, we give the
definitions:
99 A relation is a quasi-ordering if it is reflexive and transitive.
10.7.2 Functions
The following relations are functions; different inputs are matched to the
same output, but this does not matter insofar as there is no more than one
output matched to the same input.
• A = {<①, ▭>, <②, ▭>}
• B = {<1, 2>, <2, 2>, <3, 1>}
• C = {<1, a>, <2, b>, <3, c>, <4, a>}
¾¾ We can define a function as a relation between the input and the output only in
those cases in which there is a uniquely assigned output for each specified input
(member of the function’s domain.) The relation that defines a function is repre-
sented as a set of ordered pairs, each pairing an input with its uniquely determined
output. For instance, having the set of positive integers as domain and range:
○○ ƒ(x) = x + 2; 𝕯 = I+; 𝕽 = I+.
• ƒ(x) = {<x, ƒ(x)> /x ∈ I+, ƒ(x) ∈ I+, and ƒ(x) = x + 2}
• Thus, the function is defined from domain to range: ƒ: I+ ↦ I+
• The output has to be unique for each specified input. This follows from the
definition of function, which we have given. In our example, the output is
uniquely determined. But contrast the following case, with domain and
range identified with the set of integers.
○○ ƒ#(x) = √x
For instance: x = 1 ⇒ ƒ#(x) = √1 = +1 or −1!
○○ Binary functions can be represented analogously as ternary relations
with the functional output of two inputs being the third member of the
ordered triplet; for instance:
○○ ƒ(x, y) = {<<x, y>, ƒ(x, y)> / x, y ∈ {0, 1}, and ƒ(x, y) = xy} = {<<0,
0>, 0>, <<1, 0>, 0>, <<0, 1>, 0>, <<1, 1>, 1>}
○○ In general, the degree of the function is lowered by one compared to the
degree of the relation that represents the function.
○○ Notice that the domain of the binary relation is the Cartesian product of
the domain by itself, 𝕯 ⤬ 𝕯, and generally for an n-place relation the
domain is the nth Cartesian product of the domain by itself.
○○ ƒ1: 𝕯 ↦ 𝕽
○○ ƒ2: 𝕯 ⤬ 𝕯 ↦ 𝕽
○○ ƒn: 𝕯 ⤬ 𝕯 ⤬ … ⤬ 𝕯 ↦ 𝕽
The range is the same as the domain. This is desirable in many cases
and we observe that in this case the outputs of all the operations are
in the domain itself. Moreover, all the operations are carried out and
their outputs are in the domain: we say in that case that the domain
is closed under the given operation – in the present example, this is
the multiplicative operation and 𝕯 = {1, 0} is closed under the mul-
tiplicative operation as defined. Of course, we could have defined a
different multiplicative operation, under which the domain is not
closed – which means that some output from applying the operation
to the domain members is not in the domain itself. Then we would
have: 𝕯 ⤬ 𝕯 ↦ 𝕽 ≠ 𝕯
fact, it has to be the inverse of the relation, and so it has to be R−1(ƒ). But it is not
guaranteed that this relation, R−1(ƒ), can define a function: it may or may not
have a uniquely defined output (second member of each ordered pair) for each
specified input (first member of each ordered pair.) (Strictly speaking, we should
reserve the terms “input” and “output” only for functions and not use them
cavalierly in cases in which we might not have a relation that defines a func-
tion. We could speak of “alleged input” and “alleged output” perhaps.) Because
of this divergence between inverses of relations and inverses of functions, we
require two different concepts and two different corresponding symbols: we call
the converse of a function ƒ the relation R−1(ƒ) which may or may not be a func-
tion. We reserve the term “inverse” for the case in which the converse of a func-
tion is a relation that defines a function. The symbol for the converse of a function
is “ƒ⏜” and the symbol for the inverse of a function is “ƒ−1”.
○○ Here is an example: ƒ(x) = x2 𝕯 = I
ƒ⏜(x) = ±√x
• For example: <2, 4> ∈ ƒ
○○ <4, 2> ∈ ƒ⏜ and <4, − 2> ∈ ƒ⏜
It may be queried whether we could pick one of the two ordered
pairs of the function ƒ⏜, and repeat this arbitrarily for every case
in which there is more than one output or second member of
ordered pairs: in this way, we define one of the available inverses
of the function ƒ. This expedient is often used in Mathematics
textbooks. Of course, more than one inverse of ƒ is definable in
this way and it is arbitrary which one is to be regarded as the
inverse of ƒ. In other words, ƒ lacks a uniquely definable inverse.
¾¾ We now turn to the task of determining the inverse of a given function. The pro-
cess we will present may or may not result in a function but, in either case, we
are able to ascertain whether a function is definable. The reason for this open-
ended prospect is the same as what was discussed above: the converse relation
R−1(ƒ) of the relation R(ƒ) that defines function ƒ is always definable but this
relation, R−1(ƒ), may or may not be itself defining of a function.
○○ ƒ(x) = 2x + 4
To find the inverse, if such exists, we take advantage of the following prov-
able equation relating a function to its converse. (Note that we speak of
“converse” because this equation is satisfied even when the converse of the
function is not itself a function. We may put it this way: this equation is
satisfied, regarding the relation between a function and its converse, for all
values of the functions. The converse – being a broader notion than that of
the inverse – may or may not itself be a function.)
• ƒ(ƒ⏜(x)) = x
Thus, we have to solve the following equation in our given example. We sim-
ply plug in “ƒ⏜(x)” for “x” in a token of the equation and determine the value
of ƒ(x) at ƒ⏜(x).
ƒ(ƒ⏜(x)) = 2ƒ⏜(x) + 4 = x ⇒
2ƒ⏜(x) = x – 4 ⇒
ƒ⏜(x) = ½(x – 4) = ½x – 4/2 = ½x – 2.
Because the converse is a function – thus, the converse is an inverse – we
may write:
ƒ−1(x) = ½x – 2
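The derivation can be double-checked numerically through the defining equations ƒ(ƒ⁻¹(x)) = x and ƒ⁻¹(ƒ(x)) = x (a sketch; the test range is an arbitrary choice):

```python
f     = lambda x: 2 * x + 4        # the given function
f_inv = lambda x: x / 2 - 2        # the inverse just derived

for x in range(-10, 11):
    assert f(f_inv(x)) == x
    assert f_inv(f(x)) == x
```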
• ƒ(x) = x2
○○ ƒ(ƒ⏜(x)) = (ƒ⏜(x))2 = x ⇒
ƒ⏜(x) = ∓√x.
Having negative and positive numbers in our domain entails that both the nega-
tive and positive values of the square root are second members of the ordered pairs
for the corresponding first members. Thus, for instance, we have:
<4, − 2> ∈ƒ⏜ and <4, + 2> ∈ ƒ⏜
since both (− 2)2 = 4 and (+ 2)2 = 4.
The converse is not an inverse. Our method worked in determining the converse
but it took additional discrimination on our part to determine that a function is not
definable by inversion of the relation that defines the given function ƒ.
¾¾ Given two functions ƒ1 and ƒ2 we will now define as composition of ƒ1 and ƒ2 a
function g which, as the composite of ƒ1 and ƒ2, is symbolized as “ƒ2 ∘ ƒ1” or
“ƒ2(ƒ1(x))” or “(ƒ2 ∘ ƒ1)(x)”. We know by now how we can define a function as a
set of ordered pairs. Applying this notion, we define the composition function as
the set of ordered pairs:
99 ƒ2 ∘ ƒ1 = {<x, y> / y = ƒ2(ƒ1(x)), x ∈ 𝕯(ƒ1), ƒ1(x) ∈ 𝕽(ƒ1) ⊆ 𝕯(ƒ2), and y
∈ 𝕽(ƒ2)}
○○ Harking back to our examination of ordered pairs and relations, above, we
may recall the concept of the operation, defined there, of generating the rela-
tive product of two given relations R and S. Clearly, we have to keep in mind
that not all relations define functions. For relations that do define functions,
composition of functions (or functional composition) is defined analogously
to the generation of the relational relative product. But there is a catch. If we
are to use the relative-product symbol, “|” to indicate functional composition,
we have the following:
ƒ1(ƒ2(x)) = ƒ2 | ƒ1
This is not friendly to intuitions and we prefer not to make use of this notation.
Instead, another way of symbolizing the operation of functional composition is
as follows:
ƒ1(ƒ2(x)) = (ƒ1 ∘ ƒ2)(x)
(To facilitate mental agility in thinking about functional composition, notice that, in
the following remarks we reverse and speak of functional composition (ƒ2 ∘ ƒ1)(x)
instead of (ƒ1 ∘ ƒ2)(x).)
454 10 Basics of Set Theory
If domain and range for the two compounded functions are, respectively, 𝕯(ƒ1) and
𝕽(ƒ1) and 𝕯(ƒ2) and 𝕽(ƒ2), the domain of the composite function is 𝕯(ƒ2 ∘ ƒ1) =
𝕯(ƒ1) – provided that 𝕽(ƒ1) ⊆ 𝕯(ƒ2) – and the range of the composite function is
𝕽(ƒ2 ∘ ƒ1) ⊆ 𝕽(ƒ2). Thus, we can
symbolize the so-called mapping – another term we can use for a domain-to-range
function definition – as follows:
The composition of the negation function and the inclusive-disjunction func-
tion – as interpreted in a Boolean algebra – yields the function that interprets the
Peirce arrow function (symbolized by “↓”). Notice that we can, and in this case
indeed do, have a composition of a unary with a binary function: the output of the
binary function is treated as input for the unary function: this is possible, under the
constraint we noted above, insofar as the range of the binary function is identical
with the domain of the composing unary function. Indeed, the range of a binary
function in the Boolean algebra is the set of values {1, 0} – while its domain is the
Cartesian product {1, 0} ⤬ {1, 0} since this is a binary function. In the third exam-
ple below, notice how binary functions are composed.
• ƒ(x) = 2x g(x) = x + 1 𝕯(ƒ) = 𝕽(g) = Integers
ƒ(g(x)) = ƒ(x + 1) = 2(x + 1) = 2x + 2.
g(ƒ(x)) = g(2x) = 2x + 1 -- in general, ƒ ∘ g ≠ g ∘ f
• ƒ~(x) = x + 1
ƒ∨(x, y) = xy + x + y.
𝕯(ƒ~) = 𝕽(ƒ~) = 𝕽(ƒ∨) = {1, 0}; 𝕯(ƒ∨) = {1, 0} ⤬ {1, 0}.
ƒ~(ƒ∨(x, y)) = ƒ~(xy + x + y) = xy + x + y + 1 = ƒ↓(x, y)
• ƒ∙(x, y) = xy
ƒ|(x, y) = xy + 1.
ƒ⊃(x, y) = xy + x + 1.
𝕽(ƒ∙) = 𝕽(ƒ|) = 𝕽(ƒ⊃) = {1, 0}
𝕯(ƒ∙) = 𝕯(ƒ|) = 𝕯(ƒ⊃) = {1, 0} ⤬ {1, 0}
ƒ⊃(ƒ|(x, y), ƒ∙(x, y)) = ƒ⊃(xy + 1, xy) = (xy + 1)xy + xy + 1 + 1 = xy + xy + xy
+ 2 = xy + 0 = xy.
The semantic interpretation can be shown in the formal idiom we used in the
chapters on sentential logic:
((p | q) ⊃ (p ∙ q)) ≡ (p ∙ q)
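Both compositions can be confirmed by treating the Boolean functions as polynomials over {1, 0} with arithmetic mod 2, as in the text (the Python names are ours):

```python
f_neg  = lambda x: (x + 1) % 2              # negation
f_or   = lambda x, y: (x*y + x + y) % 2     # inclusive disjunction
f_and  = lambda x, y: (x * y) % 2           # conjunction
f_nand = lambda x, y: (x*y + 1) % 2         # Sheffer stroke |
f_impl = lambda x, y: (x*y + x + 1) % 2     # the horseshoe
f_nor  = lambda x, y: f_neg(f_or(x, y))     # Peirce arrow, by composition

for x in (0, 1):
    for y in (0, 1):
        # the composed NOR matches the derived polynomial xy + x + y + 1
        assert f_nor(x, y) == (x*y + x + y + 1) % 2
        # ((p | q) horseshoe (p . q)) has the same table as (p . q)
        assert f_impl(f_nand(x, y), f_and(x, y)) == f_and(x, y)
```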
99 We can represent a constraint under which a function ƒ is definable by a relation
R(ƒ) by showing how R(ƒ) and its inverse relation R−1(ƒ) are connected. If this
constraint is satisfied, then the relation defines a function. We also need to define
a relation called Identity, and symbolized by “I”, defined on the given universal
domain, such that:
○○ I = {<x, y> / x = y}
The constraint is as follows:
R−1 | R ⊆ I
• Let us examine this for the following cases:
○○ 𝕯 = {1, 2, 3, 4}
R(ƒ) = {<1, 2>, <2, 3>, <3, 4>, <4, 4>} [ƒ ∈ Functions]
R−1(ƒ) = {<2, 1>, <3, 2>, <4, 3>, <4, 4>}.
R−1 | R = {<2, 2>, <3, 3>, <4, 4>} ⊆ I = {<1, 1>, <2, 2>, <3, 3>, <4, 4>}.
R(ƒ1) = {<1, 2>, <2, 3>, <3, 3>, <4, 3>, <4, 2>}.
[ƒ1 ∉ Functions].
R−1(ƒ1) = {<2, 1>, <3, 2>, <3, 3>, <3, 4>, <2, 4>}
R−1 | R = {<2, 2>, <2, 3>, <3, 2>, <3, 3>} ⊈ I
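The criterion R⁻¹ | R ⊆ I is decidable by direct computation on a finite domain; the sketch below (with helper names of our own) reproduces both verdicts:

```python
def rel_product(R, S):
    return {(x, y) for (x, z) in R for (w, y) in S if z == w}

def inverse(R):
    return {(y, x) for (x, y) in R}

def defines_function(R, D):
    I = {(x, x) for x in D}
    return rel_product(inverse(R), R) <= I    # R^-1 | R subset of I

D = {1, 2, 3, 4}
Rf  = {(1, 2), (2, 3), (3, 4), (4, 4)}            # defines a function
Rf1 = {(1, 2), (2, 3), (3, 3), (4, 3), (4, 2)}    # 4 gets two outputs

assert defines_function(Rf, D)
assert not defines_function(Rf1, D)
```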
99 Functional composition is associative. Thus, we have:
We can apply our knowledge of composition of relations to show that:
(f/g)/h = f/(g/h).
Then, we have:
(f/g)/h = h(g(f(x))) = h ∘ (g ∘ f)
f/(g/h) = (h ∘ g) ∘ f
Therefore,
h ∘ (g ∘ f) = (h ∘ g) ∘ f.
99 Given functions g and h, we can determine the function ƒ, such that:
g ∘ ƒ = h.
An example can show how we proceed:
g(x) = 2x.
h(x) = 3x + 1.
(g ∘ ƒ)(x) = g(ƒ(x)) = 2ƒ(x) = h(x) = 3x + 1.
From which we have:
ƒ(x) = ½ (3x + 1).
99 The case is more complicated, but can be solved, when we need to determine the
function ƒ for which:
ƒ ∘ g = h.
We retain the functions of the preceding example.
g(x) = 2x.
h(x) = 3x + 1.
We reason as follows: We know that, for any function φ,
φ(φ−1(x)) = x
We have also shown that functional composition is associative. Based on the
above observations, we have:
ƒ(x) = ƒ(g(g−1(x))) = ƒ ∘ (g ∘ g−1) = (ƒ ∘ g) ∘ g−1 = h ∘ g−1
This result suggests an algorithmic approach to discovering the solution. First, we
determine the inverse of g and then we compose h with g−1: the result is function ƒ.
In our given example, we have:
g(g−1(x)) = 2 g−1(x) = x Therefore: g−1(x) = ½ x
ƒ(x) = (h ∘ g−1)(x) = h(g−1(x)) = 3(½ x) + 1 = (3/2)x + 1
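The algorithm, determine g⁻¹ and compose it with h to obtain ƒ = h ∘ g⁻¹, can be checked numerically for the example (a sketch; the test range is an arbitrary choice):

```python
g     = lambda x: 2 * x
g_inv = lambda x: x / 2            # the inverse of g
h     = lambda x: 3 * x + 1
f     = lambda x: h(g_inv(x))      # f = h o g^-1, i.e. f(x) = (3/2)x + 1

for x in range(-5, 6):
    assert f(g(x)) == h(x)         # f o g = h, as required
```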
10.7.3 Exercises
d. (R3 | R4)−1 | R3
e. (R7 | (R6 | R5)−1)−1
6. For the relations of exercise 3, perform the following set-theoretic operations.
a. R1 ~ R7
b. (R2 ∩ R8)
c. R5 ∪ (R4 ∩ R3)
d. R3 ∪ (R4 ⊖ R5)
e. (R6 | R7) ∩ (R4 | R3)
7. What are the characteristic properties of the following relations and what kind
of ordering is imposed on a set by each of the following relations?
a. Identity of numbers
b. Equality of sets
c. Relative Preference for goods [what assumptions are you making?]
d. Subsethood
e. Not being a subset
f. Being the complement of
g. Being a member of a set
h. Not being the member of a set
8. For each of the given functions, determine the inverse, if it exists.
a. f(x) = x2
b. f(x) = x2 [Specified domain: the positive integers.]
c. f(x) = x3–2
d. f(x) = 10x
e. f(x) = x + 2
f. f(x) = 3x + 4
9. Perform the requested compositions of functions.
f (x) = 2x2–2.
g(x) = x + 5.
h(x) = 3x3 + 1
a. f∘g
b. f ∘ (g ∘ h)
c. g ∘ (f ∘ h)−1
d. (f ∘ g) ∘ (g ∘ f)
e. (g ∘ f) ∘ (f ∘ g)
f. g−1 ∘ (f ∘ h−1)
g. (g ∘ h)−1
which, often, there is at least one additional truth value (for example, motivated
to mean “indeterminate”), which stays fixed when it is negated.
A Priori. The meaning of a sentence is known a priori if it is known independently
of empirical verification and cannot be corrected by experience. This concept
is epistemic (related to what can be known and under what conditions it can
be known.) For example, consider that a mathematical truth like “one plus
one equals two” is known a priori: an experience in which putting two things
together repeatedly resulted in one thing remaining would not be
considered as correcting the belief that “one plus one equals two;” it would be
incomprehensible (not in a psychological sense but as related to meaning), no
matter how often it happens, rather than be considered as falsifying the sentence
“one plus one equals two.” The classical view is that logic is known a priori but
there are philosophic challenges to this view.
Argument. As a term of logic, an argument is a proof: a collection of meaning-
ful sentences, one of which is the conclusion while the rest are the premises
and possibly other sentences that are derived from the premises and from other
derived sentences. In natural language, arguments are often presented enthyme-
matically – with missing premises and/or conclusion, which are to be understood
as implied. Arguments can be deductive or inductive. A correct deductive argu-
ment is called valid: it is logically impossible for such an argument to have true
premises and false conclusion. An inductive argument is evaluated as a matter
of relative strength: if the premises are true, the conclusion is true to a degree of
probability and a stronger inductive argument, relative to another weaker induc-
tive argument, is one whose conclusion (from the same true premises) is rightly
assessed to be relatively more likely to be true.
Argument of a Function. The input of a function is often called “the argument
of the function.” This is one input, if the function is monadic or unary; an n-ary
function must be defined with n arguments. Disambiguation is important in this
regard since the term “argument” appears more commonly in logic as defined in
earlier entry in this glossary.
Arity. The arity of a function is the number of inputs of the function. The arity
(also called degree) of a connective of a logical system is the number of inputs
which the connective has, as defined: also, we may say, the arity or degree of
a logical connective is the number of well-formed formulas (possibly atomic)
which are within the scope of the connective as it is defined. A connective can
be unary/monadic/one-place if it has one input, binary/dyadic/two-place if it has
two inputs, and, generally, n-ary/n-adic/n-place if it has a number n of inputs for
n ≥ 1. For n = 0, a so-called zeroary/zero-degree connective is a logical constant:
a sentential symbol that is defined as referring to a truth value (true, usually
symbolized by “⊤”, or false, usually symbolized by “⊥”.)
Assumed Premise (also, Posited Premise) see: Formal Proof.
Asymmetry, Asymmetric Property. Asymmetry is a property of a relation ρ
according to which, for every pair α and β that are ρ-related, if α is ρ-related to β,
then β is not ρ-related to α. Using symbolic resources from our first-order formal
language metalinguistically, we can indicate asymmetry in this way: ∀x∀y(ρxy ⊃
~ ρyx). An example of an asymmetric relation is: α is the parent of β.
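For a finite relation given extensionally as a set of ordered pairs, asymmetry can be checked directly from this definition. A minimal sketch; the sample relations are hypothetical data, not examples from the text:

```python
def is_asymmetric(relation):
    """A relation (a set of ordered pairs) is asymmetric iff
    whenever (a, b) is in it, (b, a) is not."""
    return all((b, a) not in relation for (a, b) in relation)

parent_of = {("alice", "bob"), ("bob", "carol")}   # hypothetical data
likes     = {("alice", "bob"), ("bob", "alice")}

print(is_asymmetric(parent_of))  # True: parenthood is asymmetric
print(is_asymmetric(likes))      # False: liking can be mutual
```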
Glossary 461
seen in the evening”); moreover, it was not known that the two names co-refer.
Significantly, having different connotations, the names cannot be inter-substi-
tuted within referentially opaque contexts (see related entry) and, in doing so,
preserve truth: for instance, the reasonable person knows that “the morning star
is identical with the morning star” but she may not know that “the morn-
ing star is identical with the evening star” even though “the morning star” and the
“evening star” have the same denotation (but different connotations.) Thus, “to
know” generates a referentially opaque context (and it cannot be studied through
construction of a truthfunctional unary connective!)
Consequence, Logical Consequence. Logical consequence is the characteristic
relation between the conjunction of the premises of an argument and the conclu-
sion or the inclusive disjunction of the conclusions of the argument (if multiple
conclusions are allowed.) A logic is characterized by its logical consequence
relation: approached semantically (speaking of truth values, true and false, as
logical meanings), the logic is completely and accurately characterized by the
collection of the argument forms that are valid in this logic. An alternative char-
acterization of a logic is by means of the collections of its logical truths but
these two characterizations may fail to coincide if the logic does not validate the
Deduction Theorem (see related entry.)
If we use the metalinguistic symbol of the turnstile, “⊢”, to mark syntactic logi-
cal consequence, we have: φ1, …, φn ⊢ ψ1, …, ψm if and only if φ1 · … · φn ⊢ ψ1
∨ … ∨ ψm.
This may be also represented by means of a set of premises as follows (for one
conclusion, which is the usual case).
{φ1, …, φn} ⊢ ψ if and only if φ1 · … · φn ⊢ ψ.
The inverse turnstile, from right to left, can be used too. In the case in which logi-
cal consequence obtains in both directions, we have logical equivalence, which can
be indicated metalinguistically as follows for the case of one-conclusion logical
consequence: φ1, …, φn ⊣⊢ ψ.
Consequent see: Antecedent.
Consistency, Logical Consistency, Consistent Set. A set or collection of mean-
ingful sentences (a theory, description, story, etc.) is consistent if and only if all
of the sentences can possibly be true jointly. Possibility is not the same as actual-
ity. Even if a collection of sentences cannot have all its sentences true together
in the actual world, or in some state of affairs designated as actual, it suffices for
consistency that it is logically possible that all the sentences are true together in
some logically possible state of affairs (not necessarily the actual one.) Thus, the
set {the earth has two moons, the earth has more than one moon} is logically
consistent although actually both sentences in it are false; but there is a logi-
cally consistent (contradiction-free) alternative story in which the two sentences
are both true. But the set {the earth has one moon, the earth has more than one
moon} is not logically consistent: there is no logically possible world in which
both sentences can be true together.
Two sentences that are each other’s mutual contradictories cannot be logically
consistent taken together; a logical contradiction makes any set in which it is
included to be logically inconsistent; the negation of a tautology makes any set in
which it is included to be logically inconsistent. The familiar truth table method can
be deployed to determine if a given set of sentences is logically consistent: if at least
one row of the truth table that is constructed for the sentences takes the truth value
true for all the sentences, then the set is consistent. A set of sentences that is not
consistent is inconsistent.
In an argument form: the argument form is valid if and only if the set comprised
of its premises and the negation of its conclusion is inconsistent (it is not logically
possible for all its premises to be true and its conclusion to be false.)
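Both the consistency test and its link to validity can be carried out mechanically for sentential logic. This sketch again uses the illustrative encoding of formulas as Python truth functions:

```python
from itertools import product

def consistent(sentences, n_atoms):
    """A set of formulas is consistent iff some truth value assignment
    (some row of the joint truth table) makes all of them true."""
    return any(all(s(*row) for s in sentences)
               for row in product([True, False], repeat=n_atoms))

# {p, p -> q} is consistent: the row p=True, q=True satisfies both.
print(consistent([lambda p, q: p,
                  lambda p, q: (not p) or q], 2))   # True
# Validity via inconsistency: p -> q, p / q is valid because
# {p -> q, p, not-q} has no satisfying row.
print(consistent([lambda p, q: (not p) or q,
                  lambda p, q: p,
                  lambda p, q: not q], 2))          # False
```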
Contingency, Logical Contingency. A meaningful sentence that can be logically
possibly true and logically possibly false (in different contexts, not in the same
context since the standard logic is characterized by the law of non-contradiction:
see related entry.) In the semantic approach to the standard sentential logic, which
is usually undertaken by use of truth tables: a well-formed sentential formula is
logically contingent (a contingency, a logical contingency; also called logically
indeterminate or logically indefinite) if and only if at least one of the rows of its
truth table has the truth value true and at least one of its rows has the truth value false.
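The three semantic statuses (tautology, contradiction, contingency) can be computed by inspecting the set of truth values a formula takes across its truth table rows. A sketch under the same illustrative encoding of formulas as truth functions:

```python
from itertools import product

def classify(formula, n_atoms):
    """Classify a sentential formula by its truth table column."""
    values = {formula(*row)
              for row in product([True, False], repeat=n_atoms)}
    if values == {True}:
        return "tautology"      # true in every row
    if values == {False}:
        return "contradiction"  # false in every row
    return "contingency"        # mixed column

print(classify(lambda p: p or not p, 1))    # tautology
print(classify(lambda p: p and not p, 1))   # contradiction
print(classify(lambda p, q: p and q, 2))    # contingency
```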
Contradiction, Logical Contradiction. A meaningful sentence that is logically
necessarily false and cannot be logically possibly true: in every logically pos-
sible context, this sentence is false. In the semantic approach to the standard sen-
tential logic, which is usually undertaken by use of truth tables: a well-formed
sentential formula is a contradiction if and only if all the rows of its truth table
have the truth value false. The contradictions are the logical falsehoods of the
logic. Alternative logics may not agree among themselves on the logical falsehoods
they have. Contradictions or contradictory sentences of the natural language
are not informative, since they must logically be false in every possible context
(hence, they cannot “mark” or differentiate any one context from other contexts.)
Sometimes, a distinction is drawn between a contradiction as a necessary false-
hood of sentential logic and logical falsehood as a necessary falsehood of first-
order logic.
Contradictions are rendered false (logically necessarily false) solely on the basis
of the meanings of the defined logical connectives of the logic. Traditionally, it has
been remarked that a contradiction is necessarily false because of its logical form.
Given a specified logical system, its contradictions are automatically settled; it is
not a matter of empirical discovery what contradictions a logic has.
Alternative logics – non-standard or non-classical logics – which permit more
than the standard two truth values of true and false, may not regard every formula
with the form of a classical logical contradiction as false or even as having an anti-
designated truth value (see related entry): the motivating philosophical view, accord-
ing to which not all contradictions are false, or not everything follows from
a contradiction (as it does in the standard logic), is called Dialetheism. A
rather unpopular view according to which all contradictions are true or take a desig-
nated truth value is called Trivialism.
to an argument form that is being tested by the truth table is the assignment of
truth values to the atomic components of the premises and conclusions for any
row of the truth table across which all the premises are true and the conclusion
is false (if such a row exists.) If there is no row of the truth table for an argument
form, for which all the premises are true and the conclusion is false, then there
is no assignment of truth values for which all the premises of the argument form
are true and the conclusion is false: therefore, the argument is valid (there is no
logical possibility of constructing a counterexample to it.)
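Rather than merely reporting validity, a truth table search can return the counterexample assignment itself when one exists. A minimal sketch, with the formulas-as-truth-functions encoding again assumed for illustration:

```python
from itertools import product

def counterexample(premises, conclusion, atoms):
    """Return an assignment making all premises true and the
    conclusion false, or None if the argument form is valid."""
    for row in product([True, False], repeat=len(atoms)):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return dict(zip(atoms, row))
    return None

# Denying the antecedent: p -> q, not-p / not-q
print(counterexample([lambda p, q: (not p) or q,
                      lambda p, q: not p],
                     lambda p, q: not q, ["p", "q"]))
# -> {'p': False, 'q': True}: a counterexample, so the form is invalid
```

For a valid form such as modus ponens the function returns `None`, reflecting that no counterexample can be constructed.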
For predicate (first-order) logic, a counterexample (countermodel) to an argu-
ment form is an interpretation or valuation assigning objects from a model’s domain
to names and function symbols, and assigning collections of objects to predi-
cates, for which all the premises of the argument form are true and the conclusion is
false. If a countermodel can be produced for an argument form, the argument form
is invalid: validity of an argument form is defined as validity across all possibly
constructed models.
Countermodel see: Counterexample.
Counterfactuals, Counterfactual Conditionals (also, Subjunctive
Conditionals). A conditional or implicative statement of the natural language
is called a counterfactual or contrary-to-fact conditional (or, grammatically, a
subjunctive conditional) if it has a false antecedent and it is true in every state of
affairs which is appropriately similar to our present state of evaluation with the
difference that the antecedent is true in that alternative state. Since our standard
sentential logic counts any conditional with a false antecedent as true (vacuously
true), it is obvious that we cannot depend on the resources of the standard logic
for a logical analysis of counterfactuals. The machinery of more advanced log-
ics – extensions of the standard sentential and, possibly, first-order logic – is
needed. A standard example of a counterfactual conditional statement of the nat-
ural language is: “If Oswald had not killed Kennedy, then someone else would
have.” The implied assumption is that the antecedent is indeed false. In contrast,
we may regard the following as a material conditional: “If Oswald did not kill
Kennedy, then someone else did.” The meanings of the two sentences are mark-
edly different.
Decision Procedure. A mechanical, systematic process implemented within a for-
mal system, which terminates after finitely many steps and by means of which
it can be determined: for given argument forms if they are valid or invalid; for
given formulas if they are tautologies, contradictions or contingencies; for given
sets of formulas, if they are consistent or inconsistent; and so on. A decision pro-
cedure must terminate in a finite number of steps and effectively, correctly, and
uniquely yield results which can be read according to specifications, for every
application of the process. A decision procedure must produce a counterexample
in case an argument form is invalid (in contrast, failure to derive a conclusion in
application of a proof-theoretical method like natural deduction does not deci-
sively show that the corresponding argument form is invalid.) The standard sen-
tential logic has effective decision procedures like the truth table method, as well
as effective tree and tableau procedures. This is not the case for the first-order
logic (excluding fragments of the logic.)
the natural language into the symbolic language of a formal system must define
predicate symbols for the kinds of things that are being talked about.
Domain of a Function see: Function.
Double Negation. In the standard sentential logic, the rule of double negation is
valid: one may infer φ from not-not-φ and may also infer not-not-φ from φ.
This is not the case for intuitionistic logic: the negation of φ is understood by intu-
itionistic logicians and mathematicians to mean that any proof of φ can be
effectively converted, by an in-principle constructible proof method, into a
derivation of a line of logical absurdity. In intuitionistic logic, φ cannot be
derived from not-not-φ (although the converse is derivable.)
Empty Domain of a First-Order Logic Model see: Domain.
Equivalence, Logical Equivalence see: Biconditional.
Equivalence Relation. A relation ρ is an equivalence relation if and only if it is
reflexive, symmetric and transitive. An equivalence relation induces a parti-
tion of a set over whose members it is defined: all members of the set, which
are ρ-related can be considered as members of the same equivalence class. We
may think of equivalence relations as being something like identity (all identical
things being placed together as one thing) albeit weaker than identity (which is
also an equivalence relation and, as such, the strongest equivalence relation.) For
instance, under the perspective of similarity (if it is treated as an equivalence
relation), we can justify placing all similar things within one and the same set
(which is a subset of the given set over which this similarity relation is defined.)
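The induced partition can be computed for a finite domain; a sketch assuming the supplied relation really is an equivalence relation (the same-remainder-mod-3 example is illustrative, not from the text):

```python
def partition(domain, related):
    """Partition a finite domain into equivalence classes, assuming
    `related` is reflexive, symmetric, and transitive on it."""
    classes = []
    for x in domain:
        for cls in classes:
            # Equivalence means comparing against any one
            # representative of the class suffices.
            if related(x, next(iter(cls))):
                cls.add(x)
                break
        else:
            classes.append({x})
    return classes

# Same remainder mod 3 is an equivalence relation on the integers.
print(partition(range(7), lambda a, b: a % 3 == b % 3))
# -> [{0, 3, 6}, {1, 4}, {2, 5}]
```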
Ex Falso Quodlibet. A characteristic rule of derivation, which is valid for the stan-
dard sentential logic but also for intuitionistic logic, by which we may infer any
sentence when we have derived the absurd sentence (which is itself derivable
from a logical contradiction.)
Excluded Middle, The Law of Excluded Middle. This is a classical law of logic –
valid in the standard logic – by which: any meaningful sentence is either true
or false; the inclusive disjunction of any formula φ and its negation, not-φ, is a
logical truth. If a third truth value, for instance interpreted as “indeterminate”, is
added to the standard logical system and the logical connectives are defined over
the set of the three truth values so that the negation of indeterminate is also inde-
terminate, we have an example of a logical system in which the law of excluded
middle fails. In a famous case from classical antiquity, Aristotle courted what
was essentially an option like this – assigning a value of indeterminate to future
contingents or meaningful sentences about future events, which are not rendered
true or false before the actualization of the event – but he realized that the law of
excluded middle would not be valid anymore and rejected the solution.
Exclusive Disjunction see: Disjunction (Exclusive).
Existential Commitment. In the standard first-order/predicate logic, every named
object is presumed to exist: the machinery of the formalism, as it is set up,
imposes a commitment to existence of all the named objects; the following is a
logical truth of the standard first-order logic: for all x, there is a y such that x is
identical with y. Considering that an existence-predicate is definable as “t exists
if and only if there is an x such that x is identical with t”, we detect in the afore-
mentioned logical truth the commitment to existence of all named objects. In this
of the model’s domain), individual variables and possibly also symbols for iden-
tity and functions. There is a philosophic view that no logic below the first-order
logic is adequate for symbolization – and this is certainly the case for symbol-
izing truths of mathematics.
Formal Language. A language with specified symbolic resources and a legislated
rigid grammar, constructed and applied (with translations into it from natural
language, or formalizations): unlike natural languages, formal languages are
austere, limited precisely to the stipulated resources and with strict implementa-
tion of the legislated grammar; accordingly, well-formed (grammatically cor-
rect) symbolic expressions of a formal language cannot exhibit ambiguity or
superfluity.
Formal Proof. The totality of lines that are constructed systematically and sequen-
tially by application of formal proof-theoretic rules (rules of deduction, rules of
derivation) on lines that are already given or have been constructed: the given
lines are called premises of the proof and the final line is called the conclusion
of the proof; certain rules of derivation require positing assumed premises which
must then be discharged if the proof sequence is to terminate.
Free Logic. An alternative species of first-order/predicate logic which, unlike the
standard predicate logic, is “free” of presuppositions regarding existence of
everything that is denoted by some term of the logic. Free Logic does not have
the standard rules of existential quantifier introduction or universal quantifier
elimination: usually, universal quantifier symbols can be eliminated in vari-
ants of Free Logic only if it is also given that some individual constant denotes
an object that exists in the domain. Free Logic also permits speaking of empty
domains of models, which is not countenanced for the standard predicate logic.
Consider also defining an existence-predicate as: Et ≝ ∃x(x = t).
Standard logic:
𝓢𝓛 ⊨ ∀x∃y(x = y) [everything for which there are names in the model exists].
𝓢𝓛 ⊨ ∀xFx ⊃ Fa.
But, for Free Logic:
𝓕𝓛 ⊮ ∀x∃y(x = y).
𝓕𝓛 ⊮ ∀xFx ⊃ Fa.
Free Variable (also, Unbound Variable), Free Variable Symbol. A variable sym-
bol that does not lie within the scope of any quantifier symbol. A formula with
free variables lacks meaning – it is called a sentential function or open sentence –
but it is grammatically well-formed. See: Bound Variable.
Function. A function, as an abstract mathematical object, is a relation between a
specified input value and the unique value of the output that is generated: given
the notion of a relation, a function ƒ is defined, then, as the set of the ordered
pairs <x, ƒ(x)> where x is an input value from the Domain over which the func-
tion is defined and ƒ(x) is the specified output from the specified Range of the
function (if there are restrictions imposed on the range.) The concept of partial
function is also definable – a partial function being one for which not all mem-
bers of the domain of the function have outputs.
symbols are written matters when writing the symbols that denote the members.
Unlike two-member sets, the order, from left to right, in which the symbols of
the members of an ordered pair are written matters, so that, generally, and given
that a ≠ b:
<a, b> ≠ <b, a>.
An ordered pair <a, b> can be written as, and is considered identical with, the set:
{{a}, {a, b}}.
Generalizing to the case of the ordered n-tuple, we have:
<x1, …, xn> = {{x1}, {x1, x2}, …, {x1, x2, …, xn}}.
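The set-theoretic encoding of the ordered pair can be checked directly in Python; this sketch uses frozensets only because ordinary Python sets cannot contain sets:

```python
def kpair(a, b):
    """Kuratowski encoding of the ordered pair <a, b> as the set
    {{a}, {a, b}}."""
    return frozenset([frozenset([a]), frozenset([a, b])])

print(kpair(1, 2) == kpair(1, 2))  # True: same pair, same set
print(kpair(1, 2) == kpair(2, 1))  # False: order matters
# <a, a> collapses to {{a}}, since {a, a} = {a}:
print(kpair(1, 1) == frozenset([frozenset([1])]))  # True
```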
Paralogism. A paralogistic claim is a claim that a logical connective captures the
logical behavior of a logic-word of the natural language when such a claim can-
not be properly supported: for instance, it is paralogistic to claim that the logi-
cal connective of material implication, the material conditional, of the standard
sentential logic can be motivated for the study of meanings of “if-then” that
are associated with entailment, deduction or other uses of “if-then” in natural
language. For the material conditional, if the antecedent is false, the conditional
sentence is true no matter what the consequent is; similarly, if the consequent
is true, then the material conditional is true no matter what the antecedent is.
These are often called paradoxes of material implication but they are not para-
doxes since they follow properly from the definition of the connective of material
implication; they are, however, paralogisms if, in addition, it is claimed that this
connective indeed corresponds to the uses of “if-then” in natural language.
Partition see: Equivalence Relation.
Power Set. The set that has as members all, and only, the subsets of a given set x is
called the power set of x and is symbolized by “℘(x)”. The set x itself, and the
empty set, are also members of the power set since they are subsets of x.
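For a finite set, the power set can be enumerated directly, confirming that the empty set and the set itself appear among the 2^n subsets. A minimal sketch:

```python
from itertools import combinations

def power_set(xs):
    """All subsets of xs, including the empty set and xs itself."""
    xs = list(xs)
    return [set(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

print(power_set({1, 2}))          # [set(), {1}, {2}, {1, 2}]
print(len(power_set({1, 2, 3})))  # 8, i.e. 2**3
```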
Predicate Logic see: First-Order Logic.
Proof see: Formal Proof.
Proof by Contradiction see: Indirect Proof.
Proof Without Premises, Proof Without Assumptions see: Conditional Proof.
Proof-Theoretic Method. A proof system in which specified rules of derivation
are applied to generate lines from given and already derived lines, in this fashion
constructing a proof from given premises, or from no premises at all, to the con-
clusion. Unlike the semantic approach to the study of logic, the proof-theoretic
method understands the logical connectives to be defined by means of the rules
by which the connective symbols are managed in the proof-theoretic system. In
the case of the standard logic, the semantic approach (for instance, the truth table
method) and the proof-theoretic approach harmonize in the sense that whatever
is valid as argument or tautology in the semantic approach is provable or prov-
able from no premises in the proof-theoretic approach and vice versa. See also:
Completeness.
Proposition see: Declarative Sentence.
Propositional Function see: Bound Variable.
Quine Corners. Quine corners, “⌜” and “⌝”, are metalinguistic symbols used to
enclose symbolic expressions from the object language of a formal logical sys-
tem, which are mentioned rather than used when appearing in the symbolically
enhanced metalanguage.
Reductio Ad Absurdum see: Indirect Proof.
Referential Opacity, Referentially Opaque Contexts see: Extensionality,
Denotation, Connotation.
Reflexivity, Reflexive Property. Reflexivity is a property of a relation ρ accord-
ing to which every object is ρ-related to itself. Using symbolic resources from
our first-order formal language metalinguistically, we can indicate reflexivity in
this way: ∀xρxx. An example of a reflexive relation is: α is identical with β: for
every α, it is the case that α is identical with itself. A relation like, for example,
α-likes-β does not have to be reflexive.
Relation. In First-Order/Predicate Logic, a relation or relational predicate is a
predicate symbol of arity or degree two or larger.
In Set Theory: An n-place relation over a domain can be defined as a set of
ordered n-tuples from a specified domain 𝕯. An n-place or n-ary relation, as a set,
is a subset of the n-th Cartesian product of the domain with itself, symbolized as
𝕯 × … × 𝕯 (n times) = 𝕯ⁿ.
Relative Product see: Composition.
Relevantism, Relevantist Logic. Relevantism is a philosophic-logical view that
seeks to restrict logical truths (or the relation of logical consequence – what is
validly derivable from what) by imposing the requirement that there has to be
meaning-content connection (relevant or relevantist connection) between prem-
ises and conclusion of valid arguments. Logics that are constructed with a view
to observing this restriction are called Relevantist Logic. The standard logic is
not relevantist, as evidenced by such “paralogistic” valid argument schemata as:
p · ~p /∴ q;  q /∴ p ∨ ~p;
~p /∴ p ⊃ q;  q /∴ p ⊃ q;  p /∴ p ∨ q;
((p ⊃ q) ⊃ p) ⊃ p.
Satisfaction/Joint Satisfiability. A set of formulas is jointly satisfiable if and only
if there is a truth value assignment (a row of the truth table) on which all the for-
mulas are computed or determined as being true. This is a semantic concept since
value-assignment is mentioned in the definition, with the designated truth value
“true” being the “satisfying” value. The corresponding proof-theoretic concept is
consistency but, in practice, the two terms are interchangeable and “consistency”
is commonly used to characterize collections of meaningful sentences (theories)
whose sentences can all be possibly true together.
Sentential Function/Propositional Function/Open Sentence see: Bound
Variable.
Soundness see: Completeness.
Sub-Contrariety, Subcontrariety. Two formulas φ and ψ are called mutual sub-
contraries (or related by the relation of subcontrariety) if and only if it is possible
for them to be both true but it is not possible for them to be both false. The
truth table for these formulas has no rows across which they are both false; the
formulas can be true together but they cannot be false together. If two sentential
formulas are mutual subcontraries, then joining them by the symbol for inclusive
disjunction yields a tautology.
Sub-Proof, Sub-Derivation see: Conditional Proof.
Substitutivity of Logical Equivalents see: Intersubstitutivity of Equivalents.
Symmetry. Symmetry is a property of a relation ρ according to which, for every
pair α and β that are ρ-related, if α is ρ-related to β, then β must also be ρ-related
to α. Using symbolic resources from our first-order formal language metalinguis-
tically, we can indicate symmetry in this way: ∀x∀y(ρxy ⊃ ρyx). An example of
a symmetric relation is: α is similar to β.
Synthetic Sentences (also see: Analytic Sentences). A sentence of a language is
synthetic if and only if its truth value (whether it is true or false) cannot be deter-
mined on the basis of the meanings of its logical and non-logical words. Thus,
a synthetic sentence is not necessarily true or necessarily false: it is logically
possibly true and logically possibly false: even if it is known to be actually true/
false, there is a consistent alternative state in which it can be made false/true. The
informative sentences of a language are synthetic whereas analytic sentences
(which are necessarily true/false on the basis of the defined meanings of their
logical or non-logical words) are not informative and are trivially true/false.
Tautology, Logical Truth. A meaningful sentence that is logically necessarily true
and cannot be logically possibly false: in every logically possible context, this
sentence is true. In the semantic approach to the standard sentential logic, which
is usually undertaken by use of truth tables: a well-formed sentential formula
is a tautology if and only if the sentence has the truth value true in all the rows
of its truth table. The tautologies are the logical truths of the logic. Alternative
logics may not agree on the logical truths they have. Tautologous or tautological
sentences of the natural language are not informative, since they must logically
be true in every possible context (hence, they cannot “mark” or differentiate any
one context from other contexts.)
Sometimes, a distinction is drawn between a tautology as a necessary truth of
sentential logic and logical truth as a necessary truth of first-order logic.
Tautologies are rendered true (logically necessarily true) solely on the basis of
the meanings of the defined logical connectives of the logic. Traditionally, it has
been remarked that a tautology is necessarily true because of logical form. Given a
specified logical system, its tautologies are automatically settled; it is not a matter
of empirical discovery what tautologies a logic has. Sometimes a logical system is
defined as the collection of its tautologies; alternatively, a logical system can be
defined by its characteristic relation of logical consequence (see related entry.)
Nevertheless, the two ways of defining a system might fail to coincide if the charac-
terization is about an alternative, non-classical, logical system that does not validate
the so-called Deduction Theorem (see related entry.)
(although necessarily true) that “if Washington was a general, then he was a
general.” Taking the first sentence as input for “it is necessarily the case that___”
yields false; but taking the second sentence as input yields true. Thus, we have
both true and false as possible output values when the input is true. This shows
that “it is necessarily the case that___” is not truthfunctional.
Unary/Monadic/One-Place. A function or logical connective is unary/monadic/
one-place if and only if it is defined to have exactly one input or, for logical
connectives, to have exactly one well-formed formula symbol within its scope.
Universe see: Domain.
Validity, Valid Argument, Valid Argument Form. Validity is a characteristic of
deductive arguments that are intuitively “correct”, arguments in which the prem-
ises provide absolute, logically necessary support for the conclusion: if the prem-
ises are true in a valid deductive argument, then it is logically necessary that the
conclusion is true; it is logically impossible that all the premises of a valid argu-
ment are true and the conclusion is false. Validity is a matter of logical form: a
valid argument form is one that cannot possibly have instances with all premises
true and false conclusion. A valid argument exemplifies a valid argument form
and a valid argument form can only have valid arguments as instances.
Well-Formed Formula. A symbolic expression that is properly or correctly con-
structed in accordance with the stipulated formal grammar of a formal language.
If not well-formed, a symbolic expression cannot “scan”: it is nonsensical or
meaningless within the given formal language. Only well-formed formulas can
be used within a formal language. The grammatical rules, according to which
formulas are well-formed, are stipulated strictly within the formal systems and
cannot be assessed by reference to any transcendent standards. A formula which
is not well-formed, for a given formal grammar, is considered non-well-formed
or ill-formed. Any symbolic expression, assessed by the formal grammatical
standards of a given formal language, is either well-formed or ill-formed; it
cannot be neither, and it cannot be both.
Unbound Variable see: Free Variable.
Valuation see: Interpretation.
Zeroary Connective see: Arity.
References
Gabbay, D. M., & Guenther, F. (Eds.). (1989). Handbook of Philosophical Logic (Vol. 4).
Dordrecht: Reidel.
Gamut, L. T. (1991). Logic, Language, and Meaning (Vol. 1). Chicago: University of Chicago Press.
Geach, P. (Ed.). (1975). Logical Investigations. Oxford: Blackwell.
Geach, P. T. (1972). Logic Matters. Berkeley, CA: University of California Press.
Goble, L. (Ed.). (2001). The Blackwell Guide to Philosophical Logic. Oxford: Blackwell.
Haack, S. (1978a). Philosophy of Logics. Cambridge: Cabridge University Press.
Haack, S. (1978b). Philosophy of Logics. Cambridge: Cambridge University Press.
Heyting, A. (1956). Intuitionism. Amsterdam: North-Holland.
Horn, L. R. (1989). A Natural History of Negation. Chicago: University of Chicago Press.
Hughes, R. I. (Ed.). (1993). A Philosophical Compansion to First-Order Logic. Indianapolis, IN:
Hackett.
Hunter, G. (1971). Metalogic. London: Macmillan.
Jackson, F. (Ed.). (1991). Conditionals. Oxford: Oxford University Press.
Jeffrfey, R. C. (1967). Formal Logic: Its Scope and Limits. New York: McGraw-Hill.
Kalish, D., & Montague, R. (1964). Logic. New York: Harcourt Brace.
Keenan, E., & Faltz, L. (1985). Boolean Semantics for Natural Languages. Dordrecht: Reidel.
Kleene, S. C. (1952). Introduction to Metamathematics. Amsterdam: North-Holland.
Kneale, M., & Kneale, W. (1962). The Development of Logic. Oxford: Oxford University Press.
Lambert, K. (Ed.). (1969). The Logical Way of Doing Things. New Haven, CT: Yale University Press.
Lambert, K. (Ed.). (1991a). Philosophical Applications of Free Logic. Oxford: Oxford
University Press.
Lambert, K. (Ed.). (1991b). Philosophical Applications of Free Logic. Oxford: Oxford
University Press.
Lambert, K., & Fraassen, B. C. (1972). Derivation and Counterexample. Encino, CA: Dickenson.
Lemmon, E. J. (1965). Beginning Logic. London: Methuen.
Martin, R. (Ed.). (1984). Recent Essays on Truth and the Liar Paradox. Oxford: Oxford
University Press.
Massey, G. (1970). Understanding Symbolic Logic. London: Harper & Row.
McCall, S. (Ed.). (1967). Polish Logic: 1920–1939. Oxford: Oxford University Press.
Mill, J. S. (1979). System of Logic. London: Longmans.
Munitz, M. K. (Ed.). (1973). Logic and Ontology. New York: New York University Press.
Prawitz, D. (1965). Natural Deduction. Stockholm: Almqvist & Wiksell.
Quine, W. V. (1940). Mathenmatical Logic. Cambridge, MA: Harvard University Press.
Quine, W. V. (1952). Methods in Logic. London: Routledg & Kegan Paul.
Quine, W. V. (1953). From a Logical Point of View. Cambridge, MA: Harvard University Press.
Quine, W. V. (1970). Philosophy of Logic. Englewood Cliffs, NJ: Prentice Hall.
Read, S. (1988). Relevant Logic. Oxford: Blackwell.
Reichenbach, H. (1966). Elements of Symbolic Logic. New York: Free Press.
Rescher, N. (1976). Plausible Reasoning. Assen: Van Gorcum.
Restall, G. (2000). An Introduction to Substructural Logics. London: Routledge.
Russell, B., & Whitehead, A. N. (1910–1913). Principia Mathematica. Cambridge: Cambridge
University Press.
Sainsbury, M. (1991). Logical Forms. Oxford: Blackwell.
Schechter, E. (2005). Classical and Non-Classical Logics. Princeton, NJ: Princeton University Press.
Smullyan, R. (1968). First-Order Logic. Berlin: Springer.
Strawson, P. F. (1967). Introduction to Logical Theory. London: Methuen.
Suppes, P. (1957). Introduction to Logic. New York: Van Nostrand.
Tarski, A. (1965). Introduction to Logic. Oxford: Oxford University Press.
Tennant, N. (1978). Natural Logic. Edinburgh: Edinburgh University Press.
Thomason, R. (1970a). Symbolic Logic. London: Macmillan.
Thomason, R. H. (1970b). Symbolic Logic: An Introduction. London: Macmillan.
van Benthem, J., & ter Meulen, A. (Eds.). (1984). Generalized Quantifiers in Natural Language.
Dordrecht: Foris.
van Fraassen, B. C. (1971). Formal Semantics and Logic. New York: Macmillan.
van Heijenoort, J. (Ed.). (1967). From Frege to Gödel: A Source Book in Mathematical Logic,
1879–1931. Cambridge, MA: Harvard University Press.
Wright, C. (Ed.). (1984). Frege: Tradition and Influence. Oxford: Blackwell.
Index
Content of sentence - does not matter in deductive logic, 33
Content of sentences - not relevant for deductive logic, but relevant for inductive logic, 9
Context variability of truth values - not admitted in the standard logic, 13
Contingency, 25, 27, 55, 78, 79, 126, 143, 144, 148, 152, 155, 160, 169, 170, 172, 238, 257, 261, 265–268, 391, 395
  determined by truth tables, 154
Contradiction, 16, 21, 22, 25–28, 49, 53–55, 75, 78, 79, 126, 128, 143, 144, 147, 148, 150, 152, 154, 160, 161, 163, 164, 168–170, 172, 173, 184, 195, 197, 207, 208, 216, 218, 229, 231, 233, 237, 238, 249, 257, 261, 264–269, 336, 337, 341, 348, 354, 391
  determined by truth tables, 154
Contraposition, 244, 255, 311, 432
Contrapositive, 283, 432
Contrariety, 157, 467
Converse implication, 131
Converse - not the same as the inverse in set theory, 453
Converse of a function, 452
Converse of a function may not be a function, 452
Converse of an implication, 156
Converse relation, 452
Converse turnstile, 125
Conversion, 156
Conversions to and from implication formulas in predicate logic, 363
Conversions to prenex form, 363
Correct logic - is there one?, 4
Countable set, 178, 338
Counterexample, 65, 153, 158, 159, 161–163, 267
  to an argument form, 16
  and argument validity, 68
  defined, 46
Counterfactuals, 38
Countermodel, 357, 358, 392, 394, 396
Currency denomination - meanings of words in a language are like ---, 45

D
Decision procedure, 5, 148, 152, 153, 159, 209, 228, 237, 257, 271, 342, 343, 357, 390, 391
Declarative sentence, 50, 258, 272, 276, 289, 305, 408
Declaratory sentence, see Declarative sentence
Decomposition tree, 112
Deduction Theorem, 125, 187
Deductive arguments, 3
Deductive reasoning, 57, 66, 71
Definite descriptions, 346, 397–400, 402, 403
DeMorgan Laws - natural deduction, 199
DeMorgan Laws - predicate logic, 374
DeMorgan Property - Set Theory, 427
Denotation, 23
Density - as a property of relations, 448
Denumerable, see Countable
Denumerable set, 178
Derivation system, see Proof-theoretic system
Designated truth value, 16, 42, 116, 127, 258, 259, 264
Deviation from a formal language, 83
Differences between logic and geometry, 21
Discharge of an assumption in the Fitch-style proof method, 211
Discharge of premises, 190
Discharge of the posited assumption, 195
Discharge of the posited assumption - degenerate case, 224
Disequivalence, 131, 136, 157, 429, 432
  exclusive disjunction, logical/material, 157
Disjunction, 97, 99, 107, 118, 124, 134–136, 157, 158, 161, 165, 168–170, 183, 184, 191, 192, 195, 201, 205, 217–222, 225, 231, 234, 237, 242, 249, 251, 262, 263, 282, 284, 334, 339, 362, 363, 374, 375, 378–380, 411, 426, 432, 441, 454, 469
Disjunctive Syllogism, 187–190, 379, 380
Distributive Properties - Set Theory, 428
Domain of a function, 70, 449
Domain - predicate logic models, 294
Double negation (DN), 195–197, 201, 228, 229, 231, 233, 242, 255, 263, 264, 311, 432

E
E!, 403
Eigenparameter, 354, 358, 380–384, 389, 392
Eliminations of quantifier symbols - predicate logic tree system, 394
Empty domain - not allowed in the standard predicate logic, 353
P
Pairwise disjoint, 431
Paralogism, 168, 184

R
Range of a formula - is a subset of the Cartesian product {T, F} x {T, F}, 166
Transitivity - as a property of relations, 448
Translations from English into the Formal Language of Predicate Logic, 12, 108, 113, 197, 272, 274, 275, 279, 282, 286, 291, 301, 331–333, 336, 338, 339, 341, 342, 345, 348, 363, 402
Translations of numerical statements, 345
Tree method, 266, 267
Triadic, 117, 134, 303
Triple Bar, 85, 304
Triviality, as a characteristic of logic, 17
Trivial Truth, 354
Trivial truth, determined by the truth table, 154
Truth conditions, 70, 116
  defined, 38
Truth Conditions for Semantic Models with all named objects, 359
Truth Conditions for Semantic Models with unnamed objects, 366
Truth function, 161, 163
Truth preservation - is what is assessed in argument validity, 43
Truth-Preservativeness, as a characteristic of logic, 16
Truth Table for Checking Intuitionistic Validity, 241
Truth tables, 153, 154
Truth value and meaning, 13
Truth value assignments, 162
Turnstile, 125, 166, 167, 198, 200, 209, 222, 359
Type of a symbolic system, 83
Types of atomic variables, 119

U
Unary, 30, 70, 85, 87, 89, 91, 102, 103, 117, 124, 130–132, 141, 276, 291, 294, 302, 314, 318, 319, 334, 424, 425, 440, 449, 454
Unbound variable, 302, 308
Universality, as a characteristic of logic, 14
Universal set, 409, 410, 413, 416, 417, 419, 424–433
Universe of discourse, see Domain - predicate logic models
Use and mention, 106

V
Vacuity, as a characteristic of logic, 17
Vacuous quantification, 382
Vacuous truth, 367
Vagueness and Logic, 19
Valid argument - defined, 46
Validity, 64, 65, 72, 153, 154, 158, 162, 261, 266, 267
Valuation, 38, 129, 130, 150, 152, 174, 259, 266, 268, 294, 298, 299, 351, 352, 354, 356, 358, 359, 361, 438, 444
Value-assignments, see Valuation
Variables, 63, 64, 66, 86, 181, 259, 266, 306
Variants - assignments in the Semantics of Predicate Logic, 365
Variations or idioms of a formal language, 82

W
Weak connectedness - as a property of relations, 448
Weak ordering - as a property of relations, 449
Wedge, 85, 304
Well-formed formula, 101
  specific to a formal language, 33
Well-formedness, 101, 105, 303, 306, 318, 323, 373
Wff, 86
Whitehead, A. N., 397
Wittgenstein, L., 17, 148

Z
Zermelo-Fraenkel Systematization of Set Theory, 435
Zermelo-Fraenkel Theory of Sets, 405